Why I Practice TDD: Speed and Need

“We need you to go faster, so we need you to stop practicing test-driven development,” said the manager. “Just ship it, and we’ll worry about problems later.”

For developers who have ingrained TDD as just how they develop software, the manager’s proposition is laughable. It’s akin to telling a race car driver, “We need you to go faster, so we’re going to take out the steering wheel, and we’re going to turn the windshield into seamless, opaque body molding for aerodynamic reasons.”

Original Link

Introduction to Selenium Automation Testing

In an era of highly interactive and responsive software, where most enterprises use some form of Agile methodology, automation testing has become crucial for many software projects. Automation testing beats manual testing every time: it requires less time and fewer human resources, has a lower risk of errors, allows regular and lights-out execution, and supports both regression and functional testing. There are many commercial and open source tools available to support the growth of automation testing. Specifically, Selenium is one of the most widely-used tools to build test automation for web applications.

1. What Is Selenium Testing?

Original Link

Why Should You Be Invited to A Meeting?

There was a Twitter interaction about whether testers should be invited to team meetings and about providing value to the team.

And it got me thinking.

Original Link

Best Practices for Reducing Testing Time and Effort

When it comes to quality assurance and the delivery of software and application changes, every industry has unique challenges that can slow or limit the process. One industry that’s seen its fair share of challenges is the aviation industry, where quality is important not only for the airplanes and airports, but for the backend systems that connect the majority of airlines across the world – specifically, 280 airlines across 120 countries.

As the governing body responsible for regulating airline affairs worldwide for such a large, globally dispersed industry, the International Air Transport Association (IATA) needed to ensure there were no defects affecting its organization whenever change was needed. And unlike some organizations that may have slower seasons due to consumer demand, the aviation industry’s operations are always in demand, and change is a required constant in day-to-day operations.

Original Link

How to Verify API Responses in Katalon Studio

Verifying an API response is always a challenging task in API testing. Some testers find it hard to understand the JSON/XML response format, while others struggle to get the value of a specific key to verify. It is even harder when the response is large and has a complex data structure.

Starting from version 5.8.3, Katalon Studio has released a new feature that aims to solve those issues in a single step. In this tutorial, you will learn how to use this feature to verify API responses.
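
The Katalon-specific steps live in the linked tutorial. As a generic illustration of the underlying task (this is plain Jackson in Java, not Katalon’s API, and the JSON payload is made up), extracting and asserting one nested key looks like this:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ResponseCheck {
    public static void main(String[] args) throws Exception {
        // A stand-in for the body of an API response.
        String json = "{\"data\": {\"items\": [{\"id\": 42, \"name\": \"widget\"}]}}";
        JsonNode root = new ObjectMapper().readTree(json);
        // Walk the nested structure down to the key under test.
        int id = root.path("data").path("items").get(0).path("id").asInt();
        if (id != 42) {
            throw new AssertionError("Expected id 42 but got " + id);
        }
        System.out.println("Response verified: id = " + id);
    }
}
```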

Original Link

Feature Management for DevOps [Video]

At our September meeting of Test in Production, our own Tim Wong—LaunchDarkly’s Principal TAM and Chief of Staff—gave a talk about Feature Management for DevOps.

“You can get to a point where you are pushing configuration based on routes or other aspects of your instance. This is kind of poor man’s tracing in some ways or you can degrade functionality based on a particular route or a particular IP or a particular node set, particular method, particular account. You can provide different configuration sets for different types of things as long as you instrument them that way.”

Original Link

How You Could Stop Top Software Fails

Software defects are unavoidable. But developers and testers can save the day by catching critical defects before production – or face the aftermath when the company ends up in the software fail headlines.

In this webinar, Tricentis Product Manager Ingo Philipp and a guest panel of developers, testers, and performance testing specialists will analyze this year’s top software fails. We’ll dissect a handful of specific examples and, for each, explore:

Original Link

Testing Configs in Production

At our Test in Production Meetup in September, we heard from TR Jordan from Slack. He spoke about how his team from Turbine Labs tests configs in production.

"I spend my days rolling out Envoy in larger environments, and that’s mostly an exercise in writing a ton of config. Rolling it out to a huge number of servers…there’s some hard-learned lessons there, we ended up in a lot of conversations about how to safely roll out configuration, and how to really understand the behavior of this and build confidence in configuration files before going to production."

Original Link

10 Effective Tips on Using Maven

Maven is — without a doubt — the most popular build automation tool for software projects in the Java ecosystem. It has long replaced Ant, thanks to an easier and declarative model for managing projects, providing dependency management and resolution, well-defined build phases such as compile and test, and support for plugins that can do anything related to building, configuring, and deploying your code. It is estimated to be used by 60 percent of Java developers in 2018.

Over the years, a number of usage scenarios and commands turned out to be quite useful for me when working on Maven-based projects. Here are a few usage tips that help in using Maven more effectively. There are many more, and one can obviously learn something new every day for a specific use case, but these are the ones I think can be commonly applied. The focus here is on aspects like command-line usage, troubleshooting a certain issue, or making repetitive tasks easier. Hence, you won’t find practices like using dependencyManagement to centralize dependencies, which are rather basic anyway and used more in initially composing a POM.
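
To give a flavor of the command-line side, here is a hypothetical selection of standard Maven invocations (module names are placeholders, and these may or may not overlap with the article’s own tips):

```shell
# Print the resolved dependency tree, useful for hunting version conflicts
mvn dependency:tree

# Build a single module plus everything it depends on
mvn install -pl my-module -am

# Resume a failed multi-module build from a given module
mvn install -rf :my-module

# Parallel build with one thread per CPU core
mvn install -T 1C
```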

Original Link

The Role of DevOps in Mobile App Development

Over the past five years, mobile devices have become the primary means of accessing the internet for millions of people around the globe. These trends have sent many industries scrambling to adapt to the shift in business application users by developing mobile apps for their businesses.

During the early years of this shift, the IT industry focused on meeting market demand and businesses focused on creating a market presence. They overlooked app development costs, security, maintainability, and code quality.

Original Link

Testing as the Driver Towards a DevOps Culture

At Abstracta, we work with many companies, several of which already have a DevOps culture and others that we’ve helped to define and promote one. Over the years, we’ve seen DevOps gaining popularity, and most teams are on their way there. In this post, I’ll share some lessons from our experiences in helping companies with their agile transformations.

From what we’ve seen, we’re convinced that teams with a DevOps culture work better, obtain better results and are, just… happier.

Original Link

Demystifying Testing

Many of you have messaged me, confused about where to get started with testing. Just like everything else in software, we work hard to build abstractions to make our jobs easier. But that amount of abstraction evolves over time until the only ones who really understand it are the ones who built the abstraction in the first place. Everyone else is left with taking the terms, APIs, and tools at face value and struggling to make things work.

If there’s one thing I believe about abstraction in code, it’s that the abstraction is not magic, it’s code. If there’s another thing I believe about abstraction in code, it’s that it’s easier to learn by doing.

Original Link

Making Code Testable

I believe that testability is one of the key characteristics of good, maintainable software. But what do I mean by testability?

Testable code is code that’s written in such a way that it is independently verifiable. It has a well-defined programmatic interface and it can be fully tested based on that interface. Testable code receives dependencies as input parameters so that during testing fake dependencies can be injected instead. Testable code is made of small and functionally independent behaviors that make up a system.
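
As a minimal sketch of that idea (the names here are hypothetical, and this is just one way to illustrate injecting a dependency): a class that receives its clock through the constructor can be tested with a fixed clock, while one that calls the system clock directly cannot be verified deterministically.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Testable: the Clock dependency is injected, so a test can supply a fake.
class SessionValidator {
    private final Clock clock;

    SessionValidator(Clock clock) {
        this.clock = clock;
    }

    boolean isExpired(Instant expiresAt) {
        // Uses the injected clock rather than the ambient system time.
        return Instant.now(clock).isAfter(expiresAt);
    }
}

class SessionValidatorDemo {
    public static void main(String[] args) {
        // In a test, inject a deterministic clock instead of the real one.
        Clock fixed = Clock.fixed(Instant.parse("2018-06-01T00:00:00Z"), ZoneOffset.UTC);
        SessionValidator validator = new SessionValidator(fixed);
        System.out.println(validator.isExpired(Instant.parse("2018-01-01T00:00:00Z"))); // true
    }
}
```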

Original Link

What Continuous Delivery Means for Testers, QA, and Software Quality

If you’ve been a software tester for any length of time, you’ve likely noticed the shift toward continuous delivery, whereby businesses and project and operations teams aim to safely and quickly release new builds to production, ostensibly at the push of a button. The realization of continuous delivery means faster feedback, improved time to market, increased quality, and a better customer experience, though not necessarily in that order.

What’s All the Fuss About Continuous Delivery?

Continuous Delivery gives developers rapid feedback on their code, which leads to improved productivity. In theory, code can be written, tested, reviewed, merged, and integration and acceptance tested before it even gets into a tester’s hands. Rapid, reliable and high-quality releases mean happier customers, which often translates into increased business revenue.

Original Link

A Word to the Wise for Selenium WebDriver Testers

Software testing has drifted towards automation. Testers no longer like to execute manual testing processes to accomplish testing tasks. Automated software testing is touted as one of the best inventions in the software testing world. Through automated testing methods, testers get the power to perform better and accomplish tasks more efficiently. When we talk about software testing, one of the most popular and widely used testing tools that comes to mind is Selenium. Selenium has played a major role in enhancing the overall quality of the software development lifecycle.

A Brief About Selenium WebDriver

A collection of top-notch open-source APIs, Selenium WebDriver is used to automate the testing of web applications. The Selenium WebDriver tool supports different browsers like Internet Explorer, Firefox, Chrome, and Safari. Through Selenium WebDriver, testers can verify that a web application works as intended (you can get more of an idea by watching the video on downloading and installing Selenium WebDriver from the Software Testing Material channel on YouTube).
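
For readers who haven’t seen it, driving a browser with the Selenium WebDriver Java bindings looks roughly like the sketch below (the URL and element IDs are hypothetical, and it assumes a ChromeDriver binary is available on the PATH):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // launches a real Chrome instance
        try {
            driver.get("https://example.com/login");                  // hypothetical page
            driver.findElement(By.id("username")).sendKeys("tester"); // hypothetical IDs
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            // A real test would assert on the resulting page state here,
            // e.g. by checking driver.getTitle() or a welcome element.
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```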

Original Link

The Xth-Code Files: Xcode 10 Tips

Now that we’ve hopefully all got our iOS 12 and Swift 4.2 migrations under control, let’s catch our breath and dive a little deeper into what’s new since last year with Xcode 10, shall we?

What’s New in Xcode:

Dark Mode Interface and Mac App Support

  • All-new dark appearance throughout Xcode and Instruments
  • Asset catalogs add dark and light variants for custom colors and image assets
  • Interface Builder switches between dark and light previews of your interface
  • Debug your Mac apps in dark or light variants without changing OS settings

Source Control

  • Changes in the local repository or upstream on a shared server are highlighted directly within the editor…
  • Support for cloud-hosted and self-hosted Git server offerings from Atlassian Bitbucket, as well as GitLab to go along with existing GitHub support.
  • Xcode offers to rebase your changes when pulling the latest version of code from your repository.
  • SSH keys are generated if needed, and uploaded to service providers for you.

Editor Enhancements

  • Place multiple cursors in your code editor to make many changes at once.
  • Code folding ribbon can now hide any code block surrounded by braces.
  • Over-scroll makes it easy to center the last lines of code in the middle of your screen.

Playgrounds Built for Machine Learning

  • New REPL-like model reruns your existing playground code instantly.
  • Run your code up to any specific line, or type shift-return to run the code you just added.
  • Import the Create ML framework to interactively train new models, and then write code to test the model right in the playground. When finished, drag the model into your app.

Testing and Debugging

  • Debugging symbols are downloaded from a new device five times faster than before.
  • Xcode will spawn a collection of identical Simulators to take advantage of your multi-core Mac, and fan tests out to run in parallel, completing your test suite many times faster.
  • Run tests in random or linear order.
  • Instruments automatically show OSLog signposts you add into your code.
  • Build and share your own custom instruments package to provide unique data visualization and analysis for your own code.
  • Memory debugger uses a compact layout to make it easier to investigate your memory graph.
  • Metal shader debugger lets you easily inspect the execution of your vertex, fragment, compute, and tile shader code.
  • Metal dependency viewer provides a detailed graph of how resources are used in your Metal-based app.

Build Performance

  • New build system enabled by default with improved performance throughout.
  • Swift compiler builds each individual file significantly faster.
  • Large Swift projects build for debugging dramatically faster when using the incremental build setting.

That’s a pretty good selection of improvements, yes? And we have an admirably exhaustive collection of release notes to dig deeper:

Original Link

Kotlin Testing With Spock Part 3: Interface Default Method

Kotlin allows you to put method implementations in an interface. The same mechanism can be found in Java interfaces as default methods (and also in Groovy or Scala traits). Let’s see the difference between Kotlin and Java default methods by testing them with Groovy and Spock.

What Do We Want to Test?

We often have an interface for accessing objects from the database. In the domain, it might look similar to this KotlinOrderRepository:

Original Link

Take Unit Testing to the Next Level With JUnit 5

JUnit is the most popular testing framework in Java, and with JUnit 5, testing in Java 8 and beyond takes another step forward. This version was released in September 2017 and has been actively updated to fix bugs and add new features. Moreover, JUnit 5 is also compatible with versions 3 and 4 if you add junit-vintage-engine to your classpath.
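
For orientation, here is a minimal JUnit 5 (Jupiter) test; note the org.junit.jupiter packages, which replace JUnit 4’s org.junit (the arithmetic stands in for a call into imaginary production code):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    @Test
    @DisplayName("adds two numbers")
    void addsTwoNumbers() {
        assertEquals(4, 2 + 2); // stand-in for exercising real code under test
    }
}
```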

Migrating From JUnit 4

When migrating from JUnit 4, there are a few considerations to bear in mind:

Original Link

Not Only Cars: The Six Levels of Autonomous Testing

There is something similar about driving and testing. While testing is an exercise in creativity, parts of it are boring — just like driving is. Regression testing is tedious in that you need to do the same tests over and over again, every time a release is created, just like your daily commute. And just like during your daily commute, doing something repetitively is a recipe for mistakes, so repetitive testing, just like driving, is a dangerous activity, as can be seen from the various crash sites strewn over our commute highways, or by the various bugs that slipped past us during our regression testing.

Which is why we automate testing. We write code that runs our tests and run them whenever we want. But even that gets repetitive. Another day, another form field to check. Another form, another page. And one gets the feeling that writing those tests is a repetitive process. That it too can be automated.

Original Link

Building a Quality QA Test Team

“We think, mistakenly, that success is the result of the amount of time we put in at work, instead of the quality of time we put in.”

If we didn’t know any better, we could say Arianna Huffington was specifically talking about mobile apps.

It’s predicted that 5.5 billion people will use mobile devices by 2020. This consistent increase in mobile devices creates a constant demand for unique and useful mobile applications.

Original Link

5 Things Agile Testing Does Differently

Agile testing methodology has been adopted by enterprises that need continuous change throughout the software development and testing lifecycle. The practice demands that development and testing activities be conducted alongside each other, which is structured very differently from the Waterfall model. Hence, Agile testing takes a completely different approach from traditional testing.

In reference to Enterprise Agile Planning Tools, Gartner states that, "enterprise Agile planning (EAP) tools help organizations to make use of Agile practices at scale to achieve enterprise-class Agile development. This is achieved by supporting practices that are business outcome-driven, customer-centric, collaborative and cooperative, as well as with continual stakeholder feedback." This practically defines how Agile takes an absolutely distinct approach throughout the development cycle.

Original Link

How to Expedite the Creation of JUnit Parameterized Tests

When writing unit tests, it is common to initialize method input parameters and expected results in the test method itself. In some cases, using a small set of inputs is enough; however, there are cases in which we need to use a large set of values to verify all of the functionality in our code. Parameterized tests are a good way to define and run multiple test cases, where the only difference between them is the data. They can validate code behavior for a variety of values, including border cases. Parameterizing tests can increase code coverage and provide confidence that the code is working as expected.

There are a number of good parameterization frameworks for Java. In this article, we will look at three different frameworks commonly used with JUnit tests, with a comparison between them and examples of how the tests are structured for each. Finally, we will explore how to simplify and expedite the creation of parameterized tests.
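
The article doesn’t name its three frameworks here, but as one widely used option, JUnit 5’s own junit-jupiter-params module expresses the idea like this (a sketch; the addition logic stands in for real code under test):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AdditionTest {
    // Each CSV row becomes one test case: input a, input b, expected sum.
    @ParameterizedTest
    @CsvSource({
        "1, 2, 3",
        "2, 2, 4",
        "-1, 1, 0",                 // border case around zero
        "2147483646, 1, 2147483647" // border case at Integer.MAX_VALUE
    })
    void addsPairs(int a, int b, int expected) {
        assertEquals(expected, a + b);
    }
}
```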

Original Link

Problems Missed When Only Testing APIs Through UI

UI testing is an important part of quality assurance. Specifically, UI testing refers to the practice of testing front-end components to make sure that they do what they’re supposed to. If a user clicks the Login button, the login modal appears. If they click a link, they’re brought to the appropriate part of the application. With automation platforms, these individual tests can be linked together into workflows and automated. Business-driven development style tests can be created in this fashion. The UI can be tested to see that each individual path that a user may take is functional and that the interface is responding appropriately. Other platforms exist that allow these workflows to be tested on simulated resolutions and devices, ensuring that the user experience is consistent across all possible combinations of browser and device.

API testing lives a layer below UI testing. The UI is fed by these APIs and renders the DOM based upon conditions set by both the user and the developer. These conditions determine the sort of API call that’s made to populate the viewport. When we’re UI Testing, it could be argued that we are indirectly testing the API layer. It’s actually pretty fair to say so. Many of the actions that our UI platform will take will issue API calls. If the DOM rerenders correctly, we can assume to an extent that the API call was successful. The dangerous ground here is the assumption.

Original Link

13 Reasons a Staging Environment Is Failing in Your Organization

The staging environment is something that is suggested as a best practice but considered a burden. Many of us feel weighed down by the thought of the extra investment and effort involved in keeping it up. It happens very often that a company, in spite of having a staging environment, ends up failing to reap the proper results from it. This makes us ponder: what went wrong in our QA environment? Why did a change that performed so well in QA head south after migrating to production?

This post is aimed at the importance of having a dedicated staging environment for QA in all companies. If you think your staging environment is serving you well, give it a quick read and think again!

Original Link

5 Steps to a Clear S/4HANA Migration Strategy

If you’re not thinking about S/4HANA already, you definitely should be. It’s fair to say that SAP’s latest platform has taken some time to gain traction amongst the global SAP user base, but the wheels are slowly starting to turn, with adoption rates increasing sharply over the past nine to twelve months.

Uncertainty about the journey to SAP S/4HANA leads many customers to fear a lengthy, expensive project that will disrupt operations, a fear that is clearly evident amongst many SAP users I speak to. It’s true that for any organization with a relatively complex SAP landscape today (i.e. most large organizations that run SAP), the ECC to S/4HANA migration is going to take time and will likely not come cheap. However, with sufficient preparation, appropriate change management, and the right tools and resources, real benefits can be unlocked through a smoother, lower-risk transition.

Original Link

Simplifying Packaging Spring Boot 2 Applications Into Docker Images Using Google’s Jib

1. Introduction

Ever since I published Microservices using Spring Boot, Jersey, Swagger and Docker, I had entertained the idea of making the “Package the service into a Docker image” section its own blog post.

Back then, I used Spotify’s docker-maven-plugin, which required connecting to a Docker host. I also felt it would have been almost identical to the section mentioned.
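
For context on why Jib simplifies this: packaging reduces to a plugin entry in the POM, with no Dockerfile and no Docker host required. A minimal sketch (the image name is a placeholder and the version reflects the roughly 2018-era plugin; check the Jib docs for current coordinates):

```xml
<!-- pom.xml: build a container image straight from the Maven build -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>0.9.10</version>
  <configuration>
    <to>
      <image>registry.example.com/my-app</image>
    </to>
  </configuration>
</plugin>
```

With that in place, `mvn compile jib:build` pushes the image to the registry, and `mvn compile jib:dockerBuild` builds into a local Docker daemon instead.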

Original Link

Keys to Effective API Testing

API endpoints make websites work. Simply put, they are the conduits that data moves through. Login functionality? Frequently an API call for authentication. Click on a new section of a webpage? Often an API call for content. Clearly, APIs are a critically important part of any web application, and the way that we test these endpoints is incredibly important. At API Fortress, we like to maintain the following best practices for effective API testing.

Rule 1: Keep It DRY

DRY is an acronym for “Don’t Repeat Yourself.” This simple idea forms a core principle of good programming. When we are writing tests, even in API Fortress’ visual composer, we’re still programming and should make every effort to adhere to the principles of writing good code. Let’s say that I have an endpoint that provides user data. The relational database providing the user data has ten entries, and I’d like to write tests to validate the responses when each one of these entries is called. There’s no “All Users” route as our organization has no real business need for one. The only way to access the data in each of these endpoints is to send multiple calls, one for each entry in the database. How could we accomplish this without repeating code?
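
Outside of API Fortress’ visual composer, the same DRY idea in plain Java might look like the sketch below (using Java 11’s built-in HttpClient; the endpoint URL and the fixed ID range 1–10 are hypothetical): one parameterized loop instead of ten copy-pasted calls.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UserEndpointCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // One parameterized loop covers all ten user entries.
        for (int id = 1; id <= 10; id++) {
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://api.example.com/users/" + id)) // hypothetical route
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new AssertionError("User " + id + " returned " + response.statusCode());
            }
            // A fuller test would also validate the response body here.
        }
        System.out.println("All ten user endpoints responded with 200.");
    }
}
```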

Original Link

Stop Rerunning Your Tests

Tests are usually the longest running operation in your development process. Running them unnecessarily is the ultimate time waster. Gradle helps you avoid this cost with its build cache and incremental build features. It knows when any of your test inputs, like your code, your dependencies or system properties, have changed. If everything stays the same, Gradle will skip the test run, saving you a lot of time.
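
As a point of reference (this is standard Gradle configuration, not something quoted from the article), opting in to the local build cache is a one-line change:

```properties
# gradle.properties: let Gradle reuse outputs from previous builds
org.gradle.caching=true
```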

So you can imagine my desperation when I see snippets like this on StackOverflow:

Original Link

Certification of the Couchbase Autonomous Operator for K8s

The Couchbase Autonomous Operator enables you to run Couchbase deployments natively on Open Source Kubernetes or Enterprise Red Hat OpenShift Container Platform. I’m excited to announce the availability of Couchbase Autonomous Operator 1.0.0 today!

Running and managing a Couchbase cluster just got a lot easier with the introduction of the Couchbase Autonomous Operator for Kubernetes. Users can now deploy Couchbase on top of Kubernetes and have the Couchbase Autonomous Operator handle much of the cluster management, such as failure recovery and multidimensional scaling. However, users may feel a bit uncomfortable just sitting back and watching the Couchbase Autonomous Operator do its thing. To alleviate some of their worry, this three-part blog series will walk through the different ways the Quality Engineering team here at Couchbase gives our customers peace of mind when running Couchbase on Kubernetes.

Original Link

The Business Case for Unit Testing — Part 2

In the previous part, we learned valuable lessons on how often to push code into production and studied how Facebook handles this. Let’s continue, and get on top of business!

5. QA Has Time to Focus on Other Types of Tasks and Testing

Here’s another subtle benefit, and it has to do with how the QA group spends its time. Specifically, they can spend their time doing higher value things.

Original Link

How to Build a Culture of Continuous Testing in Your Organization

The trends in software development show that more and more companies are adopting CI/CD methodologies to deliver their software applications. We all know that the market demands quicker releases. The days of waiting months for a new release are gone. Software is now being released at record speeds! Adopting CI/CD does just that: it helps get your application out the door to the market as often as possible. However, one key aspect that seems to be overlooked is Continuous Testing. It’s great that CI/CD is getting software out quicker, but quality should not be sacrificed. To solve that, you have to test early and test often! Adding a culture of Continuous Testing to your model will provide the following benefits, because now you’re focused on testing from the beginning of your SDLC:

  • Faster Release Cycles
  • Better Code Quality
  • Better Test Coverage
  • Better Reliability

Now we know that to truly have a CI/CD methodology, you need continuous testing as well. But just like adopting CI/CD, continuous testing requires an organizational culture shift. So how do you build that culture? It’s a different mindset that relies heavily on automation: automated unit tests, automated functional and non-functional tests, automated regression tests, and automated deployments. Basically, anything that can be automated should be automated! That is the key principle of Continuous Testing: test from the beginning and automate as much as possible to ensure faster release cycles.

Original Link

Effective Test Automation in Platform Modernization

Enterprises are at a technology crossroads. The business need to transform towards a digital model in order to better serve (and compete for) the next generation of customers has driven the rise of coding, or, for most Independent Software Vendors (ISVs), the re-coding of software products. This process of rewriting code by leveraging new technologies with modern infrastructure, user interfaces, and integrations is Platform Modernization.

There is a drastic increase in expectations from the next generation of customers, who demand the delivery of new capabilities and features faster.

Original Link

Got a Plan for Digital Change With Digital Assurance?

Going by the recent news update, "Google’s next version of Android finally has a name: ‘Pie.’ It’s rolling out right now and is packed with new features, from extended battery life to new gesture navigation." This version has a collection of new animations and features that automate various functions. Along similar lines, device and application development teams are rapidly working towards enhancing their digital goals. Major brands and tech firms are leveraging new technologies and digital platforms to transform their brand strategies for better consumer outreach.

Social media and digital technologies such as Artificial Intelligence, Virtual Reality (VR), and Augmented Reality (AR) are increasingly influencing the branding strategies of many enterprises. For instance, international brands like IKEA are leveraging VR/AR to generate brand awareness in new markets. Digital experimentation is a trending area that many brands are exploring and in which they are finding a great deal of value.

Original Link

What Is the Importance of DevOps Certification?

While conducting recruitment drives, HR managers look for different characteristics in a prospective employee, with one and only one criterion in mind: how will this specific candidate add value to my enterprise, particularly in contrast with the others on the list? One thing that can surely tilt the scales in your favor is a good certification attained from a recognized institute. Let us talk about the importance of DevOps certification today. To do that, we will first see the list of important DevOps certifications one can pursue, and then we will look into the benefits they provide.

List of DevOps Certifications

There are several DevOps certification courses available, but the following ones are the world-recognized list of DevOps certifications.

DevOps Foundation® Certification

The DevOps Foundation course gives a basic understanding of key DevOps terms to guarantee everybody is speaking the same language of DevOps, and highlights the benefits of DevOps in supporting organizational success.

DevOps Leader (DOL)®

The DevOps Leader course is a one-of-a-kind and useful experience for participants who want to adopt a transformational leadership approach and have a substantial impact inside their enterprise by implementing DevOps practices.

DevSecOps Engineering (DSOE)℠

This course clarifies how DevOps security differs from other security approaches and provides the instruction needed to understand and apply data and security sciences.

Continuous Delivery Architecture (CDA)℠

This course is intended for participants who are engaged in the design, implementation, and management of DevOps deployment pipelines and toolchains that support Continuous Integration, Continuous Delivery, Continuous Testing, and possibly Continuous Deployment.

DevOps Test Engineering (DTE)®

This far-reaching course covers testing in a DevOps domain and spreads ideas such as the active use of test automation, testing earlier in the development cycle, and instilling testing skills in development, quality assurance, security, and operations teams.

Why Is DevOps Certification Important?

DevOps certification comes in handy in many places and you can benefit a lot from it. There are several reasons it is important.

Benefit Your Organization

By acquiring a DevOps certification, you can offer your organization plenty of quantifiable advantages. The DevOps ideology promotes increased cooperation and communication between operations and development teams. More code makes it into production because of a shorter development cycle. What used to take 3-6 months can now take just a couple of hours with DevOps practices.

Better Job Opportunities

DevOps is a relatively new idea in business, and an ever-increasing number of organizations are deploying DevOps practices. There is a shortage of certified experts who can successfully deliver DevOps skills to the organizations they are associated with. A DevOps accreditation will help you grow your mindset as an IT expert, and better job openings for your desired work profiles will definitely come your way.

Improved Skills & Knowledge

The DevOps ideology encourages a completely new way of thinking and decision-making. The business and technical benefits of DevOps are many and you can learn how to implement them in your organization. You learn to work in a team consisting of cross-functional team members: QA, developers, operation engineers, and business analysts.

Increased Production & Effectiveness

With a DevOps certification on your resume, your efficiency as an IT expert will increase. Under ordinary IT conditions, a considerable amount of time is squandered waiting for other people and other software. Everybody likes to be productive at work, and the time you squander waiting is certain to cause you some disappointment. With DevOps, you can dispose of this unsatisfying aspect of your responsibilities and invest the energy in adding value to your organization and your staff.

Increased Salary

As per a recent survey, DevOps-certified experts are among the most generously compensated professionals in the IT business. The market demand for them is expanding quickly with increased DevOps adoption worldwide, and this pattern does not seem likely to change at any point in the near future.

Rejuvenates the Employees

DevOps certification helps your organization as your employees acquire new specialized practices, and it also aids them in learning new things every day while pursuing various DevOps certifications. It gives a genuinely needed shake-up to the working life of employees. With new workplaces (shared workplaces as opposed to separate ones for every division), new people communicate, learn, and perhaps add a few new approaches to tasks that were being performed in the customary way. DevOps resembles a much-needed breath of fresh air in the IT business.

Conclusion

DevOps is steadily acquiring a good share of the IT industry. The demand for professionals who are skilled in DevOps is at an all-time high, and this pattern is likely to continue for a while. The career path and future scope look very bright if you attain the necessary DevOps certifications. Without certifications, recruiters may not take your skills at face value; the certification acts as a testimonial of your skills, and that is why it is very important for you to acquire the necessary certifications.

Original Link

Testing in Production the Netflix Way [Video]

In June we focused our Test in Production Meetup around chaos engineering. Nora Jones, Senior Software Engineer at Netflix, kicked off the evening with a talk about how Netflix tests in production.

“Chaos engineering…is the discipline of experimenting on production to find vulnerabilities in the system before they render it unusable for your customers. We do this at Netflix through a tool that we call ChAP…[It] can catch vulnerabilities, and allows users to inject failures into services and prod that validate their assumptions about those services before they become full-blown outages.”

Watch her talk below to learn more about how her team helps engineers across Netflix to safely test in production and proactively catch vulnerabilities within their systems. If you’re interested in joining us at a future Meetup, you can sign up here.

Transcript

I’m super excited to be here today. Netflix is a huge fan of testing in production. We do it through chaos engineering, and we’ve recently renamed our team to Resilience Engineering because, while we do chaos engineering still, chaos engineering is one means to an end to get you to that overall resilience story. I’m going to talk a little bit about that today.

Our goal as a team is to improve availability by proactively finding vulnerabilities in services, and we do that by experimenting on the production system. Our team has an active belief that there is a certain class of vulnerabilities and issues that you can only find with live production traffic. I’m going to talk to you a little bit about how we do that today.

First and foremost, our focuses with testing in production are safety and monitoring. You really can’t have great testing in production unless you have these things in place. And testing in production can seem really scary, and if it does seem scary in your company, you should listen to that voice and figure out why it seems scary. It might be because you don’t have a good safety story. It might be because you don’t have a good observability story. We really focus on these two worlds within Netflix and within our tools.

To define chaos engineering just in a simple sentence, it’s the discipline of experimenting on production to find vulnerabilities in the system before they render it unusable for your customers. We do this at Netflix through a tool that we call ChAP, which stands for Chaos Automation Platform. ChAP can catch vulnerabilities, and it allows users to inject failures into services and prod that validate their assumptions about those services before they become full-blown outages.

I’m going to take you through how it works at a high level. This is a hypothetical set of microservice dependencies. There’s a proxy. It sends requests to service A, which fans out to services B, C, and D, and then there’s also a persistence layer. Service D talks to Cassandra, and then service B talks to a cache.

I went ahead and condensed this, because it’s about to get busy in a second. We want to see if service B is resilient to the failure of the cache. The user goes into the ChAP interface, and they select service B as the service that will observe the failures, and the cache as the service that fails. ChAP will actually go ahead and clone service B into two replicas. We refer to them as the control and the experiment clusters, and it kind of works like A/B testing or like a sticky canary. These are much smaller in size than service B. We only route a very, very small percentage of customers into these clusters because obviously we want to contain the blast radius. We calculate that percentage based on the number of users currently streaming, currently using the service.

It will then instruct our failure injection testing to tag these requests that match our criteria. It does this by adding information to the header of that request. It creates two sets of tags. One set will have instructions to both fail and be routed to the canary, and then the other will have instructions just to be routed to the control.

When the RPC client in service A sees in the instructions that it needs to route a request, it will actually send it to the control or the experiment cluster. And then once failure injection testing in the RPC layer of the experiment cluster sees that the request has been tagged for failure, it will return the failed response. The experiment cluster will see that as a failed response from the cache and will execute the code to handle the failure. We’re doing this with the assumption that the service is resilient to failure, right? But what we see sometimes is that that’s not always the case. From the point of view of service A, it looks like everything is actually behaving normally.

How do we monitor this while these chaos experiments are running? Because it has the potential to go very poorly. When Netflix started our chaos engineering story, we didn’t have good gates in place. We would run a failure experiment, cross our fingers, and then all sit in a war room watching the graphs and making sure that nothing actually went wrong. Now, we have much more of a safety focus.

We look at a lot of our key business metrics at Netflix. One of our key business metrics is what we call SPS, or stream starts per second. If you think about what is the most important thing to the business of Netflix, it’s that a customer can watch Friends or The Office or whatever they want to watch whenever they want to watch it.

What you see in these graphs here are an actual experiment, and it shows the SPS difference between the experiment and control during a chaos experiment. You can see here that these are deviating a lot from each other, which shouldn’t be the case because there’s the same percentage of traffic routed to both clusters.

Because of that, the experiment will use automated canary analysis and see, wow, these deviated really far from each other; I’m going to short-circuit the experiment. I’m going to stop failing these requests for the customer, and they’ll have a normal experience. From a customer perspective, it’s seen more as a blip when something like this happens.

We have a bunch of other protections in place as well. We limit the amount of traffic that’s impacted in each region so we’re not just only doing experiments in U.S. West 2. We’re doing them all over the place and limiting the amount of experiments that can run in a region at a time. We’re only running during business hours so we’re not paging engineers and waking them up if something goes wrong. If a test fails, it can actually not be automatically run again or picked up by anyone until someone actually explicitly manually resolves it and acknowledges hey, I know this failed, but I fixed whatever needed to be fixed.

We also have the ability to apply custom fast properties to clusters, which is helpful if your service is sharded, which a lot of services are at Netflix. Additionally, and I don’t have this as a bullet point, we also have the ability to fail based on device. If we’re assuming that Apple or a certain type of television is having a bunch of issues, we can limit it to that device specifically and see if that issue is widespread across that device.

ChAP has found a lot of vulnerabilities. Here’s some examples. This is one of my favorite ones. The user says, “We ran a ChAP experiment which verifies the service’s fallback path works, which was crucial for our availability, and it successfully caught an issue in the fallback path, and the issue was resolved before it resulted in an availability incident.” This is a really interesting one, because this fallback path wasn’t getting executed a lot, so the user didn’t actually know if it was working properly, and we were able to simulate it. We were able to actually make it fail and see if it went to the fallback path and whether the fallback path worked properly. In this case, the user thought their service was noncritical, or tier two, or whatever you label it as, but really it actually was a critical service.

Here’s another example. We ran an experiment to reproduce a signup flow fallback issue that happened intermittently at night with certain deploys. Something kind of weird was happening with their service. We were able to reproduce the issue by injecting 500 milliseconds of latency. By doing the experiment, we were able to find the issues in the log file that was uploaded to the Big Data Portal. This helped build context into why the signup fallback experience was served during certain pushes. That fallback experience kept happening, but these users didn’t know why. And they actually ran a ChAP experiment to see when it was happening and to see why it was happening.

To set up ChAP experiments, there’s a lot of things the user needs to go through. They need to figure out what injection points they can use. Teams had to decide if they wanted failure or latency. These are all of our injection points: you can fail Cassandra, Hystrix (which is our fallback layer), RPC Service, RPC Client, S3, SQS, or our cache, or you can add latency. Or you can add both. And you can actually come up with combos of different experiments.

What would happen is we would meet with service teams and we’d sit in a room together, and we’d try to come up with a good experiment. It would take a really long time. When setting up the experiment, you also have to decide your ACA configurations, or automatic canary configurations.

We had some canned ACAs set up. We had a ChAP SPS one. We had one that looked at system metrics. We had one that looked at RPS successes and failures. We had one that looked at whether our service was actually working properly and injecting failures, and we learned that experiment creation can be really, really time-consuming, and it was. Not a lot of experiments were getting created. It was hard for a human to actually hold all the things in their head that made a good experiment. We decided to automate some of this from ChAP. We were looking at things like, who was calling who? We were looking at timeout files. We were looking at retries, and we figured out that all of that information was in a lot of different places. We decided to aggregate it.

We zoomed into ChAP, and we got cute and we gave it a monocle, and the Monocle provides crucial optics on services. This is what Monocle looks like. It has the ability for someone to look up their app and their cluster and they can see all this information in one place. Each row represents a dependency, and this dependency is what feeds into chaos experiments.

We were using this to come up with experiments, but what we didn’t realize was that this information was actually useful just to have in one place as well, so that was an interesting side effect. Users can come here and actually see if there are anti-patterns associated with their service, like if they had a dependency that was supposed to be noncritical but didn’t have a fallback; obviously, it was critical now. People could see timeout discrepancies. People could see retry discrepancies. We used this information to score a certain type of experiment’s criticality and fed that into an algorithm that determined prioritization.

Each row represents a dependency, and they can actually expand the rows. Here’s an interesting example. That blue line represents someone’s timeout, and the purple line represents how much time it was actually taking most of the time. You can see it is very, very far away from the timeout. But a lot of this information wasn’t readily accessible. What would happen if we did a chaos experiment just under the timeout? You know? Is that going to pass? It never executes that high. It’s an interesting question. We’re trying to provide this level of detail to users before these chaos experiments get run, to give them the opportunity to say, “Wait, this doesn’t look right.”

I’m going to play a little game. I know a lot of you don’t have context on the Netflix ecosystem, but there’s a vulnerability in this service, and I want to see if you can spot it. Take a second to look at it. To give you some context, sample remote Hystrix command wraps both the sample-rest-client and the sample-rest-client.GET. The Hystrix timeout is set to 500 milliseconds. Sample-rest-client.GET has a timeout of 200 with one retry, and this is fine because it’s a total of 400 milliseconds with exponential backoff, which is within that Hystrix limit. The sample retry client has timeouts of 100 and 600 with one retry.

In this case, the retry might not have a chance to complete given the surrounding Hystrix wrapper timeout, which means that Hystrix abandons the request before the RPC has a chance to return. That’s where the vulnerability lies. We’re actually providing this information to users, and what’s interesting is a lot of this logic lies in different places. They weren’t able to have this level of insight before. Those were okay, and this is where the vulnerability lies.

Why did this happen? It’s easy for a team to go in and look at their config file and just change this around, right? But we want to figure out why this happened. We can change the timeout, but who’s to say this won’t happen again? We also help with figuring out why these things happen. Engineers weren’t making bad choices; it was just a lot of things to update at once. That’s something to be learned as well.

We use Monocle for automatic experiment creation as well. A user creates an experiment based on many types of inputs. We take all these things, and we’re working to automate the creation and running of these experiments so that users don’t have to. We’re automatically creating and prioritizing latency, failure, and latency-causing-failure RPC and Hystrix experiments. ACA configs are added by default, along with the deviation configurations. We have SPS, system metrics, and request statistics, and experiments are automatically run as well. Prioritized experiments are also created. I’ll go through the algorithm for that at a high level. We use an RPS stats range bucket. We use the number of retries and the number of Hystrix commands associated with a dependency. These are all weighted appropriately.

Something else we’ve also taken into account is the number of commands without fallbacks and any curated impacts that a customer adds to their dependency. A curated impact is something like: this has a known impact on login; this has a known impact on signup; this has a known impact on SPS. And we actually weigh these negatively and don’t run the experiments if the score is negative. Test cases are then ranked and run according to their criticality score. The higher the score, the sooner it’s run and the more often it’s run.

Ironically enough, Monocle has given us some feedback that allows us to run fewer experiments in production, right? It’s ended up as a feedback loop, because we’ve been running so many experiments that we’ve seen patterns between them, where we can look at certain configuration files now, see certain anti-patterns, and know that that’s actually going to cause a failure, whereas we didn’t know that information before.

It has led to new safety measures. Before, if an experiment failed, it needed to be marked as resolved; currently, it needs to be marked as resolved before it can run again. But now we can explicitly add curated impacts to a dependency. A user can go into their Monocle and actually add: this has a known login impact; this has a known SPS impact. And we’re working on a feedback loop where, when an experiment fails, it will add a curated impact as well. The runner will not run experiments with known impacts.

In summary, ChAP’s Monocle is crucial optics in one place, automatically generated experiments, automatically prioritized experiments, and finding vulnerabilities before they become full-blown outages. If I can leave you with one tangent, one side piece of advice, it’s to remember why you’re doing chaos experiments and why you’re testing in production. It’s to understand how customers are using your service and not lose sight of them. You want them to have the best experience possible. So, monitoring and safety are of utmost importance in these situations. Like at Netflix, not being able to stream a video. Thank you. Appreciate it.

Original Link

Continuous Testing Live: Automate Everything in DevOps? Not So Fast

“If testers are curious enough and they get in there and poke around and not just follow, in this case, the software testing test case, and they see connections, see different scenarios and things that have been come up with, that helps them know the product more, which makes them much better testers, too.” -Adam Bertram

On this week’s episode of Continuous Testing Live, Ingo Philipp and Adam “The Automator” Bertram share their thoughts on the increasing presence of test automation and DevOps in modern software delivery lifecycles. While there are certainly cries to “automate everything” to get the most ROI, there’s one software testing practice that shouldn’t be automated or ignored.

Subscribe today to Continuous Testing Live so you never miss a single episode! Now available at iTunes, Google Play, and SoundCloud.

Noel: I wanted to kick this conversation off by filling in the audience on how it came to be. I thought it was a cool story as to how we all got introduced to each other. Ingo was going to be presenting a webinar of ours about manual testing’s role in DevOps. And then Adam Bertram, who is maybe more often known as “Adam the Automator,” saw a tweet Tricentis sent out about this webinar and raised a point about “Why in the world would there still be any need for manual testing in DevOps when the goal of DevOps is to automate everything?”

I immediately tried to figure out where Adam was coming from, and I started to wonder if it was a case of different teams, like testers and developers, working in silos and having so much work to do that you don’t always know what another department is working on, or maybe even what they call something. The confusion in this example came from us having different understandings as to what manual testing is, when it’s a good idea, and when it’s not. It created this whole conversation the three of us had before this podcast, where we actually didn’t end up having anything to really disagree about. We all just learned where the other side was coming from and why they felt the way they did.

I’d love to start with you, Adam: where, at least before this conversation, did you associate manual testing, and what did you think of as being included in manual tests?

Adam: I’ve always said the M word is the bad word. I call it the M word because I’m so focused on automation. The first thing I saw there, I saw manual testing like, “Oh no, no, no, no.” My gut reaction is, “No, no, no, no, we don’t want that.” I originally just balked at that because it’s like, “Well everything can be automated.”

My initial reaction came from the manual testing that I have seen. I’m an automation engineer. I’m on more of the development team, and I’ve worked with QA and QE teams and lots of testers over the years. My picture of manual testing was: a development team ships a product and deploys to a test environment, and then QA goes in and manually brings up a browser, clicks this, clicks this, moves the mouse around, clicks this, notices that this is doing what it was supposed to do, moves the mouse around again, does this. It just takes forever and a day, because they have to click each UI element and manually go through it. A human has to be physically present, clicking and typing things and filling in forms to actually make that work.

Through my experience, I know I’m not a software testing guru by any means. I’m on the automation side, but I’ve seen people use tools like Selenium and even scripts and things to automatically fill in forms, grab input, automatically put in service tickets and things like that to give feedback to the development team during that. So, my first impression of manual testing was, “Oh, no, no, no. There’s no way. I do not want anybody … There’s no reason for anybody to go in and just manually start clicking around the mouse and typing in fields and stuff.”

I’ve seen some people even create a Word document of, “Okay, I did this, then I did this,” like a journal thing about, “Oh that’s definitely not going to work.” That was my initial take on what you meant by manual testing.

Ingo: Absolutely. I just want to highlight the way this conversation started. You just scratched the surface, Noel. For me, it’s just a funny fact, because when we were tweeting about the role of manual testing in DevOps, when we were promoting our webinar, Adam, you immediately replied, “This is so wrong on so many levels,” right?

Adam: Yeah, I was very adamant about it.

Ingo: The good thing about this is that you didn’t just state this without any further reasoning. You really provided arguments. In that case, your … How should I say? Your skepticism definitely goes hand-in-hand with rationality. I think that’s crucial since the discussion we had afterward was then really based on healthy doubts, on thoughtful inquiries I would say. It wasn’t based on strict denial, and that just said one thing to me. Your skepticism is definitely strong, Adam, but your curiosity is even stronger. That’s the reason why I’m so happy to have you on the show.

Adam: Yeah, that’s a really good point. One thing I really hate is for people to say the popular quote, “It’s the way things have always been done.” They deny any other way of doing things, and that just stunts growth and stunts your ability to learn more. I hate that approach. You’re right. I’m very passionate and adamant about things I feel strongly against, but you’re right, I never really thought about it that way. Curiosity really trumps everything in my book.

Ingo: Absolutely. To me, you expressed two ingredients a great tester should have, and that was skepticism on the one hand and curiosity on the other hand.

That is what makes a great tester for me, what makes a tester hard to fool at the end of the day. Having the ability to doubt and the ability to ask, these are two attributes that must be, I should say, at the fundamental core of every tester’s soul. Because when you doubt and ask, then it gets a little harder to just believe. You really brought that up, and for that, I just want to say thank you.

Adam: Good, no problem. Something I know would also be beneficial to a tester is that you should never assume anything. That’s another thing I hate: “Well, I assumed that this functionality was working.” As a good tester, you can’t assume anything. You have to just go through there and verify yourself, the whole “trust but verify” statement. “I trust you, but I’m going to verify what you’re saying is actually true.”

Ingo: Yeah, absolutely, absolutely. Can’t agree more with that.

Noel: On deciding which testing should stay a manual process and which testing not just can be automated, but should be automated: Adam, I can see your point of view on what things you would be looking to automate, like some of the things you just described there. Then Ingo, as testers are looking for what needs to be automated and what should stay manual, how do those decisions come to be? What goes into the decision of “We’ve decided we’re going to automate this set of tests or this suite of tests?” How do you decide to do that?

Ingo: Well, in our projects, these decisions get made based on risk. For example, when a new user story gets into the game, we start modeling these bits of functionality as user scenarios. While doing so, we ask ourselves: what are the scenarios that contribute most to the overall risk? Since risk in its most fundamental form just quantifies the potential of losing something of value, we try to identify those scenarios that have the highest potential of losing something of value, for example, from a financial perspective.

Now, this means that we try to identify the user scenarios that will most probably be carried out most often in the production system by certain stakeholders, by certain end users. The usage frequency of these scenarios is one dimension of our risk assessment, and so it’s one basis for making this decision. The other dimension is the potential damage. We don’t only ask ourselves how often a certain user scenario will be processed in production; we also want to know what the potential damage would be if this user scenario didn’t work. The potential damage is the second dimension of our risk assessment.

Once we have identified these user scenarios and estimated their risk contribution, we move on and create test cases to model the user scenarios that contribute most to the overall risk. Then we automate these test cases and embed them into our automated CI and CD pipeline. Of course, this decision is also based on the effort it takes to automate these test cases. What you’re trying to do is find the right balance between the effort it takes to automate certain scenarios and the value, in terms of risk mitigation, that we create through test automation. That’s, in rough terms, how we decide what to automate and what not to automate. Does that resonate with you, Adam?
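
As a rough illustration of the two-dimensional assessment Ingo describes (usage frequency times potential damage, weighed against automation effort), here is a minimal sketch; the scenarios, scales, and threshold are all made-up assumptions, not anything from Tricentis’ actual process.

```python
# Illustrative sketch of the two-dimensional risk assessment described above:
# risk = usage frequency x potential damage, weighed against automation effort.
# The scenarios, 1-5 scales, and the 2.0 threshold are made-up assumptions.

scenarios = [
    # (name, usage frequency 1-5, potential damage 1-5, automation effort 1-5)
    ("checkout", 5, 5, 3),
    ("password reset", 3, 4, 2),
    ("update profile picture", 2, 1, 4),
]

def risk(frequency, damage):
    return frequency * damage

for name, freq, damage, effort in sorted(
        scenarios, key=lambda s: risk(s[1], s[2]), reverse=True):
    value_per_effort = risk(freq, damage) / effort  # risk mitigated per unit effort
    decision = "automate" if value_per_effort >= 2.0 else "keep manual/exploratory"
    print(f"{name}: risk={risk(freq, damage)}, "
          f"value-per-effort={value_per_effort:.1f} -> {decision}")
```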

Adam: Yeah, I agree with you completely. The risk thing is something I’ve really never thought about. I decide whether to automate something from more of a general automation perspective, as part of operations and IT in general, because I’m not necessarily in the testing realm. I automate just about everything around me, and I think the best task to automate is the task that’s going to be repeated the most often, which, in this case, is testing. You’re not going to just run a test case once. You’re going to run it for every build that comes across that you have to test. I think in software testing, this is what makes automation so key, because you have a simple task as part of that test case that you have to run for every single build to really get the quality up.

Second, like you were saying, there’s the ROI question of “Is it worth it to automate something?” In the software testing realm, I think the ROI is great because you’re going to have to perform that same test case for every single build, like I said. At the same time, like you said, there’s going to be huge risk, because you perform that testing upfront in the test environment, and you know that the software is eventually going to go to production. So that risk is going to be a lot higher.

Plus, another factor in deciding if you want to go with automation is not only the repetitiveness but, like you said, how important is it? How much time does it take for you to automate it, taking the time upfront to then save that time down the road?

Talking about any kind of general automation, even in manufacturing, it always takes a huge chunk of time upfront to then be able to save that time down the road. Does it make sense to spend those resources and that time upfront to build that kind of automation framework, knowing you’re eventually going to get it back in the end? That doesn’t work for some teams, because some teams are so strapped for time. They have so few people on the testing team that, I hate to say it, they just don’t have time to automate. They have to keep the lights on. They have to just keep the pipeline flowing, and they simply can’t carve out the resources to automate.

It’s a terrible situation that people get into there because they’re never going to catch up.

Ingo: Yeah, absolutely. I wouldn’t call it a terrible situation. I would call it a challenging situation.

Adam: There you go. You’re more politically correct.

Ingo: You’re right, Adam. I absolutely agree because the time needed for testing is always infinitely larger than the time available. As you’ve mentioned, you will never have the time, and you will never have the resources, and also the budget to test everything full exhaustedly down to the very last detail. What you can do is you can always test as much as possible.

The only decision I would say we can make at the end of the day is, “How much leftover risk are we willing to accept when we release our software products?” That’s what we do by applying such a “risk-based testing” approach. What we do is align the risk objectives of our stakeholders with our testing, but also with our test automation activities. I think that’s the big, big benefit that such a risk-based testing approach brings to test automation itself.

Adam: I like that. I like that you base what to automate on risk. If something is a really, really high risk, that’s something we have to automate right now, because automation is going to naturally reduce human error; we don’t want to increase the risk by having 10 people with their hands in the cookie jar trying to do whatever they are going to do. I really like that risk-based approach.

Ingo: Yeah, absolutely. The bottom line here is that we don’t just want to do the things right in terms of just doing them faster, in terms of doing more and more automation. During product testing, what we want to do is we want to do the right things right at the same time. This means we don’t just want to move fast. We want to move fast in the right direction, to keep up with the rapid pace of software development. That’s how we add quality to test automation. That’s how we add quality to speed. In simple terms, that is how we translate quality and speed into reality during every single iteration. That’s the big secret.

Adam: That makes complete sense to me. You have a really good point.

Noel: Let’s talk about ROI a little bit. One of my questions has been answered really well already, but we talked about the fear of disrupting the keep-the-lights-on kind of work by changing the way things have always been done. Sometimes there’s some fear of change in that. But when looking at what to automate, you look at the risk involved and determine that you’re going to be able to automate something, and you want to know what you’re going to get out of it. How do you set realistic goals for ROI once you’ve evaluated that you can afford what it’s going to take to automate something? Where are some of those areas where you’re going to be able to not just look for a return on investment, but to measure it as well?

Adam: I can take this first. I think that automation is probably the easiest area to measure ROI because … Let me take that back. I’m going off on a tangent here … This reminds me of something that was recently in the news about Tesla, and I don’t know if you guys saw that where Elon Musk said humans are underrated and automation is overrated or something because they couldn’t release their Tesla Model 3 models fast enough.

Noel: Musk said they tried to “automate too much,” or something like that.

Adam: Yeah, they tried to automate too much. I saw a few articles that said Tesla differed from traditional car manufacturers because they tried to automate everything the traditional car manufacturers do, but also final assembly. The problem with that, which is what Elon Musk was alluding to, was they didn’t seem to have the process down ahead of time before they automated. Toyota and the other Japanese car manufacturers, another area I’m interested in a lot, take the approach of, “I need to understand the process first. Manually go through and figure out where all the pieces go, figure out the whole process manually, build an automation document or an idea of how this process is going to go, and then introduce automation.”

I think Tesla went the other way and said, “Well, no, no, no. We don’t need all the manual stuff.” They just went whole-hog right into automation without actually understanding the process. So when he said they automated too much stuff, that’s a fallacy to me. In my opinion, you can’t automate too much stuff. You automated it too quickly; you didn’t understand the process ahead of time before you actually introduced automation.

Yeah, ROI I think is the easiest to measure. The reason I went on that tangent is because they didn’t have the manual process. They didn’t know what it took to do it manually, so they didn’t know what resources were needed ahead of time to produce a Model 3. They just threw in robots all of a sudden and tried to automate a process that they didn’t know how to do manually with humans. If they had done that, they would have been able to measure a metric: how long does it take for a Model 3 to go from the start of the line to final assembly?

If they had had humans, they would have been able to say, “Well, it looks like after 100 cars, here’s the average time it takes. We have five humans working 40 hours a week. Each of these humans is making 20, 30, whatever it is, dollars an hour. You add all those up, and that’s what it took these humans to do this.” Then you get into the labor expenses and how many labor resources that takes. In my opinion, it’s really easy to measure ROI for automation if you know how to do something manually and if you have actually tracked how long it takes manually. Because then you can say, “Well, it takes 5 humans 40 hours a week, 200 working hours, to do this, and I pay my employees X dollars an hour, so it costs me X dollars in labor to do this.” Then you project that over time and say, “Well, after six months, it’s going to be X dollars. I know for a fact it’s going to be this, due to our current processes.”

Then you can go to robots, or whatever other way you’re going to introduce automation, and say, “Well, it costs me $10,000 over six months to do this manually, and it’s going to take a $100,000 investment to automate this process upfront. But once it’s done, in the best-case scenario where there are no humans involved in the process, I can completely cut out that $10,000, and it’s only going to cost me $1,000.” I think it’s really easy to measure ROI if you know how to do things manually and if you track how that manual process goes in the first place.
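
Using Adam’s own hypothetical figures, the break-even arithmetic looks like this; nothing here is real data, it just makes his argument concrete.

```python
# Break-even arithmetic using Adam's hypothetical figures: $10,000 per six
# months for the manual process, a $100,000 upfront automation investment,
# and $1,000 per six months once automated. None of this is real data.

manual_cost_per_period = 10_000     # dollars per six-month period, manual
automation_investment = 100_000     # one-time upfront cost
automated_cost_per_period = 1_000   # dollars per six-month period, automated

savings_per_period = manual_cost_per_period - automated_cost_per_period
periods_to_break_even = automation_investment / savings_per_period

print(f"Savings per period: ${savings_per_period:,}")
print(f"Break-even after {periods_to_break_even:.1f} periods "
      f"(~{periods_to_break_even / 2:.1f} years)")
# -> about 11.1 six-month periods, i.e. roughly 5.6 years at these rates
```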

Noel: That makes me think that maybe Elon should have said, “We poorly automated too much.”

Adam: There you go.

Noel: And not just blaming it on a blanket automation.

Adam: Correct.

Ingo: I can give you a concrete example, because about one and a half years ago we started a project in the banking sector. It was all about securities trading. We also calculated the ROI of test automation there. I just want to quickly walk you through it, because I think it’s really impressive how fast test automation can pay off. The project involved about 11 manual testers. They had to execute and maintain about 5000 regression test cases, scattered across 11 different technologies and platforms, so quite expensive test cases. It took those testers about 10 weeks just to execute that set of test cases.

Now, after we optimized that test set based on our risk-based testing approach, and applied structured and methodical test design, we created a solid basis for test automation. Through test automation, we were able to bring the entire regression test time down to 8 hours, and that’s huge. We managed that in about three months with just 2 people, meaning 2 manual testers: from 10 weeks of minimal regression time down to 8 hours, by 2 people. When you calculate the return on investment there, you will see that the entire investment into automation had already paid off after the very first regression run. From my point of view, it’s pretty impressive how fast that can be.
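
To make that claim concrete, here is a rough back-of-the-envelope version of Ingo’s example; the team sizes and durations come from his anecdote, while the daily rate is an assumed placeholder he never states.

```python
# Back-of-the-envelope version of the banking example. Team sizes and
# durations come from the anecdote above; the daily rate is an assumed
# placeholder, since no rate is given.

DAILY_RATE = 500        # hypothetical cost per tester per working day
DAYS_PER_WEEK = 5

# Before: 11 manual testers, about 10 weeks per full regression run.
manual_run_cost = 11 * 10 * DAYS_PER_WEEK * DAILY_RATE

# Investment: 2 people for about 3 months (roughly 13 weeks) to automate.
automation_investment = 2 * 13 * DAYS_PER_WEEK * DAILY_RATE

print(f"One manual regression run: ${manual_run_cost:,}")        # $275,000
print(f"Automation investment:     ${automation_investment:,}")  # $65,000
# At any plausible rate the investment costs less than a single 10-week
# manual run, which is why it pays off after the very first automated run.
```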

Adam: I agree. That’s one case where I think automation is a no-brainer. If the process is done often enough, and you’re able to still keep the lights on and keep doing things like you’ve always done, but automate at the same time, it’s a no-brainer to me.

Ingo: Yeah, absolutely. It’s not just that you can test your product faster. It’s more about testing your product more frequently, so that you have multiple feedback loops. When we automate, as mentioned, we apply risk-based testing and focus on the main functionality, the basic functionality. Usually, it turns out that with just a low number of test cases, you can already do a lot. That means you can already cover a lot of business risk. The savings you get, including the cost savings, are just dramatic, and that is what you can achieve in a couple of days, or a couple of weeks I would say.

Noel: We talked a little bit about how we decide or how developers or how testers decide what to automate by looking at risk and looking at how hard is it going to be to get this up and running, how hard is it going to be to maintain, things like that. Adam, you had written an article for CIO.com talking about DevOps and saying that, “For a deployment pipeline, it should integrate continuous integration, continuous development, continuous testing and continuous deployment into a single entity,” which sounds wonderful and also sounds really difficult.

As you’re looking at things like, “What’s going to save us some time? What’s going to free up some hours here?”, where are you then able to look at the bigger picture with a deployment pipeline like that? Let’s assume someone’s got none of this in place, yet they know they need it, and they’re trying to build something like a deployment pipeline, not just save some hours on testing or get rid of a manual task like creating a Word doc. Where do the decisions get made for automation at that level, when you’ve got a goal that big of getting to what some would call real DevOps?

Adam: A goal that big, as you said, is a big undertaking, especially for an organization that doesn’t have anything to begin with. I would recommend starting with something that seems so obvious, yet so many companies don’t have it. Do you know what you’re doing today? Do you have any documentation? Can you hand over your current workflow to somebody else who has no idea what you’re doing, and can they do it from start to finish exactly the same way you did manually?

That’s definitely the very first step: understanding what you’re doing today so you can offload it to somebody else. We do that. In my current position, we have various workflows in place where we don’t have good documentation. They depend on me, or on somebody else, a specific subject matter expert.

One of our roles is trying to extract all the knowledge out of people’s heads and put it into some documentation, some portal or a Word document or something, to hand over to somebody overseas and say, “Here, we’ve done all of the design and architecture, and we’ve boiled down all this knowledge in my head about how to deploy this software. Now all you have to do is read this Word document, follow steps one, two, three, four, five, and be able to replicate it.” By far and away, that is the very first step. Just like with Elon Musk, don’t try to automate right away and build this build pipeline before you understand your current process.

Then once you do that, you go to the next phase: once you understand the core process, you can start building some automation around it. We’re not even talking about the pipeline yet. Obviously, I’m sure you have source control, and I’m sure your developer checks something into a repo somewhere, and then maybe they send an email to the testers and say, “Here, can you test this thing?”

Your next step is to link the two. You understand what you’re doing and the tester understands what they’re doing, so first, you link the two. You go from no automation to: you check code in, and the build tool automatically, maybe even as an interim step, sends that email. It’s just baby steps. The build tool says, “Hey, this developer checked this piece of code in. Now you can start testing,” and sends all the parameters of what they need to do. That’s the next step. And maybe then you build the automation around the test case they’re working with, and have the testers execute the test manually once they get the notification.

Once they do that, you go to the next step and have the build tool automate it. At that point, you’ve got the code being checked in and the build being run; that was probably another step. You go from a manual build to a continuous build, and then you go from manual testing to self-triggered automated testing to your build tool triggering your testing. That means you go from the source check-in to the automated build to the automated test.

Then the final step is continuous deployment. At that point you’re doing continuous integration. Then, if you want to, you go all the way to continuous deployment, out to production, where you have the whole build pipeline automated. It’s all about first understanding what you’re doing, then automating each piece, from the development team to QA to operations to the deployment, and getting everything flowing into production.

Eventually you’ll get to the point if you keep doing that and building, building, building upon that framework, you will eventually have a pipeline that is the holy grail for many DevOps organizations where a developer checks the code in, a build is automatically generated, goes through the whole process, the testing is done, the unit testing is done, the integration tests, the acceptance tests, all this testing is done. Then it gets put into production and then it gets tested again, and then at that time, that’s how high-performance organizations like Facebook and Netflix and Etsy and all these web platforms are able to just release hundreds of times a day to really give value to customers as soon as possible.
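
The progression Adam describes can be summed up as a chain of stages that each replace a manual hand-off. Here is a toy sketch in Python with placeholder commands; a real team would express the same stages in their CI server’s own configuration (Jenkins, Bamboo, etc.) rather than a script like this.

```python
# Toy sketch of the staged pipeline described above, as plain Python steps.
# A real team would express these stages in their CI server's configuration
# (Jenkins, Bamboo, etc.); the make targets here are placeholders.

import subprocess

def run(cmd):
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

def on_commit():
    run("make build")             # step 1: every check-in triggers a build
    run("make unit-test")         # step 2: automated tests replace the
    run("make integration-test")  #         "email the testers" hand-off
    run("make acceptance-test")
    run("make deploy")            # step 3: continuous deployment, added last,
                                  #         once the earlier stages are trusted

if __name__ == "__main__":
    on_commit()  # in practice this is invoked by a source-control webhook
```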

Ingo: Exactly, Adam. One common mistake we see when engineering teams go for continuous delivery is that they fail to treat architecture as a real engineering practice. I really would like to know how you see that, because when you put an architecture in place, that architecture will obviously change as your product evolves and as your services evolve. The architecture will also change as your business plan changes. It’s important, at least from my perspective, that architects are always involved in your continuous delivery process.

You won’t get it right the first time. This is just the nature of those complex systems, as you have mentioned, so the continuous delivery pipeline is useful because it allows you to validate your architecture early on, by constantly running your performance tests and your availability tests. This then also allows you to make changes to your architecture as soon as possible, so that you can make sure that you build the right product and the right architecture.

In my point of view, you should validate and refine your architecture from early on, because those changes are expensive to make late. You want to make them early, when they are cheap to make, and this is what most companies, at least from my naïve perspective, all too often do not consider in the first place. How do you see that?

Adam: Let me explain it in a little bit of a different way. I am more of a fan of starting from nothing and delivering out to production as soon as absolutely possible. If somebody is just starting out and wants to build the pipeline, in my opinion, architecture shouldn’t even be a word that you’re talking about. It’s all about: how do I just get something out there? Customers are asking for this feature, or we’re starting with this new software.

Of course, the architecture and design component of the software development process needs to happen when you’re designing a new piece of software or you need to build a huge feature. But in my opinion, one of the big advantages of having a build pipeline is being able to not necessarily worry about the overall architecture of a component or the software system, and instead worry about just getting something out there as soon as possible.

I think with the argument around waterfall versus agile, a lot of people are going towards agile and DevOps now because they were spending way too much time on the planning, design, and architecture discussions ahead of time, when they could just say, “Let’s just get work done. Get those features out there, get that new software out there as soon as possible, and get feedback from the community.” Then, with that feedback, your architecture plan, if you will, in a rough sense will evolve as you go. Obviously, there’s going to be some planning to know what you need to do, but in my opinion, that is trumped by just getting something out there as fast as possible and getting feedback on it.

Then once you have that feedback loop going, the customers provide you with feedback and say, “Well, I didn’t like this component.” Then you start your rough overall architectural diagram from there. It’s the whole MVP thing, where you just put something out there as soon as possible, and you worry about the overall “how we’re going to get there” as you go. It’s definitely a different mindset or mentality than what a lot of people have, and I can’t really put my finger on how you would actually teach that kind of mentality. Because a lot of people that I know feel they have to do everything just perfectly. They have to get the whole process down, and the architecture, and everything designed out perfectly.

Then when they actually get it out there and execute, they realize the problem: they designed and spent months and months and months on this great feature of this new product, and everybody in the company said, “This is great.” But as soon as they deliver it to customers, the customers say, “This is terrible. I don’t want this.” If they had been getting feedback as they were going, they could have constantly improved upon it.

Ingo: Yeah, I absolutely agree. I think your bottom line is that you shouldn’t strive for ultimate perfection. You should rather strive for continuous improvement, and that’s right from the beginning, right?

Adam: Correct. Yes. Exactly. Let’s learn from the Japanese auto manufacturers.

Ingo: Exactly. This was also the lesson we learned in the last couple of years while we were building our internal pipeline at Tricentis. What we recognized is that we should mature our pipeline slowly instead of in one go. We realized that we need to solve our current problems, and solve them step by step, not all the theoretical problems of the future.

Adam: Exactly. Don’t try to future-proof the pipeline too much right off because the future is going to blow up on you when you least expect it. You can’t predict the future. You have no idea what it’s going to have in store for you.

Ingo: Yeah, absolutely.

Noel: Well, the last question that I had for you guys was, Adam, from another piece you had done. You wrote, “Get in on the DevOps revolution by freeing yourself from mundane server tasks and tapping your inner coder.” That stood out to me as very applicable in the software testing world as well, and you can leave DevOps in there, because we at Tricentis certainly view continuous testing as essential to DevOps, but instead of “inner coder,” think about your inner tester. We think of automation freeing testers from these mundane tasks that are not their passion or likely what they were even really brought in to do. It’s really freeing them to do the more creative, more innovative testing activities that are going to contribute a lot more value than just these manual efforts.

Adam: I can start with that. I think that one of the biggest shifts that I have seen lately in IT in general is how companies have seen the benefits of automation. Everything is being automated these days because they see the great ROI that it has. But one thing that I have seen some employees do, and I’ve read some horror stories about this, is reject automation, and not only because it totally goes against the way that they’ve always done things and it’s not very comfortable.

They’re not very receptive to change, but they also worry, “Well, if I’m automating this, I’m automating myself out of the job.” That’s another mindset that people really need to change, because what they don’t realize is that they should automate themselves out of a job. If you automate yourself out of that job, you’re going to show your expertise, and you’re going to get promoted, or you’re going to do more interesting things rather than, “Oh, here comes another build.” Click, click, click. “I’ve got to do this same thing over and over again. It’s boring.”

I have seen some people who just do not want to automate at all, not necessarily because they don’t care about the process, the company, or what they’re doing. They’re legitimately concerned: “If I automate all this stuff, I’m not going to have anything to do, and the company is not going to need me anymore.” I think that’s one of the biggest sources of resistance to automation that I have seen out there.

Ingo: That’s quite funny because that is what I hear constantly, Adam. The sentence, “I’m automating myself out of the job.” I simply don’t get it. For me, there are two testing cultures out there, and we’ve previously talked about it before. The first one is formal testing. It’s also confirmatory testing. It’s all about checking. It’s just asking a question. “Pass or fail?” You can purely automate that process because that’s all about mindless checking.

There is another dimension to testing, and that is exploratory testing, and it’s the second testing culture next to formal testing. To me, exploratory testing is a way of thinking. Much more than it is a body of mechanical checks. Here it is all about figuring out if there is a problem in your product rather than asking if certain assertions pass or fail. That’s a subtle difference, and that kind of decision usually requires the application of a variety of different human observations such as questioning, study, modeling, inferences and many, many more.

Exploratory testing is really all about evaluating a product by learning it through exploration and experimentation. It’s not about creating test cases; it’s about creating test ideas. It’s about performing powerful experiments, and this for me implies that exploratory testing is any testing that machines can’t do, because machines just check; they do not think. I don’t really understand why people are so afraid that they’ll automate themselves out of a job. In that case, I just have to say that they simply don’t understand the game that they are playing.

I think they can’t envision themselves doing something different than what they’re currently doing today. They can’t see themselves doing exploratory testing. To them, it’s all about the way they’ve always done it, and they can’t really fathom doing it a different way. I agree completely. We need more people who aren’t just being told what to do, being passed instructions and running a bunch of test cases in a checklist.

Adam: We need people to get in there. I’ve never heard the term exploratory testing before, but I definitely agree with that. We need people who can think for themselves. We need human beings who can put two and two together and come up with a test case that was never even thought of in the first place. “Oh, you did this when this did this. How did you know that button linked to this process?” Or to be able to instinctively come up with these test cases. You can create the test cases rather than just, like you said, going down through, “Did it work?” “Yes.” “Try this. Did it work?” “No.” “Try this. Did it work?” “No, yes, no.” That checklist thing. Yeah, you’re right. That’s exactly what a machine can do.

Ingo: Yeah, absolutely. It’s really a pity, to be honest. Because of the high coding effort that test automation requires, those testers often think that they can’t add value to software development outside test automation: “Well, test automation is the only way I really can add value to software development.” And that’s not true. I think it’s really too bad to have people thinking like this on the team.

Adam: I guess it comes down to the whole silo thing still. That’s what DevOps is trying to do, break down the silos. “Well, I’m just in this silo,” or … I hate that term. That’s a thing I rail against constantly, but people think, “I’m just this person, or I’m just on this team. I can’t affect this other team in any way.”

What they don’t realize is if they’re curious enough and they get in there and poke around and not just follow, in this case, the software testing test case, and they see connections, see different scenarios and things that have been come up with, that helps them know the product more, which makes them much better testers, too. They get in and immerse themselves in not only the test cases that they’re doing, but they also look at patterns and get to know the patterns in the test cases: “Oh, it looks like we don’t have a test case for this.” They can then start telling the development team or others, “Oh, when you do this process, you may want to also do this, because these test cases look like they may not cover this specific scenario.”

It’s all about just thinking for yourself and just coming up with creative solutions to problems.

Ingo: Absolutely.

Original Link

Find the Best Agile Testing Tools for Your Team

Once associated only with small application development projects and co-located teams of 8-10 members, Agile methodology is increasingly being adapted for large-scale enterprise development. Choosing the right Agile testing tool is vitally important for companies just making the transition to Agile since the right tool in the right hands can foster team collaboration, drive down costs, shorten release cycles, and provide real-time visibility into the status and quality of your software projects. It helps, too, if the tool(s) you choose plays well with others, that is, it can seamlessly integrate with other business critical tools in your development environment, such as those you’re using for requirements traceability, defect logging, manual and automatic testing, or metrics and reporting. This kind of flexibility and functionality is especially important in large, enterprise-wide projects that need to scale across different departments, locations, lines of business, platforms, and technologies.

Different Test Case Management Tools for Different Agile Testing Methodologies

Every organization is unique and, before committing to an Agile testing tool, you should choose an Agile software testing methodology that works best within your culture and the skill-sets of your development and testing teams. One of the most popular software testing methodologies, Scrum takes a highly iterative approach that focuses on defining key features and objectives prior to each iteration or sprint. It is designed to reduce risk while providing value quickly. Introducing Scrum is quite a change for a team not used to Agile software development: they have to start working in iterations, build cross-functional teams, appoint a product owner and a Scrum master, as well as introduce regular meetings for iteration planning, daily status updates, and sprint reviews.

Unlike the time-boxed approach that Scrum takes, Kanban is designed around a continuous queue of work, which goes through a number of stages of development until it’s done. Kanban teams often use index cards or sticky notes arranged on walls, such as the Kanban Board shown below, to visualize workflow in a left-to-right manner. When work is completed in a stage, it moves into the next-stage column to its right. When someone needs new work to do, they pull it from a left-hand column.

Are Spreadsheets Limiting Your Testing and Reporting?

Agile teams using either Scrum or Kanban methods often rely on a spreadsheet application like Microsoft Excel as a test case management, documentation and reporting tool. There are significant risks to using spreadsheets to store and process test cases, however, especially on multi-team projects where individual teams often adapt spreadsheets to their specific needs, which can cause problems when it comes to getting uniform reports. If two or more people are working at the same time on a spreadsheet file, there’s also the danger of corrupting the file or creating other security risks.

Achieve Agile Project Management with JIRA

If you’re looking for a tool that makes it easy for different teams to collaborate, JIRA Software is an Agile project management tool that supports any Agile methodology, be it Scrum, Kanban, or your own unique flavor. From Agile dashboards to reports, you can plan, track, and manage all your Agile software development projects from a single tool. JIRA allows users to track anything and everything — issues, bugs, user stories, tasks, deadlines, hours — so you can stay on top of each of your team’s activities. In addition to offering collaboration tools, a chat solution, and development tools, JIRA has a wide range of integrations to help you connect to almost any other tool you’re likely to need.

One of these integrations, Zephyr for JIRA provides a full featured and sophisticated test case management solution inside JIRA. With the same look and feel as JIRA, Zephyr for JIRA lets you do testing right inside JIRA, which makes it easier for Agile teams to track software quality and make better-informed go/no-go decisions about when to ship high-quality software. You can also hook your automation and continuous integration tools to Zephyr for JIRA with the ZAPI add-on (sold separately) to enable access to testing data programmatically via RESTful APIs. With well documented RESTful APIs you can create tests and execution cycles, update execution status, add attachments, and retrieve information about users, projects, releases, tests and execution cycles.
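
As a hedged illustration of what driving such an API from a script can look like, here is a sketch using Python’s requests library; the host, credentials, execution id, and status code are placeholders, and the endpoint path is only an assumption based on ZAPI’s documented REST style, so check the actual ZAPI docs before relying on any of it.

```python
# Hedged sketch of updating a test execution over REST, in the spirit of the
# ZAPI description above. The host, credentials, execution id, endpoint path,
# and status code are all placeholders; consult the ZAPI documentation for
# the real contract before using anything like this.

import requests

JIRA_BASE = "https://jira.example.com"    # placeholder host
AUTH = ("automation-user", "api-token")   # placeholder credentials
EXECUTION_ID = 1234                       # placeholder execution id

resp = requests.put(
    f"{JIRA_BASE}/rest/zapi/latest/execution/{EXECUTION_ID}/execute",
    json={"status": 1},   # assumed status code for "pass"
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())
```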

Automation Testing Tools Comparison Overview

Test automation is essential in today’s fast-moving software delivery environment. Test automation works by running a large number of tests repeatedly to make sure an application doesn’t break whenever new changes are introduced. For most Agile development teams, these automated tests are usually executed as part of a Continuous Integration (CI) build process, where developers check code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect errors and conflicts as soon as possible. CI servers such as Jenkins and Bamboo, along with test automation tools such as Selenium, are also used to build, test and deploy applications automatically when requirements change in order to speed up the release process.

Jenkins

Jenkins is a CI/CD server that runs tests automatically every time a developer pushes new code into the source repository. Because CI detects bugs early on in development, bugs are typically smaller, less complex and easier to resolve. Originally created to be a build automation tool for Java applications, Jenkins has since evolved into a multi-faceted platform with over a thousand plug-ins for other software tools. Because of the rich ecosystem of plug-ins, Jenkins can be used to build, deploy and automate almost any software project, regardless of the computer language, database or version control system used.

Bamboo

Bamboo is a CI/CD server from Atlassian. Like Jenkins and other CI/CD servers, Bamboo allows developers to automatically build, integrate, test and deploy source code. Bamboo is closely connected with other Atlassian tools such as JIRA for project management and Hipchat for team communication. Unlike Jenkins, which is a free and open source Agile automation tool, Bamboo is commercial software that is integrated (and supported) out of the box with other Atlassian products such as Bitbucket, JIRA, and Confluence.

Selenium

Selenium is a suite of different open-source software tools that enable automated testing of web applications across various browsers/platforms. Most often used to create robust, browser-based regression automation suites and tests, Selenium — like Jenkins — has a rich repository of open source tools that are useful for different kinds of automation problems. With support for programming languages like C#, Java, JavaScript, Python, Ruby, .Net, Perl, PHP, etc., Selenium can be used to write automation scripts that run against most modern web browsers.
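
For a sense of what such an automation script looks like, here is a minimal Selenium WebDriver example in Python; the URL, locators, and credentials are placeholders, and it assumes the selenium package and a matching browser driver are installed.

```python
# Minimal Selenium WebDriver example in Python. The URL, locators, and
# credentials are placeholders; it assumes the `selenium` package and a
# matching browser driver (here, chromedriver) are installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("testuser")
    driver.find_element(By.NAME, "password").send_keys("s3cret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # A simple regression check: the post-login page title should change.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```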

Which Test Automation Framework Fits Your Needs?

In addition to CI/CD tools, many Agile teams also rely on test automation frameworks made of function libraries, test data sources, and other reusable modules that can be assembled like building blocks so teams can create automation tests specific to different business needs. So, for example, a team might use a specific test automation framework to automate GUI tests if their software end users expect a fast, rich and easy user interface experience. If the team is developing an app for an Internet of Things (IoT) device that primarily talks to other IoT devices, they would likely use a different test automation framework.

Agile teams can execute one-touch control of test automation from within the Zephyr Platform with Vortex, Zephyr’s advanced add-on that allows you to integrate with a growing suite of automated testing frameworks (including EggPlant, Cucumber, Selenium, UFT, Tricentis, and more) with minimal configuration. Besides being able to control the execution of thousands of automated test cases, Vortex makes it easy to automatically create test cases from test scripts and to apply insights from analytics on both automated and manual testing activities.

The Importance of Exploratory Testing on Agile Projects

Agile projects still need manual testers to engage in exploratory test sessions while the automation test suite runs. In addition to revising and fine-tuning the automated tests, exploratory testers are important on Agile projects since developers and other team members often get used to following a defined process and may stop thinking outside the box. Because of the desire for fast consensus among self-organizing Agile teams (including globally distributed ones), collaboration can devolve into groupthink. Exploratory testing combats this tendency by allowing a team member to play the devil’s advocate role and ask tough, “what if”-type testing questions. Because of the adaptable nature of exploratory testing, it can also be run in parallel with automated testing and doesn’t have to slow deployment down on Agile projects.

Enhance Your Exploratory Testing Techniques with Zephyr’s Vortex and Capture for JIRA

In addition to being able to create and reuse manual tests on Agile projects, Zephyr’s Vortex tool makes it easy to bring in and work with automation information from across your development stack, including from systems external to your organization. Vortex allows users, wherever they are in your organization, to integrate, execute, and report on test automation activities. By providing an intuitive screen that lets users access both manual and automated test cases at the same time, Vortex helps Agile teams better monitor their overall automation effort (that is, the number of manual versus automated tests) from one release to another.

Capture for JIRA helps testers on Agile projects create and record exploratory and collaborative testing sessions, which are useful for planning, executing and tracking manual or exploratory testing. Session-based test management is a type of structured exploratory testing that requires testers to identify test objectives and focus their testing efforts on fulfilling them. This type of exploratory testing is an extremely powerful way of optimizing test coverage without incurring the costs associated with writing and maintaining test cases. Like Zephyr for JIRA, Capture for JIRA has a deep integration with the JIRA platform, allowing users to capture screenshots within browsers, create annotations, and validate application functionality within JIRA.

Zephyr is the go-to testing solution for 18,000 Agile development and testing teams in over 100 countries, processing more than 40 million tests a day. Vortex and Capture for JIRA are two of the latest additions to Zephyr’s suite of advanced Agile and automation tools, which includes different Zephyr for JIRA, Zephyr Teams, and Zephyr Enterprise solutions. The Zephyr platform integrates with a wide range of automation tools, such as Selenium, Cucumber, EggPlant, QTP and more. It can also run on any CI/CD framework/server (such as Jenkins, Hudson, Bamboo), which allows Agile teams to bring automation and manual test results together seamlessly.

Original Link

Organizations Need a Strong Agile Testing Strategy for Their Digital Success

Every organization is craving digital excellence to gain a competitive edge and meet the changing preferences of consumers. Speed is a quintessential factor, but quality is needed to ensure stability for any digital initiative. Hence, Software Testing and Quality Assurance become indispensable components in the software development cycle. Whether it’s a device for smarter homes or it’s an application for a smarter car, no one can skip the step to ensure quality. Agile Testing is one of the methodologies and approaches chosen by organizations to achieve the expected quality standards and ensure seamless customer experience.

Why Is the Agile Testing Approach Recommended?

Agile adoption largely focuses on development teams, Agile frameworks, and the allied technical practices within its strategy. Apart from this, a good amount of emphasis is laid on the tools, especially test automation tools, to speed up the development and testing activity. One of the greatest highlights of Agile Testing is the collaborative balance it brings for the testing and development teams.

There are multiple reasons and supporting factors that can validate the adoption of Agile practices. However, an approach can only work for you if you have a strong supporting strategy. When it comes to launching digital initiatives for businesses, it is critical to focus on and spend some time on the strategy, as there is not much room to waste time and go wrong in a competitive consumer scenario. Most importantly, Agile practices also provide the flexibility to go back and forth and track progress effectively. Business agility is imperative in the current “digital” context.

How Can You Build a Robust Agile Testing Strategy for Your “Digital” Business?

Gartner’s Top 10 Strategic Technology Trends for 2018 states, “Artificial intelligence, immersive experiences, digital twins, event-thinking and continuous adaptive security create a foundation for the next generation of digital business models and ecosystems.”

Development of any new digitally powered product and service will need tremendous business agility, which can be achieved by adopting a relevant Agile Testing strategy.

Combine It with a Test Automation Strategy

An Agile strategy has to incorporate a relevant Test Automation plan, as it can help teams speed up the testing process and assure the quality parameters. The tools leveraged for automation must induce more collaboration, frequent testing patterns, and high levels of visibility into the automation process. They must also provide insights into the testing activity by going back in time and making the necessary judgments. Test Automation is like glue that keeps the Agile practices on track and helps to speed up the testing activity.

Open Channels of Communication

Agile testing involves shorter testing cycles and frequent testing. There are even instances where the roles get interchanged. Hence, it is critical that your strategy includes open communication channels and constant communication mechanisms such as stand-up meetings and video calls. This will enable more collaboration and transparency to ensure that the project is on track.

Leverage the Team’s Earlier Testing Experiences

Adopting Agile is not enough; it is important to combine it with the right software testing strategy. Whether it’s a Test Driven Development (TDD) approach or a Behaviour Driven Development (BDD) approach, it is recommended to consider the past experience of the software testing team while building your Agile strategy. This will reduce the pressure of training the team and accelerate the development activity. Moreover, it helps to bring consensus within the team when people with similar experience and expertise collaborate.

Adopting a Proactive Approach Towards Testing and Quality Assurance

As with legacy structures, it is difficult to apply Agile practices within the traditional mode of testing. With Agile, testing becomes much more than just a ‘gatekeeper’ function. It is important to build a much more proactive approach and look at ways in which the application can be made market-ready. Hence, testing becomes much more process-driven, scalable, and traceable.

Nurture Skills Beyond Testing and Development

Agile, as we understand, needs more collaboration and communication between team members. That’s the reason it is important to encourage more frequent communication. Whether it is about reviewing the code, having another look at the application’s features, or refactoring the code base, collaboration is indispensable. Agile teams must comprise members who are not only technically sound, but who also have good communication and interpersonal skills. These soft skills will go a long way in ensuring that the project is successfully completed with all the required inputs from the customer.

Objective-based Strategy

It is important to understand whether the Agile Testing approach makes sense for your application development activity. Not all application development activities can align with Agile Testing practices. Most of the time, Agile Testing works for applications that need constant updates and alterations, which is very much required for Digital Transformation initiatives. Hence, it is important to establish the relevance of the Agile Testing approach within a software testing strategy. Agile adoption for the sake of it can result in chaos and mismanagement.

“The continuing digital business evolution exploits new digital models to align more closely the physical and digital worlds for employees, partners and customers,” says David Cearley, vice president and Gartner Fellow, at Gartner 2017 Symposium/ITxpo in Orlando, Florida. “Technology will be embedded in everything in the digital business of the future.”

When technology gets embedded within everything in the digital business, there is a constant need to ensure quality at every level. Agile provides the flexibility and scalability to do so, but incorporating it into a relevant software testing strategy is imperative for successful Digital Transformation.

Cigniti has been a trusted testing partner for many organizations in various stages of adopting Agile. We have helped organizations new to Agile build QA planning, estimation, and metrics into their sprints. For more mature organizations, we have seamlessly integrated with their sprint teams to improve test coverage, velocity, and quality.

Connect with us and leverage our expertise both in Agile Testing and Digital Testing for your new and fresh business ideas.

Original Link

How Test Automation Supports Agile Transformation

Agility in IT Processes

In a world where new business models are disrupting established revenue streams and expectations of the customer experience are constantly increasing, businesses everywhere are struggling to stay relevant. Staying competitive requires efficiency: fast market adaptation, well-organized change management, and the ability to secure emerging opportunities. One of the ways to get there is through Agile transformation.

In short, the aim of Agile transformation is for businesses to reach a state of change readiness that lets them embrace the unknown. Internally, this means establishing a smooth and predictable path for converting ideas into practice. The actual implementation of Agile principles is usually done by introducing new ways of managing projects and development processes in a company, i.e. Scrum and/or SAFe (Scaled Agile Framework). Although Agile transformation is relevant for every business, companies whose value chain is largely based on IT processes tend to be the first to embrace the concept of agility. Not surprisingly, Scrum, SAFe, etc. are first and foremost applied to the IT processes in enterprises, especially in software development.

Before the introduction of Agile development, projects were often managed using the “waterfall model.” With this approach, each discipline had a dedicated focus and place in the project timeline. A project would begin with an outlining of all requirements for the software to be developed. Then developers would begin building the software – this could take months or even years – and then testers would take over the final product and begin testing it. Not surprisingly, the waterfall model has proven to be too rigid an approach for modern software development.

Agile Development

In an Agile project model, there is no dedicated, focused time for each discipline. Instead, the project timeline is broken into iterations, called sprints, that involve all disciplines: scoping, development, testing, and more. Typically, the duration of a sprint is two to four weeks depending on the number of people involved, the maturity of the department, etc.

Working in short, focused iterations combined with the lack of a dedicated testing phase presents a challenge to testers: how and when to verify the quality of the software in its entirety.

As shown in the figure below, the number of features in a software product accumulates as sprints are completed. Assuming that the project is not supplied with additional test resources along the way, this accumulation means that during each sprint, testers will only have time to test the recently developed features and will not be able to do regression testing, i.e. testing how the new features affect existing features.

FIGURE 1 – The regression issue: As features to be tested accumulate,
and with a fixed amount of testing resources available, testers are forced to make compromises.

The regression testing issue is a well-known problem in software development. Agile development has just accentuated the problem; with iterative sprints, it has become increasingly difficult to answer questions such as “How is the quality?” and “When can we do a release?”.

Automated Regression Testing

Of course, one way to solve the regression testing problem would be to hire more testers. However, this would increase project costs significantly and is not a scalable solution. Another approach could be to go back to doing end-to-end testing from time to time, but this would defeat the purpose of agility. This leaves us with just one option: automation.

With test automation – in this case, automated functional UI testing – robots are instructed to execute the repetitive, predictable test scripts, allowing testers to focus on testing the new features of the latest sprint. The figure below illustrates how test automation supplements the testing efforts during sprints.

FIGURE 2 – Automated regression testing: Robots take care of regression testing, 
while testers continue to focus on testing new features.

With test automation implemented, it would still be the testers themselves designing test cases and monitoring the results, but the main regression efforts are carried out by robots. To sum up, automation reduces risk by ensuring that any uncertainties are limited to the recently developed features.

An Agile Automation Strategy

LEAPWORK’s approach to test automation is that all testers on a team should have ownership of the automation effort; each tester should be enabled to build automated test cases and to monitor and analyze the results. If an entire team can take part in utilizing automation, then each team member will also reap the benefits, and have time freed up for testing new features.

In other words, for test automation to truly support the objective of being Agile, the underlying automation strategy should not be dependent on only a few technical specialists. Instead, the strategy should be based on making automation a natural part of each tester’s profession. This will make test automation much more efficient and scalable.

Key Takeaways

  • Agile transformation is an attempt to reach a state of change readiness, including fast market adaptation, well-organized change management, and the ability to secure emerging opportunities.
  • Applying the Agile approach to software development presents a challenge to testers: how and when to verify the quality of the software in its entirety.
  • With iterative sprints it has become increasingly difficult to answer questions such as “How is the quality?” and “When can we do a release?”
  • Test automation reduces risk by ensuring that any uncertainties are limited to the recently developed features.
  • All testers should be empowered to utilize automation in their daily work, and a test automation strategy should not be dependent on only a few technical specialists.


Original Link

The Benefits and Challenges of Pair Programming and Pair Testing

In the past, we talked about how we maximize our engineering velocity here at mabl, but now we’re going to speak on a concept that some organizations view as a duplication of effort.

The concept of a code review is well known throughout the software industry. Perhaps less well known is pair programming, in which two developers work together on the same code. The aim is to write considerably higher quality code than what typically results from individual effort. A common approach in pair programming is to have one programmer write code while another programmer reviews the code—as it is being written.

In terms of formal roles, one programmer is the driver and the other is the observer or navigator. Less formally, two programmers can have an elaborate discussion as they analyze a single code base. Either of the duo should be able to write good code, and they both agree to avoid all distractions.

The Dark Side of Pair Programming

Pair programming is fully embraced by some companies—and entirely rejected by others. Since we are all human, there are at least some situations in which nearly all software programmers can benefit from pair programming. To many professionals, however, it seems like an inefficient use of resources: in theory, two programmers working separately on different features should produce twice the output of a pair working on one. In reality, many teams report that their developers produce software that is 95% complete but doesn't integrate well and is not shippable to customers. This is why pair programming is given serious consideration in many software development organizations.


Pair Testing

Much like pair programming, pair testing involves one person who does the testing—while another person observes, inquires, clarifies, records notes, and spots defects that would otherwise go unnoticed. Pair testing can be especially effective when one programmer sits together with a tester.

Pair testing accomplishes the following:

  • Finds more bugs
  • Saves time
  • Eliminates the communication gap between testers and developers
  • Is an excellent opportunity for very efficient exploratory testing
  • Provides visceral, in-person learning opportunities

Ideally, pair testing should give each participant an opportunity to take the wheel. When the tester is driving, the programmer can gain deeper, more valuable insights on how a tester uses and perceives the software. While the programmer is driving, the tester can gain a deeper understanding of how the software has been built.

Pair testing is especially effective during development. Many problems will be found at such early explorations, and this early identification will likely result in a much easier solution. If possible, invite a business analyst to pair testing sessions, and this upstream triad is sure to reveal any lurking logic, design, usability, or functionality issues. Further downstream in the delivery pipeline, post-development pair testing collaboration can be valuable for improving development and testing practices.


How to Start with Pair Testing

Begin by selecting a code base that is modest in size. If it's too large, the pair may be overwhelmed. If it's too small, then there won't be enough to test. Ideally, the tester and the developer will have spent time working with each other previously. Prepare a mutually agreeable testing plan, then structure the meeting so that it includes time for each participant to use the software. It may require one or two sessions to get through. Then, reconvene and share feedback. Discuss what worked well, what didn't seem productive, and how pair testing can best fit into the development pipeline.

Pair Testing Results

For pair testing to be valuable, you’ve got to produce communicable results. This can be done in various ways:

  • Testing docs — Prepare by writing up a test plan that includes the testing ideas that will guide how you run the test.
  • Defect report — While exploring a problem with a developer, it may become clear that a defect is at the root of the problem. Document this well so that the development team can correct it.
  • Session reports — Each participant should take notes and share them with the rest of the development team and stakeholders.
  • Test automation — This may be a good opportunity to collaborate with the developer and automate a test more efficiently and comprehensively. This test can be added to the regression suite and reduce repetitive manual testing in the future.
  • Knowledge sharing — This is not strictly tangible output, but it is an important outcome of pair testing. Each participant can share insights with each other, and then share with other members of the team.
  • Informing stakeholders — This includes the test plan, testing concepts and ideas, and the session report. For those who take an interest, share information on the defects that were discovered and retested.

Pair Testing Limitations

Pair testing is an excellent complement to regression and automated testing, and may result in significant time savings for smaller teams. It is not, of course, a replacement for other types of essential testing efforts. Some larger companies may find it difficult to integrate pair testing into an established development pipeline. A good approach is to wait for a phase transition—or the beginning of a new project—to introduce pair testing in a specific area.

Going Forward, Together

Pair testing can be most valuable in teams where there is at least some tolerance for deeper thinking, exploration, and some creativity. It's unnecessary to prepare test scripts, nor do you need a GUI as a launch point for testing. Find two team members who are critical thinkers. It's best if one is creative and the other has the capacity for disruption. As in many other aspects of human experience, two minds can be much more productive than one.

Original Link

Continuous Testing Live: What’s Driving the Quality Engineering Evolution? [Podcast]

“Quality engineers, as we talk about them, this new individual or this new entity, are going to have the skills to be able to do development and do testing at the same time.” – Jeff Wilkinson, Accenture Managing Director

Accenture Managing Director Jeff Wilkinson explains why we hear so much about eliminating manual testing…or even suggestions of getting rid of the term “testing” altogether. Learn what’s changed in modern-day application development, why a new approach to quality engineering is required, and how today’s testers can deliver the biggest impact on the lines of business they support.

Don’t miss a single episode of Continuous Testing Live: subscribe today on iTunes, Google Play, or SoundCloud!

Wayne: Hello, this is Wayne Ariola from Tricentis. I’m here with Jeff Wilkinson, the managing director of North America – Testing Platform Lead. Did I get that right?

Jeff: That’s pretty close.

Wayne: Oh my gosh.

Jeff: It’s very good. Well done.

Wayne: I was really worried about that.

Jeff: I’ll answer to anything, really. I’m not picky.

Wayne: I was really worried about blowing that line. Anyways. Hey, we’re here, and we have an opportunity to talk, and I think the question I have for you, Jeff, is really more about … We are in an immense amount of change. So I’ve been in the testing space for about 20 years, but I feel like there’s an inflection point. It feels like it’s in turmoil. Any insight? What’s going on?

Jeff: I often start my discussions with clients, with workshops, with conferences, by being disruptive and saying, “We are in the business of eliminating manual testing.”

Wayne: Interesting.

Jeff: We’re in the business of eliminating scripted testing. And, in some cases, if I’m really feeling kind of strange, I’ll say, “We would like to eliminate the word testing from the English language.”

Wayne: Really?

Jeff: Maybe I’m just doing that so that I could claim insanity defense if I get arrested later on, but the reality is that testing is not our father’s testing. Today, we’re talking about “quality engineering” and the reason for that is that the way that systems are being developed nowadays is very different than they were a decade ago, even five years ago, right? Because digital has come into play, and every company’s a digital company, it’s forcing a different type of application development.

Wayne: Interesting.

Jeff: It’s forcing continuous delivery, right? If you are a digital company, you want to be on the web. You want a presence out there. You want to get things into production as quickly as you can. You can’t have a slow waterfall approach to doing systems development. Agile development is requiring continuous delivery. Continuous delivery requires continuous testing, and you can’t have continuous testing if you don’t have automation.

Wayne: Very interesting. So, I couldn’t agree with you more. It seems like the time pressure associated with the cycle is increasing. But what about risk? I think, before, when I look back over the years of how testing has evolved, especially in agile environments, I saw agile environments take testing back to this “task type level”, right? Where it was part of the burndown, it became part of the story point, and people were asking the question, “Are we done testing?” And we seem to de-focus from this idea of “What are we really testing for?” So, where does risk come into the picture when speed becomes of the essence?

Jeff: Well, you’re trying to de-risk the whole process. That’s what testing’s all about. But nowadays, you can’t just have somebody focused on doing straight, pure testing. You’ve got to be thinking about the development angles in a sprint, and how you test in sprints, right, at the unit test level. A lot of times you see the developers and testers sometimes are the same person.

Wayne: Yeah.

Jeff: There’s a convergence of that development person and that testing person. Similarly, you’ve got out of sprint testing, you’ve got your system testing or your functional testing that goes on in parallel with that, but then you also have to think about the operations that support your development, your testing, and operations. That’s DevOps, right?

Wayne: Yeah.

Jeff: So we talk about quality engineering as the convergence of development, testing – in which test automation is predominant – and DevOps, into something called a quality engineer. So a quality engineer’s whole purpose in life is to de-risk the system that’s being developed, right?

Wayne: Okay.

Jeff: You talk about, you know, the whole … The testing bubble, which had this enormous amount of work that was taking place during the testing phase of the systems development lifecycle, and you had some stuff going on upstream that was your shift left, and stuff going on downstream that was your shift right.

Wayne: Yeah.

Jeff: And that’s how waterfall worked. Now, with an agile world, everything is happening at once. So we like to talk about something we call the “quality funnel.” Like, when you get a bunch of inputs at the beginning, and those inputs are requirements and new functionality and the funnel will close off towards the end, and you’re de-risking. You’re eliminating defects as you go, but you’re testing continuously.

It’s a very different mindset than saying “You’re doing your unit testing here, you’re doing your system testing here, you’re doing your user acceptance testing, your corner stats, etc.,” and on to the line of production. It doesn’t happen that way now.

Wayne: Agreed. So let me take one concept that you mentioned, and maybe expand on it just a bit. So this idea of dev-test or development testing – not quality engineering quite yet, but this idea of the developer actually doing more in terms of pulling their weight in terms of isolating or de-risking the process of creating code. What level of maturity do you think we’re at there, in your experience?

Jeff: You, Tricentis, or the world?

Wayne: Well, the world. Not Tricentis. You know, the world. Where’s the world at?

Jeff: I think it’s low.

Wayne: Very low.

Jeff: Frankly. And here’s the reasoning. You still have a mindset out there, and one of the things I’m going to be talking about later on today is this diversity, and how/why diversity is important in testing. You still have developers out there that believe that they’re artists. They don’t want to test. And that’s okay. They come in with a mindset that they’re building something, but they still have to test what they’re doing.

But the quality engineers, as we talk about them, this new individual or this new entity, are going to have the skills to be able to do development and do testing at the same time, or you have a pool of people. So, if you pool quality engineers, some people will trend more towards development in terms of their proficiencies, and others will trend more towards automated testing.

Wayne: Got it.

Jeff: And if you’re talking about Tricentis, the good news is that you can actually take today’s manual testers and convert them to automated testers.

Wayne: Absolutely.

Jeff: You can’t do that … Wolfgang Platz very accurately talked about SDETs this morning, and how you’re going to convert manual testers to be SDETs. He’s right. You can’t, but you can convert them to be automated testers. And then on the far side, you have people who are going to focus on DevOps, right? So it’s going to be a mix of skills that are going to create this quality engineering entity.

Wayne: So, with that in mind though, Jeff, let me kind of understand here. We’re not only maybe shifting this idea of where testing fits, but it’s also a rebalancing of that lifecycle as well, and distributing the tasks associated with the quality goals at specific stages of the funnel, if I can use the funnel analogy correctly. So, is that what quality engineering is, or would you give it a different definition than what I just tried to put in your words, or put in your mouth?

Jeff: No, I think that’s it. I think that’s a good description of what it is, and I like the way that we’re talking quality. Because, once again, let’s talk about eliminating the term “testing,” right?

Wayne: So that’s an interesting conversation as well because the term “quality” has always been so horrible to use. It seems to be an amorphous term that when you walk into an organization or a client, you can’t really describe quality, but you can describe the task of testing. So, I think over the years we’ve been forced to actually isolate the term into something that is tangible, right? Rather than “quality.” Now, the interesting thing is that in any discrete manufacturing process, the term quality is everything, right? So you know, you take an automobile shop, you take someone who’s producing hard goods, they have a quality process, and it always seemed unreasonable, over the years, how these things did not seem to merge together, especially from the idea of creating software, and you hit the nail on the head. It’s this creative element that organizations didn’t really understand.

Jeff: I would make one comment. In the past, testing has had a challenging reputation.

Wayne: Yeah.

Jeff: Some people don’t necessarily want to be testers, or to be in a testing world. So, we consciously, at Accenture, try to define our testing practice as something different, right? We are not testers. We are stewards of quality. Right? We’re protectors of our client’s production, protectors of Accenture’s business, essentially. So, we focus on quality, we focus on transformational quality. We don’t just have people sitting in back rooms doing test planning, scripting, and executing.

Wayne: Yes.

Jeff: Right? We’re getting away from all that now. You may have to do a little of that at the beginning of your career, but over time you’re going to evolve into something that’s … I like to use the word sexy, right? Testing can be sexy.

Wayne: I like to use it, too.

Jeff: Quality is sexy.

Wayne: It usually gets a good laugh at conferences.

Jeff: Exactly. Right. People will treat you like you have three heads.

Wayne: So. I have one last question for you, Jeff, and this is something that I think has raised its head at this conference. You know, not only are we seeing digital hit our businesses, but we’re also seeing digital hit the idea of testing. So, concepts like AI, AR, robotics, whatever you might mention, also apply to the actual quality process. I was about to say testing, but I will use the word quality. Quality process. In your opinion, what do you think the most exciting element is? What’s coming down the pike, in your mind, that you really have your eye on?

Jeff: It’s already here. We’re looking at using analytics.

Wayne: Got it.

Jeff: Using not just intelligent automation, but artificial intelligence to improve the way we do quality engineering. So, here’s an example: We have additional ecosystem partners where we can take a look at the basis of functionality that needs to be scrutinized – I won’t say tested, scrutinized. Validated, if you will.

And to determine how much testing you need to do, and where you need to spend the most cycles on testing.

Wayne: Interesting.

Jeff: That has been shown to eliminate about 15-20% of the testing that you need to do.

Wayne: That’s … So just on the analytics alone?

Jeff: Just on the analytics.

Wayne: Just working smarter.

Jeff: So here’s an example: If you talk about intelligent automation on the one side, in Tricentis, if you talk about analytics and artificial intelligence on the other side, we think the intelligent automation with Tricentis can yield a better than 50% cost savings. Why? Because we’re eliminating manual testing.

Wayne: Wow. Okay, so that’s pretty huge.

Jeff: It is! Right, so we … Traditionally we try to automate regression testing, right? Been somewhat successful. The UFTs, the Seleniums of the world, get you there but you got to maintain scripts.

Wayne: Yep.

Jeff: With Tricentis you don’t have to maintain scripts, right? It’s much easier to do so. So now, you can use it for not just doing regression testing, you can do it for progression testing. The system testing. So, you really don’t need to do manual testing. Or if you do, sorry, you do exploratory testing. Which Tricentis also does.

Wayne: Absolutely. So that’s a good point. You know, I think also what we’re also seeing in the evolution of the actual testing industry is more automation, and now the combination of better analytics has allowed us to actually push the envelope just a little bit more and get more done automatically.

Jeff: Yep.

Wayne: So you know, what I always like to say is … I think ten years ago, Jeff … certainly 15 years ago, you know, the term “test automation” was here but it wasn’t keeping its promise. I think the industry actually had a lot of overhead associated with it. We were, obviously, baked into waterfall processes … Agile was here, but we were still baked into waterfall processes, which actually forced us into doing more work, and then you had to make a tradeoff decision. “Do I spend the time automating, or do I just get it done manually?” Right, and I think the means to the end was easier: just go knock off a bunch of manual tests in order to hit the deadline.

Jeff: Here’s why, though. It’s because back in those days, it was scripted automation testing. Nowadays we’re talking about non-scripted: no underlying code to the scripts you’ve created, just the functionality you’re creating, right?

Wayne: Yeah.

Jeff: That … The implication is huge. It means you don’t have to maintain a bunch of scripts that you put on a shelf.

Wayne: Exactly.

Jeff: Two years later, it’s going to be completely obsolete. Now you just tweak that model, right?

Wayne: It’s a term I like to call “bloat.”

Jeff: Bloat, yes. Exactly. That’s what’s enabling us to systematically eliminate the need for manual testing.

Wayne: Very nice.

Jeff: I’ve got to shift my workforce. I’ve got 40,000 people globally, right? And our competitors do as well. The 40,000 is going to shrink, right?

Wayne: Yeah.

Jeff: And now it’s going to be, instead of having a mix of predominantly 80% manual testers, we’re going to have 80% quality engineers that have skills that are in line with automated testing, DevOps, and agile development.

Wayne: I think you’re absolutely correct. It’s the wave of the future.

Jeff: It is.

Wayne: I am here with Jeff Wilkinson. I’m not going to try your title again. I’m just going to call you managing director at Accenture. Jeff, thank you so much for spending the time with us, and I look forward to speaking with you again.

Jeff: Thank you, Wayne. Appreciate it.

Original Link

Most Complete NUnit Unit Testing Framework Cheat Sheet

An essential part of every UI test framework is the use of a unit testing framework. One of the most popular ones in the .NET world is NUnit. However, you cannot find a single place where you can get started with its syntax. So, I decided that it would be great to create a complete cheat sheet. I hope that you will find it useful. Enjoy!

Installation

Install-Package NUnit
Install-Package NUnit.TestAdapter
Install-Package Microsoft.NET.Test.Sdk

To discover or execute test cases, VSTest calls the test adapters based on your project configuration. (That is why NUnit/xUnit/MSTest all ask you to install a test adapter NuGet package in your unit testing projects.) So NUnit.TestAdapter exists for that purpose.

NUnit itself implements the testing framework and its contracts. So you need to add a NuGet reference to it to write unit test cases and have them compiled. Only compiled projects, along with the test adapter, can then be consumed by Visual Studio.

Test Execution Workflow

using NUnit.Framework;

namespace NUnitUnitTests
{
    // A class that contains NUnit unit tests. (Required)
    [TestFixture]
    public class NonBellatrixTests
    {
        [OneTimeSetUp]
        public void ClassInit()
        {
            // Executes once for the test class. (Optional)
        }

        [SetUp]
        public void TestInit()
        {
            // Runs before each test. (Optional)
        }

        [Test]
        public void TestMethod()
        {
        }

        [TearDown]
        public void TestCleanup()
        {
            // Runs after each test. (Optional)
        }

        [OneTimeTearDown]
        public void ClassCleanup()
        {
            // Runs once after all tests in this class have executed. (Optional)
            // Not guaranteed to execute immediately after all tests from the class.
        }
    }
}

// A SetUpFixture outside of any namespace provides SetUp and TearDown for the entire assembly.
[SetUpFixture]
public class MySetUpClass
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // Executes once before the test run. (Optional)
    }

    [OneTimeTearDown]
    public void RunAfterAnyTests()
    {
        // Executes once after the test run. (Optional)
    }
}

OneTimeSetUp from SetUpFixture (once per assembly)
  OneTimeSetUp from TestFixture (once per test class)
    SetUp (before each test of the class)
      Test1
    TearDown (after each test of the class)
    SetUp
      Test2
    TearDown
  OneTimeTearDown from TestFixture (once per test class)
  OneTimeSetUp from the next TestFixture
    ... (the SetUp/Test/TearDown pattern repeats)
  OneTimeTearDown from that TestFixture
OneTimeTearDown from SetUpFixture (once per assembly)

Attributes Comparison

Comparing NUnit to other frameworks.

[In the original post, the comparison appears as two images: tables mapping NUnit attributes to their MSTest and xUnit.net counterparts.]
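
Since those images may not survive every format, here is the commonly cited attribute mapping from memory; treat it as a quick orientation rather than an authoritative reference:

NUnit 3            | MSTest v2              | xUnit.net 2
[TestFixture]      | [TestClass]            | n/a (plain public class)
[Test]             | [TestMethod]           | [Fact]
[SetUp]            | [TestInitialize]       | constructor
[TearDown]         | [TestCleanup]          | IDisposable.Dispose
[OneTimeSetUp]     | [ClassInitialize]      | IClassFixture<T>
[OneTimeTearDown]  | [ClassCleanup]         | IClassFixture<T>
[Ignore("...")]    | [Ignore]               | [Fact(Skip = "...")]
[Category("...")]  | [TestCategory("...")]  | [Trait("Category", "...")]
[TestCase(...)]    | [DataRow(...)]         | [InlineData(...)]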

Assertions

Assertions — Classic Model

The classic Assert model uses a separate method to express each individual assertion of which it is capable.

Assert.AreEqual(28, _actualFuel); // Tests whether the specified values are equal.
Assert.AreNotEqual(28, _actualFuel); // Tests whether the specified values are unequal.
Assert.AreSame(_expectedRocket, _actualRocket); // Tests whether the specified objects both refer to the same object
Assert.AreNotSame(_expectedRocket, _actualRocket); // Tests whether the specified objects refer to different objects
Assert.IsTrue(_isThereEnoughFuel); // Tests whether the specified condition is true
Assert.IsFalse(_isThereEnoughFuel); // Tests whether the specified condition is false
Assert.IsNull(_actualRocket); // Tests whether the specified object is null
Assert.IsNotNull(_actualRocket); // Tests whether the specified object is non-null
Assert.IsInstanceOf(typeof(Falcon9Rocket), _actualRocket); // Tests whether the specified object is an instance of the expected type
Assert.IsNotInstanceOf(typeof(Falcon9Rocket), _actualRocket); // Tests whether the specified object is not an instance of the type
StringAssert.AreEqualIgnoringCase(_expectedBellatrixTitle, "Bellatrix"); // Tests whether the specified strings are equal ignoring their casing
StringAssert.Contains(_expectedBellatrixTitle, "Bellatrix"); // Tests whether the specified string contains the specified substring
StringAssert.DoesNotContain(_expectedBellatrixTitle, "Bellatrix"); // Tests whether the specified string doesn't contain the specified substring
StringAssert.StartsWith(_expectedBellatrixTitle, "Bellatrix"); // Tests whether the specified string begins with the specified substring
StringAssert.EndsWith(_expectedBellatrixTitle, "Bellatrix"); // Tests whether the specified string ends with the specified substring
StringAssert.IsMatch(@"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", "(281)388-0388"); // Tests whether the specified string matches a regular expression
StringAssert.DoesNotMatch(@"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", "281)388-0388"); // Tests whether the specified string does not match a regular expression
CollectionAssert.AreEqual(_expectedRockets, _actualRockets); // Tests whether the specified collections have the same elements in the same order and quantity.
CollectionAssert.AreNotEqual(_expectedRockets, _actualRockets); // Tests whether the specified collections does not have the same elements or the elements are in a different order and quantity.
CollectionAssert.AreEquivalent(_expectedRockets, _actualRockets); // Tests whether two collections contain the same elements.
CollectionAssert.AreNotEquivalent(_expectedRockets, _actualRockets); // Tests whether two collections contain different elements.
CollectionAssert.AllItemsAreInstancesOfType(_actualRockets, typeof(Falcon9Rocket)); // Tests whether all elements in the specified collection are instances of the expected type
CollectionAssert.AllItemsAreNotNull(_expectedRockets); // Tests whether all items in the specified collection are non-null
CollectionAssert.AllItemsAreUnique(_expectedRockets); // Tests whether all items in the specified collection are unique
CollectionAssert.Contains(_actualRockets, falcon9); // Tests whether the specified collection contains the specified element
CollectionAssert.DoesNotContain(_actualRockets, falcon9); // Tests whether the specified collection does not contain the specified element
CollectionAssert.IsSubsetOf(_expectedRockets, _actualRockets); // Tests whether one collection is a subset of another collection
CollectionAssert.IsNotSubsetOf(_expectedRockets, _actualRockets); // Tests whether one collection is not a subset of another collection
Assert.Throws<ArgumentNullException>(() => new Regex(null)); // Tests whether the code specified by the delegate throws exactly the given exception type
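
To put a few of these classic asserts in context, here is a minimal illustrative test. Falcon9Rocket appears elsewhere in this cheat sheet, but the Refuel method and Fuel property are hypothetical, invented purely for the example:

[Test]
public void Refuel_SetsExpectedFuelLevel()
{
    var rocket = new Falcon9Rocket(); // hypothetical system under test
    rocket.Refuel(28);                // hypothetical method

    Assert.AreEqual(28, rocket.Fuel);                          // value check
    Assert.IsInstanceOf(typeof(Falcon9Rocket), rocket);        // type check
    Assert.Throws<ArgumentException>(() => rocket.Refuel(-1)); // hypothetical guard-clause check
}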

Assertions — Constraint Model

The constraint-based Assert model uses a single method of the Assert class for all assertions. The logic necessary to carry out each assertion is embedded in the constraint object passed as the second parameter to that method. The second argument in this assertion uses one of NUnit’s syntax helpers to create an EqualConstraint.

Assert.That(28, Is.EqualTo(_actualFuel)); // Tests whether the specified values are equal.
Assert.That(28, Is.Not.EqualTo(_actualFuel)); // Tests whether the specified values are unequal.
Assert.That(_expectedRocket, Is.SameAs(_actualRocket)); // Tests whether the specified objects both refer to the same object
Assert.That(_expectedRocket, Is.Not.SameAs(_actualRocket)); // Tests whether the specified objects refer to different objects
Assert.That(_isThereEnoughFuel, Is.True); // Tests whether the specified condition is true
Assert.That(_isThereEnoughFuel, Is.False); // Tests whether the specified condition is false
Assert.That(_actualRocket, Is.Null); // Tests whether the specified object is null
Assert.That(_actualRocket, Is.Not.Null); // Tests whether the specified object is non-null
Assert.That(_actualRocket, Is.InstanceOf<Falcon9Rocket>()); // Tests whether the specified object is an instance of the expected type
Assert.That(_actualRocket, Is.Not.InstanceOf<Falcon9Rocket>()); // Tests whether the specified object is not an instance of type
Assert.That(_actualFuel, Is.GreaterThan(20)); // Tests whether the specified object greater than the specified value
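
Constraints can also be chained with And/Or to express compound checks in a single assertion. A small sketch, reusing the field names from the examples above:

Assert.That(_actualFuel, Is.GreaterThan(20).And.LessThan(30)); // Tests whether the value falls within a range
Assert.That(_actualRocket, Is.Not.Null.And.InstanceOf<Falcon9Rocket>()); // Combines a null check with a type check
Assert.That(() => new Regex(null), Throws.TypeOf<ArgumentNullException>()); // Constraint-model equivalent of Assert.Throws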

Advanced Attributes

Author Attribute

The Author Attribute adds information about the author of the tests. It can be applied to test fixtures and to tests.

[TestFixture]
[Author("Joro Doev", "joro.doev@bellatrix.solutions")]
public class RocketFuelTests
{
    [Test]
    public void RocketFuelMeassuredCorrectly_When_Landing() { /* ... */ }

    [Test]
    [Author("Ivan Penchev")]
    public void RocketFuelMeassuredCorrectly_When_Flying() { /* ... */ }
}

Repeat Attribute

RepeatAttribute is used on a test method to specify that it should be executed multiple times. If any repetition fails, the remaining ones are not run and a failure is reported.

[Test]
[Repeat(10)]
public void RocketFuelMeassuredCorrectly_When_Flying() { /* ... */ }

Combinatorial Attribute

The CombinatorialAttribute is used on a test to specify that NUnit should generate test cases for all possible combinations of the individual data items provided for the parameters of a test.

[Test, Combinatorial]
public void CorrectFuelMeassured_When_X_Site([Values(1,2,3)] int x, [Values("A","B")] string s)
{
    // ...
}

Generated tests:

CorrectFuelMeassured_When_X_Site(1, “A”)
CorrectFuelMeassured_When_X_Site(1, “B”)
CorrectFuelMeassured_When_X_Site(2, “A”)
CorrectFuelMeassured_When_X_Site(2, “B”)
CorrectFuelMeassured_When_X_Site(3, “A”)
CorrectFuelMeassured_When_X_Site(3, “B”)

Pairwise Attribute

The PairwiseAttribute is used on a test to specify that NUnit should generate test cases in such a way that all possible pairs of values are used.

[Test, Pairwise]
public void ValidateLandingSiteOfRover_When_GoingToMars([Values("a", "b", "c")] string a, [Values("+", "-")] string b, [Values("x", "y")] string c)
{
    Debug.WriteLine("{0} {1} {2}", a, b, c);
}

Resulting pairs:

a + y
a - x
b - y
b + x
c - x
c + y

Random Attribute

The RandomAttribute is used to specify a set of random values to be provided for an individual numeric parameter of a parameterized test method.

The following test will be executed fifteen times: each of the three values of x is combined with five random doubles from -1.0 to +1.0.

[Test]
public void GenerateRandomLandingSiteOnMoon([Values(1,2,3)] int x, [Random(-1.0, 1.0, 5)] double d)
{
    // ...
}

Range Attribute

The RangeAttribute is used to specify a range of values to be provided for an individual parameter of a parameterized test method. NUnit creates test cases from all possible combinations of the provided parameters – the combinatorial approach. (For double parameters, the attribute takes a from value, a to value, and a step.)

[Test]
public void CalculateJupiterBaseLandingPoint([Values(1,2,3)] int x, [Range(0.2, 0.6, 0.2)] double y)
{
    // ...
}

Generated tests:

CalculateJupiterBaseLandingPoint(1, 0.2)
CalculateJupiterBaseLandingPoint(1, 0.4)
CalculateJupiterBaseLandingPoint(1, 0.6)
CalculateJupiterBaseLandingPoint(2, 0.2)
CalculateJupiterBaseLandingPoint(2, 0.4)
CalculateJupiterBaseLandingPoint(2, 0.6)
CalculateJupiterBaseLandingPoint(3, 0.2)
CalculateJupiterBaseLandingPoint(3, 0.4)
CalculateJupiterBaseLandingPoint(3, 0.6)

Retry Attribute

RetryAttribute is used on a test method to specify that it should be rerun if it fails, up to a maximum number of times.

[Test]
[Retry(3)]
public void CalculateJupiterBaseLandingPoint([Values(1,2,3)] int x, [Range(0.2, 0.6, 0.2)] double y)
{
    // ...
}

Timeout Attribute

The TimeoutAttribute is used to specify a timeout value in milliseconds for a test case. If the test case runs longer than the time specified it is immediately cancelled and reported as a failure, with a message indicating that the timeout was exceeded.

[Test, Timeout(2000)]
public void FireRocketToProximaCentauri()
{
    // ...
}

Execute Tests in Parallel

Parallel execution of methods within a class is supported starting with NUnit 3.7. In earlier releases, parallel execution only applied down to the TestFixture level: ParallelScope.Children worked as ParallelScope.Fixtures, and any ParallelizableAttribute placed on a method was ignored.

[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(3)]

The ParallelizableAttribute may be specified on multiple levels of the tests. Settings at a higher level may affect lower level tests, unless those lower-level tests override the inherited settings.

[TestFixture]
[Parallelizable(ParallelScope.Fixtures)]
public class TestFalcon9EngineLevels
{
    // ...
}
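
As an illustration of overriding inherited settings at a lower level (the fixture and test names below are made up), a fixture can run its tests in parallel while keeping a single test sequential with NonParallelizable:

[TestFixture]
[Parallelizable(ParallelScope.All)] // the fixture and its tests may run in parallel
public class EngineTelemetryTests
{
    [Test]
    public void ReadsThrustSensor() { /* may run in parallel with other tests */ }

    [Test]
    [NonParallelizable] // overrides the inherited scope for this test only
    public void WritesSharedCalibrationFile() { /* runs by itself */ }
}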

Original Link

Notes on Shift Left in Testing and Software Development

For some reason, I’ve had a few emails and LinkedIn questions asking me what I think about “Shift Left.” I thought I’d put out a public answer.

I’ll start with: I do not use the term “Shift Left” because:

  • It seems like “consultant speak” and, while I’m a consultant, I try to speak clearly.
  • It obscures, rather than clarifies, whatever point it is trying to make.
  • It makes me think of “moving a whole thing” rather than improving the System.

Instead, I think of supporting the growth and evolution of a System over its lifetime and I don’t need “Shift Left” to do that.

What Might “Shift Left” Mean?

I thought about what “Shift Left” might mean, before searching the web to see what people say it means.

At first hearing, the words seem to be talking about the position of things. So perhaps it means moving people physically to sit more on the left. Which seems ridiculous and couldn’t possibly mean this.

But perhaps it is referring to a factory conveyor belt assembly line production system. And we want to “Shift Left” on the conveyor belt system. Assuming the belt is moving from left to right, where left is ‘less produced’ and far right is ‘finished’ then it is actually referring to time, rather than physical position.

Perhaps it means “test early”?

If it does then it seems a tad overkill to say “Shift Left” rather than “Test Early.”

But… Consultant Speak.

Consultant Speak:

The art of saying words that sound simple but actually obscure your message. When you speak Consultant, people pretend to understand you so that they don’t appear ignorant. They then repeat your phrases and spread ambiguity. You are then employed as a consultant later to fix the situation caused by the implementation of the ambiguous words.

Some sample consultancy words to try out: Facilitation, Innovation, Ideation, Initiatives (see also The Office Life Business Jargon Dictionary)

“Test Early.” Surely It Can’t Be That Simple?

A quick web search later (and I’m not picking on these articles; they were simply the first results, so well done on your SEO skills), apparently it does mean test earlier.

Except when it means something else entirely, depending on which article you read.

Testing Early Seems Like a Good Thing to Do

I have nothing against the notion of “Testing stuff earlier.”

  • I think a good tester finds opportunities to test early.
  • I think a good tester finds opportunities to test where other people do not.

I do try to “test stuff earlier.”

And I can do that without the concept of “Shift Left.”

Shift Left implies to me that I have:

  • P (a ‘process’ or a ‘task’ or an ‘activity’)
  • which is done at a point in time (x+5)
  • and I can apply a “Shift Left” transformation to P
  • such that I can now do P at an earlier point in time e.g. (x+3) or (x+4)

And that is all it implies to me.

It offers me an overly simple model of improvement and implies an overly simplistic model of testing, that I do not see value in adopting.

I already have a very simple model of testing that supports testing early without the concept of “Shift Left” and it is a model of testing that can adapt to Waterfall, Agile, DevOps or any other System of Development that I know of.

A Simple Model of Testing

I present to you, a simple model of testing that does not require “Shift Left:”

This simple model of testing has testing as “a process of building a model of the system under test and evaluating the system under test in terms of the model.”

“Evaluate” seems a bit “Consultancy Speak”. Feel free to use ‘compare’ if you want to.

Evaluating involves comparing the system against the model to find out:

  • What the system does that matches the model,
  • What the model expects that is missing from the system,
  • What the system does in addition to the model.

I could model this as (a minimal code sketch follows the list):

  • “==” – matches
    • demonstrate that functionality (at a given time, with given data) repeatably provides the same answer at the level of observation adopted
    • I also include “!=” in “==” because it is a boolean operator
  • “-” – is not present
    • behaviour missing from the system that I expected; this might be a “bug” or it might mean my model needs to change, etc.
  • “+” – is in addition to
    • behaviour in the system that I didn’t expect and that my model did not predict. This might be a “bug”, or a failure in communication where I didn’t know about certain functionality, or a failure in my building of the model where I neglected to incorporate this behaviour, etc.
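
To make the comparison concrete, here is a minimal C# sketch of that evaluation as set operations; the behaviour strings and names are entirely hypothetical:

using System;
using System.Collections.Generic;
using System.Linq;

public static class ModelEvaluation
{
    public static void Main()
    {
        // Behaviours the model predicts vs. behaviours actually observed (hypothetical).
        var model    = new HashSet<string> { "login succeeds", "logout clears session" };
        var observed = new HashSet<string> { "login succeeds", "password appears in log" };

        var matches = model.Intersect(observed); // "==" predicted and observed
        var missing = model.Except(observed);    // "-"  expected but missing from the system
        var extra   = observed.Except(model);    // "+"  observed but not predicted by the model

        Console.WriteLine("==: " + string.Join(", ", matches));
        Console.WriteLine("-:  " + string.Join(", ", missing));
        Console.WriteLine("+:  " + string.Join(", ", extra));
    }
}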

I can change the level of observation and that changes my testing:

  • Exclusively at the GUI,
  • Include the database,
  • Observe the file system,
  • Observe the HTTP traffic,
  • etc.

Remember this is a simple model that may not capture all nuances associated with testing a system.

Where Does Time Fit In?

To incorporate ‘time’ in terms of “Shift Left.”

Depending on when, in the System of Development, I build the model and evaluate the System Under Test against the model, I will be able to evaluate for different properties and behaviors.

For example, I can’t evaluate the GUI until the GUI is written, but I can evaluate the ‘design’ for the GUI or the ‘wireframes’ to see how it supports the expected behavior.

Shift Implies Move

Shift Left implies to me that I “Test the GUI” earlier. Or I move the “Performance Testing” earlier. And this doesn’t work for me because if we’ve ‘moved’ the Performance Testing, we only conduct performance testing earlier and do not do any later. ‘Shift’ suggests a ‘move’ operation to me.

Here’s a diagram of what I mean:

  • We do Performance Testing.
  • We want to “Shift Left.”
  • We shift left.

Now we do performance testing earlier. We have Shifted Left.

But:

  • Now we don’t do any Performance Testing later
  • Perhaps it was important to conduct Performance Testing later on the production environment to make sure that our load balancers work?
  • Perhaps the Performance Testing that we now do early is different from the Testing we did later and now targets different risks.
  • Perhaps we needed to do the early performance testing and perhaps there is still value in the later Performance Testing.

The wording of “Shift Left” doesn’t cover these nuances. Which doesn’t mean that practitioners of “Shift Left” do not consider those nuances, but I have seen teams implement “Shift” as a ‘move’ rather than a nuanced ‘spread and adapt’ operation.

Shift Implies No Change to the System of Development

“Shift Left” concentrates on the Testing and seems to assume that we can do that without changing the System of Development.

When I create Test Approaches I concentrate on the System Of Development and depending on the System of Development I test in different ways.

In the simple model, I test when there are opportunities to test that add value either to:

  • The construction and maintenance of the model
  • Communicating the results of evaluating the model to the System Under Test

If we can “Shift Left” regardless of our approach to developing the system then we have not implemented an effective Software Testing Strategy. i.e. if you can perform exactly the same “Test The GUI” task earlier, then why didn’t you? It could imply some organizational issues.

If we change our System of Development and do not change the approach to testing then, frankly, we are not conducting testing effectively.

Effective testing will take advantage of new opportunities to test differently.

If we start incrementally building the system in smaller chunks then we can take advantage of this to test in shorter bursts earlier rather than a long cycle of testing at the end.

If we introduce automated build processes, then we can take advantage of this to automate the simple, time-consuming assertions during the build process, freeing up more of the available time for exploration.

Think Evolution and Growth instead of Linear Transformation

I worry that Shift Left presupposes a Linear Transformation model, e.g.:

  • A conveyor belt factory assembly line
  • Throw it over the wall to the ‘next discipline’
  • A focus on the ‘roles’ in the team, rather than the System of Development
  • A focus on the ‘process’ rather than the System Under Development

We build Systems that evolve and grow.

An Arborist is a person who looks after individual trees and plants. We wouldn’t ask an Arborist to “Shift Left” and prune early. An Arborist looks after a tree as it grows. The tree changes and ages over time. The Arborist responds to the needs of the tree and the risks facing the tree, over time. The Arborist does whatever is appropriate to help the tree grow and shape within the constraints and possibilities of its environment and the needs of that environment. The tree doesn’t move. The Arborist works around and within it over time.

When we build Systems. We don’t “Shift Left”. We craft a System of Development (which includes Testing) to meet the needs of the System we are building, to respond to Risks that we identify and the issues that we find. The Systems grow and evolve. We need to be good enough to identify improvements we can make and take advantage of opportunities to Test.

I believe that the notion of “Shift Left” is not required if we change our model of System Development.

“Shift Left” does not fit well into my Model of System Development.

You can find a supporting video at https://youtu.be/AaMp5skiwqA

Original Link

Continuous Testing Live: Digital Transformation Fears and Opportunities [Podcast]

Emmet Keeffe of Insight Venture Partners sits down to discuss what’s different about the digital transformation we’re seeing in the enterprise today versus what was going on a decade ago. Learn why the race to digital transformation has so many executives worried, why “doing nothing” is no longer an option, and where Continuous Testing fits into today’s modernization initiatives.

Wayne: Hello, this is Wayne Ariola with Tricentis. I’m here with Emmet Keeffe of Insight Venture Partners. Emmet, thank you for being here.

Emmet: You’re very welcome. Thanks for the invitation.

Wayne: I’m excited to talk to you because I just got one burning question for you. And I think this conversation could last for about an hour. We’ll try to keep it to a tight fifteen minutes. Digital transformation. I mean, I’m going to be a skeptic for a second, right? Haven’t we been digitally transforming for decades now? Hasn’t software been the enabler to digitally transform? And all of the sudden in 2017/2018, Boom! It’s here again. What’s different this time?

Emmet: Well, I’ve been producing roundtables and thought leadership sessions on digital, going all the way back to 2005. And you’re exactly right. We’ve been talking about digital transformation since then. And there’s been many, many topics along the way that we’ve been exploring. I think what’s different now, is the board of directors in these large, global companies are extremely focused on digital.

Wayne: Interesting.

Emmet: They’ve heard about it everywhere. They’re reading about it everywhere. They’re concerned about it. They feel extreme risk associated with it.

Wayne: Okay.

Emmet: So, I think what’s changing right now is, every single one of the global 2000, the boards are extremely concerned about digital. They’re afraid of it. They’re scared to death, really.

Wayne: Really?

Emmet: I mean it’s one of the biggest risks-

Wayne: As business disruption, or disruptors, or …?

Emmet: It’s as big of a risk as cybersecurity.

Wayne: Interesting.

Emmet: It’s a risk, it’s also a massive opportunity.

Wayne: Yes.

Emmet: But literally … We’ve bought a company recently called Diligent, which is for securing board communications and board materials. So, I’m spending more and more time with board directors, and digital is one of the top three issues that boards are concerned about.

Wayne: That’s really interesting. But now, I’m envisioning a board, right? A bunch of really smart men and women sitting around a table. And I just don’t envision these guys as digital transformers. What are they? Savvy tech people all of the sudden?

Emmet: No, they’re not. They want to know what it is. So, I’m starting to spend time with them to provide education on what it is. What does business model transformation mean from a digital standpoint? What does business process transformation mean from a digital standpoint? What are the technologies you need in order to accelerate digital transformation? I think they are in the early days of understanding what it is and how to do it. But, I think what’s changed, is that it’s on the radar of every board. And therefore, every CEO is under extreme pressure and therefore every CIO is under extreme pressure.

Wayne: Got it.

Emmet: I think what’s different is that digital has been a topic and a discussion point, and it’s been on everyone’s radar. But, I think now what’s different is the pressure is coming top down. I think what’s also happened, is in every industry we’ve had a massive disruptor. So, if you look at Tesla for example. It has turned the automotive industry upside down.

Wayne: Absolutely, yeah.

Emmet: And Fintech has turned that world upside down. Amazon has obviously turned retail upside down. So, I think in every industry there are cases where there’s been digital disruption.

Wayne: Got it.

Emmet: And so that’s sort of fueling the fire from the top down standpoint.

Wayne: So I’m counting Amazon and Tesla as kind of net-new disruptors, and in Fintech I think we’re seeing a combination of both: some people who realized really early that they need to straighten things out, and others who have entered into the race. Can these large organizations, with boards of directors and tons of people, turn the tanker?

Emmet: So, it’s fascinating just because I think back over twenty years, what were we talking about along the way? In the early days, everyone thought digital was an app. They thought it was like a web app or a mobile app.

Wayne: A thing, yeah.

Emmet: And then as time went on, I think everyone started to realize, “I need new business models, I need new business process.” I think where it’s all landing now, is everyone’s realizing it’s actually data. That’s fundamentally what digital is all about.

Wayne: Interesting.

Emmet: The more data you have, the stronger position you are in to sort of dominate from a digital transformation standpoint.

Wayne: Very interesting.

Emmet: So, I think you know … We had a session last year in Abu Dhabi and what came out of it was, “data is the new oil.” I think that’s what organizations are figuring out now. “I really need to get my arms around my data. I need to have a chief data officer. I need to understand AI and machine learning, cognitive, quantum, etc.” So that’s where I think it’s all going. And that’s really the asset. And if you look ten, or twenty, or thirty years down the road, if many of these large global organizations are still standing, it will be because they figured out what to do with this data.

Wayne: Interesting. I don’t know if you’ve seen the graphic before, but there’s a fantastic graphic that’s been circulating around LinkedIn and the internet, that shows the growth of Amazon and how they’ve replaced the brick and mortar retailers, right? This conversation comes up time and time again, “Where’s the new Uber? Where’s the new iTunes?” I’m a huge music fan, I used to love going to Tower Records. There’s no record store anymore.

Emmet: Right, right.

Wayne: It’s pretty interesting to see the disruptor model.

One more question, Emmet. Let me back up just a second. Emmet and I are at a conference called Accelerate 2017. And I’m lucky enough to have you in front of me to have this conversation, because of this conference. We had a keynote speaker, Todd Pierce. He gave just a great keynote. I thought it was fantastic. It’s up on our Tricentis website.

One thing Todd showed was the waves of digital transformation. He showed four waves: the internet, the mobile internet, IoT, and this fourth wave, which is kind of AI and augmented reality. He talked about blockchain, he talked about voice. Where do you think we are in that wave? Have we gotten through the IoT layer yet?

Emmet: No. I work for one of the world’s leading private equity firms and we’re a late-stage investor. So what we look for globally are businesses where the revenue has already been doubling multiple years in a row. And if we think it’s going to keep doubling into the future, we’ll make a growth acceleration investment. We invested 165 million in Tricentis because you’ve already put up massive growth and we think there’s a potential to put up more growth like that. So, it’s really interesting to look at what have we invested in but what also have we not invested in.

Wayne: Interesting.

Emmet: We’re not in blockchain yet, we’re not in IoT yet. We’re not really even in AI yet. I mean we’ve got two investments in the AI world. Actually, three.

Wayne: Interesting.

Emmet: Two of them are cybersecurity-related. One is fraud-related. But, AI is very, very early. And like I said, blockchain, and IoT – we haven’t made an investment there yet. So I think those things are very new and very young. And there’s still lots to be done around the web, around mobile. And, fundamentally, around even defining, “What is digital transformation and how are we going to go execute with digital?”

Last weekend we had a leadership event in Italy. And we had a Chief Digital Officer from one of the world’s largest makers of elevators and escalators. They are probably at the forefront of an IoT transformation.

Wayne: Wait, wait, wait. Elevators, escalators? Hold on.

Emmet: That’s right.

Wayne: Digitally transformed?

Emmet: That’s right.

Wayne: Okay. I’m a little lost. You’ve got to add some color to this.

Emmet: Well, I mean, they have three million devices around the world, elevators, and escalators. And they have a billion people that ride on those devices every day. They have thousands of service engineers that are servicing that equipment around the world. So there’s a big opportunity for them to have a network. To have these three million devices communicating. And have all the services and parts distribution, and all that really automated from a digital standpoint, leveraging IoT. And there’s also an opportunity for them to communicate with those billion consumers that are riding their equipment every day.

Wayne: Amazing.

Emmet: The thing that really stood out from the presentation that they gave … And this is a German company, you won’t be surprised.

Wayne: Yes.

Emmet: The execution was absolutely phenomenal. So, the first step for them was to rationalize the current IT infrastructure and IT applications. So they had a very disciplined process. First, rationalize. Second, streamline the operations of the business, to make sure that there was nothing else they could do from an operational efficiency standpoint.

Wayne: Interesting.

Emmet: Then, it was stand up the IoT implementation, put hardware on every device around the world, and really build that IoT infrastructure. Then it was, “Okay how are we going to communicate with these consumers?” I think many people have the wrong opinion, that digital is sort of a creative exercise. And in some ways, yes it is. I mean, you have to be innovative, and you have to unlock creativity. But, the thing that I walked away with, from that presentation, was that you have to be extremely disciplined.

Wayne: Yeah, very.

Emmet: And I think that’s what many organizations, frankly, are lacking right now.

Wayne: How would you rate this one company on the spectrum of IoT adoption?

Emmet: 10 out of 10.

Wayne: Really?

Emmet: Yeah. It was the best presentation I’ve seen on IoT.

Wayne: Amazing. And from an elevator-

Emmet: That’s right.

Wayne: I’m still in shock by that one. Just to be clear. But I think it’s clearly amazing that IoT and digital transformation impact every single industry and every single business.

Emmet: Yes.

Wayne: Hey, I’m going to shift our conversation just in one more direction-

Emmet: Okay.

Wayne: If I can keep you for a couple more minutes?

Emmet: Sure.

Wayne: And I want to talk about … This is going to be a shock … software testing.

Emmet: Sure.

Wayne: I’ve had the opportunity to sit in on some of the conversations that you’ve been talking about around these board members, and it’s been very eye-opening around “testing.” How would you characterize software testing in the whole array of activities that are necessary for digital transformation?

Emmet: CIOs, I think, have two problems that they need to solve when it comes to digital. One is they actually have to go find the budget. The budget’s not going to just come out of the sky. Somebody has to fund that digital transformation. So, a pattern that I’m seeing across various investments that we’re making is that automation is becoming a very hot topic. The really sharp CIOs are looking at their IT spend, and asking the question, “Where are all the places that I can do automation?” Testing is clearly a massive opportunity for automation, and I think CIOs are beginning to see that. We’ve also made an investment recently in a business that’s automating incident diagnosis and incident resolution. Which is another big opportunity. I think one thing we’re hearing from them is that they need to find the budget and they’re looking for opportunities to automate.

The other challenge that CIOs have around digital is actually accelerating the delivery of digital apps. It’s no longer acceptable to have a three, or six, or twelve-month delivery cycle. They’re looking for thirty-day delivery cycles. And obviously, I think what we know about manual testing is that it stands in the way of accelerated software delivery.

Wayne: Absolutely.

Emmet: And I think they’re realizing that they need to remove that roadblock.

Wayne: Interesting, interesting.

Let me guide the question from a risk perspective, a little bit too. What do you believe is at risk for these large organizations going through these digital transformation initiatives? And perhaps, not paying attention to testing from a risk perspective. Is the brand at risk with a software failure?

Emmet: I mean, from my point of view … this goes back to our discussion about boards … the two biggest risks that an organization faces are not executing on the digital transformation and not executing from a cybersecurity standpoint. Those are the two biggest risks in the boardroom. So I think, not finding the budget would create massive risk.

Wayne: Interesting.

Emmet: And not being able to deliver software at speed would also add risk. I mean, if you look at the world of retail, for example. Clearly, Amazon has been extremely disruptive, and every retailer is trying to figure out what they’re going to do to compete. And I’ve met with many of the world’s largest retailers and everyone is going to be accelerating to try and, maybe not surpass Amazon, but at least have a way more transformative consumer experience. Imagine if one of those retailers just kind of sits at the starting line, never finds the budget, never is able to deliver digital apps at speed. They’re going to be extinct. They’re going to be in big trouble.

Wayne: Yeah. And I think we see that, right? I think we’ve seen the first and second wave of it, actually. And I think it’s going to be more and more.

Emmett, thank you very much-

Emmet: You’re very welcome.

Wayne: For spending time with us. I think it’s been very insightful. One of the things that we exposed to the Tricentis community at Accelerate 2017 was really what the CXOs’ opinion of software testing was. And it was enlightening how the top-down pressure is beginning to mount, and this efficiency needs to catch up.

Emmet: Great.

Wayne: If I were to try to summarize what the CIOs brought to us, would you agree that that was the message?

Emmet: I mean, they were loud and clear. The fact that five Global 2000 CIOs would come to our conference in itself-

Wayne: Yeah, that’s pretty amazing.

Emmet: It says a lot about how important this topic is. But, clearly, as I mentioned earlier, they have to find the budget, and testing is a massive opportunity to find the budget. And they have to find the acceleration. And I think more and more that testing is going to hit the radar of CIOs.

Wayne: Well, I couldn’t agree with you more. This is why I’m with Tricentis. Again this is Wayne Ariola with Tricentis. I am sitting with Emmet Keeffe with Insight Venture Partners. And thank you very much for your time.

Emmet: Thank you.

Original Link

75 Best Software Testing Blogs

Where to Find the Tester’s Corner of the Internet

When we’re not testing, we love to keep learning new skills, strategies, and approaches to talk about in discussions with the test community.

So, we’ve scoured the web, searching high and low for the greatest testing blogs, from those by junior testers to senior test consultants, tool vendors, and developers, and compiled one gigantic list. Some of the sites are very wide in scope, and others are more specifically geared toward certain topics. Here are 75 of the best software testing blogs we’ve encountered thus far, listed in alphabetical order.

75 of the Best Software Testing Blogs and Websites

1. Abstracta

How crazy…the first blog that appears on our list is the one that brings you this post! We blog weekly about topics like testing in CI/CD, performance engineering, automation, DevOps, tool reviews, and more.

2. Adventures in Automation

TJ Maher, a tester, TechBeacon contributor, and Ministry of Testing Boston organizer, created his blog for manual testers looking to make the switch to automation. Browse his table of contents to easily navigate, or check out posts from categories ranging from “beginner” to “Appium” to “code examples” and more!

3. Agile Testing

Agile Testing is run and owned by Grig Gheorghiu. He brings his engineering expertise to topics like performance and load testing, monitoring, and all things Agile. This site is great for when you’re looking for very technical explanations.

4. Agile Testing Days Blog

Agile Testing Days is a testing festival (aka conference) in Europe that has also spread to the US (we’ll be attending Agile Testing Days USA this year). Visit the blog to see what some of the conferences’ speakers have to say about Agile Testing, who to keep an eye on in the industry, and more!

5. Angie Jones’ Blog

Angie Jones is a senior automation engineer at Twitter who is also a master inventor with over 25 patents in the US and China. A frequent conference speaker and thought leader, she blogs about trends in automation, how to become an automation engineer, testing conferences, and more.

6. Applitools

One of our partners here at Abstracta, Applitools, offers the only commercial-grade, visual AI-based cloud engine that validates all the visual aspects of any web, mobile and native app in a fully automated way. Its blog features news about the product and posts from several industry thought leaders. It’s a must-read for anyone interested in modern software testing practices!

7. Agile Testing with Lisa Crispin

Lisa Crispin co-authored the “textbook” of Agile testing, Agile Testing: A Practical Guide for Testers and Agile Teams. Her blog features not only a wealth of information about Agile testing, but you can also see pictures of her and her miniature donkeys. Yes, donkeys!

8. Angry Weasel

Every Friday, Angry Weasel (aka Alan Page, Director of Quality Services at Unity Technologies) shares five things he is either reading, thinking about, or doing. Where does the name Angry Weasel come from? It was an idea for the name of his old band; he still owned the domain and decided to resurrect it in the form of a testing blog!

9. AskTester

We’ve watched this blog blossom in the past few years. A creation of Thanh Huynh, this site provides a plethora of resources to help you test better: beginner guides, online resources, guest posts, and a dash of humor!

10. A Tester’s Journey

Elisabeth Hocke is a self-described “agile tester, sociotechnical symmathecist, team glue, volleyball player, and game lover” who speaks at several testing meetups and conferences. Her blog is filled with insightful takeaways from these events.

11. Association for Software Testing

If you hold a self-aware, self-critical attitude toward testing, then you’ll want to follow the Association for Software Testing’s blog. Their content takes a scientific approach to developing and evaluating techniques, processes, and tools.

12. Automation Awesomeness

A test automation architect from Rhode Island, Joe Colantonio uses his blog to help others reach their testing goals as well as learn from his automation wins and failures. Not only does he blog, but he also hosts webinars and podcasts and has created online conferences for testers.

13. Automate the Planet

Created by Anton Angelov, Automate the Planet is a site to help those who are still only testing manually to embrace automation, and to do so with success! From the blog, you can read all about design architecture, design patterns, tools, DevOps and CI, and more.

14. BlazeMeter

BlazeMeter’s blog is THE virtual encyclopedia for everything you need to know about load testing and JMeter. Everything a tester or developer might want to know about performance can be found here.

15. Cartoon Tester

While this blog hasn’t been updated for a few years, you can still enjoy these fun cartoons by Andy Glover that illustrate the life of a tester, the nature of bugs in software, and more!

16. Chris Kenst

A test engineer at Laurel & Wolf, Chris is also a testing teacher and a Master Scuba Diver Trainer. His blog is full of interesting insights from his experiences as a tester. You can also check out his resources page for great reads, conferences to go to, templates, tools, and more.

17. Creating Software — A Sisyphean Task?

For those of you who don’t know what a sisyphean task is (we didn’t know what it meant either), it means a task that is never-ending. Most testers would agree wholeheartedly that software testing never ends as long as the software is being used! Adam Knight writes in his blog about testing, personal experiences he’s had, soft skills, and more.

18. CrossBrowserTesting

CrossBrowserTesting is a SmartBear company whose tool helps testers run manual, visual, and selenium tests in the cloud on 1500+ real desktop and mobile browsers. In the CrossBrowserTesting blog, you can find a treasure trove of information on test automation, development, UX/UI, and more.

19. Curious Tester

The Curious Tester, aka Parimala Hariprasad, has 15 years of experience working in various roles with a successful track record in leadership, product management, user experience, setting up independent practices (UX, QA), tinkering with boutique startups and coaching. See her blog for great insights on all the above!

20. DevelopSense Blog

If you only read one or two software testing blogs per year, make sure to check out this one by Michael Bolton, a co-creator of the Rapid Software Testing Methodology, one of the most sought-after training programs in testing. In the DevelopSense blog, you can read all about Bolton’s philosophy on testing and how he approaches the profession.

21. DZone

While not solely dedicated to testing, DZone is a great, centralized hub to read about testing and development from multiple sources. In this technical library, members can access free articles, guides, and cheat sheets across 14 “Zones,” or areas of developer interest, from AI to web dev and everything in between.

22. Test Huddle

Test Huddle is an online resource for Europe’s largest testing conference, EuroSTAR. In the blog, you can find articles about every aspect of testing from a wide range of contributors.

23. Evil Tester

The Evil Tester (aka Alan Richardson) helps teams to deliver, automate, and test better with his special blend of skill, attitude, and pragmatism. You can find several how-to’s on his blog, many of them accompanied by helpful videos.

24. Gerald Weinberg’s Secrets of Writing and Consulting

A famous computer scientist and pioneer, Weinberg has published more than 40 books and more than 400 articles in which he incorporates his knowledge of science, engineering, and human behavior. He started his career as the architect of Project Mercury’s space tracking network and the designer of the world’s first multiprogrammed operating system. You won’t want to miss his thoughts here.

25. Gil Tayar

“Test all software! And lots of love.” This positive blog is packed with practical guidance on how to test front-end code. You can even save time searching for the best articles by checking out the curated list of testing posts every 2 months.

26. Google Testing Blog

Ever wonder what the tech monolith thinks about test automation? Unit testing? Code smell? The Google Testing Blog is a great outlet that Google’s testers use to share their insights and reveal how they go about testing their products.

27. Gurock

Gurock is the creator of one of our favorite test case management tools, TestRail. We find ourselves tweeting its blog posts all the time! The Gurock Quality Hub is dedicated to software quality, QA, testing, and security, with over 34,000 subscribers.

28. Hiccupps

Do testers need bugs? James Thomas poses questions like this in his blog while also sharing learnings from events and fun sketchnotes he makes.

29. Investigating Software

Manumation, randomization, gamification…learn what these all have to do with testing in Pete Houghton’s blog on thinking like a tester. Houghton has helped companies including BP, Sky-TV, John Lewis, the BBC, the Financial Times, and BBC Worldwide find out more about the risks they face and helped their teams deliver better quality products more quickly.

30. I’m a Little Tester

In her blog, Corina Pip posts about testing with Java, Selenium, TestNG, Maven, Spring, IntelliJ and more. And, she doesn’t only blog about testing! Check out her other blog, The Turquoise Coconut where she writes about travel, food, and photography.

31. James Bach

One of the most influential testers to this day, James Bach takes a fascinating stance on almost everything testing related and isn’t afraid to speak his mind. He is the co-creator of Rapid Software Testing with Michael Bolton and a brilliant writer. Do not skip this blog!

32. Janet Gregory

Co-author of Agile Testing with Lisa Crispin, Janet also maintains a blog in which she reviews books related to testing, shares her experiences and findings, etc. If you want to learn from the best, you’ve got to read from her.

33. Jason Arbon

Looking for testing articles focused on AI and automation? Look no further! Jason Arbon’s blog explores the blend between humans and machines. He answers the big questions, like “What stops AI from destroying us?”

34. Katrina Tester

A kiwi and a female tester, Katrina is highly active in the testing community and a thought leader who has spoken at several conferences and published a book, A Practical Guide to Testing in DevOps. We are great fans of her work and share her blog posts often.

35. Kinga Witko

“To tell somebody that he is wrong is called criticism. To do so officially is called testing.” Kinga’s blog is atypical from other testing blogs, as she makes her posts fun to read with a creative design, excellent use of memes and GIFs and a unique approach to explaining things.

36. Knightly Builds

In this blog, Simon Knight, tester turned product manager at Gurock, shares content every Monday. His posts feature thoughtful quotations, articles, and day-to-day musings.

37. Maaret Pyhäjärvi

Maaret takes a look into her “crystal ball” to imagine the future of testing while also looking at the past and present. She uses her blog as a way to continue to learn, and in that way, helps her readers learn as well!

38. Maverick Tester

Maverick Tester is the blog of Anne-Marie Charrett, an internationally recognized expert in software testing and quality engineering based in Sydney, Australia. Besides her blog, check out her program, Speak Easy, dedicated to helping increase diversity in tech conferences.

39. Ministry of Testing

When it comes to testing resources, the Ministry of Testing is the mother lode. Most of the blogs on this list appear within the community and in the testing feeds section. If you are looking to become a better tester, go to conferences to meet other testers, gain a new skill, etc., this is the place for you!

40. Mr. Slavchev

This blog, a.k.a. “The cave of the tester troll,” is where Mr. Slavchev goes to ruminate on testing, especially its scientific aspects and the critical thinking involved.

41. Mysoftwarequality

From Ireland, Augusto Evangelisti dreams of organisations where people are empowered to deliver products that matter. Joyful organisations. His blog touches on different areas in testing, seeking to bring empathy into knowledge work!

42. New Relic

New Relic is an application performance monitoring tool with over 15,000 customers both large and small. You can visit the New Relic blog for the latest news, tips, and insights for everything in the world of New Relic and digital intelligence.

43. Offbeat Testing

Dave Westerveld is a highly technical tester who uses his blog to continue learning and shares his thoughts and experiences in a fun way. He relates testing to daily life in several posts, to get us to think out of the box.

44. QA Intelligence by Practitest

A referential blog on testing, one of its highlights is the annual surveys and reports on the state of testing, which help us get a better sense of how testing is doing on the ground today, across organizations all over the world.

45. QASymphony

This blog regularly publishes content on testing tools, events, and test automation. You can filter posts based on your role, the challenge you’re facing as well as the author and category, making this blog a breeze to navigate.

46. Richr Testing

Another Aussie who makes the list, Rich Rogers, has one of the most popular software testing blogs and is the author of “Changing Times: Quality for Humans in the Digital Age.”

47. SauceLabs

A master of continuous testing in the software development world, SauceLabs’ blog offers tremendous insights on automation, CI/CD, events, and more.

48. SDTimes

SDTimes is a great place to get news across all areas of tech and software development, not just testing. Never be in the dark with this resource!

49. SmartBear

SmartBear has a whole suite of tools to help make testers’ lives easier and shares good practices and methodologies in its blog.

50. Software Testing Club

Not a blog, this site is an online “club” by the Ministry of Testing. Here you can interact with other testers about different areas of testing, mentoring, events, and there’s even a place for ranting. If you feel like being social, this is the place for you.

51. Software Testing Help

This site is “the biggest resource of software testing books, software testing templates, QA testing interview questions and answers, testing QA training, automation testing tools, software testing tutorials, software testing pdf, software testing material, QA videos, software testing certification guides.” Need we say more?

52. Software Testing Magazine

Software Testing Magazine is a free website dedicated to presenting articles, blog posts, book reviews, tools, news, videos, and other resources about unit testing, integration testing, functional or acceptance testing, load or performance testing, bug tracking, and test management in software development projects.

53. Software Testing Tricks

How to test a microwave? How to test a pen? How to test a water bottle? You’ve probably never asked yourself these questions. Software Testing Tricks explores them and challenges you to be the best tester you can be.

54. StickyMinds

A high-quality testing magazine, StickyMinds features some of the most unique articles, interviews, and videos by some of the biggest names in testing. Members can also access resources like white papers and downloads.

55. TechBeacon

From development and testing to security, this digital hub by software development and IT operations professionals covers the whole gamut.

56. Tech Target

Tech Target, or searchsoftwarequality.com, is an online community for developers, architects, and executives who are interested in building high-quality software or are involved in software project management, software testing and quality assurance (QA), application performance management (APM), application lifecycle management (ALM), and many more related topics.

57. TestLodge

TestLodge is an online test case management tool, and its blog is great for junior testers and non-testers alike looking for explanations of the basic terms and concepts in testing.

58. Tester Stories

Does anything sound more “testy” to you than the subtitle of this blog by Jeff Nyman?: “TWICE UPON A TIME, IN ANOTHER SPACE, NO DISTANCE IN ANY DIRECTION FROM HERE …” Enough said.

59. Testing Curator

A very helpful resource for building this list, the Testing Curator, aka Matt Hutchison, finds some of the best and even most obscure articles about testing and shares a new batch of his favorites, called “Testing Bits,” each week. Thank you, Testing Curator!

60. Testing is Believing

Check out this blog for nuggets of wisdom in testing guides and models. Each post is topped with hilarious memes featuring characters from Lord of the Rings and Star Wars.

61. Test With Nishi

Read all about software testing, Agile, DevOps, certifications, events…all from the eyes of Nishi, a consulting corporate trainer, Agile enthusiast and a software tester at heart!

62. Testzius

Testzius, by Michael Fritzius, is a blog that’s always helpful and never boring! He shares his shower thoughts as well as posts on epic fails, throwdowns, theory, and more.

63. The Life of One Man

The writer behind The Life of One Man gives practical advice for being better testers, managing one’s career, honing soft skills, etc. It may just be thoughts of “one man,” but they are worth a read!

64. The Pragmatic Tester

Nicky West is a self-described “IT all-rounder” who has been testing for ten years now, with prior experience in development and business analysis. From her blog, you can learn how to get rid of step by step test cases, how to write valuable unit tests, and more!

65. Think Like a Tester

Currently a QA Lead at Paylocity, Kristin Jackvony writes to help us “think like a tester.” Her blog is filled with how-to’s ranging from setting up Maven to testing APIs with Postman.

66. Think Testing

Trevor Atkins has been involved in hundreds of software projects over the last 20+ years and writes in his blog, ThinkTesting, about how to think through testing. Some of his blog categories include agile testing, team building, automation and tools, and more!

67. Thomas Crabtree

“IT, learning, testing and stuff,” the subtitle of Thomas Crabtree’s blog says it all.

68. Thoughts on Testing

Darryn Downey’s blog is definitely not a downer. Try to say that 5 times! Thoughts on Testing shares entertaining posts on testing, while sprinkling in some great pop culture references.

69. ThoughtWorks Software Testing Blog

A champion for positioning testing as a core development activity, ThoughtWorks’ software testing blog is full of enticing resources that are just too good to ignore, with contributions from some of the best minds in testing.

70. Trish Koo

A former engineering manager at Google, Trish is an Aussie and a global leader in the software testing space having given keynotes, workshops and talks all around the world. What we appreciate most about her blog are the many hand-drawn illustrations that go with her posts.

71. Usersnap

With Usersnap’s blog, it’s all about collecting user feedback early in the dev process and better tracking of bugs. Learn how to be more effective as a development team, from communication and motivation to the more technical aspects.

72. Viv Richards

Meet Viv Richards. This self-proclaimed TDT (Tea Drinking Tester) recently made the move from software developer to tester. Follow his blog to learn about testing topics like Postman API testing, automating visual regression testing, and perhaps advice for tea and test pairings.

73. WarpTest

Stay current on the most cutting edge of technology, from augmented reality to wearables to Windows Mobile, thanks to WarpTest, which also elucidates its point of view on each topic.

74. Women Testers

Women Testers is a quarterly e-publication by women testers, for women testers from all over the world, to bring out the best in one another. Each publication features exclusive articles by ladies who test and who often speak about their experiences at conferences. You can also find events, jobs, and conferences on the website.

75. 5blogs

A delight to visit on a daily basis, this blog has a very simple concept. The owner shares 5 of other people’s blog posts they’ve read that day, spanning several topics from testing to productivity.

Closing Notes

If we missed any blog that you love, let us know in the comments!

Also, as many of our testers are from Uruguay, we also know all of the best blogs, resources, etc., in the Spanish-speaking testing community. Check out Federico Toledo’s list of the best Spanish software testing websites.

Original Link

How to Tackle the Learning Curve of your First Major Engineering Project


Six weeks into my first full-time engineering job I was tasked with a major project that put my programming abilities to the test. I had to switch from the language and mindset I was most familiar with, JavaScript, and dive head first into Nylas’s 4-year-old Python codebase in order to build an API to sync over 800 million contacts (and rising). What ensued was a 16-week-long project that deepened my understanding of engineering fullstack systems at scale and taught me about the challenges of tackling your first major project.

Many engineers question their abilities early on. Diving into a new company with an entirely new codebase as well as a unique set of infrastructure, monitoring systems, best practices, and workflows can feel overwhelming. My goal with this blog is to offer advice to other engineers who are early in their careers. But first, let me provide some context about the product that Nylas offers and the project I was tasked with: updating our Contacts API.

If you’re curious about the technical details of the project, check out my previous post. What you need to know for now is that the Nylas APIs allow developers to build email, calendar, and contacts functionality (and two-way sync) into their apps. From early product feedback, we found that customers were asking for new features in our contacts API. We previously had read-only functionality for very limited contact data, and customers were asking for more. In short, this was by far my biggest and most complex fullstack project to date. I was excited, albeit nervous, to tackle this project, and I learned a lot along the way.

Takeaways

Writing Good Tests Is the Best Way to Ensure You Are Writing Good Code

The importance of writing good tests is one of those lessons that resurfaces with every project I work on. Your objective as a programmer is to write good code. The best way to evaluate your code, and continue to evaluate it as other factors in the code or environment change, is to write good tests. It’s a simple concept in theory, but one that’s surprisingly difficult to implement, even for people who aren’t new to the industry.

On this project I learned to appreciate the power of unit tests for debugging. As the name suggests, unit tests isolate and test a specific unit of interest. Previously, when I would run into a bug that I was unsure about, I would immediately dive into trying to solve it. This was usually before I had a good understanding of what was going on and before I could replicate it. This meant that I spent more time debugging, had less confidence in my solutions, and had very little understanding of the causes.

Using unit tests to debug an issue is a different process. With unit tests in place, I start by isolating the problem and write a test that mocks out everything else before I dive into debugging. It’s essential to do this because you want to make sure that the buggy behavior isn’t influenced by other elements. Once you have the problem isolated, the test starts failing because of the bug. Now this failing test is a solid way to measure when the bug is fixed and if it will stay fixed. Once I have the test or tests in place, I can confidently dive into debugging knowing that the goal is to get the test to pass.
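To make that workflow concrete, here is a minimal sketch in JUnit (the style used later in this document). The names are hypothetical and not from the Nylas codebase, which is Python; in a real case you would also stub out collaborators such as the network or database so the test exercises only the unit in question.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ContactNormalizerTest {

    // Hypothetical unit under test: normalizes contact display names before syncing.
    static class ContactNormalizer {
        String normalize(String raw) {
            // The fix under test: trim the surrounding whitespace that caused duplicates.
            return raw.trim().toLowerCase();
        }
    }

    @Test
    public void normalizeStripsWhitespaceAndCase() {
        // Written first, while it still failed; once green, it pins the bug down
        // and keeps catching it if the behavior ever regresses.
        assertEquals("ada lovelace", new ContactNormalizer().normalize("  Ada LOVELACE "));
    }
}

Once a failing test like this reproduces the bug in isolation, the debugging goal is simply to make it pass.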

Although writing tests takes time that isn’t directly going towards the product, it will ultimately save time.

Use Your Mentor as a Resource

Joining a new company can be intimidating because there is so much to learn. To lessen the learning curve, Nylas pairs new hires with senior engineers. For the Contacts v2.0 project, I worked with Karim as my coding partner, mentor, and overall resource.

In the beginning of the project, we worked exclusively through pair programming with me typing and him guiding and explaining along the way. As I gained confidence and understanding, I slowly began to take over and do larger pieces on my own. For a while, I still relied on him heavily to field questions as I worked. Over time, my volume of questions decreased. Eventually, I became the project lead and the go-to resource to answer questions coming from the project manager, our customers, and my fellow engineers.

This pairing was an essential part of my growth in the first few months at Nylas. If your company doesn’t already do pair programming, I recommend suggesting it because it is incredibly beneficial, especially for new hires.

Without Karim’s mentoring throughout, my project would not have been as successful, and I would not feel as confident in my abilities going forward.

Ask Questions Even When You Don’t Know What to Ask

In the beginning of Contacts v2.0, I often didn’t speak up when I was confused because I felt that I didn’t even know what to ask. Some concepts felt so far beyond my understanding that I wasn’t able to piece together a question. I was embarrassed to ask what might be seen as basic questions, and even more worried that it might expose some fatal flaw in my ability. So instead, I would silently trudge along in my haze of confusion, hoping that time would eventually bring clarity.

I quickly learned that this was not the best strategy. While it’s possible that time might bring clarity, often it doesn’t. And no matter what, clarity would have come more quickly had I just asked. Instead of my silence making me look like I understood exactly what was going on, it simply hid that I wasn’t following along. These gaps in my understanding would almost certainly have backfired later in the project at a time when it was more critical. In addition, if it came out at that point that I not only didn’t understand the current concept but had been in the dark all along, it would have reflected poorly on my entire performance.

However, simply knowing that you should ask a question doesn’t always help with knowing what to ask. My strategy for asking questions when you can’t easily piece together the problem is to back up. If you are nested deep in some complex process, it can be helpful to keep taking steps back until you reach a level you feel comfortable with. If you have some uncertainty regarding a function, follow the stack trace up until you get to a process you are familiar with. If all else fails, be very forthcoming about your confusion and ask your mentor to start with the basics.

It’s also important to keep in mind that you’re probably not the only one to have these questions. Asking questions is not a sign of weakness, but a desire to learn.

Reflection Takes Time, but Is Time Well Spent

When I first joined Nylas, I felt like every day was full of new terms, processes, techniques, tips and projects. I would ask a lot of questions and receive great answers, but I felt like a lot of the information wasn’t sticking.

To get more information to sink in and feel like I could answer questions about previous work, I started keeping a work log. It’s very simple; just a Google doc where I write things that come up throughout each workday — tips that I learn, how to use certain scripts, bugs that I run into, questions that arise, and the subsequent answers to those questions.

In a fast-paced startup environment it’s easy to always be in forward motion, finishing one project and moving to the next. Keeping this log forces me to slow down and reflect on my work. Sometimes, the process of writing things down reveals gaps in my understanding.

In a field where we are constantly needing to stay on top of new technologies, strategies that ease the learning process are important. Keeping a Nylas learning diary is one thing that helps me.

My experience with this project — from questioning my abilities as a programmer, to slowly becoming the lead engineer, to beta testing directly with customers, to updating the SDKs, and ultimately launching — has given me greater confidence. I am excited for all the learning to come in the next project. Every engineer’s experience is different, but I hope that sharing what I learned in my first major engineering project will help others!

Original Link

End-to-End Test Automation Using Nightwatch

Nightwatch is an automated testing and continuous integration framework based on Node.js and Selenium WebDriver. It is a complete browser (End-to-End) testing solution. It can be used for writing Node.js unit tests.

Nightwatch uses the JavaScript language (Node.js) and CSS/XPath selectors to identify elements. It has a built-in command-line test runner which can run tests sequentially or in parallel, and either all together, by group, by tag, or one at a time (example commands follow Step 12 below).

Installation

Step 1: Download and install Java.

Step 2: Install Node.js.

Step 3: Install Nightwatch. In the command line, navigate to any directory and type npm install nightwatch.

Step 4: Create a folder structure as shown below where Project is the root.

Project
├── lib
│   ├── nightwatch
│   ├── npm
│   ├── selenium
│   └── chromedriver.exe
├── tests
├── nightwatch.js
└── nightwatch.json

Step 5: Copy the nightwatch folder (which was downloaded via npm in Step 3) and place it at Project/lib/nightwatch.

Step 6: Open the Node.js modules directory (C:\Program Files\nodejs\node_modules). Copy the “npm” folder and place it in Project/lib.

Step 7: Download Selenium Server. Place selenium-server-standalone-{VERSION}.jar in Project/lib/selenium.

Step 8: Download WebDriver (Browser Driver) (Chrome is used in the example) and place it in the lib folder (Project/lib) directory.

Step 9: In the root folder, create a “nightwatch.js” file containing the following line: require('E:/Project/lib/nightwatch/bin/runner.js');

Step 10: In the root folder, create the configuration file (nightwatch.json) as shown below:

{ "src_folders" : ["tests"], "output_folder" : "reports", "custom_commands_path" : "", "custom_assertions_path" : "", "selenium" : { "start_process" : true, "server_path" : "lib/selenium/selenium-server-standalone-3.9.1.jar", "log_path" : "", "host" : "127.0.0.1", "port" : 4444, "cli_args" : { "webdriver.chrome.driver" : "lib/chromedriver.exe", } }, "test_settings" : { "default" : { "launch_url" : "http://localhost", "selenium_port" : 4444, "selenium_host" : "localhost", "silent": true, "screenshots" : { "enabled" : true, "path" : "reports", "use_xpath": true }, "desiredCapabilities": { "browserName": "chrome", "javascriptEnabled": true, "acceptSslCerts": true } }, "chrome" : { "desiredCapabilities": { "browserName": "chrome", "javascriptEnabled": true, "acceptSslCerts": true } } }
}

Step 11: Navigate to the “tests” folder, write the following code, and save it as “newToursCss.js”:

module.exports = {
  'Login test new tours': function (client) {
    client
      .url('http://newtours.demoaut.com/')
      .setValue('input[name="userName"]', 'mercury')
      .setValue('input[name="password"]', 'mercury')
      .click('input[name="login"]')
      .assert.title('Find a Flight: Mercury Tours:')
      .end()
  }
};

Step 12: Open cmd, navigate to C:\Project, and type the command node .\nightwatch.js .\tests\newToursCss.js

If everything goes well, the browser opens and the test results appear in the cmd output.

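The built-in runner also accepts filtering flags, so tests can be run one at a time or in sets. The group and tag names below are hypothetical examples (groups and tags are assigned in your test files and folder layout):

node .\nightwatch.js --test .\tests\newToursCss.js
node .\nightwatch.js --group smoke
node .\nightwatch.js --tag login

Parallel execution can also be enabled via the test_workers setting in nightwatch.json.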

XPath Selector

By default, Nightwatch uses the CSS selector strategy. In order to use the XPath location strategy, call the useXpath() method; call useCss() to switch back to the CSS location strategy:

module.exports = {
  'Login Sales': function (client) {
    client
      .url('https://login.salesforce.com/?locale=in')
      .assert.visible('input[name="pw"]') // tagname[attributename=value]
      .assert.visible('img#logo') // if an id is present: tag#id
      .useXpath()
      .assert.visible('//input[@name="Login"]')
      .setValue('//input[@type="email"]', 'v')
      .useCss()
      .waitForElementVisible('input[name="pw"]', 30000)
      .end();
  }
};

BDD Expect Assertion

Nightwatch provides a BDD-style expect assertion library which improves the readability and flexibility of assertions:

module.exports = {
  'BDD EXPECT': function (client) {
    client
      .url('https://login.salesforce.com/?locale=in')
      .pause(1000);
    // expect element to be present in 1000ms
    client.expect.element('body').to.be.present.before(1000);
    // expect element <body> to have css property 'display'
    client.expect.element('body').to.have.css('display');
    // expect element <#username> to be an input tag
    client.expect.element('#username').to.be.an('input');
    // expect element <#username> to be visible
    client.expect.element('#username').to.be.visible;
    client.end();
  }
};

Test Hooks

Using before[Each] and after[Each] Hooks

Nightwatch uses before/after and beforeEach/afterEach hooks in tests. The before and after will run before and after the execution of the test suite while beforeEach and afterEach run before and after each test case.

module.exports = {
  before : function(browser) {
    var assert = require('assert');
    console.log('Before Suite');
    browser
      .url('http://newtours.demoaut.com/')
      .windowMaximize()
  },
  after : function(browser) {
    console.log('after Suite');
    browser.end();
  },
  beforeEach : function(browser) {
    console.log('before test case');
  },
  afterEach : function() {
    console.log('after testcase');
  },
  'step one' : function (browser) {
    browser
      .waitForElementVisible('body', 1000)
      .setValue('input[name="userName"]', 'mercury')
      .setValue('input[name="password"]', 'mercury')
      .click('input[name="login"]')
  },
  'step two' : function (browser) {
    browser
      .useXpath()
      .click("//a[text()='SIGN-OFF']")
  }
};

Output: the console logs “Before Suite” once at the start, “before test case” and “after testcase” around each test step, and “after Suite” once at the end.

Global Hooks 

Instead of writing individual hooks for a suite, the hooks can be configured globally and are run before and after test suites and test cases.

In order to use the Global hook:

Step 1: Create a Global.js file in the root folder and add the desired hook functions:

module.exports = {
  before : function(cb) {
    console.log('GLOBAL BEFORE')
    cb();
  },
  beforeEach : function(browser, cb) {
    //console.log('GLOBAL beforeEach')
    cb();
  },
  after : function(cb) {
    //console.log('GLOBAL AFTER')
    cb();
  },
  afterEach : function(browser, cb) {
    console.log('GLOBAL afterEach')
    cb();
  }
};

Step 2: Configure the nightwatch.json file.

Add the variable “globals_path” : “Global.js”

{ "src_folders" : ["tests"], "output_folder" : "reports", "globals_path" : "Global.js", "selenium" : { "start_process" : true, "server_path" : "lib/selenium/selenium-server

Run any test to see the global hooks in action.

Capturing Screenshots

Screenshots can be captured by using .saveScreenshot():

module.exports = {
  'Login Sales': function (client) {
    client
      .url('https://login.salesforce.com/?locale=in')
      .assert.visible('input[name="pw"]')
      .assert.visible('img#logo')
      .useXpath()
      .assert.visible('//input[@name="Login"]')
      .setValue('//input[@type="email"]', 'v')
      .useCss()
      .waitForElementVisible('input[name="pw"]', 30000)
      .saveScreenshot('./reports/sales.png')
      .end();
  }
};

The screenshot (sales.png) will appear in the reports folder in the project directory.


Original Link

Mocking Files for JUnit Testing a Spring Boot Web Application on Synology NAS

For a Spring Boot application that checks backup files on a Synology RS815+ NAS, we wanted to be able to easily test against the files stored on this NAS without having to copy the 7 TB that were stored on it.

Ideally, we wanted to create the same file structure to use the web application in a Spring development profile, as well as use these file structures in a JUnit test.

Introducing FileStructureCreator

We started by creating a new class FileStructureCreator which looks like this:

@Getter
@Setter
public class FileStructureCreator implements Closeable {

    public static final Path baseTestPath = Paths.get("testFiles");

    private Path fileStructureBasePath;

    public static FileStructureCreator create(Path file) {
        return createStructure(file, false);
    }

    public static FileStructureCreator createTempDirectory(Path file) {
        return createStructure(file, true);
    }

    @SneakyThrows
    private static FileStructureCreator createStructure(Path file, boolean createTempDirectory) {
        FileStructureCreator fileStructureCreator = new FileStructureCreator();
        if (!Files.exists(baseTestPath)) {
            Files.createDirectory(baseTestPath);
        }
        String path = baseTestPath.toString()
                + (createTempDirectory ? "/" + UUID.randomUUID().toString() : "") + "/";
        Path basePath = Paths.get(path);
        fileStructureCreator.setFileStructureBasePath(basePath);
        FileUtils.forceMkdir(basePath.toFile());

        try (Stream<String> stream = Files.lines(file)) {
            stream.forEach(line -> {
                Metadata fileMetaData = Metadata.from(line);
                Path fileEntry = Paths.get(path + fileMetaData.getWindowsSafeFilename());
                try {
                    FileUtils.forceMkdir(fileEntry.getParent().toFile());
                    if (!Files.exists(fileEntry)) {
                        Files.write(fileEntry, line.getBytes());
                        Files.setLastModifiedTime(fileEntry, FileTime.from(fileMetaData.getModificationTime()));
                    }
                } catch (IOException ignore) {
                    throw new RuntimeException("Exception creating directory: " + fileEntry.getParent());
                }
            });
        }
        return fileStructureCreator;
    }

    @Override
    @SneakyThrows
    public void close() {
        if (fileStructureBasePath != null) {
            FileUtils.deleteDirectory(fileStructureBasePath.toFile());
        }
    }
}

This basically creates the whole directory structure and the necessary files. We just need to pass it a base file which holds the metadata of the file structure.

Each metadata line holds a timestamp, a file size, and the path for the file, separated by tabs. It looks like this:

2016-04-05T10:30:15.012345678    5120    backupftp/@eaDir/sharesnap_share_configuration/SYNO@.quota
2018-02-26T00:00:09.012345678    169     backupftp/@eaDir/sharesnap_share_configuration/share_configuration
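The Metadata class referenced by FileStructureCreator isn’t shown in the original post. As a rough idea of what it needs to do, here is a minimal sketch that mirrors only the three calls the creator makes; everything in it, including the field layout and the character replacement, is an assumption based on the tab-separated format above:

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class Metadata {

    private final LocalDateTime modificationTime;
    private final long size;
    private final String path;

    private Metadata(LocalDateTime modificationTime, long size, String path) {
        this.modificationTime = modificationTime;
        this.size = size;
        this.path = path;
    }

    // Parses one "<timestamp>\t<size>\t<path>" line, as produced by the find command below.
    public static Metadata from(String line) {
        String[] fields = line.split("\t", 3);
        return new Metadata(LocalDateTime.parse(fields[0]), Long.parseLong(fields[1]), fields[2]);
    }

    // FileStructureCreator feeds this to FileTime.from(...), which accepts an Instant.
    public Instant getModificationTime() {
        return modificationTime.toInstant(ZoneOffset.UTC);
    }

    // Replaces characters that are illegal in Windows file names, keeping the directory separators.
    public String getWindowsSafeFilename() {
        return path.replaceAll("[:*?\"<>|]", "_");
    }
}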

On our Synology NAS, we can then easily generate a file with the whole tree structure of a (specific) directory by executing this command:

find backupftp -type f -printf "%TY-%Tm-%TdT%TH:%TM:%.12TS\t%s\t%p\n">test/backupftp.files.txt

Copy the generated file from your Synology NAS to your project.

In a JUnit test, we use the FileStructureCreator class like in the example below. Note that FileStructureCreator implements Closeable (and therefore AutoCloseable), so we can use a try-with-resources block to clean up the files after the test completes.

@Value("classpath:/TestDiskConsistencyPolicy-notEnoughFileSets.txt")
private Path notEnoughFileSets; @Test(expected = RuntimeException.class)
public void backupSetWithNoFileSetsThrowException() { try( FileStructureCreator creator = FileStructureCreator.createTempDirectory(notEnoughFileSets) ) { BackupSet backupSet = BackupSet.builder().uri(creator.getFileStructureBasePath().toString()).build(); new DiskConsistencyPolicy(backupSet).execute(); assertTrue( "Expecting a RuntimeException here", false); }
}

For the Spring Boot application, we just define a @Configuration class which will create the data structures for our file shares as defined on the Synology NAS.

@Configuration
@Profile("dev")
public class TestFilesInstaller {

    @Bean
    public FileStructureCreator ftpFiles(@Value("classpath:/backupftp.files.txt") Path file) {
        return FileStructureCreator.create(file);
    }

    @Bean
    public FileStructureCreator nfsFiles(@Value("classpath:/backupnfs.files.txt") Path file) {
        return FileStructureCreator.create(file);
    }
}

Because they are defined as a @Bean, the close() method will automatically be called when the application shuts down, removing all files from disk when the Spring Boot application is stopped.

Just…don’t run the dev profile in production; I’ll let you figure out what happens.

In the future, we’ll show you how to build a backup checker and how to monitor and verify backups on your NAS.

Original Link

4 Common Pitfalls When Adopting DevOps

DevOps as a concept has been around for nearly 10 years. While it’s more widespread than ever, for many, DevOps has gone from magic bullet to meaningless buzzword status. There’s a creeping sense of unfulfilled promises. Where did it all go wrong? Are we setting expectations too high? Moving too fast? Not being ambitious enough? Did we set DevOps up to fail?

I chatted with some of our Customer Success team members here at GitLab to find out where customers are running into trouble, and what they can do about it. Below are the top four pitfalls organizations can avoid when adopting DevOps:

1. You Invest in Too Many Tools

“You think you have it all when you’ve got your issue tracker, version control system, CI/CD service, etc.,” says Technical Account Manager John Woods. “However, what’s the cost of setting all those up and configuring them to ‘talk’ to each other?” The cost of maintaining and upgrading each of those instances, and of maintaining the connections between all of them, is difficult to tally both in pure monetary terms and in time. We call this the DevOps tax, and it’s often overlooked when assessing the success (or lack thereof!) of DevOps adoption. Which brings us to…

2. You’re Married to Your Toolset

When you’ve invested in tools, it’s difficult to throw in the towel. But sometimes that’s exactly what needs to be done. “I recently was in a conversation where the VP said their system was solid and robust. They insisted they are doing everything the way that things should be done, using tools they’ve been honing since 2003,” says Joel Krooswyk, solutions architect manager at GitLab. This speaks to complacency in DevOps implementation. “DevOps is never ‘done,’” says Joel. “Tools come and go, and some of the ones you’ve invested in don’t have a strong roadmap or are dying slowly. To put faith there is poor judgment.”

What’s the upshot? “We need to be aware that better solutions may be available, which provide us more speed, increased efficiency, and improved workflow,” Joel adds. This includes considering solutions that are built for DevOps, rather than just being adapted to it.

3. You Stop at Testing

Integrating testing into deployment pipelines has been a huge shift for many large, legacy companies, and this should be treated as a major accomplishment. But why stop there? “All too often, pipelines are ‘build, test’ and that’s it. It’s a step forward, but it’s insufficient,” says Joel. “They don’t go far enough for their first iteration.” A deployment pipeline that’s truly DevOps-optimized should incorporate deployment and monitoring as a minimum requirement as well.

4. You Still Treat Security as an Afterthought

Running security checks infrequently is a hangover from legacy systems where security testing is costly (sometimes requiring payment per line of code). A recent study found that only 50 percent of CI/CD workflows include application security testing. Adopting a tool whose security products are truly shifted left, and can be run without per-line cost fears, is essential to leveling up DevOps transformation and preventing the loss of gains from other DevOps initiatives when security vulnerabilities are discovered at the 11th hour.

Original Link