Building Enterprise Java Applications the Spring Way

I think it is fair to say that Java EE has gained a pretty bad reputation among Java developers. Despite the fact that it has certainly improved on all fronts over the years, and even moved to the Eclipse Foundation to become Jakarta EE, its bitter taste is still quite strong. On the other side, we have the Spring Framework (or, to reflect reality better, a full-fledged Spring platform): a brilliant, lightweight, fast, innovative, and hyper-productive Java EE replacement. So why bother with Java EE?

We are going to answer this question by showing how easy it is to build modern Java applications using most of the Java EE specs. And the key ingredient to succeeding here is Eclipse MicroProfile: enterprise Java in the age of microservices.

Original Link

Five Powerful Enterprise Agile Frameworks

Rugby Approach

In their research paper titled ‘The New New Product Development Game’, Hirotaka Takeuchi and Ikujiro Nonaka (both professors at the Harvard Business School) observed that the sequential approach to developing products is not well suited to a fast-paced, competitive world. Instead, they recommended a rugby approach for enterprises to attain the speed and flexibility needed to meet ever-changing market requirements. The rugby approach refers to the Agile (Scrum) way of working, with practices like small batch sizes, incremental development, self-organizing teams, enhanced collaboration, cross-functional teams, and continuous learning. To put things in perspective, this research paper was published way back in 1986! If the traditional approach was being questioned back then, it definitely needs to be revisited now. Enterprises need to adopt Agile practices to stay relevant in a market that has become extremely dynamic due to the proliferation of digital technologies. Agile practices enable enterprises to deliver solutions faster and with better quality by considerably shortening the feedback loop.

Scaling Blues

Though most enterprises have realized the significance of Agile, many organizations, especially the large ones, have been struggling to scale Agile at the enterprise level. This is substantiated by a recent survey in which enterprises that had claimed to be Agile admitted that they had adopted Agile practices only in certain pockets. Interestingly, smaller and nimbler companies have adopted the Agile way of working and achieved considerable success in the market. These companies released products at remarkable speed with high quality and reacted faster to market needs. Take Tesla’s case (by no means a small company now!), which launched electric cars with an autopilot option when the Toyotas and Bugattis of the world had only prototypes of electric cars. By the time those companies launched their own electric vehicles, Tesla had captured a huge share of the market! In defense of these large enterprises, scaling Agile is easier said than done. These behemoths have many portfolios, with large applications requiring multiple teams, complex systems, diverse operating environments, and multiple vendors, making their Agile transformation journey a herculean task.

Original Link

IT Companies With “Flat” Structures: Utopia or Innovative Approach?

It’s a trend for IT companies to go "flat" these days. With so many thought pieces and studies on employee empowerment and self-organization out there, it’s tempting for some CEOs to give it a try.

What Is a "Flat" Organization?

A "flat" organization is a distributed management system where no one is the boss and employees can make impactful decisions at all levels. Other typical characteristics of such organizations are transparency, continuous feedback, and "fluidity" – grouping task forces around current problems rather than having fixed teams.

Original Link

Certification of the Couchbase Autonomous Operator for K8s

The Couchbase Autonomous Operator enables you to run Couchbase deployments natively on Open Source Kubernetes or Enterprise Red Hat OpenShift Container Platform. I’m excited to announce the availability of Couchbase Autonomous Operator 1.0.0 today!

Running and managing a Couchbase cluster just got a lot easier with the introduction of the Couchbase Autonomous Operator for Kubernetes. Users can now deploy Couchbase on top of Kubernetes and have the Couchbase Autonomous Operator handle much of the cluster management, such as failure recovery and multidimensional scaling. However, users may feel a bit uncomfortable just sitting back and watching the Couchbase Autonomous Operator do its thing. To alleviate some of their worry, this three-part blog series will walk through the different ways the Quality Engineering team here at Couchbase gives our customers peace of mind when running Couchbase on Kubernetes.

Original Link

Common Cloud Management Challenges Faced by Enterprises

Taking control of the management of cloud operations at an enterprise isn’t easy. Here are some of the most-cited issues that managers of cloud infrastructure face.

Hybrid Cloud Model

To operate a hybrid cloud model, the underlying application, integration, and data architectures need to be revisited, sometimes tweaked, and other times overhauled. New tools for deployment, monitoring, and management are required. Managers in charge of maintaining hybrid IT environments must ensure the availability of the complex set of skills, tools, and processes needed to manage hybrid infrastructure on a consistent, global scale.

Original Link

Waves of ‘Cloud-Native’ Transformations

This was originally published at my personal publication.

Enterprise CIOs have been working on digitally transforming their IT infrastructure for a while now. Such digital transformations have traditionally been based on virtualization and Software-as-a-Service (SaaS) offerings. With the development of cloud computing and container technologies, transformational CIOs are looking into becoming cloud-native as well. But what is cloud-native?

Original Link

Enterprise Agility: Facilitating Change [Excerpt]

Many enterprises will need to significantly reinvent themselves in order to enhance their agility. The change necessary is tectonic in nature, as it encompasses not only the extrinsic and tangible elements of a business, like people, process, and governance, but also, and more importantly, the intrinsic and intangible elements like mindset and culture.

This article, based on the book Enterprise Agility by Sunil Mundra, is about key learnings related to facilitating change at the company level, based on the author’s experience. The takeaways from these learnings will help to alleviate pain and disruption, which are often side effects of enterprise-level change.

Original Link

MVP Development for Startups and Mature Enterprises

Developing an app can be a time-consuming, expensive endeavor, and as such, it’s important to work with a methodology that will deliver on ROI and business objectives. Many companies, however, choose the wrong approach. Numerous startups have failed because they took an idea, developed it for months or even years, and never market-tested it until launch. The results of this approach can range from disappointing to disastrous. To address this, companies have started working with MVPs, or Minimum Viable Products.

What is an MVP?

The MVP originated with the Lean Startup methodology, which follows a build-measure-learn feedback loop. The methodology starts with identifying a problem and then building a barebones product, known as the MVP, in order to test assumptions and customer reactions to it. With each iteration of the MVP, the company gathers actionable data and metrics in order to determine various cause-and-effect relationships.

Original Link

The Rise of the Citizen Developer

Understanding “Citizen Developer”

You’ve likely read the term “Citizen Developer,” or, according to Gartner: “An end user who creates new business applications for consumption by others using development and runtime environments sanctioned by corporate IT.”* Why is this movement taking place? As end users, Citizen Developers better understand the functional requirements for an application. They can specialize to ensure that what’s needed is what’s done.

Pitfalls of Citizen Developers

But Citizen Developers worry IT for three reasons: the potential for lower security standards, poor performance, and a sub-par end-user experience. Still, the Citizen Developer has significant benefits for the enterprise, so the question is: How can IT address the pitfalls?

Learning From Previous Industries

To better answer that question, let’s take a look at a similar situation: desktop publishing. In the old days, all business materials were designed and created by the graphic design group for a particular company. This often moved slowly and couldn’t be tailored to specific business units, so some units started to create their own marketing materials. While this improved turnaround times and marketing specificity, it also caused a major issue: the resulting materials didn’t follow brand guidelines. The solution was to bring graphic design back in to supervise and oversee the process. By providing a framework with templates, the graphic design team made it easier for the citizen artist as well as more compliant from an enterprise perspective. There are lessons from that transition that apply to today’s Citizen Developer movement. If companies lean on the current DevOps and enterprise framework, they can gain the benefits of Citizen Developer while lowering the associated risks. Let’s examine the role that each organization should play to ensure success.

DevOps and Enterprise Frameworks: A Two-Part Solution

The DevOps process plays an important role in enterprise application development. The process and teams working on the development workflow can provide expertise early and often to ensure quality, performance, and reliability. If we expand DevOps to work with the security team to create a secure product, then this can fit into the relatively new DevSecOps movement.

An enterprise application development framework allows for a supervised development environment within which Citizen Developers can work. A key component of this framework is the isolation of development environments for each group of Citizen Developers. This framework will also make the Citizen Developer’s job easier. They can better learn a new system by using templates created by the existing framework, which lowers the risk of poor performance, reliability or end-user experience. Further, a good framework gives a way for unit, functional, and user experience testing to progress from sandbox to release. Finally, a good framework provides simple access to all enterprise data and processes across departments.

A controlled environment also provides IT with the ability to change the development freedoms of Citizen Developers – maybe the Citizen Developers use simple applications at first and more complex applications once they are more experienced. Citizen Developers often come from legal, HR, manufacturing, security, and other teams, and this interconnectivity enables them to deliver quality output.

So, with the pairing of existing DevOps teams and a strong enterprise framework, the potential downsides of Citizen Developers (security, performance, and end-user experience) can be avoided while fostering greater business agility.

References:

*Gartner Glossary, Citizen Developer, https://www.gartner.com/it-glossary/citizen-developer/

Original Link

Episode 174: How Wyze makes such a crazy, good camera for cheap

This week I was at Google’s cloud event in San Francisco while Kevin swapped out his video doorbells. We discuss Google’s news related to edge computing and several pieces of doorbell news before talking about a few recent articles that show how far the smart home still has to go. Kevin talks about the first NB-IoT tracker for the U.S., a new Bluetooth security flaw, and how Google’s cloud differs from AWS in his experience connecting our voicemail hotline to the cloud. We also cover a surprise contender for the worst connected device seen this week and answer a question on Alexa and hubs that is probably pretty common.

This is the $20 Wyze camera.

This week’s guest is Elana Fishman, COO of Wyze Labs, who came on to explain how the company can make a high-quality HD camera for between $20 and $30. The combo of a low price and a good camera obviously works. Wyze has sold more than 500,000 cameras so far! She also answers questions about security, privacy and the company’s recent integration with Amazon’s Alexa ecosystem. You’ll enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Elana Fishman, COO of Wyze Labs
Sponsors: Afero and Smart Kitchen Summit

  • How Google’s IoT cloud stuff compares with Amazon’s and Microsoft’s
  • Neurotic people might not want smart home gear
  • The dumbest IoT product of the week
  • How does Wyze make a camera that costs 10X less than Nest’s?
  • Wyze has sold half a million IoT devices. That’s insane!

Original Link

Episode 173: Nest CEO is out and Jacuzzi is in with the IoT

Nest’s CEO has been forced out, and GE and Microsoft create even deeper integrations for industrial IoT. Also this week, UPS creates a partnership with a startup to take on Amazon Key, and we discuss the common question of whether you should upgrade your Echo. There’s a lot of lock news, some connected car fundings (Zoox and Light), and an Alexa-enabled microwave that feels perfect for dorms or bachelors. Kevin also shares a secret to turn your Kindle Fire HD tablet into an Echo Show and some news for those still hoping for a decent Android Wear OS device. This week’s listener question is also about smart locks, but for a very particular use case.

GE’s latest microwave costs $139 and can be controlled with Amazon’s Alexa. Image courtesy of GE Appliances.

Our guest this week is Mark Allen, vice president of IT at Jacuzzi, who discusses why and how Jacuzzi connected its premium line of hot tubs. Jacuzzi has connected 1,000 hot tubs so far, and since it started selling them in April, it has 500 of the connected tubs in consumers’ homes. Allen explains the tools Jacuzzi has used to get the hot tubs online and connected to dealers’ service operations. He also shares his thoughts about privacy rules and how connected devices will change Jacuzzi’s business. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Mark Allen of Jacuzzi
Sponsors: Afero and Avnet

  • Why Microsoft and GE got a little closer
  • Lots of lock news from the home to the enterprise
  • Should you update your Echo?
  • Which platform did Jacuzzi choose to connect its tubs?
  • GDPR will affect your hot tub

Original Link

DevOps: Who Does What (Part 1)

The following is an excerpt from a presentation by Cornelia Davis, Senior Director for Technology at Pivotal, titled “DevOps: Who Does What.”

You can watch the video of the presentation, which was originally delivered at the 2017 DevOps Enterprise Summit in London.

Throughout the years, I’ve had the great opportunity of working with very, very large enterprises across all verticals. My background is as a technologist; I’m a computer scientist, and initially I spent a lot of time talking tech at the whiteboard. But then I realized that there was so much more that needed to change, which is why I’m now sharing the organizational changes that can support our technology needs.

My First Question to You Is, Is This Your Reality Today?

We have different business silos across the organization and different individuals that are coming from those silos. When we have a new idea for a product, we kick off a project and individuals go into the project to do some work.

The first individuals from the first silos come in, and they generate their artifact. Then what do they do? They throw it over the wall to the next step. If you look at the slide below, you’ll notice that once they’re done, they leave the project.

If for some reason we have to go backwards, we have to figure out how to get them back into the project. And so it goes through each silo. We all recognize that this is a slow and challenging process. If it only moved linearly, it might be okay. But we all know that this process goes backwards and forwards, even in circles!

But that’s not even the biggest problem.

The biggest problem is that each one of these organizations is incentivized differently.

My favorite examples are App Dev and QA — so let’s look at these.

Application Development is almost always incentivized by ‘Did you release the features that you promised on time, and ideally on budget?’ And if you released the features on time, you get a ‘Way to go, you achieved your goals.’
Then it moves over to QA.

What is QA incentivized on? Well, they’re responsible for quality. So they are generally incentivized by the number of bugs that they have found and fixed.

Now, let’s look at these things in combination. What happens when the application development process starts to fall a little bit behind? Developers start working late into the evenings. They work on weekends; they start working unsustainable hours. And what happens? Quality suffers, but they hit their features on time.

Well, when they throw that over the wall to QA, what’s gonna happen now?

QA is going to find more bugs. Way to go! So we’ve got locally optimized metrics that do not create a globally optimized solution. That’s a big problem.

Well, the answer is really simple…

The Answer Is Balanced Teams!

What we’re going to do is center things around a product, and the product team is incentivized to deliver value to a customer, to deliver value to some constituency.

For example, if I’m in an e-commerce scenario:
I have a product team that is really about the best experience around showing product images, recommendations, or soliciting reviews, or it could be some back-office product that is enabling your suppliers. These are all the different product teams.

There’s been a lot of research, and a lot of discussion, and a lot of proof points that product teams are really the way to go.

But what if we don’t have product teams? If we have different roles within the SDLC, how do we bring these different disciplines together into product teams?

We’re Going to Try and Put Things Through the Sorting Hat

If you have been living in a cave for the last 10 years and you don’t know what the Sorting Hat is, it comes from Harry Potter. When new students arrive at Hogwarts School of Witchcraft and Wizardry on their first day, each one places the hat on their head and gets sorted into one of four houses, and that’s the house they live in for the next seven years.

So we’re going to take those roles and we’re going to sort them into houses. But the question then is, what are the houses that we’re going to sort into? So let’s take a little bit of a tangential ride over to the side and think about a couple of houses. (I’m going to end up with four houses in the end, but I want to start with two.)

The left part of this slide you’ve all seen for the last several years.

That is where we were maybe 15 years ago. IT was responsible for the entire stack from the hardware all the way up through the application.

Then VMware came along and virtualized infrastructure.

Then a whole host of people made infrastructure as a service available, with Amazon Web Services of course being the behemoth of that.

That made it so that we could just get machines, EC2 machines for example, and then we could stand up everything that we needed on those machines. Getting machines was easy.

Then in the last five years or so, we’ve taken that abstraction up another level and we’ve created application platforms where we have individuals who can be building applications, and the only thing that they need to worry about is their application code.

What’s important about that application platform is that it generates a new set of abstractions. Those abstractions are at a higher level. They are fundamentally the application, or maybe some services there in support of that application, and it allows us to stop implementing security by creating firewall rules at machine boundaries and instead implement security at the application boundary.

This new abstraction is one of the key things that’s happened in platforms over the last five years. It’s given us something really interesting and really important. It’s allowed us to define two different teams. And it’s defined a contract between those teams that allows these teams to operate autonomously.

When we hear about all of the different goals of an enterprise, they all talk about needing to bring software solutions to market more quickly, and more frequently. So agility and autonomy in teams are incredibly important. We’re always looking for those boundaries where we can create more autonomy.

Now the Application Team…

The team that’s going to create the next mobile app or the next web app or even some analytics app for example, can focus on building that application, and they don’t need to worry about even the middleware that sits below it.

They’re responsible for creating the artifact. They’re also responsible for configuring the production environment, deploying to production. They are doing Dev and Ops. It’s not necessarily the same person, but it is the same team. They’re deploying to production, they’re monitoring. When they notice that they need more capacity, they’re scaling so that they can achieve better performance. They deploy new versions when they need to.

It’s entirely up to them.

Now There’s Another Product Team, and That Is the Platform Team

So that’s the team that’s providing the platform, and notice that they’re doing exactly the same things.

They are deploying the platform, they’re configuring it, they are monitoring it, they are scaling it when they need more capacity, or upgrading it to the next version. They’re doing the same things, but they have their own products that they’re working on. So the product orientation is really key.

This separation gives us the first two houses that we’re going to sort into: the app team and the platform team.

Now let’s take all of these roles that come from traditional organizations and start sorting them. And so here are our two houses: the app team and the platform team.

We’re going to do this piece by piece and I’ll explain the steps as we go along.

We’re going to start with the purple bubble there. Before I sort them, notice that this Middleware and App Dev team is actually taking care of both the middleware and the application development.

In retrospect, having worked in this new world for the last five years, I find this kind of counterintuitive: why would somebody who’s creating an application (i.e., using the middleware) be in the same group as the middleware itself? To a large extent, it’s because in the past middleware required a great deal of expertise. You had to know a lot about the middleware to be able to effectively program against it. That’s something we’re trying to move away from; we’re getting more agile middleware platforms and so on.

Notice what happens here.

We’ve got middleware and we’ve got App Dev, and we break those apart. We put the middleware engineers inside the platform team. They’re part of that team, providing the capabilities that the app team can then use. Then we take a full-stack application development team and put them up in the app team. We’ve got front end, and we’ve got back end. All of those individuals are there.

That one’s pretty straightforward.

The next one that’s also pretty straightforward is we’re going to pull some of the folks out of the infrastructure team, the folks responsible for building out the servers and the networks.

You might have noticed that I put virtualized infrastructure and the platform together in one team. Many of our customers actually keep those separate, but in this case it really wasn’t important to make that separation. You could split the platform team into two individual teams as well. The thing that I would caution you about is that you then need to make sure you have a very crisp contract between the platform team and the infrastructure team.

I’ll be honest with you, that’s a little bit harder to find at the moment, so that’s part of the reason I’ve put them together.

Again, server build-out and network build-out are part of the platform team, providing the view of the infrastructure up to the app team.

The next one that we’ll talk about here is what I like to call the control functions.

There is information security, for example, and change control. Why did I move them at the same time? Change control usually comes out of the infrastructure team, and information security out of the chief security office. I moved them at the same time because they share a common characteristic: they are functions that today can stop a deployment. They are functions that, on every release into production, need to give their blessing.

We’ve seen that when it comes to the very end and we find problems in information security, or any other type of security, it can actually stop things. There’s a huge ball of things that we need to check off.

These functions, information security and change control, should engage with the teams that are providing the platforms and the automation around deployments to ensure that their concerns are satisfied. Their concerns are not wrong; it’s just that the way we’ve been solving them is in need of transformation.

In Part 2, we’ll talk about Ops.

Original Link

Episode 172: The smart home goes public

This week’s show takes up last week’s news of Netgear’s Arlo division and Sonos filing for initial public offerings. Kevin and I share what we see in the filings and what it means for the smart home. We also discuss Amazon’s Prime Day deals and Google’s answering sale with Walmart,  before digging into this week’s other news.  There’s a bit about building IoT networks in space and LG CNS’ plans to launch a smart city platform. Kevin also found a fun project that tackles how to make your own indoor air quality monitor.  We close our segment by answering a listener question about garage door automation.

Me installing the Alexa-enabled faucet a few weeks ago.

This week’s guest helped build the new Alexa-enabled faucet from Delta Faucet and shares the process with us. Randy Schneider is a product electrical engineer at Delta Faucet, and discusses how the company decided on Alexa, why there’s no app and why the phrasing for asking Alexa to turn on a faucet is so awkward. You’ll learn a lot from this, and may even find yourself wanting to connect your own kitchen sink. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Randy Schneider, product electrical engineer at Delta Faucet
Sponsors: Afero and Avnet

  • Amazon looms large in both planned smart home IPOs
  • Google and Walmart take on Prime Day with deals for Google gear
  • Want to make a DIY air quality monitor?
  • Why Delta decided voice would be good for the kitchen sink
  • What’s Crate and Barrel got to do with this?

Original Link

The Continuous Delivery Challenge in the Enterprise

Continuous delivery (CD) as an engineering practice is taking hold at enterprises everywhere. Most forward-looking app developers’ efforts rely on CD to one extent or another. Typically, that is in the form of a functionally automated pipeline for code promotion and some test execution. Some amount of the delivery work, such as database changes, provisioning or configuration management tickets, production signoffs, etc., is still done manually. These forward-looking teams, therefore, have a CD pipeline that “works” reasonably well.

There is an old engineering adage that accurately describes the attitude many such teams have towards adopting CD: “First, make it work. Then, make it work well. Finally, make it work quickly and efficiently.” Today, enterprises are getting through the first and second phases of that adage in their CD adoption efforts, but they are going to want to reach the third eventually, and that’s where the difficulty lies. Organizations in this position should start planning for phase three now to avoid the expense and disruption of bringing it under control later.

A Pipeline of Pipelines

Enterprises attempting to transform their app delivery approaches typically rely on team-level efforts. As a result, they usually have app delivery pipelines in different areas of the business. Many of those current efforts have a very limited scope, only focusing on the basic functional tasks of the specific technical environments of the specific application system they support. Sometimes the focus may even just be on a subset of those environments. Furthermore, the pipelines are often duplicative of each other across teams, even if the technology stacks are the same. There is nothing but manual effort and spreadsheets coordinating the pipelines.

This is a result of teams’ natural, but narrow, focus on their functional needs. The narrow focus can create architectural problems when it comes to helping adjacent delivery pipelines, especially if the tech stacks among the adjacent teams are different. That is because most team-level tools do not support the variety of tech stacks present in many enterprises. As a result, these function-focused automated pipelines still rely on manual management techniques for higher-level needs, such as cross-app dependency management. These enterprise needs are less technically focused or have other, larger business aspects that are beyond the scope of a team’s responsibility, even if the team’s activities impact those needs.

As development teams build out newer applications that expand to become critical pieces of enterprise software infrastructure, they discover that they now need something to manage their “pipeline of pipelines.” No matter how consistently the involved teams’ individual pipelines work, their narrow focus limits visibility and constrains their ability to effectively manage complexity for the business stakeholders. That blinds business stakeholders to the progress of key features and defects and is exacerbated when there are dependent projects. Existing data reported by team-level tools does not help (and may even hinder) coordination, security, quality, or compliance efforts, because the data resides in so many formats and tools across the various teams that it is impossible to correlate.

Enterprise-Grade Continuous Delivery Solutions

So, how do enterprises progress from scattered CD pipelines that merely “work” to CD pipelines that “work quickly and efficiently” at the scale they require? They switch to a model-driven approach that enables them to leverage the functional effectiveness of team-level efforts and, without throwing those away, thread them into a coherent and manageable whole. These models — a hallmark of enterprise-grade CD tools — enable enterprises to gain a clear, end-to-end understanding of complex value streams at both the technical “works” level and the management “works quickly and efficiently” level.

A model-driven approach to bringing disparate team efforts together:

  • Eases coordination among dependencies by supporting heterogeneity in tools and team preferences and supporting the breadth of enterprise technology equally (containers, cloud, platforms, legacy distributed, mainframe)
  • Increases visibility by providing a broad framework that collects data (both statistical and business-stakeholder relevant, e.g. feature/defect progress, etc.) for coherent reporting to the business and management
  • Improves security by providing a consistent framework for enterprise security and compliance concerns, managed consistently for all teams with minimal duplication

The emerging awareness that CD requires different or additional tools to manage the complexity in an enterprise context is a natural outcome of the constantly evolving modernization of software delivery practices. It is an example of why CD practitioners and evangelists talk about the adoption of such practices as a “journey.” That journey will bring learnings that shift team managers from being excited that something works “well” to feeling urgency about taking the next step: making it “work quickly and efficiently.”

Original Link

Episode 170: Smart stents, surveillance tech and Alexa-powered faucets

This week’s episode begins on a grim note, as Kevin and I discuss the New York Times’ story about how smart home gadgets can become another point of control in abusive relationships. From there we touch on the new Wi-Fi WPA3 security standard and Tesla’s new plan to charge users for data and what it means for IoT. Kevin shares the new Alexa for iOS feature and explains why it’s useful, while I talk about a startup that wants to detect pollution at granular levels. We share news of a smart stent, smart park benches and my experience with an Alexa-enabled faucet. We then answer a question from a reader who wants to buy Abode’s security system but wonders what gadgets will work with it.

This smart stent is one long antenna with a pressure sensor. Image courtesy of the University of British Columbia.

For the guest segment, I visit with Cyrus Farivar, who is a reporter at Ars Technica and wrote a book on surveillance tech called “Habeas Data”. We discuss the current legal underpinnings of privacy law in the US and how it has evolved. Our conversation covers the recently decided Carpenter case, the 1967 case that established the concept of a “reasonable expectation of privacy,” and how the government could use our connected devices against us. You’ll learn a lot, but you may want to unplug your Echo.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Cyrus Farivar, author of “Habeas Data”
Sponsor: Control4

  • How to reset connected devices and be a decent human being
  • Y’all had some great ideas on connected cameras
  • Alexa, ask Delta to turn on faucet
  • Where the expectation of privacy came from
  • What to ask device makers about government snooping

Original Link

Lean Startup Strategy: Will Corporations Innovate Like Startups?

Lean startup strategy has come into full force. Initially, this approach was designed to help startups that are short of funding gain a foothold in a certain niche. But why do business titans who feel no need for investment, resources, or even market influence try hard to adopt the lean methodology? Is “lean” really applicable at the level of an enterprise?

Lean Methodology Revolution

Early on, the concept of lean business was synonymous with the notion of a startup. The lean methodology came to the rescue when too many startups were failing miserably. Sure they did! These startups had never spoken to their customers, tried too hard to meet a non-existent market need, and overestimated the vitality of their ideas. Lean startup strategy appeared to be a tried-and-true remedy. But it’s not all roses.

Lean Business: Why It Is Difficult for Corporations to Adopt

There are two sides to the lean approach. We’ll start with the dark one to see which threats companies should be aware of when practicing a lean startup strategy.

At a very high level, a company that has decided to do it the lean way should be guided by the following principles:

  • Develop a minimum viable product (MVP), proof of concept (PoC) or prototype to test the hypothesis of whether a market needs a product or not.
  • Try the hypothesis, measure data received and validate this hypothesis.
  • Decide whether to build a full-fledged product or pivot.

Let’s assume that a particular enterprise adopts the principles of the lean startup methodology. It might encounter certain setbacks while trying to look more innovative and competitive in a digital world.

The Biggest “Lean” Fear of the Enterprise

According to the survey “How large companies are using lean startup methodology,” 50% of respondents feel concerned or aren’t ready to show unpolished products to their audience. Moreover, 36% believe that it’s complicated to develop an MVP within their industry. This means that the MVP, a must-have from the startup survival kit, is seen as a stumbling block by the majority of incumbents. A possible threat to the brand’s image, the hard decision of what counts as “minimal” in fields like shipbuilding or pharmaceuticals and, finally, the need to move fast with MVP development: all of these issues are often too much to deal with at a corporate level.

An Unequal Cost of Failure

The lean methodology teaches startups how to deal with failures. When experimenting with the market response, entrepreneurs should be prepared for the possibility that their MVP might not be a success. The ability to recover fast and move on from failure to failure is perceived as a learning basis for the lean business. (This good advice applies not only to startups, by the way.) But the categories of failure differ. Where a startup loses seed capital, a corporation may lose an entire business branch; where a startup risks a target audience of one thousand users, a corporation risks an account with one million end users. The bigger the company, the higher the stakes.

Vast Opportunities Often Entail Vast Bureaucracy

Startups are not mini versions of established companies. Still, they often compete like siblings in a family. The advantages of early-stage companies over the corporate giants seem less obvious, but they do exist. A time-consuming process of executive approvals contradicts the very idea of lean startup strategy. In lean business, time is precious. Decisions and work are performed quickly to eliminate wasted time and get vivid evidence to move further or pivot. Slow decision-making, drawn-out risk management processes, and communication with many stakeholders impede adoption of the lean methodology. Innovation won’t wait, which is one more point for the startups.

Lean Business is a Modern Business

Even though big corporations look stunning from the outside, they constantly undergo disruption and fight for mind share. On top of that, they are a bit jealous of, or even intimidated by, startups. Just think! It took them decades of groping their way to the top to make their names. But what do we have today? Airbnb, Uber, Xiaomi, and Snapchat are just a few of the famous startups that didn’t exist ten years ago but are now worth billions. There is no life tenure in the business world, but there is uncertainty, as Eric Ries states. This means that even the global powerhouses need renewal and revision of their business methodologies. Lean startup strategy is the right chance to do so.

Conclusion

P&G, Toyota, GE, and Philips are just a few global titans that have already tried and succeeded with the lean startup approach. So it’s not about whether established companies should adopt the lean methodology. It’s more about how it should be incorporated to bring benefits. Uncertainty in the business world only proves that there is no universal recipe for innovation success.

Lean methodology suggests that a startup’s true purpose is to turn bright ideas into products. To do so, entrepreneurs should try their products out, learn how customers respond, and decide whether to build a full-fledged product, pivot, or abandon everything. There is a huge amount to be gained from a lean startup strategy, so it’s better to start testing this hypothesis right now.

Original Link

Episode 169: Alexa gets a hotel gig

This week in IoT news, Kevin and I talk about AT&T’s plans to launch an NB-IoT network. Then we talk about the pros and cons of Marriott putting Alexa into hotel rooms. We also talk about a new voice assistant for the enterprise, Hewlett Packard Enterprise’s $4 billion investment in IoT, and digital rights management in smart fridges. We touch on a few more stories, including an accelerator for the smart kitchen, leaked location data, a router that acts as a smart hub, and a clarification on the Thread news from last week. We then answer a question on how to view content from video doorbells and cameras on Alexa-enabled screens.

Amazon created a special version of Alexa for hotels. Image courtesy of Amazon.

This week’s guest is Gabriel Halimi, CEO and co-founder of Flo Technologies, who discusses his leak detection technology as well as the insurance market. We talk about why consumers will end up sharing their data with an insurance firm and what you can learn from water flow data, and Halimi poses a somewhat scary future in which your insurance firm will know whether you actually set the alarm it gives you a discount for. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Gabriel Halimi, CEO and co-founder of Flo Technologies
Sponsors: Praetorian and Control4

  • AT&T joins Verizon and T-Mobile with a new NB-IoT network
  • Here’s why Alexa is everywhere
  • Wait, this fridge comes with DRM?
  • With insurance and IoT, if you can’t join ’em, beat ’em.
  • You can learn a lot from water data

Original Link

Episode 168: How GE’s Current curtailed dreams to meet reality

This week Kevin and I spend a bit of time on industrial IoT news with Rockwell Automation’s $1 billion investment in PTC and ARM’s purchase of Stream Technologies. On the consumer side, we debate Wi-Fi subscription plans, Nest’s price drop, and Ring’s new security system. We also talk about Thread’s milestone in industrial IoT, Verizon’s new CEO, and whether or not Google Home can now handle three consecutive commands. I review the Wyze Pan Cam and we answer a question about the Qolsys IQ Panel 2.

Ring’s security system lands on July 4 for $199.

This week’s guest comes from GE’s Current lighting business. Garret Miller, the chief digital officer at Current by GE, explains why the division is for sale, why GE has to offer lighting as a service, and how reality forced a shift in thinking for Current. When Current launched, it had grand plans to deliver electricity as a service but realized that it was several steps ahead of the market, so it now offers lighting as a platform. It’s a good interview about how to reassess the market when needed.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Garret Miller, chief digital officer at Current by GE
Sponsors: Praetorian and Control4

  • Why ARM bought Stream Technologies
  • Ring and Nest gear up for home security fight
  • I like the Wyze Pan Cam
  • Why GE had to change the way it sells lights
  • Why Current changed business models and what it says about IoT

Original Link

Getting Enterprise Features to Your MongoDB Community Edition

Many of us need MongoDB Enterprise Edition but might be short on resources or would like to compare the value.

I have summarized several key features of MongoDB Enterprise Edition and their alternatives:

  • MongoDB Cloud Manager: performance monitoring ($500/yr per machine) => $1,000-1,500
  • Datadog/New Relic => $120-$180/yr per machine; Datadog is the better fit for this case
  • DIY using tools such as mongotop, mongostat, and mtools, integrated with Grafana and others

Replication is recommended and is part of the Community Edition:

Replica set => min 3 nodes, at least 2 data nodes in 3 data centers (2 major DC and one small).
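
As a minimal sketch of such a topology (the hostnames are placeholders, not from the original article), the replica set could be initiated from the mongo shell, with the third member hidden so it can double as the backup/delayed node discussed below:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo1.example.com:27017" },  // data node, major DC 1
    { _id: 1, host: "mongo2.example.com:27017" },  // data node, major DC 2
    { _id: 2, host: "mongo3.example.com:27017",    // small third DC
      priority: 0, hidden: true }                  // add slaveDelay: 3600 to make it a delayed node
  ]
})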

There are 3 major backup options (which can be combined, of course):

  • fsync-lock MongoDB and take a physical backup:
    • Fast backup/restore
    • Might be inconsistent/unreliable
  • Logical backup, based on mongodump:
    • Can be done at $2.50/GB using Cloud Manager, with point-in-time recovery
    • Can be done with Percona hot backup
    • Incremental backups are supported
  • Keep a delayed node

The first two may be done using a third data node, kept hidden and dedicated to backup, which enables high-frequency backups.
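
For the mongodump-based logical backups mentioned above, here is a minimal sketch run against such a hidden backup node (the host and output path are placeholders); the --oplog flag captures a consistent point-in-time snapshot:

# dump all databases from the hidden member, including the oplog
mongodump --host mongo3.example.com --port 27017 --oplog --gzip --out /backups/$(date +%F)

Restoring with mongorestore --oplogReplay then replays the captured oplog up to the moment the dump finished.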

For encryption at rest, the main options are:

  • Disk-based encryption => data at rest (available on AWS and several storage providers)
  • eCryptfs => Percona => data at rest
  • Application-level encryption implemented by programmers at the class level before saving to disk

Using the Percona edition is a good alternative that may cover many of your enterprise needs.

BI integration is well supported with the MongoDB BI Connector in the Enterprise Edition, but it can also be done with:

  • BI tools that support MongoDB natively
  • Third-party JDBC connectors, such as Simba and https://www.progress.com/jdbc/mongodb

Getting your MongoDB Community Edition to meet Enterprise requirements is not simple, but with the right effort, it can be done.

Original Link

Episode 167: Apple’s WWDC news and connected musicians

Kevin kicks off the show with his thoughts on Apple’s Worldwide Developers Conference news, including Siri’s new IFTTT-like abilities. We continue with Alexa finding a home on computers and a discussion of the OVAL sensor that’s hoping to crowdfund a second-generation product. I’m disappointed that Lenovo’s new Google Assistant screen-enabled device won’t ship until September, but super excited about Microsoft’s new IoT offerings, including spatial intelligence. There’s yet another industrial IoT platform for cellular low-power wide-area networks, this time from Sierra Wireless. Finally, Kevin and I share our latest buys: an Awair Glow air quality monitor for me, and an app that puts Alexa on the Apple Watch for Kevin.

My Awair plugged into my bedroom wall.

Our guest this week is Anya Trybala, a musician and creator of SynthBabes, a group that supports female electronic music artists. Trybala talks about how connectivity and technology could change the way artists perform and introduces a concept for VR called The Elevator. For a look at her work, check out this video. To hear her thoughts on how to use AR/VR and the blockchain for changing music, listen to the interview.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Anya Trybala of SynthBabes
Sponsors: Praetorian and Bosch

    • Apple still isn’t changing the game in the smart home
    • Microsoft continues making its IoT services better
    • Check out Alexa on an Apple Watch
    • Building a connected concert experience
    • Are you ready for drone microphones?

Original Link

Setting Up a Server Cluster for Enterprise Web Apps – Part 3

In this series of tutorials, we will set up a server cluster that is horizontally scalable and suitable for high-traffic web applications and enterprise business sites. It will consist of 3 web application servers and 1 load-balancing server. Although we will be setting up and installing WordPress on the cluster, the cluster configuration detailed here is suitable for almost any PHP-based web application. Each server will be running a LEMP stack (Linux, NGINX, MySQL, PHP).

To complete this tutorial, you will need to have completed the first two tutorials in the series.

In the first tutorial, we provisioned 3 node servers and a server for load balancing. On the node servers, we configured database and web application file system replication. We used Percona XtraDB Cluster Database as a drop-in replacement for MySQL to provide the real-time database synchronization between the servers. For Web Application file replication and synchronization between servers, we set up a GlusterFS distributed filesystem.

In the second tutorial, we completed the installation of our LEMP stack by installing PHP7 and NGINX, configured NGINX on each of our Nodes and our Load Balancer, issued a Let’s encrypt SSL certificate on the Load Balancer for our domain, and installed WordPress to on the cluster.

We now have a WordPress cluster with equal load balancing between each node.

In this final tutorial, we will look at a more advanced cluster architecture configuration that directs administration traffic to node1 and general site traffic to node2 and node3. This will ensure that any behind-the-scenes CPU and resource-intensive work carried out in the administration of our web application never affects our site traffic responses.

When this tutorial is completed, we will have a cluster architecture like so:

Figure 1: Three Node Cluster with Load Balancer redirecting Admin traffic and Site traffic

In addition, we will also add NGINX FastCGI caching to the mix to aid performance and ensure the cluster doesn’t sweat even under the most extreme loads, and we will harden our database cluster and distributed file system.

Throughout the series, I will be using the root user; if you are using a superuser, please remember to add the sudo command before any commands where necessary. I will also be using a test domain, ‘yet-another-example.com’; remember to replace this with your own domain when issuing commands.

In the commands, I will also be using my servers’ private and public IP addresses; please remember to use your own when following along.

As this tutorial directly follows the first two, the sequence of steps is numbered accordingly. Steps 1 to 3 are in the first tutorial, Steps 4 to 7 in the second tutorial. This tutorial begins at Step 8.

Advanced Configurations

Step 8: Configure NGINX FastCGI Caching

With the present configuration, the web application is being served from a cluster of 3 servers. This horizontal scaling will allow the site to withstand tremendous loads, allow for additional scaling with new servers, and make it easy to swap out old ones.

We can improve the performance further using NGINX FastCGI caching.

If you visit your site, open the inspector’s network tab, and reload the page, you will see the page load speed:

Figure 2: Inspect your site load in the network tab

In my case, the site is loading in 1.91 seconds.

On each of your nodes that will deal with site traffic, open the Virtual Host NGINX Configuration file for your WordPress site for editing.

In our example, node1 will be used for administration tasks, so it doesn’t require caching. Therefore, on node2 and node3, issue the following command:

# nano /etc/nginx/sites-available/yet-another-example.com

Above the server block add the following:

fastcgi_cache_path /var/run/nginx-fastcgi-cache levels=1:2 keys_zone=FASTCGICACHE:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

This creates the cache in the /var/run/ directory, which is mounted in RAM, and gives the cache a keys_zone identifier. The fastcgi_cache_use_stale directive also instructs your server to keep serving cached pages in the case of a PHP timeout or HTTP 500 error. NGINX caching is really quite brilliant.
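If you want to confirm that /var/run really is RAM-backed on your distribution (on Ubuntu it is part of the tmpfs mounted at /run), a quick check:

# df -h /var/run

The ‘Filesystem’ column should report tmpfs.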

Inside the server block, below your error logs and above your first location block add the following:

set $skip_cache 0;

# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
    set $skip_cache 1;
}
if ($query_string != "") {
    set $skip_cache 1;
}

# Don't cache uris containing the following segments
if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
    set $skip_cache 1;
}

# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
    set $skip_cache 1;
}

These set specific cache omissions for different WordPress functionality.

Finally, within the ‘location ~ \.php$’ block, add the following:

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache FASTCGICACHE;
fastcgi_cache_valid 60m;
add_header X-FastCGI-Cache $upstream_cache_status;

The fastcgi_cache directive must match the keys_zone from the code block above the server block. The fastcgi_cache_valid directive sets the time to hold the cache for; you can adjust this to be longer if your content rarely changes or you get fewer visitors. The add_header directive adds a header to the response headers so we can verify whether a page is being served from the cache or not.

Your full configuration file should now look like this:

fastcgi_cache_path /var/run/nginx-fastcgi-cache levels=1:2 keys_zone=FASTCGICACHE:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

server {
    listen 80;
    listen [::]:80;

    root /var/www/yet-another-example.com;
    index index.php index.htm index.html;
    server_name _;

    access_log /var/log/nginx/yetanotherexample_access.log;
    error_log /var/log/nginx/yetanotherexample_error.log;

    set $skip_cache 0;

    # POST requests and urls with a query string should always go to PHP
    if ($request_method = POST) { set $skip_cache 1; }
    if ($query_string != "") { set $skip_cache 1; }

    # Don't cache uris containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; }

    # Don't use the cache for logged in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache FASTCGICACHE;
        fastcgi_cache_valid 60m;
        add_header X-FastCGI-Cache $upstream_cache_status;
    }

    location ~ /\.ht { deny all; }
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ { expires max; log_not_found off; }
}

In your terminal it should look like this:

Figure 3: NGINX Virtual Host Configuration File with FastCGI Cache enabled

Save and exit the file, and as ever, check it for syntax errors before reloading:

# nginx -t
# service nginx reload

Now reload your site with the network inspector tab open:

Figure 4: Reload and inspect the site with FastCGI caching

As you can see, my site now loads in roughly one-third of the time it did before. Loading in 693ms, we have shaved about 1.2 seconds off the load time. You should see similar gains.
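You can also verify the cache from the command line by checking the header we added. A minimal check with curl (the first request will typically report MISS or BYPASS, and a repeat of the same request HIT):

# curl -s -o /dev/null -D - https://yet-another-example.com | grep X-FastCGI-Cache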

Step 9: Configure Admin Node and Visitor Nodes

At the moment our cluster is configured in a balanced configuration. The Load Balancer will serve traffic equally to each of the node servers.

We could weight that traffic if we liked, to serve more traffic to some servers and less to others. However, we are going to let node2 and node3 each serve an equal share of site traffic, while reserving node1 for administration duties.
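For reference, weighting traffic only requires a parameter on each server entry in the load balancer’s upstream block. A sketch of what that would look like, not something we use in this setup:

upstream clusternodes {
    ip_hash;
    server 172.20.213.159 weight=3; # would receive roughly three times the traffic
    server 172.20.213.160 weight=1;
}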

As mentioned earlier, many of the administration tasks involved in running a web application like WordPress can consume valuable resources and lead to a slowdown on the server. This can adversely affect visitors to the site if they are being served pages from the same server on which the administration tasks are being executed. Our chosen cluster architecture ensures this never happens, and makes it easy to add extra site visitor nodes if we ever need to scale further.

Open another port in the security group

Visit your security group in the Alibaba Cloud Management Console, and open another inbound port:

  • Port 9443/9443 – Authorization Object 0.0.0.0/0

Figure 5: Open a Port for Admin access to Node1

Reconfigure the load balancer’s NGINX virtual host configuration file

On your load balancer open the NGINX Configuration file for editing:

# nano /etc/nginx/sites-available/yet-another-example.com

Inside the configuration file add a new upstream block, and add your node1 private IP:

# Cluster Admin - only accessible on port 9443 - reserves this node for Admin activities
upstream clusterwpadmin {
    server 172.20.62.56;
}

Now remove the node1 private IP from the clusternodes upstream block:

# Clusternodes - public facing for serving the site to visitors
upstream clusternodes {
    ip_hash;
    server 172.20.213.159;
    server 172.20.213.160;
}

Below the existing server block, add another one for listening on the Admin Port with the following code:

# Admin connections to yourdomain.com:9443 will be directed to node1
server {
    listen 9443 ssl;
    server_name yet-another-example.com www.yet-another-example.com;

    ssl_certificate /etc/letsencrypt/live/yet-another-example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yet-another-example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    location / {
        proxy_pass http://clusterwpadmin;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

Make sure this block has access to the SSL directives from Certbot, and that the proxy_pass directive points at the ‘clusterwpadmin’ upstream.

Your entire configuration file should now include the following:

# Cluster Admin - only accessible on port 9443 - reserves this node for Admin activities
upstream clusterwpadmin {
    server 172.20.62.56;
}

# Clusternodes - public facing for serving the site to visitors
upstream clusternodes {
    ip_hash;
    server 172.20.213.159;
    server 172.20.213.160;
}

server {
    listen 80;
    server_name yet-another-example.com www.yet-another-example.com;

    location / {
        proxy_pass http://clusternodes;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/yet-another-example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yet-another-example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot
}

# Admin connections to yourdomain.com:9443 will be directed to node1
server {
    listen 9443 ssl;
    server_name yet-another-example.com www.yet-another-example.com;

    ssl_certificate /etc/letsencrypt/live/yet-another-example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yet-another-example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    location / {
        proxy_pass http://clusterwpadmin;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

In your terminal:

Figure 6: Admin Node configuration for the Load Balancer’s NGINX

Now you can only reach node1 by appending :9443 to the end of the URL. To access node1 for Admin work, visit:

https://yet-another-example.com:9443/wp-admin

Figure 7: WordPress administration on the node1 Admin server

Of course, you can still visit the WordPress administration on any of the other nodes if necessary, but I would advise against it.

Step 10: Securing the Cluster Replication

In our cluster, each node’s Percona database communicates with the other nodes’ databases via the MySQL port 3306, alongside the Percona-specific ports 4444, 4567, and 4568. Likewise, our GlusterFS glustervolume communicates with each of its nodes via standard open TCP ports.

At the moment, any external server can communicate with each of these components if it knows their ports and volume details. We should secure them.

Securing Percona database replication ports

In our Security Group, we opened the following ports for access to all IP addresses 0.0.0.0/0:

  • Port 3306 TCP (Inbound/Outbound)
  • Port 4444 TCP (Inbound/Outbound)
  • Port 4567 TCP (Inbound/Outbound)
  • Port 4568 TCP (Inbound/Outbound)

We now need to create individual rules for each port, one rule for each allowing Inbound and Outbound access to each of our Private IP addresses:

We need to add the following rules:

Port 3306 x 3 – Inbound & Outbound Rules

Authorization Type: Address Field Authorization Object: 172.20.62.56
Authorization Type: Address Field Authorization Object: 172.20.213.159
Authorization Type: Address Field Authorization Object: 172.20.213.160

Port 4444 x 3 – Inbound & Outbound Rules

Authorization Type: Address Field Authorization Object: 172.20.62.56
Authorization Type: Address Field Authorization Object: 172.20.213.159
Authorization Type: Address Field Authorization Object: 172.20.213.160

Port 4567 x 3 – Inbound & Outbound Rules

Authorization Type: Address Field Authorization Object: 172.20.62.56
Authorization Type: Address Field Authorization Object: 172.20.213.159
Authorization Type: Address Field Authorization Object: 172.20.213.160

Port 4568 x 3 – Inbound & Outbound Rules

Authorization Type: Address Field Authorization Object: 172.20.62.56
Authorization Type: Address Field Authorization Object: 172.20.213.159
Authorization Type: Address Field Authorization Object: 172.20.213.160

Now we need to delete the original rules for each port that allowed full access from 0.0.0.0/0.

Our Security Group Inbound rules should look like this:

Figure 8: Secure the Percona Inbound Ports

Our Security Group Outbound rules should look like this:

Figure 9: Secure the Percona Outbound Ports

Test Secured Percona Ports

We need to test communication between our nodes on their private IP addresses using these ports. Unfortunately, we can’t use the ‘ping’ tool for this, as it doesn’t work with ports.

Luckily, the ‘hping3’ tool does; install it with:

# apt-get install hping3

Now, on each of your nodes, run the following command for each of the other nodes’ IP addresses AND each of the ports; that means running the command 8 times on each node:

# hping3 <other node ip> -S -V -p <port number>

For example, on my node1:

# hping3 172.20.213.159 -S -V -p 3306
# hping3 172.20.213.159 -S -V -p 4444
# hping3 172.20.213.159 -S -V -p 4567
# hping3 172.20.213.159 -S -V -p 4568
# hping3 172.20.213.160 -S -V -p 3306
# hping3 172.20.213.160 -S -V -p 4444
# hping3 172.20.213.160 -S -V -p 4567
# hping3 172.20.213.160 -S -V -p 4568

If all the ports are working on a node, you should get responses like the following; a SYN-ACK reply (‘flags=SA’) means the port is open and reachable:

Figure 10: Successfully Testing Node 2 Port 3306 from Node 1

Figure 11: Successfully Testing Node 2 Port 4444 from Node 1

Figure 12: Successfully Testing Node 2 Port 4567 from Node 1

Figure 13: Successfully Testing Node 2 Port 4568 from Node 1

With this completed, the nodes in your Percona Cluster can only talk to each other via their open ports and private IP addresses.

Remember, if you add extra nodes, you will need to configure extra rules in your security group.
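If your environment doesn’t provide security groups, a host-level firewall can enforce the same restriction. A minimal sketch with ufw on Ubuntu, mirroring the rules above for a single peer node (you would repeat this for each peer IP, and enable ufw only once the rest of your rules are in place):

# ufw allow from 172.20.213.159 to any port 3306 proto tcp
# ufw allow from 172.20.213.159 to any port 4444 proto tcp
# ufw allow from 172.20.213.159 to any port 4567 proto tcp
# ufw allow from 172.20.213.159 to any port 4568 proto tcp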

Secure your GlusterFS File System

At the moment, any computer can connect to our storage volume as long as it knows the volume name and our IP range, but this is easy to secure.

On any of your nodes, issue the following command, using your nodes’ private IP addresses separated by commas:

# gluster volume set glustervolume auth.allow 172.20.62.56,172.20.213.159,172.20.213.160

You should receive a “Success” message.

Figure 14: Set GlusterFS volume authorizations

At any point, you can check whether this restriction is enabled, along with other details about your volume, using the info command:

# gluster volume info

As you can see, our volume only authorizes access from our nodes’ private IP addresses.

Figure 15: Gluster Volume Info – With Restricted Access

If you want to turn this off and allow all access for any reason, do that with the following command:

# gluster volume set glustervolume auth.allow all

We Are Done!

And that is it, we are done. We have created and secured a highly performant WordPress cluster using 3 nodes, a GlusterFS distributed network storage system, and the Percona XtraDB Cluster database.

We have set aside one node for the administration of the web application, with its system crontab administering WordPress Cron scheduled tasks, while the other two nodes are left to handle site traffic. Our site traffic nodes use NGINX FastCGI caching to further enhance performance and stability under heavy loads.

This architecture can be scaled horizontally with ease to serve the most demanding of enterprise sites. We could even deconstruct the cluster into a cluster of NGINX web servers served by dedicated GlusterFS file server nodes and dedicated Percona Cluster database servers. We could also add external object caching via a Redis server, and move search functionality to a dedicated Elasticsearch server. These are topics for another tutorial.

Original Link

Setting Up a Server Cluster for Enterprise Web Apps – Part 2

In this series of tutorials, we will set up a horizontally scalable server cluster suitable for high-traffic Web Applications and Enterprise business sites. It will consist of 3 Web Application Servers and 1 Load Balancing Server. Although we will be setting up and installing WordPress on the cluster, the cluster configuration detailed here is suitable for almost any PHP-based Web Application. Each server will be running a LEMP Stack (Linux, NGINX, MySQL, PHP).

To complete this tutorial, you will need to have completed the first tutorial in the series. In the first tutorial, we provisioned 3 node servers and a server for load balancing. On the node servers we configured database and web application file system replication. We used Percona XtraDB Cluster Database as a drop-in replacement for MySQL to provide the real-time database synchronization between the servers. For Web Application file replication and synchronization between servers, we set up a GlusterFS distributed filesystem.

In this tutorial, we will complete the installation of our LEMP stack by installing PHP7 and NGINX. We will then configure NGINX on each of our Nodes and on our Load Balancer, and issue a Let’s Encrypt SSL certificate on the Load Balancer for our domain, before finally installing WordPress to work across the distributed cluster.

By the end of this tutorial we will have the following Cluster Architecture:

Figure 1: Equally balanced three Node Server Cluster with Load Balancer

In the final tutorial, we will look at more advanced cluster architecture configurations involving NGINX caching, creating specialized nodes in the load balancer for Administration and for public site access, and finally hardening our database cluster and distributed filesystem.

Throughout the series, I will be using the root user; if you are using your superuser please remember to add the sudo command before any commands where necessary. I will also be using a test domain yet-another-example.com; you should remember to replace this with your domain when issuing commands.

In the commands, I will also be using my servers’ private and public IP addresses; please remember to use your own when following along.

As this tutorial directly follows the first, the sequence of steps is numbered accordingly. Steps 1 to 3 are in the first tutorial. This tutorial begins at Step 4.

Step 4: Install NGINX and PHP

Install NGINX on each node and the load balancer

On every node run the following command to install NGINX:

# apt-get install nginx

Now log in to your load balancer:

$ ssh root@load_balancers_ip_address

Then install NGINX on your load balancer, too:

# apt-get update
# apt-get install nginx

Install PHP on each node

On each node, install PHP and the most common packages required to run WordPress:

# apt-get install php-fpm php-mysql
# apt-get install php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php-memcached php7.0-mbstring php7.0-zip php7.0-xml php7.0-mcrypt
# apt-get install unzip
# apt-get install unzip

Step 5: Download WordPress files

Since we have all our web application root directories mounted as part of the glustervolume, we only need to install the WordPress files onto one node and they will be replicated across the entire cluster.
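If you want to double-check that the web root really is sitting on the replicated volume before downloading anything, a quick look at the mounts will confirm it:

# mount | grep glusterfs

You should see the glustervolume mounted at (or above) your web application root directory.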

Since it is always useful to have WP-CLI available on a system, I will install it and use WP-CLI commands to download the latest version of WordPress into the mounted directory.

Install WP-CLI

On node1 run the following commands to install WP-CLI.

Download the PHP Archive:

# curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Make it executable and move it into your ‘PATH’:

# chmod +x wp-cli.phar
# mv wp-cli.phar /usr/local/bin/wp

Test to make sure it is working:

# wp --info

You should now see output in your terminal showing details of your WP-CLI installation:

Figure 2: Install WP-CLI and test it is working

If you want to have WP-CLI available on each node then you can repeat the above on each node. By the end of this series, node1 will be set up as the administration node so it is only really important for me to have WP-CLI set up on this node.

Download WordPress files

On node1 change directory into the mounted directory that will be used for your web application root directory, and download the WordPress core files.

Remember to use the ‘--allow-root’ parameter to use WP-CLI as root, and execute the following commands:

# cd /var/www/yet-another-example.com
# wp core download --locale=en_GB --allow-root

WP-CLI will download all the core files and unzip them into the directory:

Figure 3: Download WordPress Core files with WP-CLI

But if you check the ownership of the directory and files with ‘ls -l’, you will see that there is an ownership problem; we need to change their ownership to the ‘www-data’ web server user and group.

Do that with:

# chown -R www-data:www-data /var/www/yet-another-example.com

Now if we check the directory and its contents we can see that it has the correct ownership:

Figure 4: Give ownership of the Web App directory to the Web Server

Figure 5: Check Web App Directory Ownership

On node2 or node3, we can check that the WordPress files have been replicated:

# cd /var/www/yet-another-example.com
# ls

Figure 6: The WordPress files have been replicated across the Glustervolume

Step 6: Configure NGINX

Configure NGINX on each node to serve the WordPress site

On each node, create a Virtual Host NGINX configuration file for the WordPress web application:

# nano /etc/nginx/sites-available/yet-another-example.com

Configure the file as follows:

server {
    listen 80;
    listen [::]:80;

    root /var/www/yet-another-example.com;
    index index.php index.htm index.html;
    server_name _;

    access_log /var/log/nginx/yetanotherexample_access.log;
    error_log /var/log/nginx/yetanotherexample_error.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht { deny all; }
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; allow all; }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ { expires max; log_not_found off; }
}

Figure 7: The Web Application’s (WordPress) Virtual Host NGINX Configuration file

Save and close the file, then symlink it into the /etc/nginx/sites-enabled/ directory:

# ln -s /etc/nginx/sites-available/yet-another-example.com /etc/nginx/sites-enabled

If you change directory into the ‘sites-enabled’ directory and list its contents, you will see this configuration file’s symlink:

Figure 8: Symlink the Web Application’s Configuration file into sites-enabled

Since we have made changes to the NGINX configuration, we should check the files for syntax errors:

# nginx -t

You may see a warning about a conflicting server name ‘_’. This is because the configuration files we created are domain-name independent and use ‘_’ as the server_name.

Don’t worry about that warning; restart NGINX:

# service nginx restart

Figure 9: Ignore the NGINX syntax warning and restart NGINX

Configure NGINX on the Load Balancer

So that we can use the Let’s Encrypt Certbot with its NGINX Plugin, we need to create a Virtual Host Configuration File for the Web Application on the Load Balancer.

In the previous section, we created configuration files on the nodes that had ‘root’ directives, but were missing ‘server_name’ directives.

On the load balancer, our Virtual Host configuration file will be the opposite: it will have ‘server_name’ directives, but no ‘root’ directive.

Create and open the NGINX Virtual Host configuration file we need:

# nano /etc/nginx/sites-available/yet-another-example.com

Configure the file as follows, replacing the server IP addresses with the private IP addresses of your node servers:

upstream clusternodes {
    ip_hash;
    server 172.20.62.56;
    server 172.20.213.159;
    server 172.20.213.160;
}

server {
    listen 80;
    server_name yet-another-example.com www.yet-another-example.com;

    location / {
        proxy_pass http://clusternodes;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}

Figure 10: The Load Balancer’s Virtual Host NGINX Configuration

Save and close the file, then symlink it into the ‘/etc/nginx/sites-enabled/’ directory:

# ln -s /etc/nginx/sites-available/yet-another-example.com /etc/nginx/sites-enabled

Now delete the ‘default’ virtual host from the ‘/etc/nginx/sites-enabled/’ directory:

# rm /etc/nginx/sites-enabled/default

Now as we have been making changes to NGINX, we should always check our syntax before restarting the service:

# nginx -t
# service nginx restart

Figure 11: Symlink your configuration file and check the NGINX Syntax

NGINX is now configured to serve our site. The load balancer will listen on HTTP port 80 and pass traffic to the upstream ‘clusternodes’.
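At this point, you can sanity-check the proxying from any machine. A trivial check, assuming your domain’s DNS (or a hosts-file entry) already points at the load balancer’s public IP:

$ curl -I http://yet-another-example.com

Any HTTP response from NGINX here (at this stage, most likely a redirect into the WordPress installer) confirms the load balancer is passing requests through to the nodes.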

We don’t want to serve our Web App over plain HTTP, though, so we will fix that next.

Step 7: Install Let’s Encrypt SSL on the Load Balancer

Install Certbot

On the Load Balancer, install the package that will allow us to add external package repositories to the ‘apt’ package manager:

# apt-get install -y software-properties-common

Then add the Let’s Encrypt external package repository for Certbot:

# add-apt-repository ppa:certbot/certbot

Now you can install ‘certbot’:

# apt-get update
# apt-get install python-certbot-nginx

Implement an SSL with Certbot

Normally we would now just install our certificate with the following command:

# certbot --nginx -d domain.com -d www.domain.com

However, a security issue reported on the 21st of January 2018 means this command has been temporarily disabled. This situation will soon be remedied, I am sure, but I’m including the workaround instructions below.

For now, we need to issue a slightly longer command that temporarily stops the NGINX server while the certificate is being obtained, and then restarts it again afterwards. Do so with the following command:

# sudo certbot --authenticator standalone --installer nginx -d yet-another-example.com -d www.yet-another-example.com --pre-hook "service nginx stop" --post-hook "service nginx start"

Your certificate will be issued after you submit your email, and you will need to choose whether to implement a redirect on the server to only allow HTTPS:

Figures 12 & 13: Issue your Let’s Encrypt SSL on the Load Balancer

Now if you reopen your Load Balancer NGINX Virtual Host Configuration file for the domain again:

# nano /etc/nginx/sites-available/yet-another-example.com

You will see that Certbot has automagically configured the server blocks for you, to serve the site over HTTPS via port 443:

Figure 14: Certbot automagically configures your NGINX Virtual Host File

Step 7: Install & Configure WordPress

Create the WordPress database and user

We only need to do this on one node. On node1, connect to MySQL:

# mysql -u root -p

Create the WordPress database and User, and grant the necessary privileges:

CREATE DATABASE wordpress_cluster DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
GRANT ALL ON wordpress_cluster.* TO 'new_user'@'localhost' IDENTIFIED BY 'new_users_password';

Then flush privileges and exit:

FLUSH PRIVILEGES;
EXIT;

Your terminal should look like this:

Figure 15: Create a WordPress Database and User
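Because Percona XtraDB Cluster replicates DDL statements across the cluster, you can confirm from node2 or node3 that the new database already exists there too. A quick check, assuming the same MySQL root credentials on every node:

# mysql -u root -p -e "SHOW DATABASES LIKE 'wordpress_cluster';"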

Configure WordPress

Visit your domain to go through the ‘famous’ 5-minute WordPress installation procedure:

https://yet-another-example.com

You will notice that, at the moment, none of the CSS files are loading. Don’t worry; just complete the first step so that the wp-config.php file is created.

Enter your database name, database user, password, database host, and table prefix, then submit; the installer will create the wp-config.php file we need.

Figure 16: Enter the Database details the WordPress installer requires

On node1 open the newly created wp-config.php file:

# nano /var/www/yet-another-example.com/wp-config.php

Near the end of the configuration file, before the line that requires ‘wp-settings.php’, add the following lines:

/* SSL Settings */
define( 'FORCE_SSL_ADMIN', true );
define( 'WP_HOME', 'https://yet-another-example.com' );
define( 'WP_SITEURL', 'https://yet-another-example.com' );

if ( strpos( $_SERVER['HTTP_X_FORWARDED_PROTO'], 'https' ) !== false ) {
    $_SERVER['HTTPS'] = 'on';
}

/* Disable WP-Cron */
define( 'DISABLE_WP_CRON', true );

In your terminal, it will look like so:

Figure 17: Add SSL settings and disable WP-Cron in the WordPress Config file

The SSL settings will fix the CSS problems we have been having. They force Admin access via SSL, set the site home to be served over HTTPS, and ensure that when our Load Balancer forwards traffic over HTTP, WordPress still serves static files over HTTPS when requested.

Notice we also disabled WP-Cron, and for good reason. WordPress cron jobs are not true cron jobs; they rely on visits to the site to trigger scheduled tasks. This is terrible for both reliability and performance.

Now we will schedule WordPress Cron jobs using the Ubuntu system crontab, but we will only run it on the administration node1.

On node1, execute:

# crontab -e

Now add an extra line at the bottom of your cron jobs:

* * * * * wget https://yet-another-example.com:9443/wp-cron.php?doing_cron > /dev/null 2>&1

Your crontab will look similar to this:

Figures 18 & 19: Create a system cron job to run WordPress scheduled tasks

Now revisit your URL.

https://yet-another-example.com

And you can complete the installation process with CSS intact:

Figure 20: Complete the installation process

Figure 21: WordPress site on a Cluster

Success!

Well done! We have completed the installation of our LEMP stack by installing PHP7 and NGINX. We have configured each of our node servers’ NGINX Virtual Hosts, configured our Load Balancer’s NGINX and its SSL certificate, and installed WordPress.

We now have a fully working, equally load-balanced server cluster running WordPress, served over HTTPS.

In the next and final tutorial, we will reconfigure this cluster architecture so that Node1 is reserved for Web Application Administration duties while Nodes 2 and 3 are used for site traffic.

The final cluster architecture we will build is illustrated thus:

Figure 22: Three Node Cluster with Load Balancer redirecting Admin traffic and Site traffic

In the final tutorial, we will also add NGINX FastCGI caching to the mix, and harden our database cluster and distributed file system.

See you then.

Original Link

Episode 166: Alexa gets better at business and AI at the edge

The General Data Protection Regulation took effect last week so we kick off this episode by talking about what it means for IoT devices. We then hit the Z-Wave security news and explain why it isn’t so bad, after which we indulge in some speculation on Amazon’s need to buy a security company. We also discuss a partnership between Sigfox and HERE and a new cellular module for enterprises. Also on the enterprise IoT side, we review Amazon’s new Alexa meeting scheduler feature. Then we hit on news about Arlo cameras, Philips’ lights, new gear from D-Link and Elgato’s compelling new HomeKit accessories. We also have a surprisingly useful Alexa skill for enterprise service desks.

The new Elgato Aqua is a HomeKit water controller for your spigot. It will sell for $99.95. Image courtesy of Elgato.

Our guest this week is Jesse Clayton, a product manager for Nvidia’s Jetson board. I asked Clayton to come on the show because the 10-watt Jetson board is being used in a lot of industrial IoT applications and I want to understand why. He tells me, explains how AI at the edge works and shares some cool use cases. I think you’ll learn a lot.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Jesse Clayton of Nvidia
Sponsors: Praetorian and Bosch

  • Baby, don’t fear the GDPR
  • Here’s that list of Z-Wave certified devices
  • Amazon’s scheduling has a lot of hoops
  • A good explainer of machine learning
  • Why companies need computer vision at the edge

Original Link

Episode 165: How Sears plans to use IoT

I was at the Parks Connections event that covers the smart home this week, so I share a few thoughts on what’s holding back adoption and how to think about using AI to create a smart home. From there Kevin talks about the new meeting function offered by Alexa and we add nuance to the debate over Amazon selling facial recognition software to police. We then dig into some doubts about the new Wi-Fi EasyMesh standard, cover Comcast expanding the places it offers new Wi-Fi pods, discuss funding for a smart light switch company and new Arduino boards. For the more industrial and maker minded, we talk about Ayla adding Google Cloud as a hosting option and Kevin shares how we put our IoT hotline into the cloud. Finally, we answer a question about getting different bulbs to work together before switching to our guest.

A panel on smart home user interfaces. Photo by S. Higginbotham.

This week’s guest is Mitch Bowling, the CEO of Sears Home Services, who gives me the answer to what Sears plans to do with the Wally sensor business it acquired back in 2015. I have been wondering what happened to Wally inside Sears for years. He also discusses how Sears can use IoT to make appliance repair better and the plans to add smart home installation services. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guests: Mitch Bowling, CEO of Sears Home Services
Sponsors: MachineQ and Bosch

  • Device interoperability is a huge challenge for the smart home
  • The fuss over computer vision is just beginning
  • What can Sears Home Services do with IoT?
  • The smart appliances are coming!
  • The installer will see you now

Original Link

Episode 164: New Wi-Fi standards and robots

The Wi-Fi Alliance has created a new standard for mesh networks, and Kevin and I are on top of it, discussing what it means, who’s participating, and whether or not it matters. We then tackle Sigfox’s new sensor and network in a box offering before sharing details on a new home hub from Hubitat that keeps your data local. We then talk up a new product for communicating with your kids, plans for outdoor lights from Philips and Netgear’s Arlo, and Kevin discusses his experience with the $20 Wyze v2 camera. He also bought a Nest x Yale lock, so we talk about that before getting a tip from a listener on the hotline about using cameras to set his alarm.

The Misty II is cute and somewhat affordable.

Our guest this week is Chris Meyer, who is head of developer experience at Misty Robotics. We talk about the newly launched personal robot that is aimed squarely at developers. In our conversation we get technical (so many specs), physical (why do robots fart?) and philosophical (will playing with robots turn our kids into monsters?). You’re going to enjoy this episode.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Chris Meyer of Misty Robotics
Sponsors: MachineQ and Bosch

  • Where’s Eero in this new Wi-Fi spec?
  • A hub privacy-minded folks could love
  • Why wouldn’t you buy this $20 camera?
  • Robots are in their infancy
  • Why do robots fart?

Original Link

Episode 163: Everything IoT from Microsoft Build and Google I/O

This week was a big one in the tech ecosystem with Microsoft and Google both hosting their big developer conferences. Microsoft’s featured a lot more IoT. Google shared a few updates for its Google Home and, prior to the show, made its Android Things operating system available. In Alexa news, Microsoft showed off its integration between Cortana and Amazon’s digital assistant, and Amazon added in-skill payments to Alexa. Ring has a new app, Fibaro has a new button, Netgear has a new update, Wyze has a new camera, Intel Capital has a new partner,  and we share a new report on camera security. I also share my experience with the Nest Hello Doorbell and the Nest Yale lock before we answer a question about moving music from room to room using the Amazon Echo.

The Fibaro HomeKit compatible button is $60.

Our guest this week is Microsoft’s head of IoT Sam George. He’s been on the show before, but this time we run down the big news on edge computing from Microsoft Build and discuss how a company can avoid messing up their business transformation. It’s a fun show no matter what you care about.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Sam George, head of Microsoft’s IoT platform
Sponsors: MachineQ and Twilio

  • Google’s turning the Home into a hub
  • How much is your hacked camera feed worth?
  • Thoughts on the new Nest gear
  • Why Microsoft’s edge strategy is open source
  • How to get out of pilot purgatory for enterprise IoT

Original Link

Episode 162: Smart walls and dumb homes

This week Kevin and I discuss Amazon’s big security install reveal and how it made us feel. Plus, a smart home executive leaves Amazon and Facebook’s rumored smart speaker makes another appearance. China is taking surveillance even further and Kevin and I share our thoughts on the state of the smart home, and failed projects. In our news tidbits we cover a possible new SmartThings hub, a boost for ZigBee in the UK, the sale of Withings/Nokia Health, the death of a smart luggage company, and reviews for Google Assistant apps. We also answer a reader question about a connected door lock camera.

The Smart Wall research was conducted at Disney Research. The first step is building a grid of conductive materials. Later, researchers painted over it.

This week’s guest, Chris Harrison, an assistant professor at Carnegie Mellon University, shares his creation of a smarter wall, one that responds to touch and also recognizes electronic activity in the room. We discuss the smart wall, digital paper, how to bring context to the connected home or office, and why you may want to give up on privacy. It’s a fun episode.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Chris Harrison, an assistant professor at Carnegie Mellon University
Sponsors: MachineQ and Twilio

  • A surprise appearance from the Wink hub
  • What happens when IoT can read your thoughts?
  • Kevin swapped hubs and is pretty unhappy about it
  • A cheap way to make connected paper
  • Go ahead, rethink your walls

Original Link

CIO Panel Interview: The Digital Imperative for Software Delivery Transformation [Video]

Have you ever wondered how your CIO views topics like the need for software testing and digital transformation? At Accelerate 2017 we held our first-ever CIO panel interview, discussing questions like: How urgent is the business need for digital transformation? How does the board receive information like this? What is IT’s new role in the digital economy? The session was so popular, we are gathering a new set of CIOs, including Mahmoud El Assir, the CIO of Verizon, Barry Libenson, the GCIO of Experian, and Jennifer Sepull, the former CIO of USAA, for the upcoming Tricentis Accelerate San Francisco. Check out the video and transcript below for a sneak peek of what’s to come.

Full Transcript

Emmet Keeffe: We actually had the global head of digital at BBVA speak at one of our events over the weekend, and he shared with us a very important insight at BBVA – which is that whenever they have a failure, they double down. That’s their philosophy. And so, Franz, to your point about Formula One, my original dream was to “start a Formula One team”, which I did in America. We started a company, a team, in 2008 to go racing as an American Formula One team. Ultimately, we failed, and so I doubled down. My new dream is to win the championship, so we’ll see what happens. What’s funny about my dream is that when I used to come to Europe and share it, people would burst out laughing, which I really enjoyed.

So, this session is really a follow-up on the morning session from Todd Pierce, which I thought was absolutely extraordinary, and he really gave us three messages. One is that digital transformation is a board and CEO-down transformation that’s happening within every one of the Global 2000 companies, without question. Every board member is truly scared about digital transformation, so that was one key point.

Another one is that agile, DevOps, cloud, and software delivery transformation is on the radar of every CIO globally now. And the third point, which I was incredibly inspired by, is the role that transforming testing can play in unlocking this accelerated software delivery, and ultimately digital. So, we’ve got four leaders from very different types of businesses. I’m going to ask them about their digital transformation, so you can understand how different leaders and different businesses think about digital. I’m also going to ask them each to talk a little bit about their agile DevOps cloud transformation, so you can see some differences there, as well.

Before I do that, I’m just going to talk a little bit about our private equity firm and the program that I run called Insight Ignite. So, personally, what I do is run a program called Ignite, which is about accelerating digital transformation. When I meet with a CIO, I typically ask them a question: what are the four biggest problems that you have to solve in order to unlock your digital transformation? I’m having those conversations all around the world, and I haven’t seen a single company or a single global CIO that can’t answer right away. This is a top-of-mind issue for every global company.

So, if I had a room full of CIOs and I asked the question, how many of you are going to Silicon Valley this year, every single hand would go up in the room. And the reason why they’re going to Silicon Valley is they want to find technology that’s going to help unlock and accelerate their digital transformation. What a lot of CIOs don’t know is that actually in the Valley, those earlier-stage venture capital firms are placing bets on futuristic ideas. So, they’re betting: will the market want to solve this problem three years down the road, or five years down the road? That’s what you’re going to see in Silicon Valley, which is important. But actually to solve things today, you need technologies that are relevant to today’s problems, and that’s really what Insight Venture Partners and our Ignite program is all about.

So, just a bit about the firm. We’re the world’s leading private equity firm in the area of the market that they call software growth. Most of our portfolio companies are between 20 and 100 million in revenue/turnover when we invest. The second part of our vision is only software, so we never invest in hardware or services. The third part is to find businesses that are growing exponentially fast already. We look for businesses where the revenue has already gone from two and a half million to five million, to ten million, to 20 million, and when we find one of those, if we think it’s going to go from 20 to 40 to 80, then we make a growth acceleration investment. As you heard this morning, we invested 165 million in Tricentis. We’re extremely bullish and excited about Tricentis. We agree with Sandeep that there is a very large business that can be built here in this new, continuous testing world.

So as a firm, we currently have 13 billion under management, and each year we go through a fascinating process. Currently, we’re tracking about 100,000 software companies globally, so whenever somebody gets Series A funding or Angel Funding, they go into our database, and we start to track them from the very beginning. We have a team of 40 people that call almost 20,000 CEOs per year. So, that’s how we found Sandeep actually – one of the young analysts was calling, and I think Sandeep was ignoring them for some time. And then finally the analyst said, “well, can I come to your Accelerate conference?”, and that’s really what unlocked the conversation.

Out of those 15,000 conversations, we only find 2500 that we want to meet with. And then in the very end, we consider 250 investments, and we only deploy two billion in about 25 companies each year. So, it’s an incredible needle in the haystack exercise, and I wanted to show you this just to emphasize how special Tricentis really is. This is one of the fastest growing software companies on Earth, and we found them as one of 25 out of a pool of about 100,000 companies globally.

I want to throw up a couple of other portfolio companies that you may know. We currently have 150 in the portfolio. Docker, I’m sure you’re aware of, and I’m sure your organization is leveraging microservices and containers and that whole movement. We made that investment about two and a half years ago, just as the business started to accelerate. We put 100 million in there to help them move faster. WalkMe is a very interesting business that provides self-driving software, so it eliminates the need for training. So, when you’re done and the application goes into production, WalkMe is actually a Google Maps-type application that can just show the user how to use it without having to put them through any type of training. Each one of these portfolio companies is fascinating, and there are many, many more in the portfolio.

When I hear from a CIO what their four issues are, I match analysts against those, and portfolios against those, and try and help accelerate solving those problems, and therefore accelerating the digital transformation.

So, Todd challenged you this morning to be brave, and to engage your CIO, and this will be a way that you can become more relevant on the digital agenda in the CIO’s office. There are three ways that we engage with large global companies: one is that we create curated briefings on topics of the CIO’s choice. So, if they say AI is top of the radar right now, we’ll bring all of our analysts and all of our portfolios and see if we can accelerate the knowledge around AI. If they say DevOps is a topic, we can address that. Whatever the topic is, we can bring thought leadership and technology and try to accelerate the solution. Franz mentioned we produce thought leadership events around the world. We were in Tuscany this past weekend with 20 CIOs having a very interesting discussion about IoT and about digital transformation. So we do these all around the world, and you can invite your CIO to come to one of these events. And the last thing we do is invite CIOs to sit on the boards of our portfolio companies and help them accelerate. So, actually, Rob and Vittorio and Erwin are all board members at Tricentis, along with Todd, to help think about how to accelerate the business faster.

So, with that, we’re going to transition over to the panel, and first I just want to go through … if you could quickly cover your career in 30 seconds prior to Etihad, and then what you’re currently doing at Etihad.

Rob Webb: I’m Rob Webb, I’m the CIO of Etihad Aviation Group, and prior to that, I was the CIO of Hilton Hotels, the global hotel company, and I worked with General Electric and Equifax in the early parts of my career.

Emmet Keeffe: Great. Vittorio?

Vittorio Cretella: I spent 26 years with Mars, where I was the global CIO, and about a month ago, I retired and became an independent advisor.

Emmet Keeffe: Very good. Andreas?

Andreas Kranabitl: I’m responsible for IT at the SPAR Austrian Group. I’ve been with this company for over 55 years, and my passion is Formula One. So if you need somebody to do something for you-

Emmet Keeffe: Okay, the panel discussion is done.

Erwin Logt: I guess after we met you, all our passions are Formula One. Good afternoon, my name is Erwin Logt. I worked for Procter & Gamble for about 18 years, the last couple of years in the US. Since 2013, I’ve been the CIO of FrieslandCampina, one of the leading dairy companies in the world. For the last two years, I have also had the pleasure of being the Chief Digital Officer.

Emmet Keeffe: Fantastic. We’ve got a great group of panel members here, so I want to just investigate first how they’re thinking about digital transformation. Rob, we’ll start with you, if you could just talk-

Rob Webb: Well, I’m just going to kick this off by saying you are all very fortunate to be in this room, because the opportunity that we all have is just enormous. You’ve got one of the world’s top venture capital firms that’s mid-stage, so all of the screening of the companies has already been done, and they’ve invested in a fantastic company, Tricentis, right here in Vienna, but also in Silicon Valley. And the sweet spot that Tricentis is in with respect to automated testing is something that is in incredible demand around the world. So it’s a unique culmination of events, because as a CIO, what I’m doing across Etihad and our equity partner airlines is trying to accelerate innovation, and that really means everything we’re doing with online and mobile, and all the rapid application and agile development we need to do, is the most differentiating part of our application portfolio. And you’re becoming aware of, getting trained in, using, and buying software that will make your CIO’s survival rate higher. It will make your company more profitable, safer, and faster growing. And CEOs love that.

Can you make my testing faster and get my new apps out there so I can be more competitive? Can you do that in a way that makes testing more automated and safer, and can you do that while you’re lowering costs? This is something that is very, very unique, and I think we all have a wonderful opportunity to be part of this revolution. As Todd really highlighted this morning, it takes each of us to commit to make this happen, and we can change the world.

Emmet Keeffe: So I had the privilege last summer to visit the innovation center in Abu Dhabi that Etihad built. I’m just curious – that was a massive investment. Can you talk about what drove the board to make that investment, and really just how the company is thinking about digital transformation?

Rob Webb: Well, you know, we’re the national carrier of the United Arab Emirates, and the country’s a very, very wealthy country, but they have these amazing aspirations. They want to build the very tallest buildings and the best education systems, the best healthcare systems, and they also want the world’s best airline. So that means new planes from Airbus and Boeing, but also it’s more than just the physical aircraft – it’s the service on board and the digital guest experience that goes with that service. That includes the online applications, the loyalty program, the mobile apps, the in-flight entertainment. So, you’ve heard the expression from Marc Andreessen and Ben Horowitz that software is eating the world. What Tricentis allows you and your companies to do is accelerate how software is digitizing your businesses. So, you’re just, as I said, right at the sweet spot of an enormous opportunity.

Emmet Keeffe: Fantastic. When Sandeep and I first started recruiting CIO’s, I have to say actually, testing was not sort of the hottest topic in the world. So, it was fascinating when Sandeep and I started reaching out to some of the world’s most famous CIO’s. I think we reached out to 60 of them, thinking that maybe 20 would want to join this board. We actually had a 60 for 60 hit rate. Every single CIO responded and said, “I want to have a conversation about that and I want to consider joining the board of that company,” which I think shows just how relevant testing is for the digital transformation.

So, Vittorio, I know you’ve come from a very different type of business. If you could talk about how you think about digital transformation?

Vittorio Cretella: Sure, as a CPG CIO, I think of digital transformation from the top line to the bottom line, and everybody gets the top line and why the front end, and the relationship with the customer, benefit from digitalization. We had a clear example where digitization and the development of digital solutions using DevOps bring you closer to the consumer, creating value with digital factories that are a blend of the physical product and the data.

What many don’t get is that on top of digitalizing – that shiny, visible part of the iceberg – you need to digitize the whole of the company operations, and that includes your data asset and your enterprise systems, your system of records, and your ERP. There are three fundamental reasons why you want to do that. The first one is that data becomes equity, which historically is not really part of the successful model for a CPG. But now it’s a big differentiator, and if you don’t get your internal data in order, you can’t even try to absorb and extract insight from the multitude of external data; with the internet of things, with the closeness to consumers and digitization, you actually need to.

So that’s the first reason. The second reason is speed, and we have several examples. When you have a merger or an acquisition, what stands in the way is the integration and the regression testing of ERP, especially when you talk about a global footprint. So, products like Tricentis Tosca would massively reduce the time to market; the lead time to make those changes happen.

And the third one, last but not least, is efficiency – because for any CIO who needs to digitize, part of the funding for that initiative comes from rationalizing and making your IT and your enterprise system more agile. And we have a typical example in that the next frontier to making your enterprise system more efficient is to automate. Testing is at least, as I said this morning, 40% of that effort. So, we have an automation expert center, that is looking at all the transversal processes in operations, as well as the development expert center adopting DevOps for ERP, and both of them using tools like Tricentis Tosca, or looking at a tool like Tosca to speed up and deliver that part of efficiency. So, again, everybody looks at the top line, and clearly, there is a massive differentiation with digitization on the way you craft the consumer value proposition. But we shouldn’t forget the system of records and the massive benefit of digitalizing your operations.

Emmet Keeffe: That’s great. Thank you, Vittorio. We spent the last day and a half here in Vienna with a room full of 15 CIO’s. These are all members of our Growth Advisory Board. The question we were asking is, if you’re calling on a global CIO at a business of this scale, how do you explain Tricentis in such a way that the CIO will actually sponsor that transformation? I know for many of you, if you have the right CIO level sponsorship, it would accelerate everything that you were trying to do with continuous testing and Tricentis. And what’s interesting is that a lot of times, when you reach out to a CIO on continuous testing, they’ll ping their testing organization or their vendor partner and ask, “are we doing test automation?”, and the answer comes back up, yes, we are. What they don’t say is we’re doing UI level automation, and we’re not really doing automation of the core. So this is actually one of the things that came out of the last couple of days that’s really a fundamental value proposition of Tricentis: end-to-end testing across the entire net new and legacy infrastructure.

Vittorio Cretella: I’d like to add something. I do remember the head of our development factory telling me we don’t automate testing because we don’t have the money to pay for scripting. And you know, if we’re script-less, that problem goes away.

Emmet Keeffe: Perfect. That’s great. So Andreas, you obviously come from a more consumer-oriented business. If you could maybe talk about when that pressure really started building on the digital transformation, and where you are in the journey with digital?

Andreas Kranabitl: I think we’ve been in a digital tsunami over the last three years, and the effect on the retail business is very strong. We are working hard to optimize processes using digital systems across I don’t know how many thousand products per month. One big part of this digital story is really optimizing the existing processes to be much faster.

The second area, of course, is new business models, which only become possible with digital. So, coming closer to the consumer – here in Austria a store’s opening hours are limited, so I have to deal with how we can really meet the customer at any time, on any day of the week.

And the third focus area is the digital customer experience, so we are talking about, of course, mobile apps and more. This means the digital challenge runs through the whole company – we’re not talking only about online shopping, for example. The last few years we have been working hard on that. But it is much easier to digitize the existing stuff.

In the past, there was a cushion between IT and the consumer. But going digital with this customer experience, using a mobile application, IT is dealing directly with the consumer, which places IT in a very strategic position in the company. That position has shifted from a supporting role to a strategic one, because we are talking to the consumer, and it is challenging to understand how they are thinking. We have built new disciplines inside the IT organization, because we have to really understand how the user is reacting and how to really achieve this customer experience.

As we come into the digital experience, IT projects never really end, so we need a permanent department. Deploying functionality day by day, or hour by hour, is a big challenge for the organization. If you had asked me one year ago, I would have said that testing is boring. Now, testing is strategic, and I’m very happy that my colleagues started to implement Tricentis in all these testing processes years ago in the legacy world.

So now, we are really ready to service the new world.

Emmet Keeffe: That’s great. Well, that’s exciting for everyone in the room and confirms exactly what Todd was saying this morning. We have one more quick question for you: this weekend, we had two chief digital officers speak at an event in Tuscany. One was from BBVA, and the other was from Schindler. Very, very different businesses, and one of the things that struck me was hearing BBVA’s strategy versus Schindler’s strategy. I realized that Schindler is a hardware company and BBVA is really a software company, and therefore they had really different digital strategies, and I’m curious: are you beginning to think of yourself as a software company now, and do you think retailers are ultimately going to be heading in that direction?

Andreas Kranabitl: I think this is similar to our situation. Our board and our owners are really thinking about the future. But I think we have to convince the more senior guys. Our board leader is always saying, Mr. Kranabitl, you must be aware we are not going to be an IT department. We are a retailer. But I think the understanding now, more and more, is that IT, as I said before, is moving into a very strategic position. More and more, I think we’re also moving into a position to really lead this transformation, so inside SPAR, we are talking about digital and innovation.

Emmet Keeffe: That’s great.

Andreas Kranabitl: I think we’re really a business driver now, for example, organizing and running innovation workshops with the business to explain to them where the future is and what technology can really do for the business model. So, this is our position – I am willing to try this. This is really a cool situation, because I think they’re really starting to see the changes, and that’s really the point.

Emmet Keeffe: Okay, great. Well, another really interesting session we had this weekend was called IT in the Boardroom, with the CIO of Rolls-Royce, who’s just finishing up a big transformation there. He left us with two things from his keynote: don’t ever underestimate the lack of knowledge that a board director has about technology, and don’t ever underestimate their discomfort in sharing that with you. So this is actually a big challenge that CIOs have: they’ve got pressure from the board level regarding a digital transformation, but the board members don’t really know what it is, and they’re also afraid to admit that.

Erwin, same thing, I’d like to hear about your business. I know you’re just taking on this Chief Digital Officer role as well. Could you talk about what caused the business to head in that direction and how you are thinking about the digital strategy?

Erwin Logt: First of all, I heard a few of my fellow panel members say testing is boring. I don’t think it’s boring. I think it’s actually pretty cool. I have to admit, before I became a member of the advisory board of Tricentis, I didn’t know that much about testing, and I totally underutilized and underestimated it – the amount of time we spend on it, and the opportunity to really transform testing and drive speed and quality along the way. But I’ll come to that in a second.

So FrieslandCampina, for those of you who don’t know it, is a big multinational dairy player, about 12 or 13 billion. We sell milk, yogurt, that kind of stuff. For about 80%, we are a fast-moving consumer goods company, so it’s very similar to Mars, to a certain extent. And about 20 to 30% is B2B, where we take ingredients out of dairy and sell them, for example, to pharmaceutical businesses. Now, about three years ago, with the executive board, we said, “Okay, digital; the world is changing whether we like it or not. We have to change, we have to adapt.” To a certain extent, we looked at it from an opportunity point of view; to a certain extent, from a threat point of view. A recent example, blockchain, is an interesting one that is used to track and trace the ingredients of food. And we have to decide whether we want to play or not play.

Anyway, we declared a strategy. We called it Embrace Digital, and to be very honest, like many other companies, it probably took us about a year to figure out what we really meant by that, and where we really wanted to move the needle. What were the priorities, and how do we measure success, etc.?

Right now, we have a digital strategy, if you like, and we are prioritizing three areas. The first is the commercial domain: we’re looking at digitized marketing – we want to set our marketers up for success in the new digital world, with all kinds of tools and technologies. Within that, we also want to set up a whole new e-commerce channel, and, to a certain extent, drive a new business model, which for us is direct-to-consumer sales. We have a few products where the profit margins allow us to sell directly to consumers, with all kinds of interesting challenges, etc.

The second priority is analytics. Combined, obviously, with elevating the first priority, there’s more and more data becoming available, and more demand for real-time insights, whether it’s consumer analytics, consumer behavior, or just business performance. And the third priority is what we call the digital workplace, or the employee. We truly believe that, on the one hand, we are investing in customer experience and transforming that, but we also believe in transforming the employee experience, and thereby elevating the performance of the company.

Now, last but not least, like all of us, we are challenged by a higher demand for speed in adopting technologies and bringing value to market. Of course, it always needs to be cheaper in our business, and we cannot drop quality. As a matter of fact, quality needs to increase, so it’s an end-to-end game. And then, like I said, things like Agile and DevOps are coming up. I wouldn’t say we are a front-runner, but we are catching up quite quickly, and with that, obviously, we are now very much aware of the pain points around testing, especially the lack of speed, both on the operational backbone and on the new, fancy stuff – the apps and the websites. We are trying to transform that part of the business as well.

Emmet Keeffe: Thank you very much. As Franz mentioned, prior to entering the private equity world, in the year 2000, I spent about a year interviewing analysts, project managers, CIOs, and heads of development, asking the question: “If you’re trying to make software delivery go lightning fast, what gets in the way of that speed?” The answer I kept hearing over and over again was the upfront requirements phase, which is what bogs everything down when you’re trying to get software out at speed. So we invented the market for software simulation, and what I spent 17 years working on was a real-time, collaborative prototyping platform for product managers and business analysts, so they could actually collaborate on design solutions with the business. Unfortunately, it was an idea that was about 30 years ahead of its time. But the market is catching up. Etihad, for example, is spending time and money on design thinking – the latest approach to rapidly visualizing new, innovative ideas. I think an interesting question might be: if your organization is involved in this sort of early, upfront prototyping, how can you get engaged from a testing standpoint earlier, and then help make sure they do testing the right way as they head into the development process?

So, I’ve asked Rob just to talk a little bit about design thinking. What are you doing?

Rob Webb: Well, we’re running short on time, but I’d summarize it by saying that these technology and process changes move very, very quickly. For everyone in the room, there’s a huge opportunity to be a change agent: to inform your CIO and your head of development about the new Agile, design thinking, DevOps world, and to be a champion for the changed behavior. Through that process, in addition to inviting them to connect with Emmet’s venture capital firm – which I really think they’d welcome – you also have the opportunity to make them heroes, and to make yourself a hero, because you’re going to increase speed, reduce risk, and improve the quality and cost structure of that testing. There are many ways to do that. But it means taking a bit of risk and being a change agent inside your technology organization.

I would just leave it at that. You can read up on design thinking; it’s a customer-first approach to solving problems in agile, bite-sized ways, and it’s working for us. But I think everyone in the room has a huge opportunity.

Emmet Keeffe: That’s great. Final question and then we’ll wrap it up. We’ve got to get some of these gentlemen on an airplane. Andreas, I want to ask you when did this testing topic hit your radar? How did it hit your radar? I’m just curious. Within your organization, how did they bring it to you?

Andreas Kranabitl: It really hit when we started to go online, enabling online shopping, and learned that this is not a project which starts and ends. After one project is finished, the next project is starting. So we said that this is a permanent process, and the requirements are never-ending, and I think the main problem was speed. I really couldn’t understand it at first, because I’d been trying to stay on the business side, and if requirements are coming, I have to say yes. And really, that is the state of the art with online customer requirements: we need this functionality, that functionality, and we deploy functionality on a daily basis. The message, or the topic of the year, is permanent deployment in our companies, and we are working hard on that. This was really the point, because I have to admit, I used to say that testing is boring.

But now I really understand how strategic and how important testing is in this environment. Before, in a classical waterfall project, it was something at the end of the project, done by the business people or by some IT people, and not really of interest. Now I think testing is really strategic, because first of all, you have to save money and you have to save time, especially in the testing environment. And the sooner you start testing, the more efficient it is.

I have to mention that if you ask me what the most important focus in the digital transformation is, it’s people. And I think we cannot waste people on testing – there is much higher-value work to do. We need future-oriented staff, and we should not make them suffer by doing manual or needless testing. I think this is very important from my point of view.

Emmet Keeffe: That’s great. Well, just in summary, over the last day and a half, there were really three or four things we heard from this room full of CIOs. One is that they are under tremendous, tremendous pressure from the board to accelerate this digital transformation. The second thing we heard is that they’re looking for opportunities to automate within the IT budget. Typically, 80% of the budget is spent to run the business, and only 20% on innovation, so they’re looking for ways to shift that money over. And in our investment portfolio, we’re seeing more and more fast-growing businesses that have automated something. In this case, it’s testing. We just invested in one that’s automating incident diagnosis and incident resolution. Many of our investments are somehow in the automation space. The reason why is that CIOs are looking for budget they can release and redeploy towards digital. The other thing we heard from the CIOs is that they desperately want speed. I mean, they want 30-day-type speed, not three-month, six-month, 12-month-type speed. And the last thing they want is everything done at a lower cost: more efficiency.

So, I couldn’t agree more with what Todd said this morning. I think you’re in an absolutely extraordinary position to have a really fun ten years here as we go through this transformation. And I also think, from a career development standpoint, if you are brave enough to elevate your story up into the office of the CIO, your career will accelerate as well.

So, with that, I want to thank Todd from this morning and our four panel members here. Thank you, everybody.

Original Link

Episode 161: Amazon’s Alexa Blueprints, home robots and more

This week’s show finds me in Sweden pondering Alexa Blueprints, the Amazon Echo for kids, and Amazon’s smart robot plans. Kevin and I talked about all of that before showcasing new IoT research out of Carnegie Mellon, the University of Washington, and Princeton. Two senators proposed a social media data sharing law that appears to ignore the IoT, Comcast reported growth in home automation subscribers, a few gadgets got new features, and there’s a new version of a popular IoT chip that can handle mesh Wi-Fi. Kevin changes his smart home platform and we advise someone on a connected kitchen renovation.

The IKEA Tradfri lights have expanded to include colors and wall-mounted flat lights.

Our guests this week are from IKEA: Rebecca Töreman, who heads up the IKEA Tradfri products, and Lena Pripp-Kovac, sustainability manager at IKEA of Sweden. Töreman gives us a Tradfri update after a year on the market, while Pripp-Kovac offers valuable tips on how to design connected products with sustainability in mind. It left me questioning how I think about many connected devices. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guests: Rebecca Töreman and Lena Pripp-Kovac of IKEA
Sponsors: Forgerock and Twilio

  • Alexa for kids and the home robot debate reignites
  • Smart walls, power-saving cameras and IoT security
  • Kevin is dumping SmartThings for Wink
  • IKEA’s next smart home area could be health
  • How to design a sustainable connected product

Original Link

How to Build a QA Strategy Like Spotify

The QA strategy (or lack thereof) that works well when your team is first starting out probably isn’t very sophisticated. Teams that go through periods of growth often discover the hard way that QA that is “good enough” for a tiny startup doesn’t hold up in the long run.

Spotify is a great example of how to scale development and QA practices successfully. Since its launch in 2008, Spotify has grown from 150 employees to over 2,000. How does it keep product quality high for over 157 million users? Here are a few QA lessons shared by Spotify that growing development teams can model their own testing processes after.

Prioritize Long-Term Reliability and Quality

One of the biggest hurdles faced by growing teams like Spotify is maintaining quality and coverage over time. As organizations gain momentum, opting for quick fixes and low-hanging fruit can easily become the norm. But to scale effectively, teams must prioritize development and testing choices that will provide the best long-term gains. At Spotify, developers work with the product team to prioritize testability and stability in the product over time.

“If you have a solid system to test, you need to add a lot of testability; I talk to the product owners saying that we need to add this to give you what you want. I know you didn’t order this, and this will take us ten days extra, but this is what we have to do to have a product which we can have high reliability in the future.” – Kristian Karl, Test and Development Manager

Structure Your QA Team to Align With Your Product Goals

Fast-moving organizations know that siloed QA teams don’t scale. Spotify’s development organization is composed of squads and tribes arranged around focus areas. This structure helps individuals collaborate across different functional areas more effectively, and ensures that the goals of the QA, product and development teams stay aligned.

In addition to this tribe-and-squad team structure, Spotify stresses that the role of QA isn’t to block releases or act as a gatekeeper, but to collaborate with developers and product teams to continually improve the product.

“Leaving that role behind can be harder than it might sound. It takes some effort as a QA to not have full control of what goes out to the end-user, but to try to have that would be detrimental to true engagement from the others in the dev team, and would also slow all of you down immensely.” – Olof Svedström, QA Chapter Lead

Use Automation to Accelerate QA, Not Replace It

Testing automation is a key component of Spotify’s ability to scale development. However, it isn’t used as a panacea for every quality woe. Instead, Spotify’s QA team uses testing automation as a tool to help their QA engineers be more effective, and focus more of their energy on their overall product quality goals.

“I would rather say that you can never replace humans with machines for testing. We can lose 20% of the testers that don’t apply. The number of tests that you can do on this system is infinite. If I can get help with automation, I can focus on further, deeper and broader tests. That’s what I want to achieve and get from automation.” – Kristian Karl, Test and Development Manager

Learn More About How Enterprise Teams Build a Scalable QA Strategy

We recently took a look under the hood at a few big-name enterprise organizations with famously lean QA processes. In our guide, Agile QA at Scale, we dig into how companies like Spotify, Facebook and Atlassian have built QA strategies that scale, and the lessons that growing teams can learn from how they approach quality assurance.

Download the guide now for more insight into what it takes to build a software testing strategy that will scale with your organization.

Original Link

Episode 160: A deep dive into Microsoft’s IoT security platform

This week’s show is all about Microsoft’s new IoT security product, Azure Sphere. Kevin and I start with that, before talking about a new checklist from the Online Trust Alliance explaining how to secure your enterprise IoT gear. We then discuss acquisitions such as Nice buying a 75% stake in home security startup abode, Lutron buying professional lighting company Ketra, and the possibility that Google might acquire Nokia’s health assets. In news bits, we talk about August’s new unlocking powers, Twilio’s new SIM offering, smart pet transport and VMware’s new lab setting for its IoT software. Kevin shares his thoughts on HomeKit sensors from Fibaro and we answer a question about doorbells.

The Art Institute of Chicago uses Ketra’s lighting. Ketra was recently acquired by Lutron. Image courtesy of Ketra.

Our guest this week is Galen Hunt from Microsoft, who has been working on the Azure Sphere product for the last four years. He shares why Microsoft attacked IoT security with a hardware, OS, and cloud product, how far Redmond is willing to go on openness, and practical aspects such as the revenue model and support life. You’ll walk away from this one a lot smarter.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Galen Hunt, partner managing director at Microsoft
Sponsors: Forgerock and Yonomi

Original Link

Transforming Enterprise Decision-Making With Big Data Analytics

A survey conducted by NVP revealed that the increased use of big data analytics to make more informed decisions has proved noticeably successful. More than 80% of executives confirmed that their big data investments were profitable, and almost half said that their organization could measure the benefits from their projects.

At a time when it is difficult to find such extraordinary results and optimism across business investments, big data analytics has established that, done in the right manner, it can bring glowing results for businesses. This post will show you how big data analytics is changing the way businesses make informed decisions. In addition, you’ll understand why companies are using big data, with the process elaborated to empower you to make more accurate and informed decisions for your business.

Why Are Organizations Harnessing the Power of Big Data to Achieve Their Goals?

There was a time when crucial business decisions were made solely based on experience and intuition. In the technological era, however, the focus has shifted to data, analytics, and logistics. Today, while designing marketing strategies that engage customers and increase conversion, decision-makers observe, analyze, and conduct in-depth research on customer behavior to get to the root causes, instead of following conventional methods that depend heavily on customer response.

Five exabytes of information were created between the dawn of civilization and 2003; we now generate 2.5 quintillion bytes of data every day. That is a huge amount of data at the disposal of CIOs and CMOs, who can use it to gather, learn, and understand customer behavior, along with many other factors, before making important decisions. Data analytics leads to more accurate decisions and more predictable results. According to Forbes, 53% of companies are using data analytics today, up from 17% in 2015. It enables the prediction of future trends, the success of marketing strategies, positive customer response, an increase in conversion, and much more.

Various Stages of Big Data Analytics

Being a disruptive technology, big data analytics has inspired and directed many enterprises not only to make informed decisions but also to decode, identify, and understand information, patterns, analytics, calculations, statistics, and logistics. Utilizing it to your advantage is as much art as it is science. Let’s break down the complicated process into stages for a better understanding of data analytics.

Identify Objectives

Before stepping into data analytics, the very first step all businesses must take is to identify objectives. Once the goal is clear, it is easier to plan, especially for the data science teams. Starting from the data gathering stage, the whole process requires performance indicators, or performance evaluation metrics, that measure each step from time to time and catch issues at an early stage. This will not only ensure clarity in the remaining process but also increase the chances of success.

Data Gathering

Data gathering, being one of the most important steps, requires full clarity on the objective and the relevance of the data with respect to that objective. In order to make more informed decisions, it is necessary that the gathered data is right and relevant. Bad data can take you downhill, with no relevant reporting to show for it.

Understanding the Importance of the 3 Vs

The 3 Vs define the properties of big data. Volume indicates the amount of data gathered, variety means the various types of data, and velocity is the speed at which the data is processed.

  • Define how much data is required to be measured.
  • Identify relevant data (for example, when you are designing a gaming app, you will have to categorize according to age, type of the game, and medium).
  • Look at the data from the customer’s perspective. That will help you with details such as how long operations take and whether you respond within your customer’s expected response times.
  • You must verify data accuracy. Capturing valuable data is important. Make sure that you are creating more value for your customer.

Data Preparation

Data preparation, also called data cleaning, is the process in which you give shape to your data by cleaning it, separating it into the right categories, and selecting it. Turning vision into reality depends on how well you have prepared your data. Ill-prepared data will not only take you nowhere, but no value will be derived from it.

Two key focus areas are what kind of insights are required and how you will use the data. In order to streamline the data analytics process and ensure you derive value from the result, it is essential that you align data preparation with your business strategy. According to a Bain report, “23% of companies surveyed have clear strategies for using analytics effectively.” Therefore, it is necessary that you successfully identify which data and insights are significant for your business.

Implementing Tools and Models

After the lengthy collection, cleaning, and preparation of the data, statistical and analytical methods are applied to extract the best insights. Out of the many tools available, data scientists must use the statistical and algorithmic tools most relevant to their objectives. Choosing the right model is a thoughtful process, since the model plays the key role in producing valuable insights. It depends on your vision and the plan you intend to execute using the insights.

Turn Information Into Insights

“The goal is to turn data into information, and information into insight.” — Carly Fiorina

This stage is the heart of the data analytics process: all the information turns into insights that can be implemented in the respective plans. Insight simply means the decoded information – the understandable relations derived from big data analytics. Calculated and thoughtful execution gives you measurable and actionable insights that will bring great success to your business. By applying algorithms and reasoning to the data derived from your models and tools, you can extract valuable insights. Insight generation depends heavily on organizing and curating data. The more accurate your insights are, the easier it will be for you to identify and predict results, as well as future challenges, and deal with them efficiently.

Insights Execution

The last and most important stage is executing the derived insights in your business strategies to get the best out of your data analytics. Implementing accurate insights at the right time, in the right strategy, is where many organizations fail.

Challenges Organizations Tend to Face Frequently

Despite being a technological invention, big data analytics is an art that, handled correctly, can drive your business to success. Although it may be the most reliable way of making important decisions, there are challenges, such as cultural barriers. When leaders are used to making major strategic business decisions based on their understanding of the business and their experience, it is difficult to convince them to depend on data analytics – an objective, data-driven process that embraces the power of data and technology. Yet aligning big data with traditional decision-making processes to create an ecosystem will allow you to create accurate insights and execute them efficiently in your current business model.

Original Link

10 DevOps ”Secrets” for Job Seekers – and Everyone Else

Oh sure, you “do” DevOps — but how well do you really know this popular combination of software development and operations philosophies, practices, and tools?

If you’re looking for a job in the world of DevOps, just starting out on a DevOps transformation, or still working in a more traditional IT environment, the honest answer may be “Not a whole lot.” Let’s face it, even seasoned DevOps veterans often retain a blind spot…or three.

From the term itself to the reasons behind its rapid expansion inside businesses of all shapes and sizes, DevOps has a complex and often surprising history and practice. We decided to dig into that still evolving story to spotlight 10 DevOps “secrets” everyone involved should know about, but often don’t.

1. What’s in a Name? With DevOps, Plenty

It doesn’t take supernatural powers of deduction to figure out that “DevOps” fuses development and operations into a single portmanteau. Indeed, the term itself, even as it has become commonplace, is a reminder of its core purpose: to bring traditionally siloed development and operations teams into greater alignment.

But that’s not the whole story. The term is typically attributed to Patrick Debois and Andrew Shafer, from a presentation on “Agile Infrastructure & Operations” given at a 2008 Agile conference in Toronto.

Check out our post on “The Incredible True Story of How DevOps Got Its Name” for more, including a video history of the term from Damon Edwards.

2. DevOps Isn’t Really About Technology

If you consider some of the biggest, buzziest terms in the modern software landscape – from “cloud” to “containers” – they ultimately refer back to a technology or set of technologies. Not so with DevOps. Though commonly associated with software development, it’s ultimately a matter of culture. You might hear proponents describe DevOps as a mix of people, processes, and tools – which is to say it’s not about any one of those things in isolation, nor is it something you can procure, install, hire, or otherwise cross off a list as “complete.” You can’t just buy a “box of DevOps”.

“DevOps isn’t a service like payroll processing, and it’s not something you can outsource or assign one team to perform the entire role,” says Robert Reeves, CTO and co-founder at database release-automation vendor Datical.

Making the transition isn’t always easy, warns New Relic Developer Evangelist Tori Wieldt. “If you have legacy procedures, they were put in place for a reason, so there are likely vested interests in keeping things just the way they are. But you have to realize that change is constant – you are never done with DevOps. You always have to be tuning and optimizing.”

Indeed, DevOps is a culture of ongoing improvement; there’s no finish line, and to pretend there is one probably means you’re missing the point.

3. There Is Consensus Around the DevOps Toolchain

Of course, the point above doesn’t mean there’s no relationship between DevOps and technology. For starters, hanging a DevOps banner above the same old, tired tools and processes won’t do anyone much good.

The conventional wisdom says that DevOps practitioners are best served by standardizing on a toolchain. The specific tools a particular organization chooses are up to that organization. But there’s general agreement on the building blocks of a strong, standardized toolset, often represented in an “infinity loop” diagram.

The common links of the toolchain are typically Plan, Create, Verify, Package, Release, Configure, and Monitor. As New Relic’s Henry Shapiro explained in our recent primer on SRE tools (which can have a lot of overlap with DevOps tools), teams can take this framework and make it their own by then selecting the specific tools they’ll use to achieve each of these necessary steps in the software development lifecycle.

4. One of the Most Famous DevOps Books Is Pure Fiction

Seriously! One title on anyone’s DevOps 101 reading list should be The Phoenix Project. Co-authored by Gene Kim, Kevin Behr, and George Spafford, the book is considered the definitive case study for the power of DevOps to solve real-world IT problems. It’s also a novel. As in, it’s made up! The subtitle gives it away: “A Novel about IT, DevOps, and Helping Your Business Win.”

Of course, The Phoenix Project is firmly grounded in everyday reality. The story follows an IT manager tasked by the CEO with saving a high-profile project that is way over budget and way behind schedule. How the story plays out has made the fictional tale into a touchstone in DevOps culture, clearly and dramatically explaining the need to modernize monolithic approaches to IT operations, software development, and more.

(Listen to Gene Kim discuss The Phoenix Project and his latest tome, The DevOps Handbook, on the New Relic Modern Software Podcast.)

5. The Role of “DevOps Engineer” Is Surprisingly Controversial

One long-simmering DevOps debate addresses whether the term itself should be codified into job or team titles. For example, some folks cringe at the term “DevOps Engineer.” Ditto the idea of creating a “DevOps team” or department within a larger software organization. Purists contend that DevOps culture should spread throughout the organization; creating a specific team just puts a new shade of lipstick on the same old siloed, monolithic IT pig.

Others note, however, that if DevOps is ultimately about building and operating better software, then it needs to be evangelized widely – so why not include it in job titles?

In the end, DevOps principles matter far more than what you call them or how you title a team. For some, the Site Reliability Engineer role encapsulates that effort. That’s true at New Relic, for example; vice president of software engineering Matthew Flaming describes the SRE role as “maybe the purest distillation of DevOps principles into a particular role”. For other companies, time-tested titles like Systems Engineer, Software Engineer, and the like do just fine.

6. “DevOps” Jobs Do Exist – and They Pay Very Well

Like it or not, “DevOps Engineer” and similar titles are abundant these days. A recent national search for “DevOps Engineer” on the jobs site Glassdoor produced more than 23,000 open positions. Try the same search on other jobs or networking sites and you’ll see similar numbers.

Whether the title stands the test of time remains to be seen. Many DevOps experts think it will ultimately go away, as will the term “DevOps” itself, as it becomes part of doing business as usual. In the meantime, though, these jobs pay really well. Compensation data can be fickle, but Glassdoor pegs the average salary for DevOps Engineers at a healthy $138,378.

7. DevOps Can – and Should – Be Measured

Don’t let the word “culture” deceive you into thinking DevOps is a feel-good, Kumbaya-type of thing. DevOps is serious business, and its effects can and should be quantified and tracked for insight into the overall health and productivity of your development and operations teams.

If the ultimate goal of DevOps is to improve time-to-market and quality (two objectives not always aligned in the software world), then there are plenty of ways to measure its effects:

  • Deployment frequency: The goal is smaller, more frequent releases rather than large, complex, and infrequent releases. According to some measurements, DevOps teams release deployments 30 times more frequently.
  • Change volume: How much net change is pushed to production in a typical release. Improving deployment frequency should not lower overall change volume.
  • Lead time: Code deployment lead time measures how long it takes for code to get from development to production. According to Gene Kim, it predicts the quality of deployments, the ability to restore service quickly, customer experience, and, perhaps most critically, how quickly teams can get feedback on their work. Significantly, Kim says, the more general measure of change deployment lead time isn’t just tactical, it’s a powerful strategic measurement of “internal quality, external customer satisfaction, and even employee happiness.”
  • Failed deployment rate: What percentage of deployments caused outages, performance issues, or otherwise failed to meet user needs? The number should be as low as possible and trend downward.
  • Mean Time to Recovery (MTTR): How long does it take for your team to recover from failures and other issues? DevOps is intended to create coordinated, collaborative teams that can identify and resolve issues faster. (A minimal calculation sketch follows this list.)
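
Since MTTR is the simplest of these metrics to compute, here is a minimal, hypothetical Java sketch of the arithmetic behind it; the class name and incident durations are made up for illustration, not taken from any of the reports cited above:

import java.time.Duration;
import java.util.List;

public class MttrCalculator {

    // MTTR is the arithmetic mean of the time taken to recover from each incident
    static Duration meanTimeToRecovery(List<Duration> recoveryTimes) {
        long totalSeconds = recoveryTimes.stream()
                .mapToLong(Duration::getSeconds)
                .sum();
        return Duration.ofSeconds(totalSeconds / recoveryTimes.size());
    }

    public static void main(String[] args) {
        List<Duration> incidents = List.of(
                Duration.ofMinutes(12),
                Duration.ofMinutes(45),
                Duration.ofMinutes(8));
        // Prints "MTTR: PT21M40S", i.e. 21 minutes 40 seconds
        System.out.println("MTTR: " + meanTimeToRecovery(incidents));
    }
}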

In order to achieve good results on these kinds of metrics, Kim said, “We need to instrument everything, whether it’s our code, database, environment, features… It’s a great time to be in the game when we can get more telemetries than ever possible.”

Surprisingly, even limited metrics can make a big difference. “Sometimes just the act of agreeing on what to measure can tip the scales,” notes New Relic’s Tori Wieldt. “Determining how you measure success is an important exercise.”

8. DevOps Is for Big, Traditional Companies, Too

DevOps naysayers have often suggested that it works best in small, agile companies rather than in large, traditional enterprises with 10,000 engineers or more. But evidence shows that simply isn’t true.

According to Gene Kim, speaking on the New Relic Modern Software Podcast, “We now have so many proof points to show that DevOps is truly not for just the unicorns of Google, Amazon, Facebook, and Netflix, but it really is for any technology organization, especially in large, complex organizations. What I’m so excited about … is the fact that they’re getting the same sort of outcomes that we’ve typically seen only in the unicorns.” Kim also says the majority of DevOps value is now being created in large, complex organizations, which are getting the same kinds of results as the online unicorns.

9. DevOps Correlates With Business Success

DevOps doesn’t just lead to good results for IT departments, it’s also associated with business success. Again, according to Gene Kim, high-performers who incorporate DevOps are much more agile and more reliable: “They are more likely to win in the marketplace,” in terms of both bottom-line results and corporate valuations.

The numbers back him up. The 2017 State of DevOps Report from Puppet and DORA notes that “DevOps practices lead to higher IT performance. This higher performance delivers improved business outcomes, as measured by productivity, profitability, and market share.” And now it’s clear that “the ability to develop and deliver software efficiently and accurately is a key differentiator and value driver for all organizations – for-profit, not-for-profit, educational, and government organizations alike. If you want to deliver value, no matter how you measure it, DevOps is the way to go.”

Heck, if you’re not doing DevOps, your competition probably is and you could find yourself at a competitive disadvantage.

10. DevOps: It’s Not Just for IT Anymore

If DevOps is essentially a culture of doing things better and faster, then why limit it to technology teams? DevOps principles (speed, agility, efficiency, reliability, and quality) should permeate all organizations, shouldn’t they? So while DevOps is typically associated with software teams, it can be a powerful business strategy as well as a tactical technology approach. Eliminating inefficient silos and bottlenecks and fostering a culture of shared responsibility for service quality across functional roles and teams has value throughout the whole organization.

Like programming methodologies such as agile and scrum, DevOps principles are beginning to find homes beyond IT departments. As Christopher Tozzi explained in Channel Futures recently, “DevOps has reshaped the way software is designed and delivered. But why stop there? DevOps can help improve your entire organization.”

Tozzi suggests five key ways to leverage DevOps-inspired strategies throughout your organization:

  • Centralized, streamlined communications
  • Flexibly defined roles
  • Automation everywhere
  • Continuous improvement
  • Early identification of problems, long before the product or service reaches end users

New Relic’s Wieldt likes the DevOps notion of “look left, look right” to see what other teams are doing. “Think about what you are creating and how others consume it,” she suggests. “Walk a mile in their shoes and figure out what they really need, rather than just what they are asking for – and then satisfy that need.”

Of course, doing that can require you to create what CollabNet CEO Flint Brenton calls “a common currency of value.” While a metric like Mean Time To Recovery, for example, might be inherently understandable – and valuable – to an engineer, CMOs and CEOs might need some translation. But everyone can understand the importance of finding and fixing problems as quickly as possible. And that’s exactly the kind of thing DevOps is all about.

This article was originally published on the New Relic blog.

Original Link

Episode 159: The Nest doorbell is a great video doorbell

Microsoft plans to spend $5 billion on the internet of things, and it’s more than the usual shell game that big firms play with these sorts of announcements. We discuss its plans on this week’s podcast. We also talk about Qualcomm’s new vision chips for edge devices, what it means that apps are disappearing from the Apple Watch and Kevin’s thoughts on getting Alexa or Google to talk to you. Comcast shared its vision and new features for Stringify, August is working with SimpliSafe, there’s an old UPnP exploit hitting the IoT and I dumped a gadget for poor performance. I review the Nest doorbell before we answer a question on Z-wave and ZigBee for a listener.

My Nest Hello fresh out of the box.

This week’s guest is Poppy Crum, chief scientist at Dolby Laboratories, who came on the show as part of an IEEE event at SXSW last month. We talk about where hearables are today, what’s changing and some of the cool things we can look forward to. I suggest a mute button for people you dislike, which Crum admits is possible. We also dig into the things that kill your hearing, and how we perceive sound. You may never take an aspirin again. Listen and learn, y’all.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Poppy Crum, chief scientist at Dolby Laboratories
Sponsors: Yonomi and Forgerock

  • Why every chip company has a chip for computer vision at the edge
  • This is a great podcast on Amazon Alexa
  • Goodbye Ikea lights and hello Nest video doorbell
  • Every ear is different and so is its perception of sound
  • You can jam a lot of sensors into a hearable

Original Link

The DevOps Loop

DevOps is taking over business. Not because technology permeates business, but because it has broadened to include the entire business value stream. Different best practices throughout the enterprise are incorporating principles of DevOps to deliver better outcomes to customers.

Helen Beal, DevOpsologist at Ranger4, spends her days “making life on earth fantastic” in part by helping implement DevOps philosophies in organizations. I recently watched her presentation from the All Day DevOps conference entitled, DevSecOps and the DevOps Superpattern. Here is what I learned:

Helen pointed out that DevOps is about 10 years old, yet, unlike Agile, there isn’t a DevOps Manifesto. But, we know what DevOps is about. She quotes Mark Schwartz from The Art of Business Value, “DevOps, in a sense, is about setting up a value delivery factory — a streamlined, waste-free pipeline through which value can be delivered to the business with a predictably fast cycle time.”

Helen also introduced the audience to the DevOps equivalent of the OODA loop – Ideation → Integration → Validation → Operation → Realization → repeat – and the CAMS principles: Culture, Automation, Measurement, and Sharing. Together they form a process and a set of principles that can be applied inside and outside of IT and software development.

She is leading up to making the point that business best practices systems are converging around DevOps – a concept she calls the Emerging DevOps Superpattern.

These are best practice systems – some of which have been around for more than a half-century and others that aren’t even a decade old – where, as they mature and evolve, it is becoming evident that they share best practices with DevOps. DevOps is at the center of improving business.

Helen looked at each best practice system and principles they share with DevOps:

Agile: Support and trust are key; the first principle of the Agile Manifesto is continuous delivery; measuring value to the customer; daily collaboration across functions.

Holacracy: Everyone has the ability to call out when they see a problem; heavily focused on using peer-review processes and relies on collective intelligence.

ASM (Agile Service Management): This builds on ITSM. It is just enough governance to deliver the best service to the customer; promotes better collaborations by cross-pollinating vocabulary and methods.

Lean: Focuses on delivering value to the customer with minimal waste; the types of waste Lean seeks to eliminate are errors and duplication – both of which automation helps to tackle; uses Value Stream Mapping to understand the handoffs between processes and human interactions.

Learning Organization: Decentralizes the role of leadership; puts long-term sustainability ahead of short-term fixes; automates rote tasks to release time for learning and experimentation; uses knowledge management tools; touts exposing personal mental patterns and thinking for inspection and influence from others.

Safety Culture: In a highly experimental, innovative environment, we need to build safety in. Fail safe, fast, and smart – testing and auditing early in the release cycle and pre-emptive monitoring; measuring Mean Time to Repair, but also measuring failure in terms of real business value; accountability and ensuring all understand their role in procedures is key. In DevOps, we love failure because it shows we are innovating.

Theory of Constraints: Mental models held by people can cause behavior that becomes a constraint; automation can remove constraints in manual processes; constraints are frequently poor handoffs due to weak collaboration.

The reality is that DevOps embraces principles that make business better – better for the business, for the employees, and for the customers. That becomes evident as other systems for improving business embrace the same principles.

To learn more about what Helen had to say on this, DevSecOps, and the importance of safety culture to innovation, you can access her full talk here.

Craving more? Binge watch all 100 All Day DevOps sessions here.

Original Link

The Top 5 New Features in Java EE 8

The much-anticipated release of Java Enterprise Edition 8 boasts two exciting new APIs (JSON-Binding 1.0 and Java EE Security 1.0) and improvements to current APIs (JAX-RS 2.1, Bean Validation 2.0, JSF 2.3, CDI 2.0, JSON-P 1.1, JPA 2.2, and Servlet 4.0). This is the first release of Oracle’s enterprise Java platform for nearly four years and it contains hundreds of new features, updated functionality, and bug fixes.

So, what are the best new features? I attempt to answer this highly subjective question in this post.

Top 5 New Features TL;DR

  1. The new Security API: Annotation-driven authentication mechanism. The brand new Security API contains three excellent new features: an identity store abstraction, a new security context, and a new annotation-driven authentication mechanism that makes web.xml file declarations obsolete. This last one is what I’ll be talking about today.
  2. JAX-RS 2.1: New reactive client. The new reactive client in JAX-RS 2.1, which embraces the reactive programming style and allows the combination of endpoint results.
  3. The new JSON Binding API. The new JSON-binding API, which provides a native Java EE solution to JSON serialization and deserialization.
  4. CDI 2.0: Use in Java SE. This interesting new feature in CDI 2.0 allows bootstrapping of CDI in Java SE applications.
  5. Servlet 4.0: Server Push. This server push feature in Servlet 4.0 aligns the servlet specification with HTTP/2.

You might be interested in the new book Java EE 8: Only What’s New for just $9.95. It covers all additions to Java EE 8 with plenty of code examples.

Are you ready? So let’s get to it.

1. The New Security API

Probably, the single most significant new feature added to Java EE 8 is the new security API.

The primary motivations for this new API were to simplify, standardize and modernize the way security concerns are handled across containers and implementations. And they have done a great job.

  • The configuration of web authentication has been modernized thanks to three new annotations that make web.xml file declaration redundant. More on this later.
  • The new security context API standardizes the way the servlet and EJB containers perform authentication (a short sketch follows this list).
  • The new identity store abstraction simplifies the use of identity stores.
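
As a minimal, hypothetical sketch of the second point, the SecurityContext interface (from the javax.security.enterprise package) can be injected into a servlet to query the caller programmatically. The servlet path and role name here are illustrative assumptions, not from the specification:

@WebServlet("/report")
@DeclareRoles({ "admin", "user" })
public class ReportServlet extends HttpServlet {

    // The same SecurityContext API works in both the web and EJB containers
    @Inject
    private SecurityContext securityContext;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        if (securityContext.isCallerInRole("admin")) {
            response.getWriter().println("Caller: "
                    + securityContext.getCallerPrincipal().getName());
        } else {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}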

So let’s look at the first of these additions.

Annotation-Driven Authentication Mechanism

This feature is all about configuring web security, which traditionally required XML declarations in the web.xml file.

This is no longer necessary, thanks to the HttpAuthenticationMechanism interface, which represents an HTTP authentication mechanism and comes with three built-in CDI-enabled implementations, each representing one of the three ways web security can be configured.

They are triggered by using one of these annotations:

@BasicAuthenticationMechanismDefinition
@FormAuthenticationMechanismDefinition
@CustomFormAuthenticationMechanismDefinition

They replicate the functionality of the classic HTTP basic authentication, form, and custom form-based authentication already available in the servlet container.

For example, to enable Basic authentication, all that is necessary is to add the BasicAuthenticationMechanismDefinition annotation to your servlet and that’s it.

@BasicAuthenticationMechanismDefinition(realmName="${'user-realm'}")
@WebServlet("/user")
@DeclareRoles({ "admin", "user", "demo" })
@ServletSecurity(@HttpConstraint(rolesAllowed = "user"))
public class UserServlet extends HttpServlet { ... }

You can now throw away your XML configurations and use one of these new annotations to drive web security.
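
For form-based authentication, a minimal sketch could look like the following; the login and error page paths, the servlet path, and the role are illustrative assumptions, not taken from the original article:

@FormAuthenticationMechanismDefinition(
    loginToContinue = @LoginToContinue(
        loginPage = "/login.html",        // illustrative login page
        errorPage = "/login-error.html")) // illustrative error page
@WebServlet("/account")
@ServletSecurity(@HttpConstraint(rolesAllowed = "user"))
public class AccountServlet extends HttpServlet { ... }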

2. JAX-RS 2.1: New Reactive Client

Let’s look at the new reactive client in JAX-RS 2.1 and how it embraces the reactive programming style.

The reactive approach is centered on the idea of data flows with an execution model that propagates changes through the flow. A typical example would be a JAX-RS method call. When the call returns, the next action is performed on the result of the method call (which might be a continuation, completion, or error).

You can think of the data as an asynchronous pipeline of processes, with each process acting on the result of the previous one and then passing its own result to the next one in the chain. The flow is composable, so you can compose and transform many flows into a single result.

The reactive feature is enabled by calling the rx() method on an instance of Invocation.Builder, which is used to construct client instances. Its return type is a CompletionStage with the parameterized Response type. The CompletionStage interface was introduced in Java 8 and suggests some interesting possibilities.

For example, in this code snippet, two calls are made to different endpoints and the results are then combined:

CompletionStage<Response> cs1 = ClientBuilder.newClient()
        .target(".../books/history")
        .request()
        .rx()
        .get();

CompletionStage<Response> cs2 = ClientBuilder.newClient()
        .target(".../books/geology")
        .request()
        .rx()
        .get();

cs1.thenCombine(cs2, (r1, r2) ->
        r1.readEntity(String.class) + r2.readEntity(String.class))
   .thenAccept(System.out::println);

3. The New JSON Binding API

Now, let’s move on to the next great feature. The new JSON Binding API provides a native Java EE solution to JSON serialization and deserialization.

Previously, if you wanted to serialize and deserialize Java to and from JSON, you would have to rely on third-party APIs like Jackson or GSON. Not anymore. With the new JSON Binding API, you have all the features you could possibly want natively available.

It couldn’t be simpler to generate a JSON document from a Java object. Just call the toJson() method and pass it the instance you want to serialize.

String bookJson = JsonbBuilder.create().toJson(book);

And it is just as simple to deserialize a JSON document to a Java object. Just pass the JSON document and target class to the fromJson() method and out pops your Java object.

Book book = JsonbBuilder.create().fromJson(bookJson, Book.class);
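
Both snippets assume a simple Book bean. JSON-B follows JavaBean conventions, so a minimal, hypothetical Book class for these examples could look like this:

public class Book {

    private String title;
    private Float price;

    // A public no-argument constructor is needed for deserialization
    public Book() {
    }

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public Float getPrice() { return price; }
    public void setPrice(Float price) { this.price = price; }
}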

But that’s not all.

Behavior Customization

It’s possible to customize the default serialization and deserialization behavior by annotating fields, JavaBeans methods, and classes.

For example, you could use the @JsonbNillable annotation to customize null handling and the @JsonbPropertyOrder annotation to customize property order, both of which you specify at the class level. You could specify the number format with the @JsonbNumberFormat() annotation and change the name of a field with the @JsonbProperty() annotation.

@JsonbNillable
@JsonbPropertyOrder(PropertyOrderStrategy.REVERSE)
public class Booklet {

    @JsonbProperty("cost")
    @JsonbNumberFormat("#0.00")
    private Float price;
}

Alternatively, you could choose to handle customization with the runtime configuration builder, JsonbConfig:

JsonbConfig jsonbConfig = new JsonbConfig()
        .withPropertyNamingStrategy(PropertyNamingStrategy.LOWER_CASE_WITH_DASHES)
        .withNullValues(true)
        .withFormatting(true);

Jsonb jsonb = JsonbBuilder.create(jsonbConfig);
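Serialization through that configured instance then reflects the settings: a call such as jsonb.toJson(book) would emit pretty-printed JSON with lower-case-with-dashes property names and null values included.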

Either way, the JSON Binding API provides extensive capabilities for the serialization and deserialization of Java objects.

4. CDI 2.0: Use in Java SE

Now let's move on to the next API: CDI 2.0. This version boasts many new features, and one of the more interesting is the capability to bootstrap CDI in Java SE applications.

To use CDI in Java SE, the CDI container must be explicitly bootstrapped. This is achieved by calling the static method newInstance() on the SeContainerInitializer abstract class and then calling initialize() on the result. That returns an SeContainer instance, a handle to the CDI runtime with which you can perform CDI resolution, as shown in the code snippet below. The SeContainer also gives you access to the BeanManager, the core entry point to CDI.

SeContainer seContainer = SeContainerInitializer.newInstance().initialize();
Greeting greeting = seContainer.select(Greeting.class).get();
greeting.printMessage("Hello World");
seContainer.close();

The CDI bean is retrieved with the select() method by passing it the class name of the bean you want to retrieve and use.

Configuration Options

Further configuration can be applied to the SeContainerInitializer before bootstrapping by adding interceptors, extensions, alternatives, properties, and decorators:

.enableInterceptors()
.addExtensions()
.selectAlternatives()
.setProperties()
.enableDecorators()

The container is shut down manually by calling the close() method on SeContainer, or automatically when it is used in a try-with-resources block, because SeContainer extends the AutoCloseable interface.
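Putting the pieces together, a minimal sketch of the try-with-resources form (the extension class here is a hypothetical illustration) might look like this:

// The container is closed automatically when the block exits
try (SeContainer seContainer = SeContainerInitializer.newInstance()
        .addExtensions(MyExtension.class) // hypothetical extension
        .initialize()) {
    Greeting greeting = seContainer.select(Greeting.class).get();
    greeting.printMessage("Hello World");
}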

5. Servlet 4.0: Server Push

And last, but not least, there is the Server Push feature in Servlet 4.0, which aligns the servlet specification with HTTP/2.

To understand this feature, you first need to know what server push is.

What Is Server Push?

Server push is one of the many new features in the HTTP/2 protocol. It is designed to anticipate client-side resource requirements by pushing those resources into the browser's cache, so that when the client requests a webpage and receives the response from the server, the resources it needs are already in the cache. This is a performance-enhancing feature that improves the speed at which web pages load.

How Is It Exposed in Servlet 4.0?

In Servlet 4.0, the Server Push feature is exposed via a PushBuilder instance, which is obtained from an HttpServletRequest instance.

Take a look at this code snippet. You can see that the path to header.png is set on the PushBuilder instance via the path() method and pushed to the client by calling push(). When push() returns, the path and conditional headers are cleared in readiness for the builder's reuse. Then the menu.css file and the ajax.js JavaScript file are pushed to the client in the same way.

protected void doGet(HttpServletRequest request, HttpServletResponse response) {
    PushBuilder pushBuilder = request.newPushBuilder();
    pushBuilder.path("images/header.png").push();
    pushBuilder.path("css/menu.css").push();
    pushBuilder.path("js/ajax.js").push();
    // Return the JSP that requires these resources
}
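One caveat worth noting: newPushBuilder() returns null when push is not available, for example when the request did not arrive over HTTP/2 or the client has disabled push, so a defensive version of the snippet would check for that:

PushBuilder pushBuilder = request.newPushBuilder();
if (pushBuilder != null) { // null when HTTP/2 push is unavailable
    pushBuilder.path("images/header.png").push();
    pushBuilder.path("css/menu.css").push();
    pushBuilder.path("js/ajax.js").push();
}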

By the time the servlet's doGet() method finishes executing, the resources will have arrived at the browser. The HTML generated from the JSP that requires these resources will not need to request them from the server, as they will already be in the browser's cache.

Original Link

Episode 157: Why Foxconn is buying Belkin and the future of healthcare

We discuss two big news items this week, the first being Foxconn's offer to buy Belkin for $866 million. The deal would include the Wemo line of smart home devices and the Phyn leak detection joint venture. After that, data, privacy, and surveillance rule the show in light of Facebook's decision to delay its smart home speaker device. Before we lose hope in the IoT entirely, Kevin brings up an effort in the UK to enshrine some basic consumer rights around the IoT, including a device expiration date. We also talk about new Google Home skills, August's updates, an acquisition by Particle, and Kevin's thoughts on the Fibaro wall plug. We end our segment by answering a question about smart door locks.

Particle’s recently launched mesh-enabled boards were part of a collaboration with the newly acquired RedBear Labs.

After the news segment, I interview Dr. Leslie Saxon, who heads up the Center for Body Computing at USC and believes that we'll soon get 80 percent of our healthcare virtually. She talks about what we'll need to make that happen and offers up a unique idea: a virtual version of herself that uses AI to provide basic care in her image and demeanor. The implications of all of this are pretty big, so we dig into two of the big ones: privacy and how it changes the relationship individuals have with healthcare. You'll end up doing a lot more work. It's an eye-opening episode.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Dr. Leslie Saxon of USC
Sponsors: Samsung ARTIK and Ring

  • Why Foxconn wants Belkin
  • Why would anyone want a Facebook smart speaker?
  • How the UK is advancing IoT security
  • The virtual doctor is in your pocket, your car and even your airplane seat
  • Get ready to take charge of your own healthcare

Original Link

Episode 156: Lennar’s smart home and why it dumped Apple HomeKit

Like the rest of the tech media, Kevin and I kick off the show with a discussion about data collection and privacy in light of the allegations against Cambridge Analytica. It's a stark reminder of what can be gleaned from your information, as well as how much of your data is being gathered without your knowledge or real consent. We also talk about smart home lock-in, Alexa's new "brief" mode, shopping on Google Home, and my IoT Spring Clean. IBM's new crypto chip and Watson Assistant made the show as well, along with several industrial IoT news bits, such as Foghorn's industrial IoT integration with Google's cloud and a new hardware platform for IIoT from Resin.io. We also answer a listener question about IoT for new parents.

The Nest Hello doorbell is now available, and sells for $239.

I've heard for several years now that smart home tech is the new equivalent of granite countertops (basically, it's a big deal for buyers), but I had never investigated what that tech experience would look like or how it would come to be. It's pretty complicated, as you'll learn from David Kaiserman, president of Lennar Ventures, the investment arm of Lennar Homebuilders. Kaiserman walked me through a Lennar home outfitted with a bunch of smarts last month, and shares his thoughts on what matters to buyers and the gear inside. He also sheds light on Amazon's Alexa-focused geek squad and explains why Lennar backed out of its plans for an Apple HomeKit home and banked on Alexa instead. Enjoy.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: David Kaiserman of Lennar Ventures
Sponsors: Samsung Artik and IoT World

  • Get ready for an IoT spring clean
  • Kevin thinks shopping with Google Assistant is “brilliant”
  • This board’s built for industrial use
  • How Amazon’s team of Alexa experts changes the smart home experience
  • Why Alexa beat out HomeKit for Lennar

Original Link

Why Citizen Developers Can’t Replace Skilled Developers

The rise of low-code programming and rapid mobile app development (RMAD) platforms has made it increasingly possible for anyone, from a marketing manager to a sales rep or a business analyst, to develop their own business apps without any coding background or IT training. Known as citizen developers, these enterprising employees aim to take some of the pressure off overstretched IT departments by devising their own technology solutions.

As Gartner put it, “We’re all developers now.”

Many businesses view citizen app development as a possible answer to their agility problems. Citizen developers often achieve quicker turnaround times than IT can provide; more than 60 percent can churn out a new app in less than two weeks. Citizen development activities are already taking place within 50 percent of businesses—whether they’re aware of it or not—and Gartner predicts that seven in 10 large enterprises will have formal policies in place by 2020.

But just because anyone can develop an enterprise app doesn’t mean they should.

Businesses need to be careful about how they harness this growing trend. Otherwise, citizen development could become the next IT nightmare, says CIO Isaac Sacolick.

Citizen Developers Lack Essential Know-How

In many companies, even employees with minimal tech expertise are now building their own enterprise apps. For example, one study found that 97 percent of citizen developers had only traditional word processing and spreadsheet skills, while just over one in three had front-end web development skills, and only 8 percent had coding experience.

And while coding skills may not be a requirement anymore, it doesn’t mean that technical skills aren’t necessary.

In today’s enterprise environment, applications don’t run in isolation. They rely on data from complex systems such as SAP, Oracle, and ServiceNow, and they need to be able to efficiently integrate with these and any other systems to function. Citizen developers must have enough technical know-how to create those integrations, as well as a thorough understanding of fundamental processes, data structures, and business logic—yet most of them don’t.

“Citizen developers are only concerned with their immediate environment, looking at the problem that they are trying to solve so they can do their job, rather than seeing it in the context of the wider IT ecosystem,” says CTO Michael Allen. “This opens up a number of questions: Who will support the application? What other tools will it need to integrate with? If there are bugs or problems (which it is more than likely there will be), what will the wider impact be?”

When inexperienced developers get their hands on low-code or no-code tools, they often find themselves unable to piece it all together, leaving a mess for IT—or worse, a costly consultant—to clean up. The end result can cause more problems and create more work than if IT had simply written the app in the first place.

That’s why 60 percent of business and IT leaders identify business process modeling as a crucial skill for successful citizen development, and more than half believe it’s important to have a solid understanding of relational databases and data modeling.

“Opening up citizen development does not automatically mean that every employee can or should be empowered to be citizen developers,” Gartner says. “In addition to having the desire to build apps, employees must also have the aptitude and training to become productive citizen developers.”

Citizen Developers Need IT Governance

For better or worse, citizen developers appear to be here to stay. However, enterprises shouldn’t just put tools into their hands and let them run with it. That’s a mistake.

Most citizen developers “need help along the way,” says product evangelist Dan Juengst. “They’ll need tooling within the platform—tutorials, documentation, guides, and help systems that will help walk them through the process of building an application.”

Without a formalized citizen development policy to establish a partnership between developers and IT, at least 50 percent of enterprises will grapple with substantial data, process integrity, and security vulnerabilities by 2020. Communication with IT is essential to ensure apps are successfully integrated into the enterprise ecosystem.

Yet fewer than 20 percent of top global enterprises have a collaborative citizen development strategy in place. Just 16 percent of IT departments are fully involved in citizen development, and 36 percent provide back-end support only. Fewer than 10 percent of IT leaders even bother to track basic KPIs like how many citizen developers they have, how many apps are being built, and how many people are using them.

Without IT involvement, citizen development becomes more akin to shadow IT, opening the door to potential security and compliance risks as well as reliability, maintainability, and scalability challenges.

Instead, IT needs to discover ways to work together with business users and citizen developers to build apps that will meet the process, data, and workflow needs of employees. While citizen developers and business users are uniquely capable of identifying pain points for completing their workflows, only IT has the knowledge and skill to create simple apps that streamline processes while delivering secure, personalized experiences.

Original Link

Episode 155: New toys, Pi Day and insect-tracking LIDAR

We have reached the purported end of Broadcom’s bid for Qualcomm, so Kevin and I finally shared our thoughts on the topic. After that we discussed a murder that was solved using evidence from connected devices, Google Routines and Strava’s privacy clean up. We used the SmartThings outage to discuss whether or not we need a hub in the smart home before hitting an array of new devices, including the new Raspberry Pi Model B+, Rachio’s new sprinkler, Ecobee’s new light switch, and a new security hub/camera from Abode. Kevin’s Nest Cam solved a crime as well and we answered a listener question about taking the first steps to learn about the IoT.

Rachio’s third generation sprinkler sells for $249.99 and looks easier to install.

Our guest this week was Tobias Menne, the global head of digital farming at Bayer AG, who shared a bunch of insights about bringing sensors, machine learning, and intelligence to farmers. He discussed how the firm has managed to remotely identify insects by their wingbeats using LIDAR, several startups working with Bayer to make farming more productive, and how Bayer sees the IoT remaking its business and business model. Plus, Bayer has built a cool app called Xarvio to identify weeds and problems. I couldn't try the app because it's not compatible with my devices, but I would love to. Enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Tobias Menne of Bayer
Sponsors: Samsung Artik and IoT World

  • Chip consolidation ain’t over yet
  • Google Routines is a step forward, but not far enough
  • This week’s crop of new devices is strong
  • The problem with using LIDAR to track insects
  • How Bayer finds customers who want to buy into its new business plan

Original Link

How DevOps Networking Will Change SD-WAN Services

IT managers have a long and turbulent relationship with their carriers. I’ve seen it personally in building out large-scale networks, but I’m not alone. In a review of 3,500 WAN-related client inquiries and more than 500 service provider contracts, Gartner analysts concluded what most of us who purchase networking services have long known: Enterprises are dissatisfied with large incumbent network service providers.

This relationship becomes even more pertinent as IT professionals reevaluate their wide area network architectures. Software-defined wide area networks (SD-WANs) are the way forward for enterprise networking, and SD-WAN services promise to remove the complexity of configuring, managing, and delivering the SD-WAN.

At the same time, SD-WAN services are being delivered by the same carriers who sold us MPLS services. How can IT professionals ensure they receive a better experience with SD-WAN services than with MPLS services?

The answer is to adopt DevOps networking.

SD-WAN: DevOps for the Infrastructure

DevOps typifies the lean operation that has been changing IT. Self-service, agility, and low cost — those factors are revolutionizing how we build, deploy, and deliver compute, storage, and applications. By collapsing the stack, and by automating and monitoring all phases of software development, DevOps leads to faster software development, more releases, and better reliability.

Cloud services express DevOps thinking within the world of datacenters and software. Want a new test or production environment? Spec it, activate a virtual datacenter, load up your application, and you’re done. Changing configurations is a matter of tweaking a few settings on a management portal. Gone are the days of opening support tickets and waiting days for the provider to make a minor change. Overprovisioning? A thing of the past. The cloud’s elasticity allows businesses to pay for only what they need.

DevOps thinking, though, has been noticeably absent from carrier-delivered, infrastructure services. Rigidity typifies MPLS services. They only connect physical locations, often requiring weeks, if not months, to install new circuits. Even slight configuration changes require opening trouble tickets, introducing hours if not days of delay. Circuit reprovisioning takes far too long, forcing companies to overspend for overprovisioned circuits to accommodate traffic bursts and changing conditions.

None of that is acceptable for today’s businesses. Cloud datacenters, cloud apps, and mobile users are as typical as traditional offices, datacenters, and factories. To have an infrastructure service that only connects some of those elements is insufficient. Moreover, each of those “nodes” — a user, an office, a cloud resource — brings unique requirements for cost, uptime, and performance.

The infrastructure must be malleable enough to accommodate those constraints. An intelligent software layer, such as SD-WAN, can change the rigid and slow networking models of the past — in the broadest sense, it is DevOps meets networking. Zero-touch provisioning instantly connects locations. Routing algorithms accommodate application requirements and adapt to real-time link conditions. The ability to connect any data service into the SD-WAN gives organizations incredible flexibility.

What and How of DevOps Thinking

Secure SD-WAN services seem to go a step further by offloading the configuration, architecture, and delivery of SD-WAN onto a seasoned provider. But taking advantage of SD-WAN requires service providers to have DevOps thinking in their organizations. After all, what good is SD-WAN agility if the organization delivering the SD-WAN service still takes days to respond to tickets and overcharges for its services?

The carriers would seem to have the “what” to conquer the new WAN. They point to their infrastructure and years of experience packaging third-party technologies into managed services. It’s exactly what they’re doing with carrier-managed SD-WAN, which integrates third-party SD-WAN and security appliances.

However, the real question is, do they have the “how” — the DNA and corporate culture to deliver on the self-service, agility, and speed of DevOps networking? Rapid upgrades and new features become difficult with individual, third-party appliances. The same is true for problem resolution.

The reality is that there’s little incentive for carriers to change. High-margin, network-managed services continue to primarily drive carrier revenue. SD-WAN, with its orientation towards simplicity, is a threat to that revenue stream. Expecting carriers to overhaul their operations to accommodate the agility of SD-WAN is a double mistake. Like all companies, carriers must protect their businesses.

Rise of the Uncarrier

If carriers aren’t the answer, then what is? The Uncarrier. Like Amazon and Salesforce, Uncarriers are purpose-built, cloud-centric service providers made to deliver networking and security services. Their core values remain centered around self-service, agility, and cost.

The Uncarrier operates lean, charging customers affordable prices for what were once expensive services.

The Uncarrier is software-centric — not integration-centric. It can build the platform customers need and rapidly iterate to evolve and adapt to new requirements.

With its native-cloud orientation, the Uncarrier’s architecture is inherently multitenant, scalable by design, and highly elastic — all of which translates into affordability.  Its values are the capabilities enterprises need — a global, software-defined network fabric with built-in cloud and mobile support, and integrated network security.

Self-service, agility, and low costs have conquered all areas of IT. Now is the time for them to transform networking infrastructure services. But delivering on those changes requires more than just a technological change. It requires providers to adopt DevOps-cloud thinking: lean operations, responsive customer support, and revenue from selling targeted software components — not integration services. Carriers are weak candidates for leading that revolution; cloud networking providers are what’s needed. Welcome to the era of the Uncarrier.

Original Link

11 IT Tool Integrations to Optimize Your Enterprise Software Delivery [Webinar]

Time and time again, you’ve heard that the world is digital and that every company is a tech company. That software delivery is what gives you a competitive edge. And that you need all the right tools, people and methodologies (Agile, DevOps etc.) to accelerate the speed of delivery and quality of your software products.

You’ve probably also heard that Value Stream Integration is the missing piece – the secret sauce – behind all the best IT transformations in the world. That connecting your best-of-breed tools for planning, building and delivering software at scale, and automatically flowing project-critical information between practitioners, is absolutely vital to optimize the process.

If all your specialist teams are to collaborate efficiently and effectively when scaling operations, they need to be working as one. All work must be visible, traceable, and manageable, with no costly manual work required to share important knowledge about a product’s status and development. But what does that all look like in reality?

We’ve analyzed over 300 leading enterprises – all high-performing IT organizations – to identify similarities between their software delivery value streams. What we found was that these enterprises all realize the massive value of end-to-end process automation beyond DevOps and the CI/CD pipeline.

In our latest webinar, we discuss the compelling insights that we have gleaned, including:

  • How IT tool integration accelerates enterprise software delivery
  • How to implement 11 popular tool integration patterns
  • Strategies to reach integration maturity through chained integration patterns

We also share the results of an analysis of 1,000 tool integrations, including how IT organizations are implementing a sophisticated integration infrastructure layer to automate the flow of work from ideation to production.

If you missed the live webinar, just click on this link.

You can also read more about our research in our press release, Tasktop Research: Largest Enterprises Now Extending DevOps Process Automation Beyond Continuous Integration/Continuous Delivery.

Original Link

Episode 154: Google and Amazon fight and we are the losers

The tech titans are feuding again, and this time it means you can no longer buy Google’s Nest gear on Amazon’s online store. Kevin and I dissect the fight and speculate where it could lead. We also hit on funding for Ecobee, Alexa’s creepy laugh, and I ponder buying Delta’s pricey new Alexa-enabled faucet. Kevin shares his thoughts on the Raven dashboard camera, a new security camera standards effort and smart dorm rooms at Arizona State University. I talk about a new Wi-Fi feature that’s on the long-term horizon, and we answer a user question about lights and Google Home.

This week’s guest shares exclusive details of Allegion’s new $50 million venture capital fund, aimed at safety and security startups combining tech and hardware. Rob Martens, futurist and president of Allegion Ventures, comes on the show to talk about where he wants to invest, how he sees consumer IoT, and what it means that Amazon is getting deeper into the smart home sector. Allegion, through Schlage, is a sponsor of the podcast. Hope you enjoy the show.

Hosts: Stacey Higginbotham and Kevin Tofel
Guest: Rob Martens of Allegion Ventures
Sponsors: Samsung Artik and Yonomi

  • What comes next in Google and Amazon’s fight?
  • You really need a capacitive touch faucet (with Alexa)
  • Qualcomm’s betting on a new skill for Wi-Fi
  • Why Allegion just created a $50 million venture fund
  • Places enterprise and industrial IoT could use a hand

Original Link