Cloud 2019 Predictions (Part 6)

Given how fast technology is changing, we thought it would be interesting to ask IT executives about what they see on the horizon for 2019. Here are more predictions about the cloud in the coming year:

Andreas Pettersson, CEO at Arcules

Original Link

7 Insights Into AWS’s Outposts Announcement at Re:Invent and The Future of Hybrid Cloud

At Amazon’s re:Invent 2018 conference, Amazon finally announced their venture into on-premises datacenters, called AWS Outposts. Just a few years ago, this would have been unthinkable, given how Amazon executives would proclaim the inevitable death of on-premises infrastructure and private clouds. While AWS Outposts is in a very early technical preview, I wanted to share 7 key insights from the information available at this time.

1) On-Premises Environments Aren’t Going Away

For years, the world's largest federal and enterprise customers have resisted Amazon's call to dump their datacenters and move everything into the AWS cloud. The announcement is a testament to Amazon's recognition of the importance of on-premises environments, despite the company's previous claims to the contrary. Growth in the public cloud market is slowing because AWS is starting to saturate the pool of customers willing to move away from on-prem datacenters. For customers who have been pursuing a hybrid cloud strategy, the announcement is a huge validation.

Original Link

Cloud 2019 Predictions (Part 5)

Given the speed with which technology is evolving, we thought it would be interesting to ask IT executives to share their predictions about what they see on the horizon for 2019. Here are more predictions about the cloud:

Keith Casey, API Problem Solver, Okta

Original Link

Cloud 2019 Predictions (Part 4)

tl;dr: Kubernetes raises all clouds – public, private, hybrid, multi-.

Given how fast technology is changing, we thought it would be interesting to ask IT executives to share their predictions for 2019. Here are some more for the cloud:

Original Link

Cloud 2019 Predictions (Part 3)

Given the speed with which technology is changing, we thought it would be a good idea to see where IT professionals see the cloud going in 2019. Here’s what they told us:

Murli Thirumale, co-founder and CEO, Portworx 

Original Link

Cloud 2018 Surprises and 2019 Predictions

Given the speed with which technology evolves, we thought it would be interesting to ask IT executives to share their thoughts on the biggest surprises in 2018 and their predictions for 2019. Here’s what they told us about the cloud:

Brad Parks, VP, Business Development, Morpheus Data

In 2018, we saw some substantive changes in the hybrid cloud market…some of which were easy to predict and others which came out of nowhere. If I had to give the past year a summary headline it might go something like “Industry consolidation marks the end of first-generation cloud management.”

Original Link

What is Serverless? – Part 2: Considerations for Choosing the Right Serverless Solution

See the first part of this 5-part blog series here.

AWS Lambda? Azure Functions? OpenWhisk? Fission?

As you consider serverless and look for ways to start shedding your infrastructure worries, here are some considerations and gotchas to be aware of when choosing the right serverless solution to support the needs of large-scale enterprises today.

Original Link

Serverless and What It Means for You – Part 1

Serverless computing has emerged in the past year as a compelling architectural alternative for building and running modern applications and services. Serverless applications allow developers to focus on their code instead of on infrastructure configuration and management. This speeds up development and release cycles and allows for better, more efficient scaling.

Serverless computing is closely tied to new architecture patterns and technologies such as microservices and containers. Greenfield, cloud-native applications are often microservices-based, which makes them ideal for running on containers (Docker). The further decoupling — and abstraction — that serverless functions allow between the application and the infrastructure makes them an ideal pattern for developing modern microservices that can run across different environments.
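
To make the pattern concrete, here is a minimal sketch of a stateless serverless function in Python. The Lambda-style signature and the order fields are illustrative assumptions rather than any particular platform's schema; the same idea applies to Azure Functions or OpenWhisk.

```python
import json

# Minimal sketch of a stateless, event-driven serverless function
# (AWS Lambda-style signature; the "order" fields are hypothetical).
def handler(event, context):
    order = json.loads(event.get("body", "{}"))

    # No local state is kept between invocations, so the platform is free
    # to run zero, one, or hundreds of copies of this function in parallel.
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))

    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```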

Original Link

Hybrid Cloud: Evolving Chapter of 2018

In recent years, IT decision-makers and strategists have focused on cloud computing, but security-conscious organizations are still hesitant to move workloads and data to the cloud. Building on the fundamental technology behind cloud computing, however, a new model is gaining the limelight in business: the hybrid cloud.

We all know the hybrid cloud is a combination of public and private cloud deployments. With this model, organizations can store protected and sensitive data on a private cloud while leveraging computational resources from the public cloud to run the applications that rely on that data.
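
To make that split concrete, here is a small, illustrative sketch; the store classes and the sensitivity flag are made-up placeholders, not tied to any particular product. The idea is simply that each record is routed to the private or public side based on how sensitive it is.

```python
from dataclasses import dataclass

@dataclass
class Record:
    key: str
    payload: bytes
    sensitive: bool  # e.g. regulated or customer-identifying data

class PrivateCloudStore:
    def save(self, record: Record) -> None:
        print(f"keeping {record.key} on the private cloud")

class PublicCloudStore:
    def save(self, record: Record) -> None:
        print(f"placing {record.key} on the public cloud")

def place(record: Record, private: PrivateCloudStore, public: PublicCloudStore) -> None:
    # Sensitive data stays on the private side; everything else can use
    # cheaper, elastic public-cloud capacity.
    (private if record.sensitive else public).save(record)

place(Record("customer-42", b"...", sensitive=True), PrivateCloudStore(), PublicCloudStore())
```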

Original Link

2019 Predictions: What’s Next for Software Defined Storage?

As we head into the heart of predictions season, the tech prophets are working overtime. There are so many streams of emerging technology — some of them converging into rapids — that we all need to arm ourselves with some foresight and guidance for navigating our way through the rush of data and possibilities. 

The first stop on the journey is cloud strategy, namely standardization of orchestration and commoditization of cloud resources. As your digital business grows in scale and complexity, automated capabilities will be critical to maintaining control and visibility. In 2019, you should be figuring out how to optimize savings and efficiency by leveraging the commoditization of hardware, managed services, security solutions, and cloud platforms — but this will only work if you have a robust, overarching orchestration solution in place. 

Original Link

How to Build Hybrid Cloud Confidence

Software complexity has grown dramatically over the past decade, and enterprises are looking to hybrid cloud technologies to help power their applications and critical DevOps pipelines. But with so many moving pieces, how can you gain confidence in your hybrid cloud investment?

The hybrid cloud is not a new concept. Way back in 2010, AppDynamics founder Jyoti Bansal had an interesting take on hybrid cloud. The issues Jyoti discussed more than eight years ago are just as challenging today, particularly with architectures becoming more distributed and complex. Today’s enterprises must run myriad open source and commercial products. And new projects — some game-changers — keep sprouting up for companies to adopt. Vertical technologies like container orchestrators are going through rapid evolution as well. As they garner momentum, new software platforms are emerging to take advantage of these capabilities, requiring enterprises to double down on container management strategies.

Original Link

5 Ways Kubernetes Transforms Your Business

In today’s economy, every business is a software business and every enterprise CIO is tasked with releasing applications that deliver a high quality and unified customer experience that rivals that of Amazon, Google, or Netflix.

While the business benefits of software innovation are clearly understood, the IT capabilities needed to support them are more complex (a high degree of customization, infinite systems capacity, seamless scalability, ironclad security, cost optimization, and more), and the most effective way to meet these challenges is still evolving as organizations strive to compete better in this digital world.

Original Link

IBM Acquires Red Hat. Now Who Does Google Buy?

IBM announced yesterday that they had entered into an agreement to acquire Red Hat for $34 billion in cash. That’s one of those big milestones in IT history that will have a profound impact for years to come. But is that totally surprising? Not quite…

At the beginning of the year, as part of my 2018 predictions, here is what I had posted on Twitter:

Original Link

IBM to Buy Red Hat for $34 Billion

Red Hat, everyone’s favorite open source software giant, will soon be under new ownership.

IBM announced on Sunday that it will pay $34 billion to acquire Red Hat and its massive portfolio of OSS. The transaction still needs approval from regulators and shareholders (the latter of whom likely won't mind, as Red Hat's stock price soared 50 percent after the news broke), but the deal is on pace to close in the second half of 2019.

Original Link

Who’s Afraid of the Big, Bad Hybrid Cloud?

This article is featured in the new DZone Guide to Cloud: Serverless, Functions, and Multi-Cloud. Get your free copy for more insightful articles, industry statistics, and more!

Cloud management is a key area organizations are examining on their journey to becoming software-driven enterprises, as they look to simplify operations, increase IT efficiency, and reduce data center costs.

Original Link

Cloud, Innovation, and Updates From Google Next 2018 [Podcast]

Joining us this week is Ian Rae, CEO and founder of CloudOps, who recorded this podcast during the Google Next 2018 conference.

Highlights

  • 1 min 55 sec: Define Cloud from a CloudOps perspective
    • Business Model and an Operations Model
  • 3 min 59 sec: Update from Google Next 2018 event
    • Google is the “Engineer’s Cloud”
    • Google’s approach vs Amazon approach in feature design/release
  • 9 min 55 sec: Early Amazon ~ no easy button
    • Amazon educated the market as industry leader
  • 12 min 04 sec: What is the state of Hybrid? Do we need it?
    • Complexity of systems leads to private, public as well as multiple cloud providers
    • Open source enabled workloads to run on various clouds even if the cloud was not designed to support a type of workload
    • Google’s strategy is around open source in the cloud
  • 14 min 12 sec: IBM visibility in open source and cloud market
    • Didn’t build cloud services (e.g. open a ticket to remap a VLAN)
  • 16 min 40 sec: OpenStack tied to compete on service components
    • Couldn’t compete without Product Managers to guide developers
    • Missed last mile between technology and customer
    • Didn’t want to take on the operational aspects of the customer
  • 19 min 31 sec: Is innovation driven from listening to customers vs developers doing what they think is best?
    • OpenStack is seen as legacy as customers look for Cloud Native Infrastructure
    • OpenStack vs Kubernetes install time significance
  • 22 min 44 sec: Google announcement of GKE for on-premises infrastructure
    • Not really On-premise; more like Platform9 for OpenStack
    • GKE solves end-user experience and operational challenges to deliver it
  • 26 min 07 sec: Edge IT replaces what is On-Premises IT
    • Bullish on the future with Edge computing
    • 27 min 27 sec: Who delivers control plane for edge?
      • Recommends open source in the control plane
  • 28 min 29 sec: Current tech hides the infrastructure problems
    • Someone still has to deal with the physical hardware
  • 30 min 53 sec: Commercial driver for rapid Edge adoption
  • 32 min 20 sec: CloudOps building software / next generation of BSS or OSS for telco
    • Meet the needs of the cloud provider for flexibility in generating services with the ability to change the service backend provider
    • Amazon is the new Win32
  • 38 min 07 sec: Can customers install their own software? Will people buy software anymore?
    • Compare payment models from Salesforce and Slack
    • Google allowing customers to run its technology themselves or have Google manage it for them
  • 40 min 43 sec: Wrap-Up

Podcast Guest: Ian Rae, CEO and Founder CloudOps

Ian Rae is the founder and CEO of CloudOps, a cloud computing consulting firm that provides multi-cloud solutions for software companies, enterprises, and telecommunications providers. Ian is also the founder of cloud.ca, a Canadian cloud infrastructure-as-a-service (IaaS) provider focused on data residency, privacy, and security requirements. He is a partner at Year One Labs, a lean startup incubator, and is the founder of the Centre cloud.ca in Montreal. Prior to clouds, Ian was responsible for engineering at Coradiant, a leader in application performance management.

Original Link

Development Best Practices for Hybrid Cloud: Part 2

This is the second part of the two series on Development Best Practices for Hybrid Cloud. Part 1 can be found here.

Architecture 

Whether we are developing a microservice, a UI, or a configuration script, it is worthwhile to spend enough time architecting the solution. In hybrid cloud environments, it is preferable to architect solutions with patterns such as event-driven design, statelessness, interface contracts, messaging, and command query responsibility segregation (CQRS) in mind, which can greatly help scale applications as desired while avoiding complexity. Also, if we have enough use cases for a certain piece of functionality, it is recommended to use available tools rather than trying to create a solution from scratch.
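
As a small illustration of the stateless, event-driven style and the CQRS split mentioned above, here is a hedged sketch; the in-memory event log and read view are placeholders for whatever message bus and datastore a real hybrid deployment would use.

```python
from typing import Optional

events = []      # append-only event log (the command/write side)
read_view = {}   # denormalized view served to queries (the read side)

def handle_create_order(order_id: str, amount: float) -> None:
    # Command side: record what happened as an event instead of mutating the view directly.
    event = {"type": "OrderCreated", "order_id": order_id, "amount": amount}
    events.append(event)
    apply_event(event)

def apply_event(event: dict) -> None:
    # Projection: keeps the read model in sync; in practice this could run
    # asynchronously off a message broker, on either side of a hybrid deployment.
    if event["type"] == "OrderCreated":
        read_view[event["order_id"]] = {"amount": event["amount"], "status": "created"}

def get_order(order_id: str) -> Optional[dict]:
    # Query side: a cheap, stateless read against the view.
    return read_view.get(order_id)

handle_create_order("o-1001", 59.90)
print(get_order("o-1001"))
```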

Original Link

Development Best Practices for Hybrid Cloud: Part 1

Development is both engineering and an art, one that is perfected as experience is gained. With the introduction of hybrid cloud, the way we look at applications has changed, our customers' expectations have changed, and, more importantly, our business revenue streams have changed. To accommodate these changes by incorporating evolving new technologies, development best practices need to morph to suit the emerging needs.

There are numerous best practices that can be derived based on multiple factors, but a few form the core, especially for hybrid cloud environments. This article is the first of a two-part series that tries to shed some light on those best practices, which will greatly help development teams in their day-to-day activities.

Original Link

Evolution of Enterprise Storage

When it comes to storage, the problems faced by consumers and enterprises are not all that different. As an iPhone user, my constant worry is that I’ll run out of storage on my phone while recording a very important (soon to be viral) video.

Gone are the days when we stressed over floppy/CD/HDD corruption and data loss. Public cloud storage lets consumers take data availability for granted; I am not afraid of giving my iPhone to my 1-year-old nephew anymore, because even if the commodity hardware suffers damage the valuable photos, playlists, and critical emails are safe in the cloud. Cloud storage does not mean a new type of storage hardware: it is simply an intelligent software managing your data on massively large numbers of drives.

Enterprises have the same set of problems, albeit at a much larger scale. Data availability, security, and scalability are the biggest concerns. Enterprise storage has come a long way in the last 60 years, but the data explosion driven by IoT, AI, and social media has completely transformed storage requirements. It will soon be cliché to mention that “Data is King.” It’s not about data storage anymore; it’s a race to extract maximum intelligence out of the data, evolve applications, and gain a competitive advantage. Data storage solutions are already expanding toward data management and analysis. Like dinosaurs, traditional storage solutions are too rigid to keep pace with modern demands and will soon be extinct. Let’s explore how storage has evolved over the past few decades.

DAS

The first production IBM hard disk drive, the IBM 350 disk storage unit, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two medium-sized refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 50 disks. Even though Direct Attached Storage (DAS) drives have improved to terabytes of storage and are now just a couple of inches in form-factor, they are not suitable for modern applications, which assume five or six nines availability. With DAS, your applications get handcuffed to local monolithic storage, and scaling DAS for large capacity requirements is simply not possible.

NAS/SAN

Storage Area Network (SAN) and Network Attached Storage (NAS) solutions connected storage arrays over the network and resolved many issues associated with DAS. These solutions allowed storage admins to eliminate the web of wires needed to support DAS. Storage arrays could now be shared among multiple application servers, and the storage layer could be scaled independently of compute. NAS and SAN provided different ways to consume storage and were suitable for applications with specific requirements, such as file access and block access, respectively. The delight of storage admins was short-lived, though. Both of these solutions involved large CapEx investments and also had limitations when it came to scaling beyond a certain capacity, and stretching installations across distant locations was generally not supported.

HCI

Hyper-Converged Infrastructure solutions came into existence with the goal of making the life of infrastructure administrators simpler. These solutions combined the best parts of DAS and NAS/SAN solutions by consolidating compute and storage layers in the same server so that resources could be shared. Also, multiple servers were allowed to connect over the network so that they could be part of the same cluster. You did not have to worry about managing different servers for compute and storage anymore — you could scale capacity by adding more servers, and reduce management and support headaches by eliminating multiple vendors. But, HCI was still very expensive and did not solve the problem of data movement across distant locations.

Evolution of Storage Timeline

Cloud or Nothing

Whether it’s consumer or enterprise storage, cloud is the new norm: high availability, on-demand scalability, and cost-effectiveness are the key features required from any storage provider. For enterprise storage, gone are the days of purpose-built storage solutions designed as short-term solutions, leading to the rip-and-replace of an entire infrastructure every 3 to 4 years. Modern applications are now being designed with cloud-native and cloud-first models, which demands that enterprise storage blur the lines between private cloud and public cloud infrastructure.

Hedvig was founded with one goal in mind: to make cloud storage easy. Bring the capabilities that consumer storage already enjoys to the enterprise. Ingest data from anywhere and access it from anywhere: this should be the new standard, without stressing over security, 24×7 availability, and cost. As Avinash Lakshman, founder of Hedvig once said, “Product should be designed and capable of being deployed in an evolutionary and revolutionary fashion.”  

To learn more about Hedvig, please contact us here

Original Link

DZone Research: Biggest Changes in the Cloud

To gather insights on the current and future state of the cloud, we talked to IT executives from 33 companies about their, and their clients’, use of the cloud. We asked, “What’s been the most significant change in the cloud environment in the past year or two?” Here’s what they told us:

Serverless

  • Serverless architecture and container-based solutions.
  • We see a lot of change around the premise of what a compute resource is with serverless and microservices becoming much more impactful. A lot of cross-pollination of services across cloud providers. The ability for cross-cloud solution mapping. 
  • 1) Serverless technology pattern will continue to evolve. 2) ML and calibration of predictive models into every form of application. 3) Single biggest change is how apps are thought of and changed. 
  • “I don’t think there’s a man that has democratized computing more than I have” – Steve Wozniak. I believe the same can be said for AWS. Virtual workspaces are created in moments. You can move mountains with a mouse click. The move to serverless computing, DynamoDB, and AWS Aurora Serverless makes it easy to put high-end computing at everyone’s fingertips. Pay only for the resources used, on a second-by-second basis, allowing anyone to afford high-end computing. A huge advantage in a changing world.

Containers

  • The emergence of Containers as a service and Function as a Service.
  • 1) AI/ML is a big piece. The fundamentals have not changed: massive, cheap computing on demand in someone else’s data center. It’s cutting-edge, and most people have trouble coping with what they can do with it. 2) The evolution of containers and microservices is a significant swing. A lot is being taken care of for you by the big cloud vendors (Docker, Kubernetes). Serverless computing is kicking into gear as the next big thing: lightweight processing and managed container environments (e.g., ACI). This leads us to understand how to use serverless, instances on demand, and managed container environments. Making technology go away. 
  • The big change is the move to innovation as the motivator. The rise of Kubernetes and the consolidation of thought around that platform. The industry is building the new platform that apps will be built on based on containers and Kubernetes. Consolidation of thought around the new platform. Progression to higher problems is becoming easier. Istio is about managing the connection between different applications on the cloud. The incredible pace of definition and delivery. A single platform definition pervasively defined in the industry. Faster development because it gets people unstuck – they know which horse to ride. 
  • Containers have added to the agility and ease of applications in the cloud and the ability to move from on-prem to cloud. Few are used for stateful applications; that’s what we see in the future. Expanding agility within and across clouds. 
  • I don’t know if it is really a new change, but containerization is making a huge impact on architecture, both in planning, designing, and implementing cloud applications. 
  • The emergence of containers as a disruptive force. 1) Containers as paradigms of self-contained execution environments: higher density, security, and multi-tenancy. Docker and OCI are a great trend. 2) K8s orchestration solutions. Standardizing on Kubernetes is a win-win for everyone.

Hybrid Multi-cloud

  • Multi-cloud has really risen in prominence in the past two years. Clients are moving from unconsciously implementing multiple clouds (e.g. via shadow IT) to strategically planning the best ways to leverage multiple clouds for economic and technical benefits – not to mention protection from competitive expansion on the part of their cloud providers. It’s still early days in the multi-cloud process for most clients, but the trend is definitely there.
  • I think it’s the realization that public cloud is not a panacea and within the public cloud world, there are important differences between what you can get from hyperscalers and what you can get from the thousands of cloud service providers that maybe aren’t as well known. IT shops are getting much more sophisticated in developing hybrid cloud and multi-cloud strategies to optimize cost, agility, and scalability.
  • The recognition that the public cloud is not the end game, but rather a critical component of a broader infrastructure strategy. As with any seismic shift in IT infrastructure, there are simply too many legacy applications that cannot be shifted to the new paradigm, either due to missing requirements or concerns about interruptions to business. Even companies that previously went all in on the public cloud are now pulling some workloads back on-premise. The capabilities of the cloud cannot be ignored, however, and so hybrid cloud initiatives are growing at a substantial rate. In response, public cloud vendors are clamoring to partner with other infrastructure providers to offer more complete solutions. 
  • 1) The growth of multi-cloud strategies has changed the computing landscape. More organizations are using multiple clouds to host their data and are looking to flexible platforms to easily manage and store their data. This demand is driven by the need to increase agility and avoid vendor lock-in which can create various bottlenecks. 2) While we are often used alone, we also are commonly used in conjunction with one or more other cloud providers. This is another reason why we don’t see companies like AWS, Google, etc. as direct competitors — we offer a unique solution in this market that works beautifully with many other cloud platforms. 3) A further significant change has been the continued abstraction away from the cloud primitives – this has given way to new technologies to enter the landscape, such as serverless.

Other

  • Big providers are getting better, paying attention to security, seeing individual agreements with clients to ensure data is protected and available.
  • Today, cloud providers all roughly offer the same underlying primitives such as servers, load balancers, block and object storage, to name a few. They are now increasingly trying to differentiate themselves from each other by offering more complex solutions, such as big data analytics and warehousing, IoT, continuous delivery and deployment, container orchestration platforms, cloud development GUIs and so on. While customers can derive incredible benefits from these turnkey solutions, they also increase the likelihood of vendor lock-in as developers tailor solutions directly to a particular cloud provider.
  • Accelerated adoption. It’s the biggest change we’ve been through. I’ve been surprised by speed of adoption in the enterprise. The speed of adoption of the Microsoft Azure stack as multi-cloud. Azure has a good stack that has come along quickly. Clean, easy to understand. Just in migration.
  • Custom software is being developed in the cloud. It’s more difficult for the average business to protect all of its data because it’s stored in so many different places. The data sits in many different clouds. It’s a regulatory burden to ensure that every provider is compliant. It only takes one copy to leak to have one data breach. While the cloud solves problems, it provides new problems.
  • Now it’s lift and shift versus a lot of extra work to put your database in the cloud. With SQL Server on-prem, you can move to the cloud.
  • 1) Forcing customers to buy holistically versus independently. The ecosystem approach to solving the problem. 2) Moving to public clouds. Providers building solutions to integrate. VMware is going to run on bare metal, AWS, and Azure. Maintain standards and best practices. IT has the option of what it wants to do.
  • Size and scale of company environments. VPCs are increasing quickly; they are becoming networks in and of themselves and need orchestration and management that are software-defined. Three VPCs – development, test, production. A virtual private cloud (VPC) is scoped to a region and an account, akin to a LAN or a VLAN, and can run virtual machines. The VPC is the routing layer paired with the LAN layer; a VPC is effectively a mini data center, and that’s where routing and automation come in. Azure = VNet. AWS and GCP = VPC.
  • Moving formerly unstructured data to structured data for auditing and compliance. Create a copy of the record in the cloud and secure it. When archiving legacy database data, have the ability to go in and edit an entry, which is necessary for GDPR. Renditioning data. 1) Sunset applications to meet internal or external governance; keep another application in read-only mode with a certain view or stored procedure to see representations of the data at the record level. 2) Learn the user view, extract the information, and reconstruct the view. 3) Concerns about privacy and IP mean you need to redact and anonymize to meet GDPR requirements. Meet governance and privacy rules.
  • The speed at which it’s being adopted in the marketplace. Old technology but fresh in people’s minds.
  • One, the cloud is more reliable and secure. Second, there are much better tools and mechanisms to move data to the cloud. And third, migration services have emerged from cloud service providers and system integrators to move data and applications to the cloud. Both enterprises and cloud vendors realize that customers need the coexistence of on-premises and cloud deployment models that bridge all elements of the hybrid cloud/multi-cloud infrastructure.
  • Companies in the cloud are moving quickly with regards to agility and scalability. Finding people to manage infrastructure and run systems is getting harder and more expensive. The scarcity of skilled technologists has been a factor for companies.
  • Over the last year, we have seen a dramatic shift of cloud interest within our customer base. There was limited interest in cloud migration about 18 months ago, but now over 60 percent of our customers are planning or executing cloud migration strategies. We believe this is driven by new services that are more oriented towards HPC users, more flexible pricing options for compute, network and storage, as well as top-down driven initiatives to get out of the data center and systems management business.
  • For many customers, the most significant changes come post-migration. First, there’s the realization that outages do in fact occur with the most frequent outages occurring in AWS followed by Azure. The other big realization starting to dawn on many customers is that there is no real privacy in the cloud.
  • Over the last two years, we’ve gone from our vertical to being able to use the video we put in there. AI/ML functionality with the cloud. Facial recognition. Cashier-less experiences. Really fast change. Most other companies have not solved the basics of getting video into the cloud, let alone being able to analyze it in real time.
  • Fast applications. IaaS model. What are the terms of service – who owns the data? Where does the data reside? GDPR has driven the importance of protecting the data. Make sure privacy is paramount. Make things clear and transparent. Cloud providers now take privacy seriously.

Here’s who we talked to:

Original Link

3 Pitfalls Everyone Should Avoid with Hybrid Multi-cloud (Part 3)

The daily cloud hype is all around you, yet there are three pitfalls everyone should avoid.

From cloud, to hybrid cloud, to hybrid multi-cloud, you’re told this is the way to ensure a digital future for your business. These choices you’ve got to make don’t preclude the daily work of enhancing your customer’s experience and agile delivery of those applications.

Let’s take a journey, looking closely at what hybrid multi-cloud means for your business. Let’s examine the decisions being made when delivering applications and dealing with legacy applications. Likely these are some of the most important resources to your business.

This article highlights three pitfalls everyone should be aware of when transitioning into hybrid multi-cloud environments. It’s based on experiences from interactions with organizations working to conquer hybrid multi-cloud while delivering their solutions.

In part one, we covered the basic definitions to level the playing field. We outlined our views on hybrid cloud and multi-cloud, making sure to show the dividing lines between the two. This set the stage for part two, where the first pitfall discussed why cost is not always the obvious motivator for moving into the cloud.

In part three, it’s time for technology and looking at the question of whether it’s a good plan moving all workloads into the cloud.

Everything’s Better in the Cloud

The second pitfall is about the misconception that everything will benefit from running in the cloud. All workloads are not equal and not all workloads moving into the cloud result in a measurable effect on the bottom line.

A recent article stated, “Not all business applications should migrate to the cloud, and enterprises must determine which apps are best suited to a cloud environment.” That’s a hard fact the utility company mentioned in part two of this series found out, as labor cost estimates rose while it tried to move applications into the cloud.

Discovering this was not a viable solution, the utility company backed up and re-evaluated its applications. It turns out some applications were not heavily used, and others had data ownership and compliance issues. Some of its applications were not certified for use in a cloud environment.

Sometimes it’s not physically possible to run applications in the cloud, but other times it’s not financially viable to run in the cloud.

Imagine a fictional online travel company. As its business grew, it expanded its on-premises hosting capacity, eventually to over 40,000 servers. It became a question of expanding resources by purchasing a data center at a time, not a rack at a time. The business consumes bandwidth at such volumes that cloud pricing models based on bandwidth usage remain prohibitive.

Why a Baseline?

Nothing is more important than having a thorough understanding of your application landscape, as the examples above show. Along with a good understanding of what applications need to migrate to the cloud, you also need to understand current IT environments, the present level of resources, and estimated costs for moving.

The current situation and performance requirements (network, storage, CPU, memory, application & infrastructure behavior under load, etc), called a baseline, gives you the tools to make the right decision.

If you’re running servers with single-digit CPU utilization due to complex acquisition processes, then a cloud with on-demand resourcing might be a great idea. However, first ask these questions (a rough utilization check like the sketch after this list can help answer them):

  • How long did this low utilization exist? 
  • Why wasn’t it caught earlier? 
  • Isn’t there a process or effective monitoring in place? 
  • Do you really need a cloud to fix this, or just a better process for both getting resources and managing said resources? 
  • Will you have a better process in the cloud?
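
Here is the rough utilization check referred to above; the sample figures and the 10% threshold are made up for illustration, and in practice the numbers would come from your monitoring system.

```python
from statistics import mean

# Hypothetical per-server CPU samples (percent) pulled from monitoring.
cpu_samples = {
    "app-01": [4.1, 3.8, 5.0, 4.4],
    "app-02": [62.0, 71.5, 58.2, 66.9],
    "db-01": [7.3, 6.1, 8.0, 5.5],
}

def underutilized(samples, threshold=10.0):
    # Flag servers whose average utilization stays in single digits:
    # candidates for consolidation, right-sizing, or on-demand cloud capacity.
    return [host for host, values in samples.items() if mean(values) < threshold]

print("baseline flags:", underutilized(cpu_samples))
```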

Container Necessity

Many believe that you need containers to be successful in the cloud. The famous catchphrase sums it up nicely, “We crammed this monolith into a container and called it a microservice.”

Containers are a means to an end, and using containers doesn’t mean your organization is capable of running maturely in the cloud. It’s not about the technology involved, it’s about applications that often were written in days gone by using dated technology. If you put a tire fire into a container and then put that container on a container platform (ship), it’s still functionality that someone is using.

Is that fire now easier to extinguish? These container fires just create more challenges for your DevOps teams, who are already struggling to keep up with all the changes being pushed through an organization moving everything into the cloud.

Note, it’s not a default bad decision to move legacy workloads into the cloud, nor is it a bad idea to containerize them. It’s about weighing the benefits and the downsides, assessing the options available, and making the right choices for each of your workloads.

Pitfalls Everyone Should Avoid

In part four of this series, the third and final pitfall is presented: one that everyone should avoid with hybrid multi-cloud. Find out what the cloud means for your data.

Missing the start of this series? Just head on back and catch up with part 1.

Original Link

The What, Why, and How of Hybrid Cloud Strategy

Most people are familiar with the term hybrid, although it is usually in relation to cars. Fortunately for the sake of my point, it will do just fine. Now, by definition, a hybrid is the result of something new being made by combining two different elements, and in the case of hybrid cars, it is an electric and gasoline-powered engine. This provides the user with the fuel efficiency of an electric motor with the power and convenience of a gasoline one. However, it isn’t only cars that are enjoying the benefits of combining two different elements to create something new and better, for the world of IT has constantly been melding together concepts, principles, and methodologies in order to have the best of both worlds. Today, we are going to dive into one of those combinations known aptly as the Hybrid Cloud.

What is a Hybrid Cloud Strategy?

According to the National Institute of Standards and Technology, a hybrid cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability. Inferring from this definition, a hybrid cloud strategy is the use of these multiple infrastructures, including an organization’s own on-premises IT and data centers, to achieve IT goals. When you allow workloads to transition between public and private clouds as needs and costs change, a hybrid cloud model can empower organizations with enhanced flexibility and more data deployment options. So, whereas a cloud-first strategy prioritizes the cloud for all IT functions, a hybrid strategy uses a combination of infrastructures to provide an organization with the benefits of each without limiting itself through adherence to one.

Benefits of a Hybrid Cloud Strategy

Gartner estimates that 90 percent of organizations will have adopted a hybrid strategy by 2020, and when you look at the benefits, it is easy to see why. Both public and on-premise deployment boast their own unique advantages; however, they also come with their own drawbacks. That is until you combine them.

Simply put, a hybrid cloud strategy is ideal for organizations of all sizes who prioritize security, are positioned for growth, and who have high data demands, mainly because it addresses all these factors and more. Here are a few of the advantages of a hybrid cloud business strategy:

A hybrid cloud strategy is also incredibly useful for organizations that work with dynamic or changing workloads. Take, for example, an order entry system that sees a spike in demand around the holiday season. This situation is a prime candidate for a hybrid approach: the application can run in a private cloud, with cloud bursting to a public cloud during times of peak demand.
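
A rough sketch of that bursting decision might look like the following; the capacity figure and threshold are invented for illustration and would come from your own baseline in practice.

```python
PRIVATE_CAPACITY = 100   # hypothetical requests/sec the private cloud can absorb
BURST_THRESHOLD = 0.8    # start bursting at 80% of private capacity

def placement(current_load: float) -> str:
    # Keep the order entry system on the private cloud until it nears capacity,
    # then overflow to the public cloud for the holiday spike.
    if current_load < PRIVATE_CAPACITY * BURST_THRESHOLD:
        return "private"
    return "public-burst"

for load in (40, 75, 95, 140):
    print(f"{load} req/s -> {placement(load)}")
```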

How to Build a Hybrid Cloud Strategy

One of the greatest benefits of adopting a hybrid cloud strategy is that you can customize it to the needs, present and future, of your organization. In other words, there isn’t just one type of hybrid cloud; rather, there are multiple approaches and combinations. However, in order to establish a hybrid cloud strategy, the following requirements must be met:

  • A public Infrastructure-as-a-Service (IaaS) platform
  • Creation of a private cloud, either on-premise or via a hosted private cloud provider
  • Sufficient Wide Area Network (WAN) connectivity between environments
  • Private clouds must be properly architected to attain compatibility with public cloud

Now that I’ve established the prerequisites for hybrid implementation, the question becomes what exactly it would look like, and there isn’t one answer. One option is for an organization to invest in cloud management software, providing a single platform from which to manage public cloud applications and on-premises infrastructure. Another option is the vendor-native hybrid cloud strategy, where you take your on-premises infrastructure or data center and connect it to the public cloud in order to leverage its benefits.

Regardless of your approach, the beauty of a hybrid strategy is that it allows organizations to tailor their infrastructure to meet their specific needs. However, like most things in life, there are steps you can take to help ensure your hybrid cloud initiative gets off to a good start:

  • Integrate new cloud services with existing on-premise infrastructure so you can have access to all necessary applications
  • Implement security measures to control and restrict the flow of vital data
  • Use a virtualization layer, or hypervisor, to deploy existing applications in such a way that they work seamlessly on-premises and on the cloud platform. This provides an easier approach to lift and shift between cloud and on-premises infrastructures
  • When possible, develop future applications using container technology tools which can reduce dependence on cloud platform vendors and allow you to use on-premise and different cloud platforms in unison
  • Automate tasks whenever and wherever possible, especially repetitive manual ones
  • Develop and monitor KPIs (deployment frequency and speed, failure rate, time-to-recovery) to help ensure a winning strategy with the hybrid cloud model; a small sketch of computing these from deployment records follows this list
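
Here is the small KPI sketch mentioned in the last point. The deployment records and field names are invented; a real version would pull them from your CI/CD and incident-tracking tools.

```python
from datetime import datetime

# Hypothetical deployment records over a 30-day window.
deployments = [
    {"at": datetime(2019, 1, 7), "failed": False, "recovery_minutes": 0},
    {"at": datetime(2019, 1, 9), "failed": True, "recovery_minutes": 42},
    {"at": datetime(2019, 1, 14), "failed": False, "recovery_minutes": 0},
    {"at": datetime(2019, 1, 21), "failed": False, "recovery_minutes": 0},
]

window_days = 30
frequency = len(deployments) / (window_days / 7)                    # deployments per week
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
recoveries = [d["recovery_minutes"] for d in deployments if d["failed"]]
mttr = sum(recoveries) / len(recoveries) if recoveries else 0.0     # mean time to recovery

print(f"deploys/week={frequency:.1f}  failure rate={failure_rate:.0%}  MTTR={mttr:.0f} min")
```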

The key to constructing a successful hybrid cloud is found in the selection of hypervisor and cloud software layers that are uniquely compatible with your desired public cloud. This allows for proper interoperability with said public cloud’s API and promotes seamless migration between the two.

Hybrid Cloud Implementation: Challenges and Solutions

As is true when implementing any new strategy or practice, challenges can often arise. However, being informed, and more so prepared, for these occurrences is the best way to ensure a smooth transition to utilizing a hybrid model. Here are some common hurdles and helpful tips to work through them:

  • Security Risks – Data in a private cloud is secured; however, a hybrid cloud may require different security platforms to safeguard sensitive data. By selecting the right platform, developing a deep understanding of the data requirements, and having the right partnership in integrating your network solution, security risks can be minimized.
  • Integration and Maintenance Cost – Integration of public cloud with private cloud, on-premise infrastructure, or integration with various cloud providers can prove challenging. It adds a greater complexity from not only the management and implementation point of view, but it can also make tracking resources and computing power across multiple environments difficult.
  • Technology and DevOps Process – A hybrid cloud strategy requires greater prudence in selecting technologies and solutions that can work across a variety of environments. When this isn’t accounted for, all too often I see lifecycle management and deployment slow down. However, turning to an expert in your rollout can help you adopt the best possible solutions and technologies, such as containerization using Docker/Kubernetes/OpenShift, and automate application builds and deployments by ensuring a proper CI/CD process is implemented. Among the most popular technologies in the field right now are Microsoft Azure, Google Cloud Platform, HPE OneSphere, and Amazon Web Services through its recent partnership with VMware.

Hybrid cloud strategy is the way of the future for any organization that’s looking for flexibility, affordability, enhanced security, improved access and communication, and customizable solutions for its IT needs. However, like any new strategy, it can be intimidating when experience is lacking. Don’t hesitate to reach out to an expert to help develop and implement a hybrid cloud solution that isn’t only tailored to your specific requirements but is designed to drive your business forward.

This blog was originally published on mobileLIVE.

Original Link

Is It Better to Be Cloud-First, Cloud-Ready, or Cloud-Only?

Is “private cloud” an endangered species? After all, the conventional way of running a private cloud is to buy and manage the servers and other hardware infrastructure used to store and operate your apps and data. A principal benefit of cloud computing is not having to deal with hardware (much) because so much of your IT infrastructure is virtualized. While private cloud services will fill an important niche in the future, hybrid clouds will continue to be the cornerstone of companies’ cloud strategies for many years to come.

Why private clouds? In a word, security. But there are two other good reasons enterprises, in particular, retain in-house data operations: first, they haven’t yet squeezed every penny of amortization out of the hardware and software they own outright — or lease via a long-term agreement. CIOs may talk about security and governance concerns related to the public cloud, but what’s holding back much of their cloud adoption is legacy equipment and process.

The second reason some IT functionality remains on premises is prudence: growing familiarity with managing cloud infrastructure shows the wisdom of not putting all your eggs in one basket. Organizations want to take advantage of the benefits of new technologies at a pace that ensures their valuable data assets will not be put at risk by relying too much on a single third-party cloud service. This explains the growing popularity of multi-cloud strategies, as Computer Business Review’s April Slattery explains in a September 19, 2017, article.

How Hybrid Clouds Smooth the Journey

The consensus among industry experts is that, eventually, organizations of all types and sizes will rely on cloud services for a majority of operations, but the nature and mix of those services will vary. The “cloud only” approach can be seen as a logical end game, but as InfoWorld’s David Linthicum writes, the first cloud-only companies are usually those that are very small, or very new, and often both.

At the same time, the clear trend is toward increased reliance on hybrid clouds by companies of all sizes. Research conducted by MarketsandMarkets concludes that the global hybrid-cloud market will grow to $91.74 billion by 2021, representing a compound annual growth rate of 22.5 percent. The popularity of hybrid clouds shows no signs of waning: a survey conducted by McAfee found that the percentage of companies adopting a hybrid-cloud strategy increased from 19 percent in 2015 to 57 percent in 2016, as reported by Associations Now’s Ernie Smith in a September 19, 2017, article. Numbers compiled by Statistica forecast growth in the global hybrid-cloud market from $40.8 billion in 2017 to $91.74 billion by 2021.

Source: Statistica

InfoWorld’s Linthicum cites a recent survey of “IT leaders” conducted by Commvault that found two-thirds of the executives fear missing out on the latest innovations being offered by cloud services. While only 24 percent of the survey respondents report being “cloud only,” another 32 percent describe themselves as “cloud first” with plans to become cloud only. Linthicum likens today’s cloud-adoption trend to IT’s reaction to the rise of the web two decades ago: first, a “go away” mentality, followed by a slow and reluctant (and piecemeal) adoption, and finally comes a rush to seize an opportunity before the competition does. That rush is where critical mistakes can be made, which is one of the many good reasons for adopting a hybrid-cloud approach that maximizes internal resources and gives organizations a range of cloud options to choose from.

Private Cloud Finds Its Place and Pace

What prevents companies from committing to cloud only is the need to safeguard sensitive data, trade secrets, and intellectual property. In particular, highly regulated industries such as government, financial services, and healthcare must ensure compliance and proper governance of sensitive information.

Research conducted by Gartner forecasts growth in private cloud services as an important component of the multi-cloud approaches that are becoming the norm in organizations of all types. In a September 8, 2017, article on Silicon Angle, Michael Wheatley points to a Gartner analysis that found “rapid growth” in the use of third-party private cloud services. Still, the reticence CIOs harbor about trusting their data to a cloud-only approach applies equally to the firms offering to host their companies’ private clouds.

According to Gartner’s most recent Hype Cycle for Cloud Security Products, private clouds have reached the “trough of disillusionment,” which means that in terms of value to users, the technology has failed to live up to its hype.

The long-term outlook for private clouds shows promise, however: technologies that are able to survive the disillusionment period enter the “slope of enlightenment,” which leads to the “plateau of productivity” — if they are able to earn their customers’ trust, that is. Low storage costs, instant scale, high availability, and the elimination of in-house infrastructure top the list of public cloud benefits. However, any company that has adopted a cloud-first or cloud-only strategy knows well the other side of the coin: spotty or nonexistent internet connections, the added risk to data as a result of multi-tenant cloud services, bandwidth limitations, and untrustworthy cloud service providers.

Your cloud plans don’t have to be binary or one size fits all and they shouldn’t be dictated by a single hypervisor, platform choice, or cloud provider.

Getting from Now to Next and Always Being Ready for Anything

There are a number of issues that we see customers encounter when engaging in cloud transformation projects. While there is a desire to stand up internal private clouds for all of the reasons called out above, the truth is these deployments are often more complex than expected, hardware costs are displaced by people and tool costs, and it can be difficult for those internal clouds to keep up with the pace of innovation found in the public cloud.

Many of our most successful clients have had false starts along the way, have tried their own automation projects, have worked with a number of IaaS and PaaS platforms, and are constantly re-evaluating their tool choices. The truth is there is nothing wrong with that description and in fact, failing fast and reducing the mean time between experiments is part of both cloud and DevOps maturity. There is no way your internal IT teams can keep up with the pace of change across multiple hybrid IT stacks, developer toolchains, and deployment platforms.

The trick is to create an environment that provides control without chaos and agility without anarchy. You should be able to:

  • Span platforms: bare metal, hypervisors, native containers, PaaS, serverless, etc.
  • Span destinations: OpenStack, VMware, PCF, K8s, AzureStack, AWS, Azure, Alibaba, etc.
  • Provide I&O teams the guardrails and role-based access to meet the desire for control
  • Provide Dev teams the ability to bring their own tools and treat infrastructure as code
  • Cover end-to-end deployment needs from build servers to day-2 monitoring, logging, and scaling

To learn how Morpheus could help your teams deploy applications in less time, independent of cloud implementation strategy, set up time for a demo with one of our solution architects. Our next-generation cloud management platform can unify orchestration across the tools you already have and the ones you’ve yet to discover.

Original Link

Why Did Kubernetes Win?

Kubernetes won the Container Orchestration War on 29 November 2017. On that day AWS announced their Elastic Container Service for Kubernetes (EKS).

Preceding Amazon’s announcement were announcements from Mesosphere, Pivotal, and Docker of native support for Kubernetes. Those are the providers of most of the key offerings in the container orchestration space. That’s on top of Google and Azure offering managed Kubernetes services and OpenShift being based upon Kubernetes.

Why did all of these providers choose to support Kubernetes? Why, for example, is Amazon offering EKS?

For Amazon it’s because customers want it and so many AWS customers are already running Kubernetes on AWS, with many managing their clusters themselves. Given that Google and Azure offer easier managed Kubernetes services, it makes sense for AWS to make the AWS Kubernetes experience easier.

But why have users chosen to run Kubernetes on AWS when other providers were already offering managed services? Many may prefer AWS as a cloud provider but was that the only reason? If so why have so many chosen Kubernetes over AWS ECS?

The Secret to the Success of Kubernetes?

Organizations have so far very clearly preferred to run Kubernetes on AWS instead of other clouds, according to the Cloud Native Computing Foundation’s March 2017 survey results:

[Chart: CNCF March 2017 survey of where organizations run Kubernetes]

Notice that Datacenter also ranks high. We can start to understand this pattern by looking at survey results published by CoreOS on drivers for container orchestration. Application portability and hybrid and multi-cloud solutions feature very prominently:

[Chart: CoreOS survey of drivers for container orchestration]

Cluster federation is a big factor in the success of Kubernetes as this can be used to support multi-cloud or hybrid cloud use-cases, giving it a big advantage over AWS ECS. But this alone doesn’t explain why Kubernetes has become so popular relative to other tools.

We can understand this better by looking across a range of factors – we’ll look at how organisations wanted the flexibility of Kubernetes for hybrid cloud, multi-cloud and its general extensibility. What will emerge is a theme of decision-makers trying to retain flexibility and leverage their existing infrastructure.

Hybrid Cloud

We’ve seen already that organisations have been keen on hybrid cloud. This is even clearer in RightScale’s 2018 survey, which found up to 51% of large companies using hybrid cloud.

[Chart: RightScale 2018 survey of hybrid cloud adoption among large companies]

The reasons for hybrid cloud are concerns like leveraging existing hardware, and needing to meet regulatory or security restrictions requiring data to be kept on-premise. It can also be about resilience and being able to maintain some level of service in the case of failures.

Multi-Cloud

According to RightScale’s survey, up to 81% of large companies are choosing a multi-cloud strategy. CoreOS suggest that a big part of the reason is wariness about commercial lock-in and a survey by Stratoscale indicates that up to 80% of organizations consider cloud lock-in a major concern.

If your software needs rewriting in order to move it to another cloud provider, then your cost of going elsewhere is high. So if the price goes up then you either take that pain or swallow the price increase. Organizations may be attracted to multi-cloud in order to mitigate this risk.

If you’re committed to a multi-cloud strategy then having the option to run the same orchestration technology on different clouds can be appealing even if you don’t typically use multiple clouds in a single project. It can allow you to choose your orchestration technology for a particular project before choosing the Cloud provider and to ensure that orchestration skills can be re-used across projects.

It seems the market wanted to be able to choose orchestrator independently of their cloud provider/s and to be able to use multiple providers. Given the popularity of AWS, it makes sense that many would want to be able to choose Kubernetes as orchestrator and use AWS prominently in their setup. Given that Kubernetes was not being offered by AWS and was being offered as a managed service by other providers, using AWS prominently had the added benefit of proving that your choice of orchestrator really was independent of your choice of cloud provider/s.

It is possible that AWS is not only figuring heavily for worker nodes but is also being used for master nodes in self-managed setups. Despite the availability of managed Kubernetes services, a 2018 analysis by TNS revealed 91% of Kubernetes deployments being handled internally. This path has challenges but provides assurance that you can adequately customize your setup and add to it later as you need to. Once enough people take this path and share their knowledge, they make it easier for more to join them, and this can create a snowball effect.

Open and Extensible

With the market committed to avoiding cloud lock-in, Kubernetes had a big advantage over AWS Elastic Container Service. But what about the other players in the market – how did Kubernetes pull ahead of them? Here the open and extensible ethos of Kubernetes is important.

The success of Kubernetes relative to Mesosphere DC/OS, Docker Swarm, or Pivotal Cloud Foundry centers on users not wanting to be locked into tools or languages as a result of their choice of orchestration product. Some of this may have been about the line between the enterprise and open-source offerings and a perceived lock-in with the orchestration tools themselves. But more important was the range of options that the orchestration layer could keep open. The orchestrator’s support for a wide range of infrastructure, tooling, and language stacks (especially the organization’s existing tooling) figures heavily in TNS’s 2016 survey results as stated evaluation criteria for container orchestrators.

Kubernetes was designed to be extended, and people really are extending it (e.g. OpenShift) and contributing to the open source project. The key orchestration concepts are carefully abstracted so as not to be language-specific or tool-specific. There are no restrictions on supported languages. Docker is widely used for the container runtime, but even this is optional. This approach, together with the CNCF's welcoming of a wide range of corporate partnerships, has encouraged confidence in Kubernetes as a community project (as opposed to associating the project with a particular vendor). GitHub's 2017 Octoverse report reveals a vibrant open source culture around Kubernetes.


A vibrant open source culture, together with a pluggable design, means a wide range of options, plug-ins, and tools.

The success of Kubernetes looks like a success for both open source and for consumers having choice of cloud provider. It also reflects organizations wanting to transition to cloud but doing so cautiously, making use of their existing tools and infrastructure as part of the journey.

Has Kubernetes Really Won?

There is no clear definition of “won.” But Kubernetes looks set to become a standard – not in a formal sense but more in a de facto sense of a common ground emerging. The industry has found itself in need of a standard for orchestration to parallel the way that Docker has become a standard for containers – Kubernetes appears to be it.

Original Link

How to Simplify and Reduce Hybrid Multi-Cloud Management and Expense

I enjoyed the opportunity to speak with Rajiv Mirani, CTO of Nutanix, at the .NEXT conference following the introduction of three new products — Flow, Era, and Beam.

There were a lot of announcements today. What were the biggest for developers, engineers, and architects, and why?

Beam is a new SaaS offering that delivers multi-cloud governance so organizations can manage their spending, security, and regulatory compliance across multiple cloud platforms. Beam provides visualization, prediction, and cost management to help organizations optimize their infrastructure investment. The offering is particularly useful for DevOps with regard to compliance and security.

Flow provides application-centric security to protect against internal and external threats not detected by traditional perimeter-oriented security products, while also providing visibility into the performance of those applications across multiple clouds, enabling teams to improve user and customer experience.

Era is a new set of enterprise cloud platform-as-a-service (PaaS) offerings to streamline and automate database operations so database administrators (DBAs) can focus on business-driving initiatives. It allows enterprises to reduce storage costs; simplify the management, control, and security of data; and ease the complexity of database lifecycle operations.

How do developers, engineers, and architects need to retool their careers to excel in a hybrid multi-cloud environment?

Avoid vendor lock-in. It’s easy for developers to spin up something on AWS, but it can be difficult to move your application once it’s there.

Think about how to write your applications so they can move and run anywhere. Applications are written differently for the cloud. We recommend starting on-premise with the core Amazon services so you can move your code and applications back and forth. Avoid the rush to get the application out, and develop the app so it will be able to run in a multi-cloud environment. Maintain your flexibility. At some point, the costs get out of control, either because of scale or because you've provisioned more services than you will need. Over time, it will be more cost effective to move your applications based on different factors. Different technologies are best for different applications. AI is probably best on Google (GCE).

What are the greatest challenges for legacy enterprises today moving to the cloud?

Applications and data have gravity. Apps built for the web do not translate directly to the cloud. Block storage on Amazon is not resilient or cheap, so developers and organizations need an alternative. Developers need to know the new source code. Using AWS Snowball can be a cumbersome process. We've built our platform to provide a more seamless experience between on-prem and the cloud since we see hybrid multi-cloud environments as the future for most businesses.

Another challenge is knowing which apps are the right ones to move and whether they are compliant with security and governance requirements for a particular industry. Lastly, you need to choose the right cloud for the right app. Beam provides the data and analytics that will help optimize an organization's hybrid cloud environment.

What’s the future of the cloud from your and Nutanix’s perspective?

Everyone will be in a hybrid multi-cloud environment and will have a seamless experience moving data and applications between clouds, like we currently have with iPhone and iCloud. This will be a seamless extension of your infrastructure and ideal for developers, engineers, architects, and DevOps.

What have I failed to ask you that our readers need to know about the cloud and how Nutanix is making it easier for them?

What’s the right level of services you need to provide on-prem to have a comparable platform to the public cloud? We are looking at the 20 AWS services that are used by 90% of the market and providing DBaaS and an Object Store to make it easier for everyone to run their enterprise in a hybrid-multi-cloud environment.

Original Link

Hybrid Multi-Cloud is The Future

An enlightening keynote by Dheeraj Pandey, Founder, Chairman, and CEO of Nutanix, at .NEXT on his vision for a hybrid multi-cloud world that helps legacy enterprises use public and private clouds effectively and efficiently.

Customer success is in a multi-cloud world. But, multi-cloud is fraught with hard problems, decisions, and dichotomies:

  • Frictionless versus the need for governance;
  • Simplicity versus security;
  • Own versus rent;
  • Core versus edge processing;
  • Increasing delight while reducing waste.

Just as in software development, the solution is to put design at the forefront: make the machines invisible so that humans, teams, and careers become visible through applications, services, and governance.

True customer success is machines becoming invisible while humans gain freedom. A company’s success is tied to the success of its customers, so we have to help customers manage this hyper-converged infrastructure. 

The rise of DevOps aligns with the advent of public clouds and multi-cloud environments. Here’s why:

  • The laws of physics – throughput – take the app where the data is because the network is the enemy
  • The laws of locality – latency, pushing things down to where teams are
  • Centralized core cloud
  • The back-office warhorse
  • Enterprise Cloud OS
  • Dispersed ROBO cloud
  • Dispersed edge cloud – the IoT machine “Fog”
    • Terminals
    • Barges
    • Cruises
    • Rigs
    • Humvees

The consumption models have changed, and will continue to do so:

  • From mainframe to on-premises
  • From physical three-tier architectures that required 5+ year decisions
  • To virtualized three-tier architectures that required 3 to 5-year decisions
  • To a hybrid multi-cloud platform that requires decisions measured in months, weeks, or days

With Xi, customers are able to get reserved instances – off-prem on-demand in days to weeks. Spot pricing in hours. Lambda in minutes. Elasticity increases as you move to the right.

People want to make better-informed decisions. They need software to help them determine whether they have the right balance between governance and security compliance. Beam multi-cloud governance provides visibility across clouds while maintaining agility and reducing costs by 25%. The software reports:

  • Cloud spend efficiency
  • Total cloud spend
  • Cloud spending over time
  • Reserved instances

While monitoring across clouds:

  • Compute resources being used
  • Network and security
  • Storage and database use
  • Single-pane view of the public cloud

It enables one-click elimination of waste – removing the source of the waste.

It enables users to see security policy compliance across all cloud instances:

  • PCI-DSS
  • HIPAA
  • CIS

Compliance checks and security vulnerabilities can be taken care of with one-click remediation.

According to a recent report from IDC, an enterprise cloud provides an average ROI of 534% in five years, and a break-even in only seven months.

Customer success is when you become:

  • Operationally efficient
  • Organizationally proficient
  • Financially accountable
  • Able to turn waste into delight

If we don’t put our arms around the customer, we won’t have them as customers. That’s why their success is our success.

Original Link

Top 3 Considerations for Building A Cloud Native Foundation in Your Enterprise

The key business imperative driving the move to new data center architectures is their ability to natively support digital applications. Digital applications are "Cloud Native" (CN) in the sense that these interactive applications are originally written for cloud-based IaaS deployments. With Kubernetes and serverless computing, there now exist industrial-scale alternatives to simply porting monolithic applications over to the cloud. Thus, Cloud Native application development is emerging as the most important trend in digital platforms and one that determines enterprise competitiveness. This blog post will identify the three key considerations of embarking on an enterprise CN strategy.

SpaceX-SES-9 Launch (Image Credit – UDHWallpapers)

Every Enterprise Needs a Cloud Native Strategy…

Cloud Native applications need to be architected, designed, developed, packaged, delivered, and managed based on a deep understanding of the frameworks of cloud computing. The application itself is designed for scalability, resiliency, and incremental enhancement from the get-go. Depending on the application, supporting tenets include IaaS deployment and management and container orchestration. These applications need to support development and incremental enhancement using agile principles. The fundamental truth is that this will change not only how your infrastructure is provisioned and deployed, but also how it is managed.

Looking at the Cloud Native stack as of 2018, the most important pieces emerging out of this are the container orchestration platform Kubernetes and a hybrid cloud infrastructure. No matter which IaaS provider one picks, Kubernetes can be the single standard that applications are developed for.

Perform an Enterprise-wide Application Portfolio Rationalization Assessment

One of the key things I recommend organizations do at the onset of their cloud journey is to perform an assessment – enterprise-wide or for key departments – of both their application landscape and their current strategic initiatives. It is very important to understand which of these applications across departments can benefit from a cloud-based development and delivery model based on business requirements. The move to the cloud is dictated by quantitative factors, such as economics (infrastructure costs, developer/admin training and interoperability costs), return on investment (ROI), and the number of years or quarters before break-even, and by qualitative factors, like the tolerance of the business for short-term pain and the need for the enterprise to catch up with and disarm the competition. It may also be very useful to combine this analysis with existing IT vendor investments across the full global infrastructure footprint so that a holistic picture of the risk/reward continuum can be built. One also needs to take into account whether planned cloud spending can be incorporated into existing legacy modernization/re-platforming projects or data-center consolidation projects.

Another important thing to consider is that public cloud spend is sometimes misleading to estimate in terms of cost. Once lines of businesses in large organizations start using public clouds, the financial promise of zero CapEx is outweighed as OpEx costs begin to run amok. In a lot of these cases, a private cloud powered by commodity open source platforms such as OpenStack may be the right way to begin. To counter the complexity of OpenStack, it may be a step in the right direction to consider a SaaS-managed OpenStack control plane so that risk is minimized in terms of both the operator and developer experience. This is a key theme that will be expanded on in later posts.

Let us be clear that not every enterprise application is a candidate for cloud migration. Given a monolithic departmental application running on a legacy virtual machine, what are the ideal criteria for deciding when to migrate it over?

At a very high level, I recommend this approach for legacy applications that serve a limited community of interest – one that isn't anticipated to grow much or to drive frequent changes to the concerned suite of applications. These legacy applications can be made resident in a private cloud leveraging OpenStack.

They can then be incrementally enhanced over time (starting with changes around their provisioning, development, management, etc.) to take advantage of a private cloud design until such time that business needs dictate that they can be migrated over to a true CN development model.

Enterprise CIOs also need to ensure that their investments in the cloud don’t result in a significant container or VM sprawl, which will add to the compounding of the technical debt challenge.

Consideration #1: Adopt Hybrid Cloud

As discussed above, a range of cloud choices exist, namely:

  1. The public cloud providers – Amazon AWS, Microsoft Azure & Google Cloud Platform
  2. Open Private Cloud Platforms such as OpenStack
  3. Proprietary cloud or legacy virtualization approaches – VMware, Xen, etc.
  4. Converged Hardware Infrastructure
  5. Enterprise Cloud Services such as IBM, Oracle, etc.
  6. SaaS Platforms such as Salesforce, Workday, etc.

When you combine the above notion with the complex vendor landscape out there, a few important truths emerge:

  1. The Enterprise Cloud will be hybrid, no question. However, one needs to pick and stick with a unified set of standards for development.
  2. Workloads will be placed on different providers based on business and cost considerations. Examples include flexibility, advantages of the application frameworks, and data services provided by the cloud vendor.
  3. IaaS lock-in makes zero sense from both a business and a technology perspective. The use of a SaaS-based management plane that supports multiple cloud providers, together with managed Kubernetes and open source serverless computing technology, should help in avoiding lock-in as much as possible.
  4. Multi-cloud management is a challenge your cloud admins need to deal with and something executives need to account for in the entire business case – economics, value realization, headcount planning, etc.

Consideration #2: Adopt Kubernetes

It may seem odd to find so much mention of a software platform in a blog about enterprise cloud, but Kubernetes is a very special project and perhaps the most transformational cloud technology. Across all the above cloud provider choices, containers are unquestionably the granular unit of application development and deployment. Kubernetes is the de facto standard in container orchestration across multiple cloud providers. As far as technology goes, this is a sure thing to bet on and one you can't go wrong with.

With its focus on grouping containers together into logical units called pods, Kubernetes enables lightweight deployment of microservice-based multi-tier applications.

Kubernetes also provides auto-scaling (both up and down) to accommodate usage spikes, and load balancing to ensure that usage across hosts is evenly balanced. The controller also supports rolling updates/canary deployments to ensure that applications can be seamlessly and incrementally upgraded. The service abstraction then gives a set of logical pods an external-facing IP address. A service can be discovered by other services as well as scaled and load balanced independently. Labels – (key, value) pairs – can be attached to any of the above resources. Kubernetes is designed for both stateless and stateful apps, as it supports mounting both ephemeral and persistent storage volumes.
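
As a small, hypothetical illustration of these concepts (the image name, ports, and thresholds are placeholders, not a recommendation), the same ideas can be exercised directly with kubectl:

# Run a microservice as a Deployment (a managed group of pods):
kubectl create deployment web --image=example/web:1.0

# Expose the pods behind a Service with a stable, load-balanced address:
kubectl expose deployment web --port=80 --target-port=8080 --type=LoadBalancer

# Auto-scale between 2 and 10 replicas based on CPU usage:
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70

# Roll out a new image incrementally (a rolling update):
kubectl set image deployment/web web=example/web:1.1

# Attach labels for selection and grouping:
kubectl label deployment web tier=frontend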

Developers and operations can dictate whether the application works on a single container or a group of containers without any impact to the application.

These straightforward concepts enable a range of architectures – from legacy stateful applications, to microservices, to IoT-style data-intensive applications and serverless apps – to be built on Kubernetes.

However, with Kubernetes being operationally complex to deploy, manage, and maintain, it makes a lot of sense to consider a SaaS-managed control plane so that Kubernetes installation, troubleshooting, deployment management, upgrades, and monitoring do not end up causing significant business disruption or personnel cost increases.

Consideration #3: From Monoliths to Microservices to Serverless

The vast majority of applications being developed now are systems of engagement being directly used by customers. These apps support a high degree of interactivity and rate of change to the application based on the data gathered using millions of micro customer interactions. All of this results in a high degree of velocity from a development standpoint. Monolithic architectural styles are no longer a fit for digital platforms as discussed below.

It is no surprise, then, that Cloud Native apps need a range of architectural styles to accommodate this discrete nature of business functionality and change. Accordingly, most enterprise apps need to consider approaches ranging from microservices to serverless architectures. Microservices apps are broken down into smaller business services that are then deployed, maintained, and managed separately. Typically, each service can be run in its own process. The promise of this style is greater flexibility for development teams, higher release velocity (the whole app doesn't need to be changed to accommodate changes in smaller units), and scalability.

In addition, frameworks that support microservices provide functionality such as load balancing, discovery, high availability, and flexibility in upgrades (blue/green deployments, rollbacks/roll-forwards, etc.). The more cutting-edge cousin of microservices is the serverless architecture, especially in domains such as IoT/edge computing where the architecture needs to support streaming data. Each serverless function can be deployed into a Docker container that is instantiated when invoked and destroyed when idle. Serverless architectures and frameworks can dramatically reduce the time spent on building up the infrastructure for container-driven applications. They reduce business time to value by eliminating a lot of the operational steps involved in packaging, deploying, and managing infrastructure around development pipelines.
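
To give a flavor of how short the path from code to a running function can be, here is a rough, hypothetical sketch using the Fission CLI mentioned below; the flags follow Fission's documented conventions at the time of writing, and hello.py is an assumed single-function source file, so treat this as illustrative rather than definitive:

# Register a Python runtime environment (a container image managed by Fission):
fission env create --name python --image fission/python-env

# Create a function from a local source file:
fission function create --name hello --env python --code hello.py

# Map an HTTP route to the function and invoke it through the Fission router:
fission route create --method GET --url /hello --function hello
curl http://$FISSION_ROUTER/hello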

Cloud Native with Platform9

As the only provider of SaaS-delivered and managed open-source cloud framework control planes, Platform9 can assist executives considering a move to a Cloud Native model in the strategic areas listed below.

  1. Drive the business case with economics and value realization models in mind. We understand that an inefficiently designed cloud landscape can actually be disastrous for business in terms of both cost and operational challenges. We can help clients realize operational savings and ROI across a complex infrastructure and development organization.
  2. Help develop a range of hybrid cloud models that satisfy a range of operational requirements using OpenStack, Kubernetes, and our serverless computing framework, Fission. Avoid lock-in to IaaS providers or to cloud stacks as much as possible and help clients invest in an advantageous private cloud strategy.
  3. Help clients deploy CN apps by leveraging microservices and serverless architectures as a way of re-architecting their application footprint
  4. Take the operational pain and maintenance nightmares out of Cloud with our SaaS-based management planes. This is an easy way to de-risk your hybrid cloud and Kubernetes investments.

Original Link

The Promises, Payoff, and Products of Hybrid Clouds

The cloud promised to provide us all flexibility. The opportunity to access infinite resources as and when we need them and pay accordingly. We would no longer have to spend time installing, configuring and maintaining servers; we were promised more time to “just code.”

Instead, we got increasing vendor lock-in and a handful of cloud players so large that if a data center experiences problems, significant sections of the internet go offline. Naturally, we created more tools and practices to cope with the problem we created for ourselves, and dear readers, welcome to hybrid clouds.

I am of course being slightly facetious. In reality, hybrid clouds are a method for building flexibility and redundancy into a cloud infrastructure. The past decade has taught us that relying on one provider is a bad idea, and we should use a mixture of public and private platforms and switch between them as required for operational or financial reasons.

Reasons to Use Hybrid Clouds

There are several reasons you might want to consider a hybrid cloud instead of throwing all your egg-shaped services into one cloud-shaped basket.

Privacy

For regulatory or architectural reasons, an application may contain data that you need to store in particular regions or on servers that you have more control over.

Financial

Some cloud providers provide better value for certain services than others, or you might want to take advantage of the best deals with specific providers.

Custom Services

While increasingly unlikely as most software vendors rush to the cloud, you may have legacy or custom services that only run on particular private machines or third-party providers. This includes services that you intend to migrate eventually but haven’t yet.

Considerations Before Adopting a Hybrid Cloud Solution

It may surprise you to hear, but most cloud providers are supportive of hybrid clouds, especially those that connect their services to legacy and on-premises systems. After all, they are removing barriers for potential customers. Here are a couple of factors to consider in your hybrid-cloud strategy.

Incompatibility

While in theory developer standards are widely adopted, you can potentially experience library or protocol inconsistencies between providers, so do your research and testing before a major rollout.

Security

As I hope you are already doing, naturally you need to encrypt all communications between services and make sure that public endpoints are secured.

Performance

Again, although cloud services, CDNs, and transmission mechanisms are continually improving, the more hops you introduce, the more the opportunity for lag, latency, and ‘moving parts’ that you need to debug in case of a problem.

Tools

Now for everyone’s favorite discussion, let’s talk about the tools available to help you create, manage, and tweak your hybrid cloud setup. I’ve tried to break them into categories, but there is some crossover.

Cloud Providers

AWS has an entire suite of tools to help their services form a part of your hybrid cloud, including:

  • AWS Storage Gateway: for using on-premises storage alongside AWS storage services
  • Amazon VPC: for creating a VPN between AWS and other parts of your network, plus the capability to manage IP address ranges if you need specific values (see the sketch after this list)
  • AWS Direct Connect: similar in purpose, but over a dedicated network connection rather than a VPN
  • AWS OpsWorks: for those of you following "infrastructure as code" practices, AWS's offering can also manage on-premises servers
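
For a flavor of what the VPC-based option involves, here is a minimal, hypothetical AWS CLI sketch of the VPN plumbing between an on-premises network and AWS; the CIDR block, public IP, ASN, and resource IDs are placeholders:

# Create the VPC that will host the cloud side of the hybrid setup:
aws ec2 create-vpc --cidr-block 10.20.0.0/16

# Create the virtual private gateway (AWS side) and customer gateway (on-premises side):
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# Tie the two together with a site-to-site VPN connection:
aws ec2 create-vpn-connection --type ipsec.1 \
    --vpn-gateway-id vgw-0abc1234 --customer-gateway-id cgw-0def5678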

Azure has a similar offering in the form of Azure Stack. What features it offers and how you use them is a little unclear and lurks behind a sign-up form. Other tools offered by Azure that relate to hybrid clouds are:

  • Logic Apps: for pulling data from on-premises applications into public cloud applications
  • Service Bus: for inter-cloud messaging
  • StorSimple: for consolidated storage

You can integrate many of the smaller hosting players with a hybrid cloud, using any of the commercial tools below that support your provider(s), or a roll-your-own option if you put in the work. Companies like Joyent focus their business on helping you integrate with the larger players – a smart move – and open-source their tools.

Commercial Tools

One of the many all-in-one solutions, CoreStack coins another buzzword to add into the mix: cloud governance. Aimed more at operations and business people than developers, the service focuses on defining how your services fit together based on consumption and cost, and doesn't provide a tremendous amount of detail before an appointment with sales.

There are also a handful of companies such as ParkMyCloud and Replex that focus entirely on the money-saving aspect, helping you save as much money as possible by shifting application components around as efficiently as possible.

Cloud Controller pulls in a lot of enterprise-friendly service providers such as Oracle, Citrix, and Red Hat. Bringing another new buzzword is Nutanix, with its "hyperconverged infrastructure technology." Both have an impressive client list and support a lot of enterprise-friendly software components, but again it's hard to know how their platforms work.

Finally, of course, Cisco has its own solution in the shape of CloudCenter, which has a few extra useful features such as budget plans and centralized security, and supports over 20 providers.

Open-source Tools

There are plenty of choices in the open-source realm, too, that you can install and manage yourself, or find preinstalled on public and private clouds. While many developers will use more complex (and thus more scalable) options for managing Docker containers across multiple hosts, for simple setups, Docker Machine and Swarm could be enough for your needs.

A small project called Kubernetes has hybrid cloud functionality (or as they call it, "cluster federation") in the form of kubefed. It's a little complex to set up, but read this Google blog post for some ideas. If you are interested in Kubernetes but don't want to install and manage it yourself, then look no further than my roundup of Kubernetes managed hosting options.
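
As a very rough, hypothetical sketch of the federation workflow (the federation, cluster context, and DNS zone names are placeholders, and the flags reflect the federation v1 tooling of that era, so check the current documentation before relying on them):

# Initialize the federation control plane in a designated "host" cluster:
kubefed init myfed --host-cluster-context=host-cluster \
    --dns-provider="google-clouddns" --dns-zone-name="example.com."

# Join clusters running on different providers into the federation:
kubefed join aws-cluster --host-cluster-context=host-cluster
kubefed join gce-cluster --host-cluster-context=host-cluster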

Somewhat overshadowed by Kubernetes these days but still a powerful option is Apache Mesos, which uses interesting paradigms to treat your distributed computing resources as one collective whole.

In a similar vein is OpenStack, which puts hybrid clouds front and center, and I would hazard a guess that some of the commercial vendors use it behind the scenes as well.

Lesser known but with equal vintage and aims is Apache CloudStack.

An older but well-established option is OpenNebula. It declares loudly that it is open source, but where the source lives isn't immediately apparent – after some digging, I finally found the codebase.

Finally, take a look at OneOps from Walmart. Yes, the retail chain. At least you know it’s production-tested.

Monitoring

Tools for monitoring hybrid clouds are also plentiful. What you choose mostly depends on your setup and what you want to monitor. Common tools such as DataDog, New Relic, Prometheus, and the Elastic stack should suit your needs and are widely available.

Future Flexible

In reality, the hybrid cloud is what the cloud should have been in the first place: a flexible suite of services that do what we ask, when we ask, and charge us accordingly.

Granted, a hybrid cloud requires more initial steps than we might have all hoped, but few people also want to run and maintain their own servers anymore, so it’s a happy compromise.

Original Link

Ready for Bionic Beaver? What’s New in Ubuntu 18.04

A couple of weeks ago I wrote a piece about how desktop Linux may be the only remaining "proper" desktop OS. The article proved popular and sparked some conversation around the topic in the comments and on social media. With perfect timing, this week saw the release of Ubuntu 18.04, and I joined two press calls to hear more details about this new LTS release of one of the most popular Linux distributions.

On the Desktop

Fittingly, the call kicked off by reiterating a commitment to the Ubuntu desktop after years of experiments that ultimately failed. This step meant the abandonment of a handful of custom technologies and a switch back to more widely adopted open source projects such as Gnome Shell, perhaps providing a much-needed resource injection to these projects. Later in the call, there was a slight dig at some other hardware and software vendors and the "artificial limits" they impose on users. Being free, Ubuntu lets you take things as far as you want.

Version 18.04, or "Bionic Beaver" (that's quite a name), has a handful of interesting new features that reflect trends in the wider operating system world:

Gnome Shell and Windowing Managers

Conscious of the disruption a major UI change can cause to users, 18.04 sets up Gnome Shell with default settings that are remnants of the Unity UI, such as keeping the application launcher on the left and retaining themes and colors.

After testing Wayland in 17.10, the team found that while it's increasingly stable and miles ahead of X.org in many ways, there were key issues (mostly related to screen sharing) that resulted in a switch back to X.org for 18.04; upcoming releases will revert that change. Of course, being Linux, Wayland is still maintained and available if you want to use it instead.

A new community-contributed theme is also available; search for Communitheme (a working title) in the snap store.

Live Updates

The new Canonical Livepatch Service lets you apply critical kernel security fixes without rebooting and reduces planned or unplanned downtime while maintaining security. It’s available as part of an Ubuntu Advantage subscription, or for all Ubuntu community members, it’s free for up to three machines. This feature keeps machines as up to date as possible with minimal effort and involvement.
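
Assuming you have generated a token from your Canonical account, enabling it is a short exercise (a minimal sketch; the token is a placeholder):

# Install the Livepatch client and enable it with your account token:
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-token>

# Check which kernel fixes have been applied without a reboot:
canonical-livepatch status --verbose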

Snaps

I wrote before about Canonical's snap concept and efforts, and if the constant flurry of emails I receive from their PR team is anything to go by, Canonical is working hard to grow the selection of applications. This selection now includes open and closed source applications, which will upset some members of the Linux community, but to be blunt, it's a good strategy to bring users to Linux who may switch to open alternatives once they are comfortable with the platform.

Whether you like the concept or not, Canonical announced that the Snap store has seen over 1 million installs in the past 3 months alone, and the snap store is available for other Linux distributions.

Data Metrics

Data collection is always a controversial topic in the Linux community, but Canonical is insistent that tracking useful anonymous information about how people use Ubuntu and what hardware they use is important for them, and for developers creating applications for Ubuntu.

Ubuntu on Other Devices

As nice as some of these features are, Ubuntu’s primary user base is not on the desktop, but on a variety of other platforms (the majority of cloud providers use Ubuntu), so what does 18.04 offer for them? For developers, one of the significant advantages of using Ubuntu is common packages from development to these other platforms, ‘guaranteeing’ that they work.

Performance Optimization

Without giving much detail, Canonical has worked with all the cloud providers to roll out optimized versions to their customers. Noting that hybrid clouds are the new normal, they improved inter-cloud communication and boot times for "bursty" applications and services. For an ultimate performance tweak, all 18.04 cloud images and the OpenStack and Kubernetes distributions include support for hardware acceleration with NVIDIA GPUs. The OpenStack distribution also includes support for NFV and NVIDIA Tesla GPUs, among others.

Adding to Intel and ARM architecture support, 18.04 also adds support for Power9 architectures suited to machine learning workloads.

Containers and Kubernetes

Nearly everyone is dipping their toes into Kubernetes, and Canonical optimized their own (mostly vanilla) blend for Google Cloud, specifically with AI and analytics tooling built in. At a container level, Ubuntu 18.04 continues to have Docker and LXD support by default. If you're not familiar with LXD, version 3.0 contains interesting new features, especially for those who need to maintain and manage outdated and insecure images without them affecting daily operations.
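
If you want to kick the tires on LXD 3.0, a minimal session looks something like this (the container name is arbitrary):

# One-time setup of storage and networking (interactive prompts):
sudo lxd init

# Launch a system container from the Ubuntu 18.04 image and open a shell in it:
lxc launch ubuntu:18.04 bionic-test
lxc exec bionic-test -- bash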

Like what you see? Read more and download 18.04.

Original Link

Top 3 Considerations When Your Organization Moves to Hybrid Cloud Model

If you have not started already, this is the right time for your organization to make a clear decision to adopt a hybrid cloud model across all aspects of IT. The following considerations will give you better clarity for cloud enablement and migration, both on-premises and in the cloud.

Identify the Services That You Are Going to Use in Cloud

Organizations in different industry sectors have different use cases. Sectors like banking, logistics, insurance, healthcare, and retail migrate to the cloud based on their unique business requirements. Though the domain varies, certain aspects of software development, system administration, and infrastructure management are common, and these are the areas for which organizations would ideally choose hybrid cloud. You should therefore take extra care to list the exact services, and their business benefits, before moving those kinds of services to the cloud.

The cloud players have paved the way for organizations to use these services more seamlessly than before, and recent changes allow you to write only the business logic, connect the cloud components together where you need them, and use them. It is not mandatory that a customer use all the services provided by the cloud service provider; it all depends on your company's business requirements and cost estimation factors.

Evaluate the Best Cloud Service Provider for the Service That You Choose

The major cloud service providers in the market today – Amazon AWS, Microsoft Azure, IBM, Salesforce, SAP, Google Cloud, and Oracle – have focused more on AI, machine learning, and cloud-native apps in recent years. Recent text-to-speech services such as Amazon Polly, Google's DeepMind-based text-to-speech, and the Microsoft Azure Bing Speech API provide the same capability with different performance levels in each cloud environment. You have to choose which player is best suited to your requirements in terms of handling customer data effectively. Similarly, cloud-native apps are the ones built completely in the cloud, from the IDE to code commit, repository, testing, build, and deployment to production. There are tradeoffs with cloud-native apps: without careful design, the code is less portable to other cloud service providers, so the choice of cloud player for these aspects should be settled well before starting development on the cloud.

List the Mass Data Workloads That Require Migration and Leverage Cloud Scalability

Evaluate the data available on-premises and segregate idle data from dynamic data, which includes live transactions. This evaluation helps you take strategic decisions on moving to the cloud and do the cost estimation, which also involves how frequently you will be using the data in the cloud. The idle data can be further drilled down to find the preferred data and the storage space required for it in the cloud and on-premises. You have to plan and schedule the data migration activity, as some organizations do it in periodic cycles; this will let you decide on business downtime and on infrastructure and scalability factors for the amount of workload being migrated to the cloud. Pay attention to the retrieval of data from the cloud, as the cost differs between cloud players. The Direct Connect and ExpressRoute options provided by AWS and Azure, and similar options from a few other cloud players, provide a way to have a private connection between your data center and the cloud for better network latency.

 

Original Link

Hybrid Cloud Security: Building Infrastructure That Works for Your Organization [Infographic]

It seems like everyone is talking about hybrid cloud security right now, and for good reason! More and more organizations are choosing to diversify their infrastructure through a variety of cloud and container environments. Increased speed and agility are among the top reasons organizations are choosing to adopt new environments, which better feed into their DevOps processes.

Our latest infographic, Hybrid cloud security: Building infrastructure that works for your organization, explores this sweeping hybrid trend and analyzes security measures that continue to be a concern as organizations adopt new infrastructure.

This infographic will explore:

  • Growth of hybrid cloud and container workloads
  • Security challenges of cloud and container workloads
  • Visibility challenges in new environments
  • Container security concerns

If you have questions about unifying your hybrid cloud infrastructure or are curious to learn about how rapidly our industry is truly changing, check out our infographic below!

Original Link

The Distributed Cloud Database for Hybrid Cloud

DataStax Enterprise (DSE) 6 represents a major win for our customers who require an always-on, distributed database to support their modern real-time (what we call ‘Right-Now’) applications, particularly in a hybrid cloud environment. Not only does it contain the best distribution of Apache Cassandra, but it represents the only hybrid cloud database capable of maintaining and distributing your data in any format, anywhere—on-premise, in the cloud, multi-cloud, and hybrid-cloud—in truly data autonomous fashion.

Let me take you on a quick tour of what’s inside the DSE 6 box, as well as OpsCenter 6.5, DataStax Studio 6, and DSE Drivers, and show you how our team has knocked yet another one out of the park.

Double the Performance

To start, new functionality designed to make Cassandra more efficient with high-compute instances has resulted in a 2x or more out-of-the-box gain in throughput for both reads and writes. Note that these speed and throughput increases apply to all areas of DSE, including analytics, search, and graph. A new diagnostic testing framework developed by DataStax helped pinpoint performance optimization opportunities in Cassandra, with more enhancements coming in future releases.

Next, DSE 6 includes our first ever advanced Apache Spark integration (over the open source work we’ve done for Spark in the past) that delivers a number of improvements, as well as a 3x query performance increase.

All of these performance improvements have been designed with our customers in mind so that their Right-Now applications deliver a better-than-expected customer experience by processing more orders, fielding more queries, performing faster searches, and moving more data faster than ever before. If an app’s response time exceeds three seconds, it won’t be because of DSE.

Self-Driving Operational Simplicity

In designing DSE 6, we listened to both DataStax customers and the Cassandra community. While the interests of these groups sometimes diverge, they do have a few things in common.

It turns out that helping with Cassandra repair operations is a top priority for both. For some, Cassandra repairs aren’t a big deal, but for others they are a PITA (pain in the AHEM). Don’t get repair right in a busy and dynamic cluster, and it’s just a matter of time until you have production-threatening issues.

If you like your current repair setup, keep it. But if you want to eliminate scripting, manual intervention, and piloting repair operations, you can turn on NodeSync and be done. It works at the table level so you have strong flexibility and granularity with NodeSync, plus it can be enabled either with CQL or visually in OpsCenter.
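
For illustration, and assuming DSE 6's documented nodesync table option, turning it on for a single table is roughly a one-liner from the shell (the keyspace and table names are placeholders):

# Enable continuous background repair (NodeSync) on one table via cqlsh:
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH nodesync = {'enabled': 'true'};"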

Another area for improvement on which open source users and DataStax customers agree is upgrades. No technical pro that I know looks forward to upgrading their database software, regardless of the vendor used.

These management improvements and others are directly aimed at increasing your team’s productivity and letting you focus on business needs vs. operational overhead. The operational simplicity allows even novice DBAs and DevOps professionals to run DSE 6 like seasoned professionals. Ultimately that means much easier enterprise-wide adoption of data management at scale.

Analyze (and Search) This!

For the first time, we're introducing our advanced Spark SQL connectivity layer, which provides a new AlwaysOn SQL Engine that automates uptime for applications connecting to DSE Analytics. This makes DSE Analytics even more capable of handling around-the-clock analytics requests and better supports interactive end-user analytics, while leveraging your existing SQL investment in tools (e.g. BI, ETL) and expertise.

We also have great news for analytics developers and others who want to directly query and interact with data stored in DSE Analytics. DataStax Studio 6 provides notebook support for Spark SQL, which means you now have a visual and intelligent interface and query builder that helps you write Spark SQL queries and review the results – a huge time saver! Plus you can now export/import any notebook (graph, CQL, Spark SQL) for easy developer collaboration as well as undo notebook changes with a new versioning feature.

Supporting Distributed Hybrid Cloud

Over 60% of DataStax customers currently deploy DSE in the cloud, which isn’t surprising given that our technology has been built from the ground up with limitless data distribution and the cloud in mind. Customers run DSE today on AWS, Azure, GCP, Oracle Cloud, and others, as well as private clouds of course.

DataStax Managed Cloud, which currently supports both AWS and Azure, will be updated to support DSE 6, so all the new functionality in our latest release is available in managed form. Whether fully managed or self-managed, our goal is to provide you with multi- and hybrid-cloud flexibility that supplies all the benefits of a distributed cloud database without public cloud lock-in.

Yes, There’s Actually More…

With DSE 6, we want you to enjoy all the heavy-lifting advantages of Cassandra with none of the complexities – and also get double the power. DSE 6 is now available, so give it a try (it is also available for non-production development environments via free online training and Docker Hub) and let us know what you think.

Original Link

Functional Hybrid Cloud Alternatives for SMBs and Retail Chains

The majority of startups and small business owners are increasingly emphasizing online visibility to get their brand names out into the open. While many are opting for familiar cloud management tools like Dropbox, the more intuitive ones are primarily focusing on security and accessibility. At present, the trend is toward hybrid cloud solutions, as they offer higher levels of confidentiality compared to public storage services. In the subsequent sections, we shall talk about some of the best hybrid solutions for small businesses and retail chains, which can help them with better brand management and online presence.

Why Are Hybrid Solutions Advantageous to a Business?

Let's be honest: hybrid cloud computing solutions offer a more secure premise to organizations, allowing them to combine different services as per organizational demands. The problem with public-only or private-only cloud solutions is that organizations need to compromise on certain fronts. Hybrid solutions, however, offer exceptional data management options to organizations and retail chains. While a public-only cloud solution lacks in terms of security, private ones often fall short in terms of flexibility.

Hybrid solutions are more secure than public ones, and they also offer uninhibited control over data usage and cloud management services. In addition, better agility is a trait that hybrid cloud services boast. Their cost-efficiency is unparalleled, and these cloud alternatives are also more scalable compared to some of their private counterparts.

Now that we have seen how beneficial hybrid platforms can be, it's time to list some of the most effective ones for startups and SMBs alike.

The Azure Stack from Microsoft

This tool from Microsoft helps companies create custom hybrid cloud solutions that leverage Software as a Service applications for connecting the feature sets. Microsoft's hybrid cloud platform is optimized for a wide range of business requirements and allows companies to stay protected against outages and frequent data losses. The best features on offer include the Disaster Recovery solution and the Azure Backup service. Moreover, Microsoft also supports multi-channel compatibility with this hybrid cloud solution.

Google Cloud

Google’s Hybrid Cloud platform is quite popular among startups as it helps them deploy services and applications without having to worry about physical entities. The likes of localized retail chains like Bakeaway have resorted to similar cloud computing platforms for maintaining a secured and exhaustive database. These startups can easily monitor application and network performances while being able to keep a close eye on the behavior of the concerned applications and enterprise security. While the mentioned startup maintains an updated database with client details, there are other SMBs which swear by higher levels of confidentiality, making hybrid cloud-computing solutions almost imperative.

IBM Z

IBM comes forth with a formidable cloud computing platform that includes the best of both worlds. The Z hybrid platform integrates a host of analytics, technologies, and other essentials to empower the entire security console. Apart from that, this cloud computing solution prevents multiple data outages and can be leveraged effectively by the concerned organizations. Last but not least, IBM offers complete access to all the useful insights, including machine learning strategies and real-time analytics.

Inference

It’s time that companies focusing on growth start working alongside hybrid cloud solutions, instead of concentrating solely on private or public counterparts. Hybrid platforms come with a host of benefits, including the likes of better security options and high-end recovery attributes; thereby helping organizations minimize cyber threats and other forms of data outages. These services are being increasingly used by the startups for branding themselves in a much better way.

Original Link

AWS CloudWatch Monitoring with Grafana

Hybrid cloud is the new reality. Therefore, you will need a single, general-purpose dashboard and graph composer for your global infrastructure. That's where Grafana comes into play. Due to its pluggable architecture, you have access to many widgets and plugins for creating interactive and user-friendly dashboards. In this post, I will walk you through how to create dashboards in Grafana to monitor your EC2 instances in real time, based on metrics collected in AWS CloudWatch.

To get started, create an IAM role with the following IAM policy:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": [ "cloudwatch:PutMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:GetMetricData", "cloudwatch:ListMetrics" ], "Resource": "*" } ]
}

Launch an EC2 instance with the user-data script below. Make sure to associate the role we created earlier with the instance:

#!/bin/sh
yum install -y https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-5.0.3-1.x86_64.rpm
service grafana-server start
/sbin/chkconfig --add grafana-server

In the security group section, allow inbound traffic on port 3000 (the Grafana dashboard).

Once the instance is created, point your browser to http://instance_dns_name:3000; you should see the Grafana login page (default credentials: admin/admin):

Grafana ships with built-in support for CloudWatch, so add a new data source:

Note: In case you are using an IAM role (recommended), keep the other fields empty as above; otherwise, create a new file at ~/.aws/credentials with your own AWS access key and secret key.

Create a new dashboard and add a new graph to the panel. Select AWS/EC2 as the namespace, CPUUtilization as the metric, and the instance ID of the instance you want to monitor in the dimension field:

That’s great !

Well, instead of hard-coding the InstanceId in the query, we can use a feature in Grafana called "Query Variables." Create a new variable to hold the list of supported AWS regions:

Then, create a second variable to store the list of instance IDs for the selected AWS region:

Now, go back to your graph and update the query as below:
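
The two variables amount to short queries against the CloudWatch data source. Here is a sketch, with the template functions written as I recall them from the CloudWatch plugin (verify against the data source documentation), alongside an equivalent AWS CLI call for comparison:

# Grafana template variables (defined under Dashboard settings -> Variables):
#   region     query: regions()
#   instanceid query: ec2_instance_attribute($region, InstanceId, {})
# The graph's InstanceId dimension then references $instanceid instead of a fixed ID.

# Roughly the same instance list, fetched with the AWS CLI:
aws ec2 describe-instances --region "$AWS_DEFAULT_REGION" \
    --query 'Reservations[].Instances[].InstanceId' --output text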

That’s it, go ahead and create other widgets:

Note: You can download the dashboard from GitHub.

Now you’re ready to build interactive & dynamic dashboards for your CloudWatch metrics.

Original Link

Living Hybrid in the IT World

We get to hear the word "hybrid" a lot. Hybrid cars, hybrid animals, hybrid solutions, and hybrid clouds are some examples. In the academic and business worlds, hybrid means "derived or composed of heterogeneous sources."

You might have heard a lot about hybrid solutions, hybrid clouds, and going hybrid. When thinking this through, I noticed two perspectives to look at.

Hybrid as a Hosting/Deployment Strategy

I have seen many places where architects, consultants, and others claim, "we are following a hybrid cloud approach in our organization." These organizations had all their enterprise IT solutions hosted on-premises and are now moving to the cloud. When doing so, their approach is to move a non-mission-critical app to the cloud first, learn from it, and then follow the same path for more apps. In my opinion, this is a hybrid deployment strategy. They keep some apps in the cloud and the rest on-premises. But most of the time these components operate independently; cloud and on-prem components are not involved together in achieving a single piece of functionality.

Hybrid as a Solution Strategy

The other scenario is a solution consisting of components deployed both in the cloud and on-premises. To achieve a piece of functionality, both components must work together. For example, consider an integration service running in the cloud that talks to a third-party service hosted by some other vendor/partner and also talks to a service hosted on-premises to update your database. During one request to the integration service, it also touches the on-premises component. This is a hybrid solution.

Challenges in Adopting a Hybrid Strategy

Adopting a hybrid solution strategy is more difficult than adopting a hybrid deployment strategy. Some of the challenges you face in both strategies include:

  1. On-premises components need to be exposed to the cloud components (for a hybrid solution). When exposing the on-premises components, security becomes a key concern. How do you restrict access to only your cloud components and not to anyone else? There are different options, such as protecting with credentials, IP whitelisting, accessing via a VPN, etc.
  2. Operations team needs to follow different approaches when maintaining and monitoring the cloud components.
  3. There may be concerns on sensitive data going out of your data center. If an API call is carrying sensitive information and it is hitting a cloud component, you have reasons to be worried about it.
  4. Additional latency introduced by the cloud to on-premises or on-premises to cloud communications.

Ideal World

If I were adopting a hybrid strategy, I would prefer to have the following characteristics in it:

  1. Avoid any cloud-to-on-premises communications. This removes the requirement of exposing the on-premises components to the cloud.
  2. I would prefer a SaaS solution to be used as the cloud component rather than having to maintain components in the cloud myself.
  3. Route non-sensitive data through the cloud and route sensitive data through on-prem components.

But one thing to note is that it's not easy to adopt a hybrid strategy with the above characteristics. I'll write about a hybrid API management strategy with these characteristics very soon.

Until then, your thoughts are welcome…

Original Link

The Unexpected Business Benefits of Hybrid Clouds

IT pros of every stripe are in the process of rewriting their job descriptions. In many cases, the reality of 21st-century data management leads to a version of this concept:

“I’m a strategy consultant helping the business from the inside.”

IT managers are transforming themselves into in-house business consultants charged with providing decision makers in the organization with the tools and business insights they need to achieve the company’s goals. Such transformations always present challenges — some obvious, and some unexpected.

One of the unforeseen obstacles facing IT departments as they shift from service provider to business consultant is called the “endowment effect.” In a November 27, 2017, article on ITProPortal, Ian Furness defines the endowment effect as the tendency of people to value something more highly solely because they own it. This leads to them undervaluing alternatives that might actually be a better deal.

The result of the endowment effect on IT is the lost opportunity to capitalize on new approaches and technologies. According to Furness, the emotional connection to “owning” data and systems explains the slow pace of cloud migration in many organizations. Not only does overvaluing in-house systems cost companies more money, it leaves data less secure: professional cloud services now deliver a higher level of security expertise than any company can provide economically on its own.

Spend Less Time Managing Machines, More Time Consulting with Business Managers

IT workers continue to spend most of their time managing the information infrastructure that their business relies on. However, the clear trend is to let cloud services do the heavy lifting: they provide the CPUs, servers, storage, security, and network plumbing that supports your company’s apps and data. IT’s attention turns away from infrastructure concerns and toward discovering and distributing the business insights sought by customers throughout the organization.

Companies that are just now ramping up their migration to cloud services have one built-in advantage: As David Linthicum writes in a December 4, 2017, article on Datamation, “we’re just getting better at migration and refactoring.” Linthicum points out that some workloads will never be a good fit for the cloud — legacy systems and proprietary databases in particular. Still, the two factors most likely to stall a cloud migration plan’s momentum are fear of vendor lock-in and skepticism about the cloud’s much-touted ability to save money.

More potential cloud customers cite concerns about vendor lock-in and compliance/governance, while fewer worry about security, cost, and loss of control. Source: Kleiner Perkins, via Datamation

When enthusiasm for the cloud projects in your organization begins to sag, Linthicum recommends reminding the stakeholders that the only way to keep pace with the competition is to be as agile as they are in terms of provisioning compute and storage resources. If you can’t get your products to market as quickly as your competitors do, you’ll always be playing catch-up. This is one of the strategic advantages that are easy to miss when you focus exclusively on the cloud’s tactical advantages.

Minimize Hybrid Cloud Management Overhead to Maximize Cost Savings

Another way to keep people optimistic about the company’s cloud efforts is to generate early successes with small projects. This serves as both a proof of concept and a way to fail early, fail often, and fail small so you can learn quickly from initial mistakes. Thinking small also helps you collect, organize, and analyze metrics in a way that can be applied quickly and simply to future projects, many of which will have a larger scale.

The top three drivers of hybrid/multi-cloud adoption are flexibility and choice in deciding where workloads run (64 percent), extending IT resource capacity (56 percent), and maximizing return on existing IT investments (56 percent). Source: 451 Research

A potential sinkhole for businesses planning their cloud strategy is overspending on management. 451 Research’s Cloud Price Index found that taking a hybrid/multi-cloud approach rather than relying on a single cloud service can save companies 74 percent of direct expenditures, as reported by TechTarget’s Alan R. Earls in a November 2017 article. The caveat is that such savings are possible only when cost controls are applied to monitor and optimize all cloud spending.

A comprehensive cost-control system ranges from basic measures, such as powering off idle servers and other unused equipment, to logging the many different cloud accounts in use, including which applications and teams are using them. To automate the operating model, apply a tagging taxonomy at the build and design phases. Use tagging to identify every resource consumed in the multi-cloud infrastructure so that each one can be tied to a cost center. The tags can be applied through third-party tools or expressed in open formats such as JSON or YAML.
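
As a rough illustration, a tag set expressed in YAML might look like the following. The key names and values here are hypothetical; the point is that every resource carries enough metadata to roll its cost up to an owner.

 # Hypothetical tag taxonomy applied to a provisioned resource so that
 # spend can be rolled up by cost center, team, and environment.
 tags:
   cost_center: "CC-4210"       # budget owner for chargeback/showback
   application: "order-portal"  # workload consuming the resource
   team: "ecommerce-platform"   # owning team, for accountability
   environment: "production"    # production/staging/dev reporting filter
   expires_on: "2018-12-31"     # lets automation reclaim idle resources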

Benefits of a Deliberate, Cost-Focused Approach to Cloud Migration

Tagging also lets you set budget alerts as part of a DevOps methodology. This highlights the need to centrally monitor all bills received across accounts and subscriptions.

You can’t blame people for believing that cloud computing is the answer to all their problems. The tremendous level of hype surrounding cloud services causes many business people to think all their apps and data need to be moved to the cloud, the sooner the better. It is becoming increasingly common for CIOs and IT managers to find themselves in the role of bringing these cloud dreamers back down to earth.

While public and private cloud will account for an increasing share of total worldwide IT infrastructure spending through 2021, traditional in-house data centers will continue to represent the largest single category. Source: IDC, via SDX Central

Market research firm IDC forecasts that 58.7 percent of worldwide IT infrastructure spending in 2017 will go toward in-house systems, as Jessica Lyons Hardcastle reports in a July 6, 2017, article on SDX Central. This represents a decline of 4.6 percent from 62.6 percent in 2016, and the trend is clearly toward increases in cloud IT spending: According to IDC, cloud infrastructure spending will have a compound annual growth rate of 11 percent from 2017 to 2021, to a projected $45.7 billion that year.

In a December 9, 2017, article in the Economic Times, Deepak Misra and Rajeev Mittal list some common reasons for keeping applications in-house: compliance and regulatory issues, the need for low latency, and the inability of custom legacy apps to run in cloud environments. Misra and Mittal present five typical use cases justifying a cloud-migration plan:

1. Start by bringing the cloud in-house: Rather than maintaining a legacy in-house infrastructure while migrating apps to the cloud one by one, transform your data center to an architecture that is compatible with public and private clouds. The result is greater efficiency, improved performance, enhanced security, and lower operating costs through replacement of outdated multi-vendor servers, storage, and backup.

2. Realize savings quicker by adopting appliances: Appliances that are pre-configured for a specific operation are simple to deploy and run, and they require fewer specialized cloud skills.

3. Extend your private cloud to a hybrid setup: A path many companies take is to begin their cloud strategy by migrating a handful of non-critical workloads to a private cloud as a proof of concept and testbed for tools and techniques. The downside of this approach is the time it takes to “build out” private clouds, which cancels out the speed, cost, and agility benefits of the cloud.

4. Focus on your most-critical applications: It’s counter-intuitive that an organization’s most important systems are also the ones most likely to be running on outdated infrastructure, and those most likely to benefit from an upgrade. For these apps, migrating to high-end servers — whether in public or private clouds — on a single platform and using a unified management system will deliver the most bang for your IT bucks.

5. Replace outdated storage systems with their fast, efficient cloud-ready counterparts: The typical response, as IT departments rush to accommodate the tidal wave of data swamping their systems, has been simply to keep adding to the legacy storage infrastructure. The result is overspending on outdated technology and ineffective data management. Cloud-ready storage allows you to consolidate existing data resources while improving both security and performance.

The IT departments that will come out ahead in the “race” to the cloud are those that get a jump on the future while ensuring that their apps and data will continue to run smoothly and safely in the present — whether they reside in-house or in the cloud. The IT professionals most likely to succeed in the future are those who think like business consultants rather than like data managers.

Original Link

Surmounting Cloud Adoption Challenges With an iPaaS Model

Cloud is not the next big thing anymore. It is now the big thing. Immersive cloud-based technologies have thoroughly altered the IT landscape. Organizations now understand that centralized computing and archaic architectures cannot drive the edge. The push for low latency and hyper-interactivity is driving organizations to adopt cloud-based technologies. Integration is being seen in the light of new approaches, yet organizations face the same old challenges that prevailed in earlier frameworks. This article covers those challenges and ways to circumvent them with the help of an Integration Platform as a Service (iPaaS) framework.

Barriers to Cloud Adoption

A fair share of cloud migration challenges are spawned by weak integration between on-premise and cloud-based applications. These problems keep resurfacing when point-to-point connections and hairball coding are used to integrate applications. Developers write code and hand it to the testing team for validation, and whenever there is a change they must repeat the entire process. This strategy does not scale for companies with thousands of applications.

Once upon a time, applications were developed with little focus on integration with other applications. They were primarily stove-piped applications that offered one or two endpoints, which don’t scale to support an SOA infrastructure, and lengthy code must be redeveloped to accommodate even small changes. The following are some drawbacks of the hand-coded approach:

  • Growth in complexity leading to the formation of an integration hairball.
  • Lack of scalability for Service Oriented Architecture.
  • Increased Total Cost of Ownership (TCO).
  • Long delays in onboarding partner data.
  • Lack of dedicated features for firewall mediation, data management, governance, etc.

Historically, data was brought to the central processing unit (CPU) for processing. That approach broke down as massive amounts of data overwhelmed a single processor, so the response was to bring multiple processors to the data. Each server processed individual elements of the data sets, an arrangement known as parallel processing, in which many CPUs work on many data sets at once. Even in such an ecosystem, organizations struggle to scale processing to keep up with the variety, volume, and velocity of data. As a result, teams encounter difficult challenges while:

  • Deriving true value from their data.
  • Overcoming data silos that restrain capabilities.
  • Getting the skills required for marshaling and orchestrating data.
  • Linking the data with Big Data and other digital initiatives.
  • Safely exchanging data with business partners.

To surmount these challenges, experts recommend that organizations embrace a strategic integration approach based on industry best practices. Hybrid cloud adoption will continue to increase, and organizations that underestimate integration will fail to realize the benefits of cloud migration.

Conventional Approaches to Hybrid Integration

Initially, application leaders used ESB architecture to manage integration between applications and services. However, they realized that an ESB doesn’t support scenarios where new and old applications run in parallel. It lacked the scalability to accommodate new technology initiatives such as Salesforce, Workday, and QuickBooks, and frequent IT intervention was needed at every layer to develop, test, and generate code.

Extract, Transform, and Load (ETL) was also used, but it suffered from drawbacks of its own. It was built to pull data from a structured data repository, and with the advent of Hadoop that model has become outdated: traditional row-and-column ETL cannot handle both structured and unstructured data.

Many modern applications exchange data in Extensible Markup Language (XML), while other systems store data as comma-separated values (CSV). ESB/ETL approaches are overwhelmed when large volumes of data need to be mapped from XML to CSV or vice versa, which makes them a poor fit for high-speed, high-volume projects.

Connecting applications through APIs helps, but it is not a silver bullet for broader B2B integration needs. Specific APIs must be developed for specific integrations, the same API comes under strain when it is reused for multiple integrations, and organizations often need to buy separate licenses for each application integration scenario.

Data has become a critical corporate asset rather than a secondary one. Antiquated integration approaches don’t help teams leverage data and engage customers, and a greater degree of support from data integration tools is required to bring in data from mobile, Internet of Things (IoT), and social channels.

Data security has become an even bigger concern as new and stringent regulations such as the General Data Protection Regulation (GDPR) loom. Conventional approaches don’t provide safe passage through dense API and cloud networks; a reliable pathway is needed that secures data at every endpoint.

Manual Coding and Thorny Challenges

Thorny challenges await application leaders when on-premise systems need to be integrated with cloud-based systems, and disruptions keep resurfacing in manual workflows. To understand the problem, let’s walk through a scenario in which manual steps are used to connect SAP with Salesforce, a powerful combination of two trusted platforms that can help organizations become more productive.

Salesforce offers the Data Loader to integrate with other applications. To download the application, go to Salesforce Setup → Data Management → Data Loader.

Lightning Connect is another approach to connecting an SAP ERP Central Component (ECC) system with Salesforce, and it is the one discussed in this article. Here are the steps to integrate SAP with Salesforce using this tool.

Step 1: Configure the Lightning Connector to perform queries and connect to the SAP ERP system.

Step 2: Log in to the SAP Community Network

  • Select Join Us to get a free login at http://scn.sap.com/
  • A pop-up appears
  • Register so that you are now part of the SAP Community Network

Step 3: Get access to the Public SAP System

Step 4: Configure the Lightning Connector for SAP Access

  • Click Setup → Develop → External Data Sources.
  • Click New External Data Source.
  • Fill out the following information:

  • Label: SAP Data
  • Name: SAP_Data
  • Type: Lightning Connect OData 2.0
  • URL: https://sapes1.sapdevcenter.com/sap/opu/odata/sap/SALESORDERXX/
  • Connection Timeout: 120
  • High Data Volume: Unchecked
  • Compress Requests: Unchecked
  • Include in Salesforce Searches: Checked
  • Custom Query Option: Blank
  • Format: AtomPub
  • Certificate: Blank
  • Identity Type: Named Principal
  • Authentication Protocol: Password Authentication
  • Username: <The SAP-supplied username from Step 2>
  • Password: <The SAP-supplied password from Step 2>

  • Click Save. The Connect to a Third-Party System or Content System page appears.
  • Click Validate and Sync. The Validate External Data Source page appears.

Step 5: Synchronize tables from Salesforce to SAP (creating corresponding custom objects inside Salesforce)

  • Click Sync to allow Salesforce to Read SAP Tables
  • Click SOHeaders to see the custom object and the custom fields

Objects that end in __x are the external objects created by the sync; the custom fields within them end in __c.

Step 6: Create an Apex Class to Retrieve SAP Data

  • Go to Setup → Develop → Apex Classes and click New.
  • Paste the following code into the code editor, then click Save:
public class SAPsalesordersExtension {
    //
    // Read the external object SOHeaders__x, which was created by the OData sync.
    // Use this to display the specific sales order data by customer number via
    // a VF page.
    //
    private final Account acct;
    List<SOHeaders__x> orderList;

    public SAPsalesordersExtension(ApexPages.StandardController stdController) {
        Account a = (Account) stdController.getRecord();
        List<Account> res = [SELECT Id, AccountNumber FROM Account WHERE Id = :a.Id LIMIT 1];
        this.acct = res.get(0);
    }

    public String getSAPCustomerNbr() {
        return acct.AccountNumber;
    }

    public List<SOHeaders__x> getOrderList() {
        if (null == this.orderList) {
            orderList = [SELECT ExternalId, CustomerId__c, SalesOrg__c, DistChannel__c,
                                Division__c, DocumentDate__c, DocumentType__c, OrderId__c,
                                OrderValue__c, Currency__c
                         FROM SOHeaders__x
                         WHERE CustomerId__c = :this.acct.AccountNumber
                         LIMIT 300];
        }
        return orderList;
    }
} // end of OData Apex class

The SAP Sales Order Execution Page Appears

Step 7: Create a Visualforce page to display results

  • Go to Setup → Develop → Pages and click New.
  • The new Visualforce page editor appears.

Include the following information:

  • Label: SAP_oData_Example
  • Name: SAP_oData_Example
  • Description: A simple example of getting SAP data without any middleware!

  • Paste the following code into the editor:
<!-- Note: the tag attributes below (controller bindings, table value/var,
     and facet names) were stripped in the original listing and have been
     reconstructed as reasonable assumptions; adjust them to match your org. -->
<apex:page standardController="Account" extensions="SAPsalesordersExtension">
  <style>
    td { border-bottom: 1px solid rgb(224, 227, 229); background-color: #FFFFFF;
         border-collapse: separate; padding: 5px 2px 4px 5px; font-size: 12px; }
    th { border: 1px solid rgb(224, 227, 229); background-color: #F7F7F7;
         border-collapse: separate; font-weight: bold; padding: 5px 2px 4px 5px;
         font-size: 12px; }
    table { border: 1px solid rgb(224, 227, 229); }
  </style>
  <apex:dataTable value="{!orderList}" var="order">
    <apex:column>
      <apex:facet name="header">Id</apex:facet>
      <apex:outputText>{!order.ExternalId}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Sales Org</apex:facet>
      <apex:outputText>{!order.SalesOrg__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Dist Channel</apex:facet>
      <apex:outputText>{!order.DistChannel__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Division</apex:facet>
      <apex:outputText>{!order.Division__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Customer Id</apex:facet>
      <apex:outputText>{!order.CustomerId__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Document Type</apex:facet>
      <apex:outputText>{!order.DocumentType__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Order Id</apex:facet>
      <apex:outputText>{!order.OrderId__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Order Value</apex:facet>
      <apex:outputText>{!order.OrderValue__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Currency</apex:facet>
      <apex:outputText>{!order.Currency__c}</apex:outputText>
    </apex:column>
    <apex:column>
      <apex:facet name="header">Date</apex:facet>
      <apex:outputText>{!order.DocumentDate__c}</apex:outputText>
    </apex:column>
  </apex:dataTable>
</apex:page>
  • Click Save.
  • The SAP OData page appears

Step 8: Assign the Visualforce page

  • Go to Setup → Customize → Accounts → Page Layouts.
  • Click Edit to modify the page layout. The Account Layout page appears.
  • Drag the Section element onto the layout where you want the new section to appear.
  • A pop-up appears; name the new section and click OK.
  • Locate the newly created Visualforce page in the Visualforce Pages list and drag it into the new section.
  • Click Save to save the updated Accounts page layout.

Step 9: Test Drive Data Movement

  • Click the Accounts tab.
  • Click New to create a new account.
  • Populate it with test data. For example:

  • Account Name: Belmont cafe Inc
  • Account Number: 100001

  • Click the Save button on the Salesforce Account page.
  • Click the newly created account under the Recent Accounts section.

All real-time SAP data will be returned.

This method is cumbersome, and it cannot be reused easily for integrating Salesforce data with other applications. It only provides access to SAP data, and moving large chunks of data can be an uphill task.

iPaaS Approach for Becoming a Cloud-First Enterprise

Previously, only employees generated and accumulated organizational data in computer systems. Now customers and machines also generate data across social channels, forums, online commerce, and more. As a result, organizations must deal with far larger volumes of data from their customer-facing platforms, monitoring systems, smart meters, and the like. The next big challenge is unlocking this colossal amount of data, which holds massive hidden opportunities; processing and refining it during cloud migration is a further challenge that iPaaS can address.

IT experts consider iPaaS the best solution for withstanding the impact of disruption on security, data and analytics, communications, and endpoint technology. Smarter organizations are overcoming their integration weaknesses and setting up future-ready IT architectures with this model. A next-generation iPaaS model delivers compelling business benefits:

  • 3,000x faster lead times.
  • 300x more frequent deployments.
  • 30x faster recovery.
  • 10x lower failure rate.
  • 60x more reusability.

By simplifying application and data integration, iPaaS helps modernize IT architecture and set up a cloud-first enterprise. The framework enables even business users to integrate safely and cost-effectively with a gamut of external and internal business applications and processes. Organizations can onboard new workloads from new channels (social, analytics, cloud, and the Internet of Things) with a few clicks, connect faster with partner networks, bring in data more quickly, reduce total cost of ownership, and accelerate time to revenue.

Leveraging iPaaS: No Code Approach for Integrating a Cloud Application (Salesforce) with an ERP (SAP)

An advanced iPaaS framework automates integration between Salesforce and SAP. It provides a secure bridge to connect Salesforce with ERP and other applications, and ordinary business users can handle exceptions and replicate Salesforce data to other applications more quickly. Here are some steps for Salesforce API integration with SAP.

Adeptia iPaaS interface to connect Salesforce with SAP

Choose from a shared list of Salesforce connections

Step 1: Log into the Adeptia Integration Suite

Step 2: Choose from a variety of Salesforce to SAP Connectors

Create Connections

Step 3: Use Triggers and Actions.

  • Triggers: to move data from Salesforce to the target system.

  • Actions: to sync data from other business applications into Salesforce.

Visually Map Data Fields

Step 4: Map Data between Source and Target fields with drag-and-drop ease

Step 5: Click Save

The data between Target and Source Systems is mapped!

In this way, an iPaaS framework allows normal business users to connect with any business application. Users can update leads, contacts, and campaigns in Salesforce in simple non-technical steps.

Guidelines for Adopting an iPaaS Framework

Adopting an iPaaS framework is a long-term decision, and organizations should evaluate their requirements before investing in the right platform.

It is time for organizations to become more data-oriented instead of system-oriented. View iPaaS adoption as a process that streamlines IT integration in steps. Here are some guidelines for succeeding with it.

Preparing a Proof of Concept (PoC): Data governance and management is as important as any other change management initiative. Organizations need to demonstrate that their iPaaS approach delivers significant value and return on investment, e.g., improved forecasting, a greater degree of personalization, optimized resources, and better-targeted marketing. It is important to consider the continuum of data challenges arriving from a wide variety of data sources. Establishing a strong data governance model requires continuous support from all departments at every layer, and involving every department surfaces more ideas for a sound data governance framework.

Testing the Hypothesis: It is better to fail fast during testing than at later stages. The next step is putting candidate frameworks to the test and shortlisting the model most likely to succeed. Frameworks should be evaluated on data readiness, feasibility, usability, and similar criteria.

Validating the Roadmap: At this stage, the data governance model should be tested in the actual working environment. Application leaders must demonstrate at every stage that the model allows an organization to harness more value with less friction. Some of the factors to consider are capital expenditure, operational expenditure, total cost of ownership, and ROI.

Implementing the Data Governance Model: Then the model has to be handed over to the business teams and embedded into the organization. It is important to verify that the model delivers real-time benefits in a continuous operating environment.

Emerging business needs are driving more innovation in the iPaaS market, which grows denser with every newly added capability. To achieve continued success from the investment, an organization should follow these guidelines and select an iPaaS model with a stable trajectory.

Original Link

Blockchain Powering Loyalty to New Hybrid Cloud Heights

There has been a lot said about the distributed ledger technology known as blockchain, mostly in the realm of financial services and Bitcoin.

It goes beyond the normal usages you might think of with blockchain technologies, such as ICOs where tokens are sold as initial offerings to purchase services from the companies making the offerings.

There are others jumping into the fray, such as large logistics companies attempting to gain better control of shipping standards.

The basis is about trust, being absolutely sure that the thing you are receiving is actually what it’s supposed to be. With that in mind, there are many other interesting opportunities for using blockchain technologies to ensure your applications and customers are interacting through a cryptographically secure, decentralized, tamper-proof network.

Let’s take a look at how this can be implemented using open technologies and watch the recording of this live demo as presented on stage in Sydney, shall we?

Hybrid Cloud and Blockchain

Just this year in Sydney, Australia at the Red Hat Customer Forum, a small team demonstrated a really interesting way to use an open source technology implementation called Ethereum in a travel industry scenario.

There is much more in this scenario, so let’s look at what they are showcasing.

A small team tells the story of how a fictitious travel company, Destinasia, is able to leverage various open technologies and cloud services to gain momentum in their digital transformation journey. They showcase how quickly they’re able to react to changes in their markets in Asia, offering evolving customer experiences across mobile devices and leveraging both their own and third-party services through blockchain distributed ledger technology to implement a loyalty-points-sharing experience across many of their customers’ travel services.

This demo story is packed with a lot of very valid solutions for real-life business problems, demonstrating across this unique storyline how open technologies are enabling businesses to respond to their constantly changing marketplaces.

Imagine the following all available to your business and implemented using open technologies across a hybrid cloud scenario:

  • Imagine the power of loyalty points that can be transferred instantly and securely to any other loyalty program, using a common digital currency.
  • Generate additional revenue streams by sharing existing business assets (think residual revenue).
  • In peak or off-peak season, your entire application and infrastructure stack automatically adjusts itself to meet seasonal business needs.

Now grab a coffee, sit back, get comfortable and enjoy the video recording of the complete Destinasia Travel company’s digital transformation experience:

If you want to see the specifics of the blockchain open technology in the Destinasia solution story, jump to 47:30 in the video.

Leave a comment or reach out directly if you are interested in discussing these topics.

Original Link

Enterprise Hybrid Cloud and Federated Kubernetes

Cloud vendors and visionaries have long promoted the hybrid cloud — a coordinated mix of on-premise cloud with single or multiple public cloud resources — as an ideal path for balancing needs for security and control; agility and flexibility; opportunities for cost-arbitrage, optimization, and reduction; complexity management; and for mitigating risks of single-vendor lock-in.

In a recent blog post, my colleague Akshai Parthasarathy and I outlined key strategic considerations for deciding on an enterprise hybrid cloud approach and made a case for hybrid’s benefits to virtually all large- and medium-sized organizations as well as select smaller businesses.

We made the point that gaining the maximum benefit from a hybrid (or any other cloud) strategy demands adopting newer technologies for composing, hosting, orchestrating, and managing applications. These include:

  • Containers, which hide complexity, vastly accelerate deployment, and enable fluent workload mobility across disparate hosts.
  • Container hosting and orchestration frameworks, which provide a standardized, abstract host environment for workloads; and automate deployment, lifecycle management, resilience and scaling of containerized applications. Most important among these is now Kubernetes, a fast-evolving, practical, and performant open source solution with a large and growing community, widely supported on both public and private clouds.
  • Microservice-based application architectures, which enable real-time, horizontal scaling of granular, containerized application components based on dynamically-changing performance requirements, and support high availability via component redundancy and load balancing.
  • Platform-as-a-Service, Serverless computing, and similar paradigms, which exploit containers, container orchestrators, and service APIs to conceal platform and cloud complexity and let developers focus on applications.

More than the Infrastructure-as-a-Service (IaaS) technologies preceding (and, in some cases, hosting) them, Kubernetes-based container and microservice strategies are now positioned to play a more and more important role in enabling some of the most-critical affordances on which enterprise hybrid cloud success depends. These include:

  • Real, dependable, and rapid workload portability. Standalone containerized workloads can be deployed instantly to any (same version) Kubernetes cluster located anywhere (i.e., on private or public cloud(s)); as can more-complex, multi-container applications (so long as these are composed to use well-understood and widely supported techniques and tools for service discovery and dynamic self-configuration). This enables Ops to locate workloads optimally to meet requirements for security, performance, latency, meet cost-optimization guidelines; or satisfy other business criteria.

  • Continuum of self-service capabilities across private and public platforms. Using container techniques to pre-build, standardize, and make development environments, tools, and components universally deployable across private and public Kubernetes clusters lowers Ops overhead, helps developers be more productive, eliminates many sources of QA problems, and supports modern, agile development processes such as CI/CD.

  • Rapid scaling and automated resiliency. Up to the limits of local cluster capacity, a Kubernetes implementation can easily scale out, load balance, and automate high availability of applications.

Multi-Cloud Operational Abstraction

However, Kubernetes in its simplest implementation — as separate clusters running on-premise IaaS or bare metal, or on various public cloud platforms — is still missing several affordances needed to fully realize hybrid cloud’s long-promised potential.

The first of these might be called ‘operational abstraction’ and is a way of reducing the significant new complexity and provider/platform-specific knowledge requirements of spinning up and lifecycle-managing individual Kubernetes clusters and groups of clusters, running on multiple public cloud platforms and private cloud infrastructure, each with its own operations tools, requirements, and configuration details.

SaaS managed solutions deliver this needed operational abstraction — through a ‘single pane of glass’ — enabling consumption of Kubernetes as a service in a way that’s infrastructure/provider agnostic: functioning equally well on private infra (e.g., as a companion to OpenStack on bare metal, or hosted on OpenStack), or on public clouds from Amazon (AWS), Microsoft (Azure), and Google (GCP). By lightly abstracting Kubernetes in this way, users enjoy one low complexity process model for operations; one set of compatible APIs for automation; and dependable, issue-free workload mobility, but without the heavy cost and flexibility downsides of a solution deployed at the customer premise (e.g., Red Hat OpenShift or Core OS Tectonic).

Resource Abstraction, Scaling, Bursting, and Availability

The second critical affordance for enabling a true hybrid cloud is resource abstraction: the ability (up to whatever point is practical given technical characteristics and operational requirements) to treat multiple clouds/clusters as a single pool of virtualized resources.

This is the province of Kubernetes Federation: a fast-evolving standard for placing multiple Kubernetes clusters running on disparate hosts under management by a specialized, federated control plane. Setting up a Kubernetes federation manually isn’t simple — a common, top-level DNS must be provided; naming conventions for member clusters and other entities are somewhat strict (enabling clusters and their components to be addressed using internet standards-compliant names); credentials must be collected and provided to the federation host; an admission controller, policy engine, and other common components (ConfigMap, DaemonSet, Autoscalers, ReplicaSets, and other constructs relevant also to individual Kubernetes clusters) must be configured, etc. Within the next several months, Platform9 plans to introduce the ability to rapidly configure, deploy, operate and lifecycle-manage Federated Kubernetes control planes across diverse public cloud hosts as well as private clouds.

Once a Kubernetes Federation is established, users gain a range of new and extremely powerful tools for consuming cloud resources rapidly and efficiently and automating a host of complex, intelligent operations with relatively low levels of effort. A single command and .yaml file let you define an application deployment on all the federation’s underlying clusters, which will allow them to collaborate and ensure that the required number of replicas are spread evenly across the clusters (unless configured otherwise) and kept alive. Updates can be propagated to deployments across all clusters, automatically.

Scaling in a Kubernetes Federation can be configured to respect, or effectively ignore, cluster boundaries. Federated HPAs — Horizontal Pod Autoscalers — can be used to ensure that workloads in a federation-wide deployment are spun up, automatically, where required, and moved around to meet local load demands and configured policy objectives. This enables many kinds of automated and deliberate optimization long viewed as essential to a fully realized hybrid cloud model, including:

Automated inter-provider scaling and/or cost- (or performance-)optimized workload placement: For example, moving commodity workloads preferentially to reside on replicas on the public cloud host that currently has the most free reserve capacity, hence the lowest available costs; or placing workloads optimally for fastest response time/lowest latency to users.

“Bursting,” or automated scaling on demand: Often demonstrated in proofs-of-concept, but seldom in practical, generalized ways, Kubernetes Federation enables the use of public cloud resources (effectively limitless) to complement (always limited) private cloud capacity. Rather than tolerating the degradation of application availability under transient high load, apps can be configured to burst, via HPAs, from private cloud to public cloud; scaling out when demand is high, then scaling back when it tapers off.
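
A minimal sketch of that bursting setup, reusing the hypothetical hello-web Deployment from the earlier example and illustrative thresholds, is a Horizontal Pod Autoscaler submitted to the same federation control plane:

 # Federated HPA: the control plane partitions min/max capacity across member
 # clusters and rebalances as local load changes, so demand can spill over from
 # the private cluster into public-cloud clusters and drain back afterwards.
 apiVersion: autoscaling/v1
 kind: HorizontalPodAutoscaler
 metadata:
   name: hello-web
 spec:
   scaleTargetRef:
     apiVersion: apps/v1
     kind: Deployment
     name: hello-web
   minReplicas: 4
   maxReplicas: 40                     # headroom for bursting at peak
   targetCPUUtilizationPercentage: 70  # scale out above 70% average CPU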

High availability, made simpler: Federation offers a simple means for achieving arbitrarily high levels of reliability that “just works”: You can distribute an application’s workloads from private to public cloud; across multiple clusters in a single public cloud region; across geographically separate regions managed by a single provider; or across multiple public cloud providers’ resources; eliminating the risk of downtime due to infrastructure problems, local internet and provider backbone issues, and even regional disasters.

Hybrid Cloud’s Promise: Finally Delivered?

With the addition of Federation, Kubernetes — itself the new top of the modern enterprise cloud stack — is (at long last!) very close to delivering the full scope of benefits long promised by hybrid cloud strategy: agility, high levels of functional automation in operations, numerous avenues for cost optimization, and practical access both to commodified public cloud resources and to more secure (and, depending on the scale of use, often more cost-efficient) private cloud capacity. 

Original Link

An Introduction to Hybrid Multi-Cloud Architectures

When organizations decide to shift their workloads, data, and processes across multiple on-premises, hosted, private, and public cloud services, they need a new approach: hybrid multi-cloud management. This approach requires uniform solutions for billing and provisioning, access control, cost control, performance analysis, and capacity management.

A hybrid multi-cloud architecture is emerging within nearly all enterprises. IT organizations are no longer limited to managing data-centers and a few hosted and managed services providers. Needy lines-of-business teams and impatient IT developers have procured SaaS, IaaS, and PaaS cloud services to overcome resource constraints. Now many enterprises’ IT structures are composed of multi-clouds.

In the IT industry, the tools and technologies needed to craft and manage hybrid multi-cloud architectures are fragmented. Multi-clouds and hybrid clouds bring workload and infrastructure challenges that will drive the development of new cloud management technology. In addition to having to manage resource utilization, performance, and costs of various public and private cloud services, cloud management platforms must also be aware of the integrations and processes that transcend on-premises and cloud execution venues, and interoperate (in some way) with the new multi-purpose hybrid iPaaS that connects them, to assure business continuity.

Organizations plan to migrate their on-premise systems to hybrid multi-cloud when they need a solution for the following challenges and requirements:

  • Users are widely distributed geographically and need to be served from multiple data centers instead of a single data center.

  • Regulations in particular countries or regions (e.g., the EU) limit where data can be stored.

  • An environment where public clouds are used with on-premises resources.

  • A cloud-based application is not resilient, so the loss of a single data center can compromise disaster recovery.

To address these challenges, I’ve introduced two hybrid multi-cloud architectures for migrating an on-premise environment to a hybrid multi-cloud environment. There are many multi-cloud architectures organizations can adopt, namely re-deployment, cloudification, relocation, refactoring, rebinding, replacement, and modernization.

1. Multi-Application Rebinding


In the above hybrid multi-cloud architecture, a re-architected application is deployed partially on multiple cloud environments.

This architecture can be used for the systems that route users to the nearest data center when the primary or on-premise data center fails. In particular, they can be configured to monitor the status of the service to which they are directing the users. If any service is not available, all the traffic will be routed to another healthy instance.

This architecture uses an on-premise cloud adapter (e.g., a service bus or elastic load balancer) to integrate components across different cloud platforms.

Let’s understand with an example.

Here, AC1 and AC2 are two application components hosted on-premise before migration. As both components are independent integrity units, AC1 remains on-premise while two copies of AC2 are deployed on AWS and Azure for disaster recovery. AC1 and the two AC2 components are connected via an elastic load balancer or service bus.

The main benefits of this architecture are that the application’s response rate is maximized and that traffic is routed away from unhealthy services until they become healthy again.

2. Multi-Application Modernization


In this architecture, on-premise applications are re-architected as a portfolio and deployed on the cloud environment.

In the above example, A1, A2, and one application component, AC1, are re-architected as a portfolio and deployed on different cloud providers and on-premise. AC1 is deployed on AWS and A2 is deployed on Azure while A1 is kept on-premise.

This architecture addresses the problem that re-architecting a single on-premise application in isolation does not remove duplicated functionality and inconsistencies.

Multi-Application Modernization analyzes an application as a portfolio to identify opportunities for consolidation and sharing. The separation of workloads enables the identification of components that are shared by more than one solution.

This architecture provides a consistent performance and reduces operational tasks and maintenance costs for shared components.

Conclusion

The introduction of hybrid multi-cloud architectures complements existing migration practices and allows for an engineering approach towards constructing and evaluating the migration plan.

Multi-cloud architectures provide an environment where businesses can build secure and powerful cloud environments outside the traditional infrastructure. Maximizing the impact of multi-cloud, however, means tackling the challenges of app sprawl, unique portals, compliance, migration, and security head-on.

Original Link

Enterprise Hybrid Cloud: Strategy and Cost

A clearly defined cloud strategy is an imperative need during any enterprise’s transition to a cloud-based IT environment. To minimize risks and make the right choice between private, public, and hybrid cloud options, it is important to devise an effective enterprise cloud strategy with the following considerations in mind:

  1. Security and Control
    • Is it important to have physical access to the infrastructure and platforms that are used to provision workloads?
    • Are there any security and compliance requirements that need to be met for customers?
  2. Agility and Flexibility
    • Which cloud choice will adequately meet resource demands for developer and business needs?
    • Which cloud environment will adequately support customers during high demand/peak usage times?
  3. Application Complexity Management
    • Does the IT environment need to support legacy, virtualized, microservices-based, and serverless applications?
  4. Operating Cost Reduction
    • What will the costs of deploying, managing, and maintaining IT infrastructure be with each cloud choice?

Cloud Strategy Definition: Mapping Strategic Considerations to Market Segments

The following chart depicts the importance of these four criteria across large, mid-market, and SMB segments as observed by Platform9:

The importance of each of these strategic considerations can be understood as follows:

  • In large organizations, security and control, application complexity management, and operating cost reduction are typically non-negotiable. Agility and flexibility might have lesser importance in comparison with the other considerations.
  • For the mid-market segment, application complexity management and operating cost reduction are more important than the other two factors.
  • For the SMB segment, agility and flexibility, as well as operating cost reduction, are usually much higher priorities than the other two considerations. These organizations are willing to make tradeoffs for security and control and application complexity management.

Cloud Strategy Definition: Mapping Strategic Considerations to Cloud Options

Let us now discuss how these four factors can influence cloud strategy and choice.

Security and Control: Private clouds are considered more secure than public clouds for these two reasons:

  • Data stored in a public cloud is placed on the cloud provider’s storage and within its data centers. It is not possible to ascertain who amongst the provider’s numerous employees and contractors have access to your data and whether the data has been accessed by any of those individuals.
  • Private clouds offer more controls and customizations over security configurations. With a public cloud provider, you are restricted to a limited set of security tools offered.

Agility and Flexibility: A public cloud provides greater agility and flexibility in comparison to a private cloud. Public clouds, such as Amazon Web Services and Google Cloud Platform, offer massive amounts of capacity and deployments can be architected to better handle dramatic demand spikes. As a result, it can be easier to meet requirements of developers and customers.

Application Complexity Management: Public clouds offer a wide selection of operating systems, virtual machines, microservices, and serverless technologies, and the tools to manage and monitor these types of applications. With a private cloud, you will invest more time and resources for management of newer application paradigms, such as microservices and serverless applications, since existing tools will not offer these management capabilities.

Operating Cost Reduction: Each of the three major public cloud vendors tries to closely match on price and charge only for resources consumed (utility-style pricing). In addition, it is no longer necessary to have an IT team for deploying, maintaining, and upgrading on-premises hardware, hypervisors, and operating systems. However, as experienced by several Platform9 customers, public cloud costs can skyrocket due to large-volume data transfers and unmonitored resources.

It is clear that neither a private-cloud-only nor a public-cloud-only approach can address all of these strategic considerations by itself. Either approach requires enterprises to make tradeoffs.

The importance of a hybrid cloud arises in such situations. A hybrid cloud allows you to harness select benefits from both the public and private cloud pathways while balancing the trade-offs.

Designing the Enterprise Hybrid Cloud: How Should Workloads be Distributed Between Private and Public Clouds

A hybrid cloud can be architected to include a significant proportion of workloads on either the public cloud or the private cloud. In order to intelligently implement a hybrid cloud, it is important to decide what proportion of an enterprise’s workloads will reside on-premises and in the public cloud. The ideal mix of public and on-premises workloads can be determined by considering both cost factors and the type of workload.

Determining Hybrid Cloud Workload Locations Based on Costs

In general, adopting an enterprise hybrid cloud strategy with a combination of proprietary software based private clouds and public cloud(s) will be more expensive than adopting a public cloud only strategy. However, operating a Platform9 SaaS-Managed open source based hybrid cloud solution can provide significant cost savings over a public cloud only strategy. The cost savings achievable in such cases will vary based on the enterprise size (which largely aligns with IT complexity) and the workload distribution between private/public cloud locations.

The following chart shows the cost advantage enterprises can achieve through a hybrid cloud strategy enabled by Platform9 over a public-cloud-only strategy (i.e., 100% of workloads on a public cloud such as AWS).

As can be seen from the chart, for all enterprise segments, as more workloads are located on-prem (and correspondingly, fewer workloads reside in the public cloud), the cost savings of a hybrid cloud approach grow relative to a public-cloud-only option.

Large and mid-market enterprises can achieve higher cost savings over a public-cloud-only approach by locating a very large proportion (> 80%) of their workloads on-premises. For example, with 90% of workloads on-premises and 10% on AWS, cost savings are 55% for large enterprises and 51% for mid-market enterprises, but only 6% for SMBs. The study shows that, for large enterprises, if less than 45% of workloads are on-premises, there is no economic gain over running all workloads on AWS; with such a workload distribution, it might make more sense to consider a 100% public cloud approach. More workloads have to be located on-premises to realize cost savings if a hybrid cloud approach is desired. A similar situation is observed for mid-market enterprises with less than 35% of workloads on-premises.

The study also indicates that for smaller enterprises with lower IT complexity and a smaller number of workloads, a public cloud only approach might be more suitable than a hybrid cloud.

Determining Hybrid Cloud Workload Locations Based on Criticality to Enterprise

Workloads can also be classified as mission-critical or business-critical to arrive at the right on-premises and public cloud mix. Mission-critical workloads are those that are essential to run your business: product/service delivery to your customer, financial transactions for products/services offered, manufacturing, and so on. Mission-critical applications should typically be deployed on-premises to allow the IT team to proactively monitor the entire application and infrastructure stack, and respond to application and system events on a 24×7 schedule. On-premises deployment for mission-critical workloads will also provide more control over security configurations.

Business-critical workloads are those that are amenable to some downtime. Examples include email services provided by Office 365, business intelligence workloads, etc. Using the public cloud for these workloads can reduce operational expenditure without creating substantial risk for the company. Such a strategy also allows the in-house IT team to focus on “keeping the lights on” with mission-critical workloads.

Takeaways

In order to make a strategic choice between public, private, and hybrid cloud options, four factors must be considered: security and control, agility and flexibility, application complexity management, and operating cost reduction.

Among the three cloud models, an enterprise hybrid cloud melds public and private cloud benefits and balances trade-offs. As a result, a hybrid cloud approach has increasingly become the preferred option for enterprises. Platform9 has observed this first hand with many of its customers who start thinking about a private or public cloud only approach and then shift to a hybrid cloud strategy. Intelligent design of workload distributions in a hybrid cloud needs to be made by taking into account both the cost savings achievable and the type of workload.

Original Link

10 Steps to Cloud Happiness (Step 3): Adding Cloud Ops

Every journey starts at the beginning, and this journey’s no exception.

As previously presented in the introduction to this series of articles, you’ll be taken through the 10 steps to your cloud happiness.

This journey focuses on the storyline that you’re interested in due to the push towards a digital transformation and the need to deliver applications into a cloud service.

This focus on application delivery and all the new moving parts, like containers, cloud, platform as a service (PaaS), and digital journeys might leave you searching for that simple plan to get started. There is nothing like getting hands-on to quickly leverage the experience you’ve acquired over the years, so let’s dive right in.

Previously you were shown how to get a cloud and the use of a service catalog, so what’s next?

Cloud Operations

A cloud is of little use if all you can do is deliver applications to it; you also have to operate it. This need becomes apparent when trying to manage a diverse landscape of applications, infrastructure, and reporting across a hybrid-cloud infrastructure.

To give you the feel of solid operations happiness, you’ll be interested in an open technology-based cloud management tool. Managing a complex, hybrid IT environment can require multiple management tools, redundant policy implementations, and extra staff to handle the operations. Red Hat CloudForms simplifies IT, providing unified management and operations in a hybrid environment.

The following provides the Red Hat CloudForms experience by installing it in a container on any OpenShift Container Platform (OCP). Below are the instructions that include installing OCP as outlined in step one of this series called Get a Cloud.

  1. First, ensure you have an OpenShift container-based installation in place (see step one of this series, Get a Cloud).
  2. Download and unzip.
  3. Run the ‘init.sh’ or ‘init.bat’ file (‘init.bat’ must be run with administrative privileges):

 # The installation needs to be pointed to a running version
 # of OpenShift, so pass an IP address such as:
 #
 # $ ./init.sh 192.168.99.100   # example for OCP

  4. Follow the displayed instructions to log in to your brand new Red Hat CloudForms!

Once installed, you can follow the readme instructions to add a provider to start generating reporting data. The rest of the exploration is left up to the user, as there are many roads to travel when using Red Hat CloudForms.

Rest of the story

If you are looking for the introduction to this series or any of the other steps:

  1. Get a Cloud
  2. Use a Service Catalog
  3. Adding Cloud Operations
  4. Centralize Business Logic
  5. Real Process Improvement
  6. Human Aspect
  7. Retail Online Web Shop
  8. Online Travel Bookings
  9. Financial Services Examples
  10. Agile Cloud Service Integration

So stay tuned as this list is tackled one by one over the coming weeks and months to provide you with a clear direction toward your very own application delivery and cloud happiness.

Original Link

The Promise of Multi-Cloud

Several reports by industry think tanks have indicated that multi-cloud adoption is at an all-time high. IDC says that 2017 will witness more than 85 percent of enterprises committing to multi-cloud architectures. A recent Dimensional Research survey of more than 650 IT decision-makers found that 77 percent of businesses are planning to implement multi-cloud architectures in the near future.

But really, what is multi-cloud? An overhyped fad? Or a brilliant technology?

In this post, we look at what are the advantages of multi-cloud and why enterprises should construct a multi-cloud strategy.

Multi vs. Hybrid

Multi-cloud is almost always confused with hybrid-cloud in the business world. Let’s clear that up for once and for all.

Multi-cloud is when businesses leverage multiple cloud service providers such as AWS, Azure, OpenStack, etc, to power multiple applications, and also includes on-premise infrastructure. On the other hand, a hybrid cloud is a bridge that connects multiple deployment platforms, such as public cloud, private cloud, dedicated, and on-premise servers.

Advantages of Multi-Cloud

It’s not that hard to understand why enterprises are scrambling to add multi-cloud in their infrastructure arsenal, given its numerous benefits:

Match the Right Cloud With the Right Requirements

For example, using public clouds such as AWS and Azure for quickly scaling up and down to meet customer demand during a sale period, while banking on private cloud for proprietary applications.

Avoiding Vendor Lock-In

With only one vendor to rely on, you are tied to specific protocols, standards, and tools that might not work in your favor, and migrating to another cloud later may prove costly.

Improve Disaster Recovery and Geo-Presence

Having multiple cloud data centers in different countries helps reduce the risk of downtime and also leads to lower latency.

New Tech: Serverless and Containers

Upcoming technologies in the cloud, such as containers and serverless, are only making the deal of multi-cloud sweeter. If moving an application from one cloud to another is a challenge, behold containerization. If you want to burst to a cloud only for a specific function deployment, then serverless is the key. Let’s look at these two in some more detail.

Containers

What if a developer wants to move an application from AWS to Microsoft Azure or vice versa in a multi-cloud setup? That’s where container technology steps in. Containerization is an OS-level virtualization method to deploy and run distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, containers, are run on a single control host and access a single kernel.

Serverless

Serverless computing (Functions-as-a-Service) executes function code that developers write using only the precise amount of compute resources needed to complete the task, slashing costs.

Enterprises Need a Multi-Cloud Strategy

Isn’t multi-cloud a great approach? By adopting a multi-cloud strategy, enterprises can cherry pick the best technologies and services from different cloud service providers to create the solution that best fits their needs. And that’s why enterprises are lining up for it.

While adopting multi-cloud, it is necessary to have a multi-cloud strategy in place that covers governance, applications and data, and platforms and infrastructure.

Having a multi-cloud strategy helps in the following ways.

Tamp Down Capex

By using cloud technology, enterprises reduce infrastructure costs and shift to modest operational costs.

Ensure Security

Security is a given, as the cloud service providers are zealous about data protection and ensure that stringent perimeter security measures are taken.

Finalize Cloud Orchestration Solutions

Multi-cloud comes with its set of challenges. Therefore, it is vital to opt for a multi-cloud orchestration solution that provides automation capabilities to effectively consume and govern cloud resources.

Multi-cloud is definitely the way forward for enterprises that want to undergo a digital transformation. However, before rushing into things, it is essential to formulate a strategy for multi-cloud technology to deliver on its promise.

Hear It From the Experts!

On October 24, we hosted the webinar “The Promise of Multi-Cloud”, in which Mike Stipe (Chief Revenue Officer, CloudEnablers) and cloud aficionado Krishnan Subramanian (Founder, Rishidot Research) shared their thoughts on what multi-cloud is and how enterprises can leverage it to drive agility and innovation, while also shedding light on the future of multi-cloud.

Get the webinar recording here.

Original Link

Constructing Your Cloud Infrastructure for Performance

As predicted, it’s prediction time for 2018! The year ahead will bring more technology advancements, and advancements for the people using that technology as they figure out what will work best for their users. Cloud infrastructures come in many different flavors these days, but they all need to perform well and consistently if they’re going to succeed.

If you’re using cloud now, and planning to use more of it (that probably covers most enterprises), here are a couple of interesting cloud computing trends to watch. Some of them, like the Internet of Everything concept, may not be coming that soon to your data center, but the rise of 5G and continually improving internet quality will likely be welcome news for a lot of IT teams. Increasing storage capacity from cloud storage providers may also make a big difference. Those providers are using larger-capacity equipment in their data centers to handle the demands of enterprise cloud computing. Those demands include the use of PaaS and IaaS, but in 2018, SaaS is expected to be the most highly deployed cloud service, according to a Cisco survey.

For more details on the future of enterprise cloud, here’s a look at how companies are settling into a hybrid cloud environment. VMware, the on-premises virtualization leader, has released several management-as-a-service (MaaS) products so that enterprises can manage their multiple cloud systems through a SaaS-based product. Its virtual networking product also lets systems and network managers control apps and services for users in many locations. And as we’ve seen elsewhere, enterprise IT will continue finding a balance between managing cloud-native applications and on-premises tools. Containers may play an important part there.

Cloud providers are also adjusting to enterprise demands by offering network options for faster cloud connections. AWS, Google, and Microsoft have all released private connection options for customers to connect from enterprise data centers to a public cloud or colo data center. Larger businesses are choosing these networks for security and performance reasons, and new customers like the option for the big initial data transfer to the cloud. Keeping data as close to end users as possible, by matching up user locations with cloud data centers, is a good foundation for improving network performance.

Hybrid cloud and multi-cloud deployments are making sense for a lot of enterprises, mostly because of their flexibility. At ONUG’s fall conference, some leading companies talked about the buy-vs.-build decision that’s now a part of IT infrastructure growth. One surprise is that it doesn’t necessarily come down to cost; it has more to do with the competitive advantage that building a product in-house could bring. Technology-providing companies are also, obviously, more likely to build a missing piece themselves. And of course, having the right mix of engineering experts is important for any company building its ideal cloud infrastructure.

Original Link

Hybrid Cloud: Teaching Old Data Centers New Tricks

Oracle/mainframe-type legacy systems have short-term value in the “pragmatic” hybrid clouds described by David Linthicum in a September 27, 2017, post on the Doppler. In the long run, however, the trend favoring OpEx (cloud services) over CapEx (data centers) is unstoppable. The cloud offers functionality and economy that no data center can match.

Still, there will always be some role for centralized, in-house processing of the company’s important data assets. IT managers are left to ponder what tomorrow’s data centers will look like, and how the in-house installations will leverage the cloud to deliver the solutions their customers require.

Spending on hybrid clouds in the U.S. will nearly quadruple between 2014 and 2021. Source: Statista, via Insight

Two Disparate Networks, One Unified Workflow

The mantra of any organization planning a hybrid cloud setup is “Do the work once.” Trend Micro’s Mark Nunnikhoven writes in an April 19, 2017, article that this ideal is nearly impossible to achieve due to the very different natures of in-house and cloud architectures. In reconciling the manual processes and siloed data of on-premises networks with the seamless, automated workflows of cloud services, some duplication of effort is unavoidable.

The key is to minimize your reliance on parallel operations: one process run in-house, another run for the same purpose in the cloud. For example, a web server in the data center should run identically to its counterparts deployed and running in the cloud. The first step in implementing a unified workflow is choosing tools that are “born in the cloud,” according to Nunnikhoven. The most important of these relate to orchestration, monitoring/analytics, security, and continuous integration/continuous delivery (CI/CD).
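
As a rough illustration of the “do the work once” idea, the sketch below defines a single web-server specification and renders the same definition for every target. The ServerSpec class, the render helper, and the target names are hypothetical stand-ins for whatever orchestration or CI/CD tooling you actually use.

```python
# Hypothetical sketch: one server definition, rendered identically for every
# environment, so the in-house web server matches its cloud counterparts.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerSpec:
    image: str          # same container image everywhere
    port: int
    replicas: int
    tls: bool = True

def render(spec: ServerSpec, target: str) -> dict:
    """Produce the per-target deployment request from the single shared spec."""
    return {
        "target": target,            # e.g. "on-prem", "aws", "azure"
        "image": spec.image,
        "port": spec.port,
        "replicas": spec.replicas,
        "tls": spec.tls,
    }

web = ServerSpec(image="registry.example.com/web:1.4.2", port=443, replicas=3)

# The spec is authored once; only the target differs per environment.
for target in ("on-prem", "aws", "azure"):
    print(render(web, target))
```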

Along with a single set of tools, managing a hybrid network requires visibility into workloads, paired with automated delivery of solutions to users. In both workload visibility and process automation, cloud services are ahead of their data center counterparts. As with the choice of cloud-based monitoring tools, using cloud services to manage workloads in the clouds and on-premises gives you greater insight and improved efficiency.

The components of a Cloud Customer Hybrid Integration Architecture encompass the public network, a cloud provider network, and an enterprise network. Source: Cloud Standards Customer Council

The View of Hybrid Clouds From the Data Center Out

Process automation is facilitated by orchestration tools that let you automate the operating system, the application, and security configuration. Wrapping these components in a single package makes one-click deployment possible, along with other time- and resource-saving techniques. Convincing staff of the benefits of applying cloud tools and techniques to the management of in-house systems is often the most difficult obstacle in the transition from traditional data centers to cloud services.

Role reversals are not uncommon in the tech industry. However, it took decades to switch from centralized mainframes to decentralized PC networks and back to centralized (and virtualized) cloud servers. The changes today happen at a speed that even the most agile IT operations find difficult to keep pace with. Kim Stevenson, VP and General Manager of data center infrastructure at Lenovo, believes the IT department’s role is now more important than ever. As Stevenson explains in an October 5, 2017, article on the Stack, the days of simply keeping pace with tech changes are over. Today, IT must drive change in the company.

Data Center Evolution Becomes a Hybrid Cloud Revolution

The only way to deliver the data-driven tools and resources business managers need is by partnering with the people on the front lines of the business, working with them directly to make deploying and managing apps smooth, simple, and quick. As the focus of IT shifts from inside-out to outside-in, the company’s success depends increasingly on business models that support “engineering future-defined products” using software-defined facilities that deliver business solutions nearly on demand.

Standards published by the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) define four data center tiers, from a basic modified server room with a single, non-redundant distribution path (tier one) to a facility with redundant capacity components and multiple independent distribution paths (tier four). A tier one data center offers limited protection against physical events, while a tier four implementation protects against nearly all physical events and supports concurrent maintainability, so a single fault will not cause downtime.

These standards addressed the needs of traditional client-server architectures characterized by north-south traffic, but they come up short when applied to server virtualization’s primarily east-west data traffic, as Network World’s Zeus Kerravala has pointed out.

Laying the Groundwork for the Cloud-First Data Center

Kerravala cites studies conducted by ZK Research that found 80 percent of companies will rely on hybrid public-private clouds for their data-processing needs. Software-defined networks are the key to achieving the agility that future data-management tasks will require. SDNs will enable hyper-converged infrastructures that provide the servers, storage, and networks modern applications rely on.

Two technologies that make HCIs possible are containerization and micro-segmentation. The former allows an entire runtime environment to be virtualized nearly instantaneously, because containers can be created and destroyed quickly; the latter supports predominantly east-west data traffic by creating secure zones so that traffic does not have to be routed through perimeter firewalls, intrusion-prevention tools, and other centralized security components.

IT executives are eternal optimists. The latest proof is evident in the results of a recent Intel survey of data security managers. Eighty percent of the executives report their organizations have adopted a cloud-first strategy that they expect to take no more than one year to implement. What’s striking is that the same survey conducted one year earlier reported the same figures. Obviously, cloud-first initiatives are taking longer to put in place than expected.

Sometimes, an unyielding “cloud first” policy results in square pegs being forced into round holes. TechTarget’s Alan R. Earls writes in a May 2017 article that the people charged with implementing their companies’ cloud-first plans often can’t explain what benefits they expect to realize. Public clouds aren’t the answer for every application: depending on the organization’s experience with cloud services and the availability of cloud resources, some apps simply work better, or run more reliably and efficiently, in-house.

An application portfolio assessment identifies the systems that belong on the cloud, particularly those that take the best advantage of containers and microservices. For a majority of companies, their data center of the future will operate as the private end of an all-encompassing hybrid cloud.

Original Link