Hybrid Cloud: Evolving Chapter of 2018

In recent years, IT decision makers and strategists have been focusing on cloud computing, but security-conscious organizations are still hesitant to move workloads and data to the cloud. Building on the fundamental technology behind cloud computing, a new deployment model is gaining the limelight in business: the hybrid cloud.

A hybrid cloud combines public and private cloud deployments. With it, organizations can store protected and sensitive data on a private cloud while leveraging computational resources from the public cloud to run the applications that depend on that data.

Original Link

10 Ways Cloud Computing Makes Your Employees Efficient

Migrating to the cloud is a huge decision, and you need to weigh several pros and cons before making any changes to your infrastructure and ways of working. The ultimate goal of any organizational change is to increase ROI (Return on Investment), which is directly tied to increasing the effectiveness and productivity of employees.

According to surveys done over the last decade, organizations have voted in favor of adopting cloud computing and report that they have benefited from the change. Employee productivity has increased in multiple ways, such as reduced downtime and more efficient communication and collaboration.

Original Link

K8s KnowHow: Using A Service

This is the second article of the Kubernetes Knowhow series. In this article, we will see how to expose a pod to the outside world. In the last article, https://dzone.com/articles/k8s-knowhow-running-first-pod-1, we learned how to run a sample Spring Boot application in a pod and how to start an interactive shell in order to access the pod. However, we could not reach the welcome URL from outside the K8s cluster, for instance from a browser or a REST client. The next step is to expose this pod outside the K8s cluster. Let's get started!

I will start with our pod definition.
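
For a preview of where this is heading, a NodePort Service along the following lines is one common way to expose such a pod outside the cluster. This is a minimal sketch; the names, labels, and ports are illustrative assumptions, not the article's exact manifest.

```yaml
# Illustrative sketch only: labels, names, and ports are assumed.
apiVersion: v1
kind: Service
metadata:
  name: springboot-demo-service
spec:
  type: NodePort            # exposes the Service on a port of every cluster node
  selector:
    app: springboot-demo    # must match the labels in the pod definition
  ports:
    - port: 8080            # port exposed inside the cluster
      targetPort: 8080      # container port of the Spring Boot app
      nodePort: 30080       # externally reachable port on each node (30000-32767)
```

On minikube, running minikube service springboot-demo-service --url would then print the externally reachable URL.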

Original Link

Intro to Cloud Computing: Types and Benefits [Infograph]

In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of your computer’s hard drive. As cloud computing has grown, it has been categorized into four main deployment models, each based on the specific needs of different users. Each deployment model and cloud service provides you with different levels of control, flexibility, and management.

Original Link

Were the Expectations From Cloud Computing Met?

After the introduction of cloud computing technology, some people considered it mere hype while others had high expectations for it. Many were skeptical about adopting cloud technology, but as time passed, a large number came out in favor of adoption. Departments report being self-influenced to migrate to the cloud in 32% of cases, less influenced in 58%, and not influenced at all in 10%. Recent stats suggest that people have seen the following benefits since adopting the cloud:

•    62% increase in productivity;

Original Link

2019 Predictions: What’s Next for Software Defined Storage?

As we head into the heart of predictions season, the tech prophets are working overtime. There are so many streams of emerging technology — some of them converging into rapids — that we all need to arm ourselves with some foresight and guidance for navigating our way through the rush of data and possibilities. 

The first stop on the journey is cloud strategy, namely standardization of orchestration and commoditization of cloud resources. As your digital business grows in scale and complexity, automated capabilities will be critical to maintaining control and visibility. In 2019, you should be figuring out how to optimize savings and efficiency by leveraging the commoditization of hardware, managed services, security solutions, and cloud platforms — but this will only work if you have a robust, overarching orchestration solution in place. 

Original Link

Exploring AWS Lambda Deployment Limits

In one of our last articles, we explored how to deploy machine learning models using AWS Lambda. Deploying ML models with AWS Lambda is best suited to early-stage projects, as there are certain limitations on Lambda functions. However, this is no reason to worry if you need to utilize AWS Lambda to its full potential for your machine learning project. When working with Lambda functions, the size of the deployment package is a constant concern for developers.

Let’s first have a look at the AWS Lambda deployment limits and address the 50 MB package size limit in the official AWS documentation, which is somewhat misleading, since you can make larger deployments with uncompressed files.
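
To illustrate, here is a minimal sketch of deploying the package from S3 instead of uploading it directly, which is the usual way to work with packages above the direct-upload limit. It uses an AWS SAM template; the bucket, key, handler, runtime, and sizing values are assumptions, not part of the article.

```yaml
# Hypothetical SAM template; the S3 bucket/key, handler, and runtime are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MlInferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      MemorySize: 1024
      Timeout: 60
      # Pointing CodeUri at S3 avoids the 50 MB direct-upload limit;
      # the unzipped contents must still fit the 250 MB code size limit.
      CodeUri: s3://my-ml-artifacts/model-package.zip
```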

Original Link

Deploying Spring Boot and MongoDB as Containers Using Kubernetes and Docker

In this tutorial, you’ll Dockerize a sample Spring Boot application that talks to MongoDB for GET/POST REST APIs and deploy it in a Kubernetes cluster.

Prerequisites

  • minikube
  • kubectl
  • docker
  • maven

Docker is a Linux container management toolkit with a “social” aspect, allowing users to publish container images and consume those published by others. A Docker image is a recipe for running a containerized process, and in this guide, we will build one for a simple Spring Boot application.
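
Once the image is built, a Deployment manifest roughly along these lines ties the pieces together. It is only a sketch: the image name, labels, and the MongoDB service host are assumptions, not the tutorial's exact files.

```yaml
# Illustrative Deployment; image name and MongoDB host are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-mongo
  template:
    metadata:
      labels:
        app: spring-boot-mongo
    spec:
      containers:
        - name: spring-boot-mongo
          image: myrepo/spring-boot-mongo:1.0   # the image built with Docker above
          ports:
            - containerPort: 8080
          env:
            # points the app at a MongoDB Service assumed to be running in the cluster
            - name: SPRING_DATA_MONGODB_URI
              value: mongodb://mongo:27017/appdb
```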

Original Link

Doing Cloud Right: Takeaways from Our Recent Jez Humble Webinar

Last month, we hosted our "Doing Cloud Right Webinar" with Jez Humble (DORA CTO, author) and Anders Wallgren (Electric Cloud CTO). In the webinar, Jez and Anders discussed some of the most striking findings of the recent 2018 Accelerate State of DevOps Report (ASODR), including the fact that organizations that "do cloud right" are 23 times more likely to be elite DevOps performers!

Continue reading for some top takeaways from this insightful webinar.

Original Link

Top 5 Benefits of Shared MongoDB Hosting

Shared hosting is one of the most cost-effective and easy-to-setup options for deploying MongoDB in the cloud, and is used by thousands of companies around the world to host their databases. In this post, we outline the top five benefits of using shared MongoDB hosting to help you decide whether it’s the right thing for your business.

Shared MongoDB hosting plans are typically best suited to startups and medium-sized businesses that need to move fast, develop their customer scenarios, or host a development or testing environment for their application. The most important thing to look for is a shared MongoDB hosting solution that is fully managed, so you have the necessary expertise on hand to help you monitor, back up, and troubleshoot your database operations. Otherwise, issues can significantly impact the security or stability of your application and, consequently, the longevity of your business. A managed solution also puts you and your team in a position to focus on building out your application, not getting bogged down by unforeseen database issues.

Original Link

VMware Acquires Heptio: Have the Classic Giants Woken Up?

VMware just announced that they had acquired Heptio, a company started only a couple of years ago by some of the key minds behind the Kubernetes project: Craig McLuckie and Joe Beda, CEO and CTO of Heptio, respectively.

Kubernetes vs. Kubernetes vs. Kubernetes

What is Heptio doing? First of all, through their founding team, Heptio has immense Kubernetes credibility. But the way Heptio has put that credibility to work is very different from what most Kubernetes-centric startups have done. Instead of creating yet another sanctified Kubernetes distribution, Heptio decided to focus on the upstream Kubernetes distribution, the mother of them all. They built the expertise, credibility and, more specifically, the tools to validate (on-premises) customers’ Kubernetes environments and make sure they’ll perform as expected. And this is a big deal!

Original Link

[DZone Research] Data Storage and Database Partitioning

This article is part of the Key Research Findings from the DZone Guide to Databases: Relational and Beyond.

Introduction

For this year’s DZone Guide to Databases, we surveyed software professionals from across the IT industry. We received 582 responses with a 79% completion rating. In this article, we look at the environments in which respondents store their data and how they partition their databases.  

Original Link

The Kubernetes Networking Model

Many web applications today consist of multiple containers, utilizing different Services from different places. Kubernetes effectively streamlines the process of implementing multi-container applications: users can specify exactly how they want to combine different containers within a single app, and Kubernetes then handles rolling them out, maintaining them, and ensuring that all the components remain in sync.

There are other advantages to Kubernetes as well:

Original Link

Comparing Windows and Linux SQL Containers

By several measures, Windows SQL Server containers offer better enterprise support than Linux MySQL or Postgres containers. SQL Server containers provide more backward compatibility, and support for existing apps, storage arrays, and infrastructure.

Windocks has evolved as an independent port of Docker’s open source project to include database cloning, a web UI, a secrets store, and other capabilities. These capabilities are customer-driven and seem to diverge from mainstream Linux development. This article takes a look at the capabilities being driven by Windows customers. Full disclosure: I am a principal of Windocks, and this article focuses on the Windows-based SQL Server containers provided by Windocks.

Original Link

How to Build Hybrid Cloud Confidence

Software complexity has grown dramatically over the past decade, and enterprises are looking to hybrid cloud technologies to help power their applications and critical DevOps pipelines. But with so many moving pieces, how can you gain confidence in your hybrid cloud investment?

The hybrid cloud is not a new concept. Way back in 2010, AppDynamics founder Jyoti Bansal had an interesting take on hybrid cloud. The issues Jyoti discussed more than eight years ago are just as challenging today, particularly with architectures becoming more distributed and complex. Today’s enterprises must run myriad open source and commercial products. And new projects — some game-changers — keep sprouting up for companies to adopt. Vertical technologies like container orchestrators are going through rapid evolution as well. As they garner momentum, new software platforms are emerging to take advantage of these capabilities, requiring enterprises to double down on container management strategies.

Original Link

How Enterprises Should Prepare Themselves Before Migrating to the Cloud

Leading analysts project that 83% of business operations and processes will be in the cloud by 2020, and that 41% of those will be on public cloud platforms. Whatever the projections say, migrating an established business to the cloud is not simple. Various aspects, such as performance, flexibility, security, cost, the threat of data breaches, access control, compliance, and governance, are a matter of concern for enterprises. This means that enterprises must adhere to documented guidelines and some key considerations before they embark on their cloud journey.

If you are enthusiastic about a cloud migration and want to learn how to move to the cloud, this blog will be a handy reference on the key aspects.

Original Link

Kubernetes Demystified: Restrictions on Java Application Resources

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes.

As container technology becomes increasingly sophisticated, more and more enterprise customers are choosing Docker and Kubernetes as the foundation for their application platforms. However, these customers encounter many problems in practice. This series of articles presents some insights and best practices drawn from the Alibaba Cloud container service team’s experience in helping customers navigate this process.
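
To give a flavor of what those restrictions look like in practice, resource requests and limits are declared on the container spec, and the JVM heap is usually capped to fit inside them. The snippet below is a generic sketch with assumed names and sizes, not something taken from the Alibaba Cloud series itself.

```yaml
# Generic sketch; names and sizes are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: java-demo
spec:
  containers:
    - name: java-demo
      image: myrepo/java-demo:1.0
      resources:
        requests:
          memory: "512Mi"   # what the scheduler reserves for this container
          cpu: "500m"
        limits:
          memory: "1Gi"     # cgroup memory ceiling the JVM must stay under
          cpu: "1"
      env:
        # older JVMs ignore cgroup limits, so the heap is often capped explicitly
        # (assumes the image's start script passes JAVA_OPTS to the java command)
        - name: JAVA_OPTS
          value: "-Xmx768m"
```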

Original Link

How to Build Your Next App at 4x the Speed

"In terms of prioritization of new funds for digitalization, investment in digitalization was placed in the top 3 for 8 of 15 industries in 2018." — Gartner

In this new age of digital transformation, we need to get our applications to market as soon as possible. There is so much competition around us, and features and appeal aren’t all that’s required. You don’t want to release your app only to find out your competition beat you to it. And there are also the costs associated with development. Being Agile and lean is the key to survival.

I’ll be talking about an exciting concept called Backend as a Service (BaaS) which accelerates the development process. It essentially aims at automating your backend (more on this later) to give a boost to your development efforts.

Original Link

5 Ways Kubernetes Transforms Your Business

In today’s economy, every business is a software business, and every enterprise CIO is tasked with releasing applications that deliver a high-quality, unified customer experience rivaling that of Amazon, Google, or Netflix.

While the business benefits of software innovation are clearly understood, the IT capabilities needed to support these are more complex (high degree of customization, infinite systems capacity, seamless scalability, ironclad security, cost optimization, and more), and the most effective way to meet these challenges is still evolving, as organizations strive to become better at competing in this digital world.

Original Link

The Cloud and ERP: Choosing the Best Solution for Your Business

As cloud computing continues to evolve, an increasing number of companies are opting to run their business applications in the cloud rather than on-premise. The current trend is that many organizations worldwide are using some form of cloud services and that use is expected to grow in the coming years.

The benefits of migrating to the cloud can be numerous. Not only does cloud computing support business growth by reducing overhead costs and ensuring all-around transparency, but the cloud is also helping businesses drive innovation by eliminating day-to-day operations related to managing infrastructure, in turn placing complete focus on company objectives. By providing anytime, anywhere access to employees across the globe and streamlining overall business processes, the cloud supports business growth through its scalability and ability to free up critical IT resources for strategic initiatives.

Original Link

Building Your Own Docker Images [Video]

To get anything out of Docker, you must know how to build your own Docker images so that you can later deploy your Java application into them. In this quick and practical episode, you will learn how to do so.

Original Link

Reflections From DevOps Enterprise Summit 2018 With Carmen DeArdo

Most enterprises do not yet think of their delivery pipeline as a product. When I mentioned this concept to a few folks, they literally slapped themselves on the head and said something like "that makes so much sense and it’s so simple, why didn’t we think of that!" – Carmen DeArdo, Tasktop

It’s been a week since the lively and deeply enriching DevOps Enterprise Summit 2018 in Las Vegas came to a thundering close, giving us some time to digest and reflect on yet another successful event by the IT Revolution team.

I caught up with Carmen DeArdo, Tasktop’s Senior Value Stream Strategist, about his experience at the event, including his favorite speaker sessions, the event’s major themes, noteworthy conversations he had with the DevOps community, and the launch of our CEO Mik Kersten’s eagerly awaited book, Project To Product, and the pioneering Flow Framework™.

Original Link

What Are the Prerequisites to Learn Cloud Computing AWS?

What is Amazon Web Services (AWS)?

Amazon Web Services, commonly called AWS, is an extensive and secure cloud services platform offered by Amazon. The AWS Cloud, or Amazon cloud, provides businesses with a wide range of services, such as storage options, processing power, networking, and databases, helping them scale and expand. Amazon delivers its services on demand with a pay-as-you-go pricing plan.

AWS offerings were first launched in 2006, and today it is the leading cloud services provider.

Original Link

Migrating Java Applications to Azure App Service (Part 1 – DataSources and Credentials)

Running on the cloud is not only for cool new applications following 12-factor principles and coded to be cloud-native. Many applications could be converted to be cloud-ready with minimal adjustments, just enough to be able to run in the cloud environment. In the following few articles, we will demonstrate how to address the most common migration items in legacy Spring applications: handling JNDI and credentials, externalizing configuration, remote debugging, logging, and monitoring.

This article demonstrates how to migrate Java Spring/Hibernate applications that use JNDI settings to a cloud environment, how to externalize configuration, and how to keep credentials out of code by using Azure Managed Service Identity.

Original Link

Create, Install, Upgrade, and Rollback a Helm Chart (Part 2)

In part 1 of this post, we explained how to create a Helm Chart for our application and how to package it. In part 2, we will cover how to install the Helm package into a Kubernetes cluster, how to upgrade our Helm Chart, and how to roll back our Helm Chart.

Install Chart

After all the preparation work we have done in part 1, it is time to install our Helm package. First, check whether our Helm package is available:
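
As a quick reminder of what "our Helm package" is (a generic sketch, not the exact chart from part 1), the packaged chart carries metadata like the following; its name and version are what the install, upgrade, and rollback steps operate on.

```yaml
# Generic Chart.yaml sketch; the real chart from part 1 may differ.
apiVersion: v1
name: myapp
description: A Helm chart for the sample application
version: 0.1.0        # chart version, typically bumped for each packaged upgrade
appVersion: "1.0"     # version of the application the chart deploys
```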

Original Link

Best Practices and Anti-Patterns for Containerized Deployments

Kubernetes is easily the most transformational cloud technology available today. It is the de facto standard for container orchestration, essentially functioning as an operating system for cloud-native applications.

With its built-in high availability; granular, infinite scalability; portability; and rolling upgrades, Kubernetes provides many of the features that are critical for running cloud-native applications on a truly composable, interoperable infrastructure.
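
To make "rolling upgrades" and scalability concrete in manifest form, here is a minimal, hypothetical Deployment with an explicit rolling-update strategy; all names and numbers are assumptions.

```yaml
# Hypothetical Deployment configured for rolling upgrades and basic HA.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # several replicas give basic high availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one pod down during an upgrade
      maxSurge: 1             # at most one extra pod created during an upgrade
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myrepo/web:2.0
          ports:
            - containerPort: 8080
```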

Original Link

K8s KnowHow – Running Your First Pod

In K8S, a pod is the smallest deployment unit. It’s a collection of one or more containers (preferably just one). All the containers packed into the pod share the same storage, network, and specification of how to run the containers. Let me make it simpler: a running pod represents a process, and that’s it.

For Java developers, let me give you another perspective on a pod. In simple terms, a pod is nothing but an execution environment consisting of a JVM running in the base container plus other services needed to work with the K8S ecosystem; a pod is just a wrapper on top of the JVM. In the pre-container world, imagine virtual hosts on a physical node with different JVMs running on those virtual hosts. With K8S, a pod provides a similar kind of isolation and abstraction in a quite elegant way, using lightweight containers.
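
To make that concrete, a pod definition for a single Spring Boot container is only a few lines of YAML. The image and names below are illustrative assumptions rather than the article's exact manifest.

```yaml
# Minimal illustrative pod: one container wrapping the Spring Boot JVM.
apiVersion: v1
kind: Pod
metadata:
  name: springboot-demo
  labels:
    app: springboot-demo
spec:
  containers:
    - name: springboot-demo
      image: myrepo/springboot-demo:1.0   # assumed image name
      ports:
        - containerPort: 8080             # port the embedded server listens on
```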

Original Link

The Role of Managed Service Providers in Software Development

Managed service providers are becoming a growing influence in the business world. With high-speed internet widely available, and the proliferation of cloud services in general, attitudes towards the remote management of services have changed considerably in a relatively short space of time. Businesses across the spectrum are increasingly turning to managed service providers to take care of some, or all, of their IT needs.

Software developers are just one of the many groups that can benefit from managed service providers. Many IT teams are leveraging external expertise to expand and improve their own development processes. An outside perspective can add much-needed input for organizations keen to streamline and continuously improve their technology value stream. So, while many businesses can be adequately served by non-specific IT service management, others, such as those in software development, require a custom solution and a specialized approach.

Original Link

An Introduction to Serverless Computing: Part 1

The most hyped technology trend in recent times is Serverless Computing. Some may think (going by the name) that there are no servers involved in serverless computing. There are servers running our code, but these servers are not visible in the infrastructure and need no management, handling or provisioning by the development or operation teams.

In serverless computing, or FaaS (Function as a Service), we generally write applications/functions that focus on one thing in particular. We then upload that application to the cloud provider, where it gets invoked via different events, such as HTTP requests, webhooks, etc. More recently, people have also started referring to serverless as BaaS (Backend as a Service). BaaS and FaaS are related in their operational attributes (e.g., no resource management) and are frequently used together.

Original Link

Containers Are and Will Be the New Linux

Linux is the operating system that has revolutionized data centers over the last two decades, and today it is the undisputed leader among application hosting platforms. It’s very hard to imagine deploying mission-critical production workloads on any platform other than Linux.

A similar revolution in packaging, deploying, and hosting applications started a few years ago, when Docker made Linux containers popular. Since then, container adoption across the industry has grown exponentially, and it continues to multiply with each passing day.

Original Link

DevOps Enterprise Summit Las Vegas 2018 — The Best Yet?

It’s already over again — the annual get-together of the brightest DevOps minds (well, the brightest who could make it to Vegas). And in this instance, I want to make sure that what happens in Vegas does not stay in Vegas by sharing my highlights with all of you. It was a great event with a slightly higher focus on operations than last time.

The four trends that I picked up on:

Original Link

How The Cloud is Changing IT’s Role

It was great having the opportunity to meet with Raj Sabhlok, President of ManageEngine, at their Chicago user conference and learn more about his vision for the role of IT in the cloud era.

ManageEngine has been providing IT operations and service management since 2001. Its offerings include Active Directory management, operations management, analytics, service management, endpoint management, and security. It is part of Zoho’s operating system for business, which offers more than 40 business apps to run entire businesses.

Original Link

The Future Of The Application Stack

If you have built and deployed an application in production over the last few years, odds are that you have deployed your code in containers. You might have created and deployed individual containers (Docker, Linux LXC, etc.) directly in the beginning, but quickly switched over to a container orchestration technology like Kubernetes (K8s) or Swarm when you needed to coordinate multi-node deployments and high availability (HA). In this container-driven world, what will the future of the application stack look like? Let’s start with what we need from this “future” application stack.

What Do We Need From This Future Application Stack?

  1. Cloud Agnostic

    We want to be cloud agnostic with the ability to deploy to any cloud of our choice. Ideally, we can even mix in various providers in a single deployment.

  2. On-Premise

    We need to be able to run our application stack on-premise with our own custom hardware, private cloud, and internally managed datacenters.

  3. Language Agnostic

    It almost goes without saying, but I’ll add it in for completeness. The future application stack needs to support all of the popular programming languages.

The Future Application Stack

The future application stack will be composed of a triad of technologies – K8s, Platform-as-a-Service (PaaS), and Database-as-a-Service (DBaaS):

Original Link

Running Java on AWS Infrastructure: How To Put the Bricks Together

Startups tend to grow and expand, and successful startups tend to sustain this progress. I see the variety of possible paths mainly from the engineer’s perspective, from the standpoint of striving to create technically reliable, automated, simple-to-use, and supportive applications, which can be called "alive." I am calling them "alive" here because I see those apps as living beings with their own lifecycles. Now let me elaborate on how this quite abstract idea can be applied to the real world.

I became a member of my current team at the moment when one crucial question had to be addressed: "How are we going to make our product easier to develop and use?" Originally, the company had agreed to deliver an integration with a large third-party system, but there was one obstacle. It is hard, or even impossible, to integrate into a larger system (and I am talking here about medical, financial, or similar areas) when the product is a mere web app: without a backend (AWS Lambdas are not included here) to provide an easy way to scale and customize the app, without a CI/CD flow, without a proper data store (which DynamoDB is not) to effectively aggregate and analyze data (which is becoming a crucial requirement), and, moreover, without a clear and easy way to administer the application itself (by which I mostly mean customer onboarding). Considering that this integration was extremely decisive for our clients’ and the development company’s success, all these impediments were defined as a "dragon" that had to be defeated to make our product "alive."

Original Link

Build a Container Image Inside a K8s Cluster

Learn how to build source code into a container image from a Dockerfile inside a Kubernetes cluster and push the image to IBM Cloud Container Registry, all of this using Google’s Kaniko tool.

So, What is Kaniko?

Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
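
A common way to run it is as a pod whose only container is the Kaniko executor. The sketch below is an assumption-heavy example, not the article's manifest: the Git context, registry hostname, namespace, and secret name are all placeholders.

```yaml
# Illustrative Kaniko build pod; context, registry, and secret name are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git   # source repo containing the Dockerfile
        - --destination=us.icr.io/mynamespace/app:1.0  # assumed IBM Cloud Container Registry target
      volumeMounts:
        - name: registry-credentials
          mountPath: /kaniko/.docker      # Kaniko reads Docker config.json from here
  volumes:
    - name: registry-credentials
      secret:
        secretName: registry-credentials  # docker-registry secret holding push credentials
        items:
          - key: .dockerconfigjson
            path: config.json
```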

Original Link

IBM Acquires Red Hat. Now Who Does Google Buy?

IBM announced yesterday that they had entered into an agreement to acquire Red Hat for $34 billion in cash. That’s one of those big milestones in IT history that will have a profound impact for years to come. But is that totally surprising? Not quite…

At the beginning of the year, as part of my 2018 predictions, here is what I had posted on Twitter:

Original Link

IBM to Buy Red Hat for $34 Billion

Red Hat, everyone’s favorite open source software giant, will soon be under new ownership.

IBM announced on Sunday that it will pay $34 billion to acquire Red Hat and its massive portfolio of OSS. The transaction still needs approval from regulators and shareholders (the latter of whom likely won’t mind, as Red Hat’s stock prices soared 50 percent after the news broke), but the deal is on pace to close in the second half of 2019.

Original Link

Cattle, Pets, and Pink Eye

You’ve likely heard the analogy of cattle versus pets with respect to running servers. While we used to nurture our precious service instances in production, in the age of Docker and Kubernetes we work dispassionately with many instances.

In the old way of doing things, we treat our servers like pets, for example, Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.

Original Link

Wanted: Managed Services for Murdering DevOps

The growth of managed services has provided developers with cloud-based infrastructure management tools, thus making DevOps teams obsolete for startups and small businesses.

After the introduction of Agile methodology, developers, operations engineers, and QA specialists received powerful tools for streamlining the delivery of new software. Their teams became more closely connected, as developers stopped tossing the code over the wall for Ops engineers to deploy and maintain.

Original Link

A Quick Guide to Serverless Computing World

Developing "serverless apps" and deploying "serverless architecture" are gaining a lot more traction in the tech industry. The reason behind the hype of serverless computing is simple: it requires no infrastructure management. Hence, enterprises finds this as a modern approach to lessen up the workload.

BBC, Airbnb, Netflix, and Nike are some of the early adopters of this new approach!

Original Link

Docker Containers and Kubernetes: An Architectural Perspective

Understanding both Docker and Kubernetes is essential if we want to build and run a modern cloud infrastructure. It’s even more important to understand how solution architects and developers can use them when designing different cloud-based solutions. Both VMs and containers are aimed at improving server utilization, reducing physical server sprawl, and lowering total cost of ownership for the IT department. However, looking at adoption over the last few years, container-based cloud deployments have grown exponentially.

Fundamentals of VM and Docker Concepts

Virtual machines provide the ability to emulate a separate OS (the guest), and therefore a separate computer, from right within your existing OS (the host). A Virtual Machine is made up of a userspace plus the kernel space of an operating system. VMs run on top of a physical machine using a “hypervisor,” also called a virtual machine manager, which is a program that allows multiple operating systems to share a single hardware host.

Original Link

Azure Active Directory Is Not Active Directory!

If you’ve been working with Azure for a while, you likely already know this, but it is something I see over and over again with people who are getting started with Azure. Azure Active Directory is not a cloud version of Active Directory; in fact, it bears minimal resemblance to its on-premises namesake.

The question I see over and over again from people new to Azure (I even answered it just this week) is, "How do I join my servers to Azure AD?" People expect, not unreasonably, to be able to use Azure Active Directory just like they have always used Active Directory. So this article is for you if you’re new to Azure and trying to get your head around what Azure AD is, how it works, and how it compares to Active Directory (or not).

Original Link

50 Useful Docker Tutorials, From Beginner to Advanced (Part 1)

Containers bring many benefits to DevOps teams along with a number of security concerns. This post brings you details about 50 Docker training resources that are designed to train beginner, intermediate, and advanced practitioners on current knowledge about Docker.

Containers can be a big help in shipping and deploying your application where it’s needed. But using them adds a layer of complexity to your architecture and can be painful to implement and operate. The introduction of Docker to the IT community transformed the way many departments handled this type of work.

Original Link

Steering the Wheel of Your App Deployment With Helm

Introduction

A package manager is essential as the growing number of services increases complexity in enterprise application development environments. Historically, enterprise app developers deployed on-premise applications with a simple copy and paste of the binaries, and then started writing basic scripts to deploy them. This evolved into package managers like rpm, yum, pip, InstallAnywhere, and so on.

Deployment infrastructure is shifting from on-premise to the cloud, and deployment environments have gone from the OS to container engines and orchestration engines. This shift demands a new kind of package manager that did not exist before.

Original Link

Stateful and Stateless Horizontal Scaling for Cloud Environments

Horizontal scaling (adding several servers to the cluster) is commonly used to improve performance and provide high availability (HA). The important advantage is that it lets you increase capacity on the fly and gives more freedom to grow. At the same time, it requires the application to be carefully designed so that it stays synchronized across all instances in the cloud. Jelastic tries to ease this process as much as possible, so admins don’t waste time on reconfiguration.

Below, we’ll review the specifics and benefits of horizontal scaling in Jelastic PaaS and go step by step through the process of setting triggers for automatic horizontal scaling.
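
The steps below use Jelastic's own UI-driven triggers. Purely as a point of comparison, Kubernetes expresses a similar automatic horizontal-scaling trigger declaratively through a HorizontalPodAutoscaler; the names and thresholds in this sketch are illustrative.

```yaml
# Kubernetes analogue of an automatic horizontal-scaling trigger (not Jelastic-specific).
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # the workload to scale in and out
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add or remove pods to keep average CPU near 70%
```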

Original Link

(Yet Another) Intro to HANA as a Service

After the first mini-codejam on First steps with HANA as a Service at SAP Teched Las Vegas, I’d like to share some insights from the perspective of an on-premise HANA user (if I can call myself a user…). Particularly, an SAP HANA, express edition (ab)user.

In this series of posts, I’d like to cover the fundamental differences from other HANA offerings and some architecture basics, and (it wouldn’t be me otherwise) point out how to get started and play with it.

Original Link

Ingress Controllers for Kubernetes

Kubernetes and DC/OS are a powerful combination for serving the apps that create great customer experiences. Once your microservices-based containerized apps are up and running, you’ll need to expose them to the outside world. This blog post explains several different ways to do this and provides detailed instructions for implementing an ingress controller.

Ingress Options Abound

Kubernetes handles East-West connectivity for services internally within our Kubernetes cluster by assigning a cluster-internal IP which can be reached by all services within the cluster. When it comes to external, or North-South, connectivity, there are a number of different methods we can use to achieve this.
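
One of those methods is an Ingress resource handled by an ingress controller. A minimal, hypothetical example looks like this; the hostname, backend service name, and port are assumptions.

```yaml
# Hypothetical Ingress; host, backend service, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web           # ClusterIP Service fronting the app pods
                port:
                  number: 8080
```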

Original Link