
Horizontal Scaling

Reasons to Scale Horizontally

Here at Wallaroo Labs, we build Wallaroo, a distributed stream processor designed to make it easy to scale real-time Python data processing applications. That’s a real mouthful. What does it mean? To me, the critical part is the “scale” from “easy to scale.”

What does it mean to easily scale a data processing application? In the case of Wallaroo applications, it means that it’s easy to scale those applications horizontally.

Original Link

Stateful and Stateless Horizontal Scaling for Cloud Environments

Horizontal scaling (adding more servers to the cluster) is commonly used to improve performance and provide high availability (HA). Its key advantage is that it lets you increase capacity on the fly and gives you more freedom to grow. At the same time, it requires the application to be carefully designed so that it stays synchronized across all instances in the cloud. Jelastic tries to ease this process as much as possible so that admins don’t waste time on reconfigurations.

Below, we’ll go over the specifics and benefits of horizontal scaling in Jelastic PaaS and then walk step by step through setting up triggers for automatic horizontal scaling.

Original Link

Vertical Scaling and Horizontal Scaling in AWS

Scaling an on-premise infrastructure is hard. You need to plan for peak capacity, wait for equipment to arrive, configure the hardware and software, and hope you get everything right the first time. But deploying your application in the cloud can address these headaches. If you plan to run your application on an increasingly large scale, you need to think about scaling in cloud computing from the beginning, as part of your planning process.

There are two main ways to accomplish scaling, which is simply a transformation that enlarges or shrinks a system’s capacity: vertical scaling and horizontal scaling. Let’s look at these scaling types in the context of AWS.

Vertical Scaling

For the initial users, say up to 100, a single EC2 instance such as a t2.micro or t2.nano would be sufficient. That one instance would run the entire web stack: the web app, database, management tooling, and so on. This architecture is fine until your traffic ramps up. At that point you can scale vertically, increasing the capacity of your EC2 instance to address the growing demands of the application. Vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine. AWS provides instances with up to 488 GB of RAM or 128 virtual cores.
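
As an illustration, resizing an instance with the AWS SDK for Python (boto3) follows the stop, modify, start pattern below. This is a minimal sketch, assuming an existing instance (the instance ID, region, and target type are placeholders) and that a brief stop/start window is acceptable:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
instance_id = "i-0123456789abcdef0"                  # hypothetical instance ID

# Vertical scaling: stop the instance, change its type, then start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move from a small type to a larger one to add CPU and RAM.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```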

There are a few challenges with this basic architecture. First, you are using a single machine, which means you have no redundant server. Second, the machine resides in a single Availability Zone (AZ), which means your application’s health is bound to a single location.

To move beyond the limits of vertical scaling, start by decoupling your application tiers. Application tiers are likely to have different resource needs, and those needs may grow at different rates. By separating the tiers, you can compose each tier from the instance type best suited to its particular resource needs.

Now, try to design your application so it can function in a distributed fashion. For example, you should be able to handle a request using any web server and produce the same user experience. Store application state independently so that subsequent requests do not need to be handled by the same server. Once the servers are stateless, you can scale by adding more instances to a tier and load balance incoming requests across EC2 instances using Elastic Load Balancing (ELB).
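
One common way to keep the web servers stateless is to move session data into a shared store that every instance can reach. Below is a minimal sketch, assuming a Redis instance at a known address and the redis-py client; the hostname and TTL are illustrative, not prescribed by the article:

```python
import json
import uuid

import redis

# Shared session store reachable from every web server (address is an assumption).
store = redis.Redis(host="sessions.internal.example.com", port=6379)

SESSION_TTL_SECONDS = 3600  # expire idle sessions after an hour

def create_session(user_data):
    """Persist session data in the shared store and return its ID."""
    session_id = uuid.uuid4().hex
    store.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(user_data))
    return session_id

def load_session(session_id):
    """Any web server behind the load balancer can resolve the session."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```

With session state externalized like this, any instance behind Elastic Load Balancing can serve any request.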

Horizontal Scaling

Horizontal scaling essentially involves adding machines to the pool of existing resources. When users grow to 1,000 or more, vertical scaling alone can no longer keep up with the request volume, and horizontal scaling is required. Horizontal scalability can be achieved with the help of clustering, distributed file systems, and load balancing.
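
On AWS, the clustering and load-balancing pieces typically map to an Auto Scaling group registered with a load balancer target group. The boto3 sketch below shows one possible shape of that setup, assuming a launch template, subnets, and a target group already exist; every name and ARN shown is hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Horizontal scaling: run N identical instances behind a load balancer
# instead of one ever-larger machine.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # spread instances across AZs
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
)
```

From there, the group can grow or shrink between MinSize and MaxSize as demand changes.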

A loosely coupled, distributed architecture allows each part of the system to be scaled independently. This means a group of software products can be created and deployed as independent pieces, even though they work together to manage a complete workflow. Each application is made up of a collection of abstracted services that can function and operate independently. This allows for horizontal scaling at both the product level and the service level.

How To Achieve Effective Horizontal Scaling

The first goal is to make your application as stateless as possible on the server side. Any time your application has to rely on server-side tracking of what it’s doing at a given moment, that user session is tied inextricably to that particular server. If, on the other hand, all session-related specifics are stored browser-side, that session can be passed seamlessly across literally hundreds of servers. The ability to hand a single session (or thousands or millions of single sessions) across servers interchangeably is the very epitome of horizontal scaling.
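
When session specifics live browser-side, they are usually carried in a signed cookie or token, so that any server holding the shared secret can validate them without consulting shared state. Here is a minimal sketch using only the Python standard library; the secret key and payload format are illustrative assumptions:

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-shared-secret"  # same key on every server

def issue_token(session):
    """Serialize session data and sign it; the browser stores the result."""
    payload = base64.urlsafe_b64encode(json.dumps(session).encode())
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{signature}"

def read_token(token):
    """Any server holding SECRET_KEY can verify and read the session."""
    payload, _, signature = token.rpartition(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or malformed token
    return json.loads(base64.urlsafe_b64decode(payload))
```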

The second goal to keep square in your sights is to develop your app with a service-oriented architecture. The more your app is composed of self-contained but interacting logical blocks, the more you’ll be able to scale each of those blocks independently as your user load demands. Be sure to develop your app with independent web, application, caching, and database tiers. This is critical for realizing cost savings, because without this microservice architecture you’re going to have to scale up each component of your app to the demand level of the tier getting hit the hardest.
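
Once the tiers are separated, each one can be given its own scaling rule. The boto3 sketch below attaches independent target-tracking policies to two hypothetical Auto Scaling groups, one per tier; the group names and CPU targets are assumptions, not values from the article:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale each tier on its own signal instead of scaling the whole stack together.
for group_name, target_cpu in [("web-tier-asg", 60.0), ("app-tier-asg", 45.0)]:
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=group_name,
        PolicyName=f"{group_name}-cpu-target",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    )
```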

When designing your application, you must factor a scaling methodology into the design so that you are ready to handle increased load on your system when that time arrives. This should not be done as an afterthought, but rather as part of the initial architecture and its design.

Original Link

Multidimensional Scalability Capabilities

The IT industry is gradually moving away from the days when operational capabilities were solely in the hands of a bunch of operation engineers. The reactive approach of sitting in front of large screens, monitoring system health, and waiting for a bottleneck to arise before applying a predefined remedy is now being replaced by a smarter, proactive approach.

New-age scalability demands a comprehensive yet dynamically scalable model that addresses all possible grey areas of system architecture. This means that engineering activities are no longer limited to development and quality, but now extend to cover operational activities as well.

Most mid-size to large enterprises now demand a multidimensional, dynamically scalable model that constantly monitors the current state of the environment so that timely decisions can be made. Human-driven operations management has its limitations; a dynamically scalable system needs a well-aligned combination of hardware and software, with an operations management layer that watches the environment and responds to increasing system load by spawning additional system capacity.

The Art of Scalability describes a really useful three-dimensional scalability model, the Scale Cube, which gives us a holistic way to think about every aspect of system scalability. Here are its three dimensions:

  1. X-axis scaling consists of running cloned copies of an application behind a load balancer. If there are N cloned copies, then each copy will handle 1/N of the load.

  2. Y-axis scaling splits the application into multiple subsets of services. Each service is responsible for one or more closely related functions. The SOLID principles are relevant here.

  3. Z-axis scaling is mostly used to scale data storage. Data is partitioned (sharded) across the machines in a cluster, so each machine holds only a subset of the data.
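
To make the Z-axis concrete, requests are routed to a partition based on some attribute of the data, such as a customer ID. Below is a minimal sketch of hash-based routing across a fixed set of shards; the shard hostnames are hypothetical:

```python
import hashlib

# Z-axis scaling: each shard holds a subset of the data, keyed here by customer ID.
SHARDS = [
    "db-shard-0.internal.example.com",
    "db-shard-1.internal.example.com",
    "db-shard-2.internal.example.com",
]

def shard_for(customer_id):
    """Map a customer ID to the shard that stores that customer's data."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# X-axis scaling, by contrast, would send the same request to any of N identical
# clones behind a load balancer rather than to one specific shard.
```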

There are various scalability capabilities which are relevant for building a truly scalable system. These 3P (People/Process/Platform) capabilities are described below.

The sub-capabilities are listed below: first the platform capabilities, then the people capabilities, and finally the process capabilities.

Scalable system and platform

  • Scale-out storage architecture
  • Scale-out application layer architecture
  • Application layer cloning capability
  • Data partitioning and sharding
  • Vertical and horizontal scalability
  • HTTP layer scalability
  • Load balancer layer scalability
  • Operation layer scalability
  • Data storage layer scalability
  • Multidimensional system scalability

Scalable operation management

  • Continuous build, integration, and test
  • Build once deploy anywhere
  • Infrastructure as code
  • Centralized container-based application deployment management
  • Constantly maintained high Mean Time To Data Loss (MTTDL) at data layer

Platform with scalability testing support

  • Facility for benchmark testing
  • Facility for performance testing
  • Facility for reliability testing
  • Facility for load sharing testing
  • Facility for error recovery and failure testing
  • Facility for automated test report generation & dispatch

Clearly defined responsibilities to achieve system scalability objectives

  • Clearly defined roles & responsibilities in team
  • Alignment between scale objectives and roles in team
  • Adequate staffing and training, to meet the skill requirements of teams working on scale objectives

Scalable system architecture expertise

  • Scale-out storage architecture knowledge and expertise
  • Expertise in data partitioning and sharding to meet scalability objectives
  • Expertise in vertical and horizontal scaling techniques
  • Expertise on application layer scalability techniques
  • Expertise on HTTP/FTP layer scalability techniques
  • Expertise on load balancer layer scalability techniques
  • Expertise on operation layer scalability techniques
  • Expertise on data storage layer scalability
  • Multidimensional system scalability expertise

Scalable operation management expertise

  • Continuous integration and test expertise
  • Build once deploy anywhere expertise
  • Infrastructure as code expertise
  • Expertise to maintain Mean Time To Data Loss (MTTDL) at data layer
  • Centralized container-based application deployment management expertise
  • Cloud environment management expertise

System scalability testing expertise

  • Benchmark testing expertise
  • Performance testing expertise
  • Reliability testing expertise
  • Load sharing testing expertise
  • Error recovery and failure testing expertise

Defined process to maintain the right skillset for achieving scalability objectives

  • Adequate staffing process, to meet the skill requirement of teams working on scale objectives
  • Adequate training process, to meet the skill requirement of teams working on scale objectives

Comprehensive system scalability guidelines

  • Clearly defined scale objectives for system in assessment
  • Scale-out storage architecture guidelines
  • Scale-out application layer architecture guidelines
  • Application layer cloning guidelines
  • Data partitioning and sharding guidelines
  • Vertical and horizontal scaling guidelines
  • HTTP layer scalability guidelines
  • Load balancer layer scalability guidelines
  • Operation layer scalability guidelines
  • Data storage layer scalability guidelines
  • Multidimensional system scalability guidelines

Scalable operation management process

  • Continuous integration and test
  • Build once deploy anywhere
  • Infrastructure as code
  • Guidelines to maintain Mean Time To Data Loss (MTTDL) at data layer
  • Centralized container-based application deployment management
  • Cloud environment management guidelines

System scalability testing guidelines

  • Guidelines for benchmark testing
  • Guidelines for performance testing
  • Guidelines for reliability testing
  • Guidelines for load sharing testing
  • Guidelines for error recovery and failure testing

These capabilities apply to organizations with self-managed operations. The next article in this series will cover scalability capabilities for clouds.

Original Link