
Cloud Computing

Teraco said to be for sale with Permira seeking exit

Permira is working with an adviser to find a buyer for its data centre company, Teraco Data Environments, people with knowledge of the matter said. Original Link

Alibaba’s future is cloud computing, CEO Daniel Zhang says

Cloud computing revenue for Alibaba increased 90% year-on-year in the third quarter to RMB 5.7 billion. Original Link

IBM is Dancing Again

In 1992, IBM, the biggest technology company of the 20th century, was on the verge of collapse. In 1993, the company was rescued by Louis V. Gerstner Jr., who took over as CEO and got it back on track. He retired in 2002, leaving IBM once again a leading technology company with billions in profits.

In his book, Who Says Elephants Can’t Dance, he explains that he accomplished this through many measures, mainly:

Original Link

Why hybrid cloud is here to stay

Hybrid cloud is a crisp, strategic solution that gives the business a platform for the future while respecting the investments of the past. Original Link

Huawei to build cloud data centre facility in South Africa

Chinese ICT giant Huawei will build a data centre facility in Johannesburg to provide public cloud services, becoming the latest multinational after Microsoft and Amazon.com to unveil such plans. Original Link

The Cloud and ERP: Choosing the Best Solution for Your Business

As cloud computing continues to evolve, an increasing number of companies are opting to run their business applications in the cloud rather than on-premise. The current trend is that many organizations worldwide are using some form of cloud services and that use is expected to grow in the coming years.

The benefits of migrating to the cloud can be numerous. Not only does cloud computing support business growth by reducing overhead costs and ensuring all-around transparency, but the cloud is also helping businesses drive innovation by eliminating day-to-day operations related to managing infrastructure, in turn placing complete focus on company objectives. By providing anytime, anywhere access to employees across the globe and streamlining overall business processes, the cloud supports business growth through its scalability and ability to free up critical IT resources for strategic initiatives.

Original Link

What Are the Prerequisites to Learn Cloud Computing AWS?

What is Amazon Web Services (AWS)?

Amazon Web Services, commonly called AWS, is an extensive and secure cloud services platform offered by Amazon. The AWS Cloud, or Amazon cloud, provides a wide range of services, such as storage options, processing power, networking, and databases, to businesses, helping them scale and expand. Amazon provides these services on demand with a pay-as-you-go pricing plan.

AWS was first launched in 2006, and it is presently the leading cloud services provider.

Original Link

Briefing: Pony Ma on why Tencent’s future is industrial

Tencent’s shift to enterprise users is broadly in line with China’s evolving industrial policy. Original Link

How The Cloud is Changing IT’s Role

It was great having the opportunity to meet with Raj Sabhlok, president of ManageEngine, at their Chicago user conference and learn more about his vision for the role of IT in the cloud era.

ManageEngine has been providing IT operations and service management software since 2001. Its offerings include Active Directory management, operations management, analytics, service management, endpoint management, and security. ManageEngine is a division of Zoho, whose operating system for business spans more than 40 apps to run entire businesses.

Original Link

A Quick Guide to Serverless Computing World

Developing "serverless apps" and deploying "serverless architecture" are gaining a lot more traction in the tech industry. The reason behind the hype of serverless computing is simple: it requires no infrastructure management. Hence, enterprises finds this as a modern approach to lessen up the workload.

BBC, Airbnb, Netflix, and Nike are some of the early adopters of this new approach!

Original Link

Emojis in AWS Instance Names [Comic]

This Gen Z dev is addicted to emojis! He even uses them for AWS instance names! His Gen X manager is totally annoyed.

Do you use emojis in AWS instance names?  Let us know in the comments below!

Original Link

Green Computing and Green Storage Techniques

Green computing and green storage: these terms are gaining currency as carbon footprints grow and we look for ways to shrink them. It is our duty to reduce our ecological footprint as quickly as possible. Did you know that a single email generates a measurable footprint? A short email adds about 4 grams of CO2 equivalent to the atmosphere, and an email with a large attachment can add around 50 grams. Even a text message, plastic bags, paper bags, bottled water, a cappuccino, and large TVs all generate a fair amount of carbon. I bet you were unaware of these impacts!

This means it is our prime responsibility to keep the carbon footprint we generate in check. To that end, data centers and other organizations are going green, opting for green computing and storage options.

Original Link

A Brief History of Edge

How did edge computing begin — and what made its start possible? In just a few decades, the IT world has evolved from mainframes, to client/server, to the cloud, and now to edge computing. How do these eras interconnect, what spurred these evolutionary transitions, and where will we go next? The historical perspective outlined below explains how we landed where we are today: outside of the traditional data center, and at the rise of the era of edge computing.

1960s-1970s: The Mainframe Era

The first computers arrived during the mainframe era. These large, monolithic systems were a treasured commodity that only sizable organizations could afford to deploy and maintain. Companies conducted all network computing in a physical data center, and computing evolved to meet the rising demands for information access and availability.

Original Link

A Comparison of Kubernetes Distributions

Kubernetes is currently one of the most successful and fastest-growing IT infrastructure projects. It was introduced in 2014 as an open source version of Borg, Google’s internal orchestrator. 2017 saw an increase in Kubernetes adoption by enterprises, and by 2018 it had become widely adopted across diverse businesses, from software developers to airline companies. One reason Kubernetes gained popularity so fast is its open source architecture and the incredible number of manuals, articles, and support resources provided by its loyal community.

No wonder that, as with any successful open source project, several distributions can be found on the market (think Linux here), offering various extra features and targeting specific categories of users.

Original Link

Best Practices for Kubernetes’ Pods

Cloud computing is one of the most active and significant fields in modern computer science. For both businesses and individuals, it allows the kind of interconnectedness and productivity gains that can completely transform the way we work and live. However, as anyone who uses cloud services professionally can attest, simply being on the cloud isn’t enough. To utilize this technology to its full potential, businesses need to carefully consider the exact setup they use.

Compatibility is one of the biggest challenges any dynamic IT system faces. In situations where new products, hardware, and software are regularly introduced into the ecosystem, all it takes is one incompatible component to completely disrupt the workflow. For a time, the elegant solution to this problem was the virtual machine.

Original Link

Compete in the Digital Age with Cloud Computing

As the digital era advances, technology is getting closer to people in their day-to-day lives. Our generation faces constant competition driven by the innovations rolling out of the IT industry. Everything is becoming technology-based, with new developments for both existing and upcoming needs. The evolution is evident and caters to a wide range of human needs. One such innovation that changed the world is cloud computing.

Cloud Computing – Past and Present

The idea behind cloud computing was coined in 1961 by John McCarthy, whose concept was to make computing, delivered like a utility, fundamental to the IT industry. Cloud computing as we know it emerged in 2006, and the talk then began about its power to revolutionize the IT industry.

Original Link

Using AWS Lambda for Multi-Location Media Transform

With rising global viewership, it is critical for media giants to serve their customers’ needs. As the number of viewers has grown, so has the number of device formats.

You may have a lot of media files whose dimensions need changing or to which a watermark must be applied, for example. To make such cases seamless, it is imperative to offer content in multiple formats at multiple locations.

Original Link

AWS Auto Scaling Group: Working With Lifecycle Hooks

Purpose of Lifecycle Hooks

Deploying instances inside an Auto Scaling group usually involves installing applications on them, and this can take time, especially for complex Java and .NET applications. While your applications are being deployed and the instance is not yet ready to serve traffic, the Auto Scaling group or ELB may see the instance state as healthy and start sending traffic to it. This is where lifecycle hooks come into play.

Lifecycle hooks pause an instance in the Pending:Wait state, by default for 60 minutes. This is the window during which you can deploy your applications onto the instance. Here you can find more information on instance states inside an Auto Scaling group.
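As a rough sketch of how such a hook might be registered and later released programmatically (a minimal example assuming boto3; the group name my-asg, hook name wait-for-app-install, and instance ID are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# Pause newly launched instances at Pending:Wait until the application
# is installed; 3600 seconds matches the 60-minute default noted above.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="wait-for-app-install",
    AutoScalingGroupName="my-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=3600,
    DefaultResult="ABANDON",  # terminate the instance if the hook is never completed
)

# Once your deployment script finishes, signal the hook so the instance
# moves on to InService and starts receiving traffic.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="wait-for-app-install",
    AutoScalingGroupName="my-asg",
    InstanceId="i-0123456789abcdef0",
    LifecycleActionResult="CONTINUE",
)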

Original Link

Data Centers Are Partnering for DevOps Transformation

More and more data center executives are looking to partner with specialized DevOps consulting firms to increase operational efficiency and meet customer needs. Both enterprise and commercial data centers that help support private and hybrid cloud deployments are building out DevOps capabilities. DevOps is a set of principles that impacts process, culture, and toolchain, so data centers are looking for help to add automation, speed, and efficiency, to add value for customers, and to find new advantages.

Larger data centers can gain a lot of operational leverage by sharing engineers and tools across multiple customers. Smaller data centers can move up the value chain by offering more application-related services to their users.

Original Link

Alibaba Cloud launches City Brain 2.0

Hangzhou, the first city to embrace the system, dropped from the 5th to the 57th spot on the list for China’s most congested cities. Original Link

IaaS, PaaS, SaaS – Cloud Computing Services Comparison with Advantages

In my previous post, I defined and differentiated how the principles of virtualization can help you understand cloud computing. I identified the main on-demand cloud services: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). This triple concept of cloud computing is in demand, according to a 2014 ICT online survey report.

The Cloud as A Service and Deployment Model

Cloud computing is a way to manage virtualized IT resources. Servers, workstations, networks, and software are remotely managed and deployed. Connected servers are grouped together to divide the load generated on a site and keep a website up even under critical traffic. Cloud servers need no time for hardware installation, so you are essentially purchasing a managed IT service through an online ordering interface: you buy only what you consume. The most popular cloud services are infrastructure as a service, platform as a service, and software as a service.

Original Link

When to Rely on DevOps-as-a-Service

It is a popular misconception that DevOps workflows are centered around automating daily operations with the cloud infrastructure. Quite the contrary: DevOps services can do so much more…

The main reason for ordering DevOps-as-a-Service from the very beginning is obvious — time and cost savings on cloud computing resources involved in software development. This gives DevOps teams more time to design and implement the required infrastructure and use it in all the stages of software delivery. More importantly, it allows experienced DevOps engineers to predict future system bottlenecks and design the cloud infrastructure to avoid them.

Original Link

Cloud Computing Security Challenges and Considerations

Cloud computing, in its many forms, has proven to be a powerful, effective set of technologies that can provide even the smallest enterprise with significant benefits.

However, cloud computing does not come without its own challenges, including those that are security related. Below you will find an overview of the key security challenges faced by cloud computing adopters.

Original Link

How DevOps and the Cloud Are Perfect for Business Success

Most organizations understand that to stay competitive in this rapidly changing world, it is essential to achieve digital transformation. DevOps and cloud computing are oft-touted as crucial means for companies to achieve that transformation. The association between the two can be confusing: cloud computing is about technology, its tools and services, whereas DevOps is about processes and their improvement. The two are not mutually exclusive, and it’s crucial to understand how DevOps and cloud work together to help businesses achieve their transformational goals.

Agility is the core component of this relationship, and DevOps provides the automation behind agile methods. Traditional platforms need weeks or months of planning for the necessary software and hardware and cannot provision these resources quickly on demand the way automated, virtualized cloud platforms can.

Original Link

What Is Event-Driven Programming And Why Is It So Popular?

Event-driven programming is currently the default paradigm in software engineering. As the name suggests, it uses events as the basis for developing the software. These events can be something users do — clicking a specific button, picking an option from a drop-down, typing text into a field, giving voice commands, or uploading a video — or system-generated, such as a program loading.

The central idea of event-driven programming is that the application is designed to react.
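As a minimal, framework-agnostic sketch of the pattern in Python (the event names and handlers here are illustrative), handlers are registered against named events and run only when those events fire:

from collections import defaultdict

# A tiny event dispatcher: handlers subscribe to named events,
# and the application reacts whenever an event is emitted.
handlers = defaultdict(list)

def on(event_name, handler):
    handlers[event_name].append(handler)

def emit(event_name, payload=None):
    for handler in handlers[event_name]:
        handler(payload)

on("button_click", lambda payload: print(f"clicked: {payload}"))
emit("button_click", "save")  # prints: clicked: save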

Original Link

Why Investing In Cloud Engineering Is Good For Your Company

Adobe, the maker of the creative software suite, saw a three-fold increase in stock value after introducing its cloud-based subscription business model. Heineken was able to swiftly reach 10.5 million customers on a global scale using cloud services. Salesforce, Amazon, Microsoft, even Google — every IT behemoth you can think of now lives on the cloud. Reports say that the public cloud market will grow to $178 billion in 2018.

Cloud engineering has become the mainstream technology that is propelling digital enterprises to their next level of performance. It is common knowledge that cloud engineering brings several benefits to businesses, the primary one being cost savings.

Original Link

Five Serverless Aspects to Keep in Mind

Ever wonder how people develop apps so fast? Why the competition always beats you to market?

There’s an obvious answer that most people overlook: those companies don’t bother with servers.

Original Link

Why You Need to Shift to Cloud-Based AWS from IT Support

“Is it necessary to shift from IT support to AWS cloud-based technology?” In my opinion, it is, because of the ocean of opportunities on cloud-based technology platforms.

Perks of a Networking Profile

The basic responsibilities of a networking professional are planning, designing, implementing, and supporting LANs, WANs, and firewalls in production environments. The role also includes troubleshooting and providing technical support for products in different environments.

Original Link

Common Cloud Management Challenges Faced by Enterprises

Taking control of the management of cloud operations in an enterprise isn’t easy. Here are some of the most frequently cited issues that managers of cloud infrastructures face.

Hybrid Cloud Model

To operate a hybrid cloud model, the underlying application, integration, and data architectures need to be revisited, sometimes tweaked, and other times overhauled. New tools for deployment, monitoring, and management are required. Managers in charge of maintaining hybrid IT environments seek to ensure the availability of the complex set of skills, tools, and processes needed to manage hybrid infrastructure on a consistent, global scale.

Original Link

Using DevOps in the Cloud for Improved Productivity

Think about the last time you visited your bank’s physical location to transfer money or stood in the long, spiral queue for a railway reservation.

A majority of us will have to think really hard to answer these questions. The reason is simple: technology is bringing everything to our fingertips, be it banking, grocery shopping, or travel booking. Consumers now expect better-quality, feature-rich products that are simple to use. They judge brands based on how user-friendly their app or website is and how seamless digital transactions are. To keep up with such demands, IT companies can’t afford to continue with the old-style application development cycle, which is time-consuming, inefficient, and rigid. They need to keep looking for better technology and adopt it quickly.

Original Link

Top 8 Reasons Why Small Business Should Adopt Cloud Computing

Cloud computing is the last decade’s most hyped technology. It offers a savvy, multi-purpose, versatile alternative to local storage, since your business can access and store information on an external platform. This has truly redefined how businesses compete in today’s world, and yet it has become such a normal part of our lives that even if you don’t realize you’re using the cloud, you probably already are. In fact, it’s commonly intertwined with our daily routines through things like using social media, making online transactions, and checking our email.

There are many great advantages to the cloud. This is why its adoption rates have soared above an astonishing 95% today. These benefits are especially great for business startups. As such, if you’re not already using the cloud, you should definitely start your migration there. Once you do, you, too, will see how the cloud has transcended from a viable option to a modern-day necessity if you want your business to have a sustainable, competitive advantage. There are many reasons for this.

Original Link

China Tower and Alibaba cooperate in 5G and edge computing

The world’s largest telecom infrastructure provider will be collaborating with Alibaba on cloud computing, edge computing, big data, and 5G. In return, the company will provide infrastructure for Alibaba’s IoT projects. Original Link

Speed, Transparency, and Empowerment: Cloud Is What Oil and Gas Needs

Cloud computing is rolling into the oil and gas sector and bringing the unprecedented speed of innovation into an industry that is thirsty for change. The technology has addressed security concerns that previously held back adoption of cloud-based systems and is rewarding pioneering companies with the transparency to revolutionize their antiquated, on-premise systems.

Today, “62% of businesses in the oil and gas industry are employing cloud-based managed security services to help integrate, manage and improve cybersecurity and privacy,” according to PwC’s Global State of Information Security Survey 2017. The majority of these services are used for “data loss prevention, monitoring and analytics, and authentication.”

Original Link

How to Set Up Your First CentOS 7 Server on Alibaba Cloud

Alibaba Cloud Elastic Compute Service (ECS) provides a faster and more powerful way to run your cloud applications as compared to traditional physical servers. With ECS, you can achieve more with the latest generation of CPUs as well as protect your instance from DDoS and Trojan attacks.

In this guide, we will talk about the best practices for provisioning your CentOS 7 server hosted on an Alibaba Cloud Elastic Compute Service (ECS) instance.

Original Link

What is Serverless and Why Does It Matter?

Let’s not kid ourselves, until Pied Piper’s "new internet" goes public and renders data centers irrelevant, the world will still run on the server. Whether under your desk or in a cloud container, a server is still a server.

How many of us have had conversations with non-techie friends and family asking us what the cloud "is" and whether or not they should be "in" the cloud? Cue eye roll and a pithy explanation of how the cloud is really just someone else’s server.

Original Link

Cloud Workloads, Simplified

Since its inception, cloud computing has come a long way, yet people still possess misgivings and misunderstandings about what it can do, particularly when it comes to workloads.

The Annals of The Cloud

The history of cloud computing, if only conceptually, stretches back further than you may imagine. The cloud genesis, so to speak, dates right back to the 1960s, when renowned computer scientist J.C.R. Licklider dreamed of an "intergalactic computer network." Licklider imagined a time when everyone on the planet would be interconnected and able to access programs and data from anywhere at any time. He may never have specifically called this grand plan "the cloud," but its similarities to what we now utilize daily are striking.

Original Link

Cloud Computing Can Help Your Business Reach the Next Level

If you know what cloud computing is, it isn’t hard to recognize the real benefits of cloud technology. Since its introduction, many organizations and individuals have implemented it and noticed the difference in ease of use and enhanced business processes. The benefits include storing data in the cloud, retrieving that data from any location, and using software and applications on the cloud platform. Small businesses can now use technology that previously wasn’t affordable to them; big businesses can spend more, but small businesses cannot, so cloud computing has been a boon for their owners. Moving your business to the cloud is a safe and smart step because of the many benefits that can be reaped through cloud technology.

Let’s look at the business benefits achieved by migrating to the cloud:

Original Link

Serverless Applications with AWS Lambda: 5 Use Cases

AWS Lambda is a serverless computing service aimed at powering applications. Cloud architects and developers can use it for various use cases, which we list below.

AWS Lambda is one of the most important additions to AWS, as it signifies the next level of developer interaction with infrastructure. The service allows developers to use the AWS infrastructure without ever looking under the hood: Lambda deploys and manages the required infrastructure, monitors server health, logs code execution, and provides detailed statistics. This might look like PaaS, yet there are some crucial differences:
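For context, the unit of deployment in Lambda is just a function. A minimal Python handler might look like this (the event field is illustrative; AWS provisions and scales everything underneath):

def handler(event, context):
    # Lambda calls this function once per event; there are no servers
    # to provision, patch, or monitor on your side.
    name = event.get("name", "world")  # illustrative event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}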

Original Link

Hidden Truth About Containers and Their Management [Video]

Respondents to a recent survey stated that faster deployment time (77%) is the most significant benefit of containers, followed by improved scalability while building applications (75%) and greater modularity (64%). However, lots of companies still are not using containers, for reasons such as a lack of developer experience or the need to modernize legacy applications first. Jelastic CEO Ruslan Synytsky shares his opinion below about containers, their pros and cons, and a way to ease the entry point for those who don’t yet benefit from this technology.


Transcript:

Original Link

Architecture 3.0: A New Era

At the risk of stating the obvious, IT has evolved a lot since its beginnings. From the mainframe to the cloud, a number of steps have been taken and technologies have appeared, and in this context it seems interesting to study the past to try to understand the future, especially since it seems to us that the architect must and will play an increasingly important role.

We thus see three great eras emerging that we can trace from the inception of what is now called "Enterprise IT."

Original Link

Cloud Advancements of 2017-2018

There were some serious cloud advancements in 2017 and 2018 that might have gone undetected by you. We briefly review the ones we consider to be the most important.


The Big Three of Cloud Service Providers — namely, AWS, Azure, and GCP — are the driving forces of modern IT evolution. They are closely followed by IBM, Oracle, Alibaba and the rest of the vendors, yet the most groundbreaking advancements in the public cloud field come from these three providers.


Why do we say so? Let’s take a closer look at what these companies have to say for themselves. Most of the statistics are available in the RightScale State of the Cloud Report 2018.

New Microsoft Azure Tech and Achievements of 2017-2018

Microsoft Azure reported revenue of $5.6 billion in Q4 2017 and year-over-year growth of 98%. Launching a 5,000-person R&D department dedicated solely to designing and implementing new AI- and ML-based products for Azure might seem like old news, since it happened at the end of 2016, but the effects of that work are clearly visible today. Azure is aiming to restructure its cloud offerings into an “end-to-end customer-centric service that will be simple to understand and use,” according to Microsoft CEO Satya Nadella.

“We do not distinguish our servers from our cloud. Intelligent cloud and intelligent edge are the architectural pattern we build for. From SQL server to container service, we assume the distributed computing will remain distributed.” — Satya Nadella, Microsoft CEO

Microsoft is delivering on the strategy voiced above. The company unites the Office 365 productivity suite, Dynamics 365 CRM, and the Azure cloud offerings to provide a new archetype of AI-powered, customer-centric solutions:

  • Microsoft 365 is a new integrated solution from Microsoft that combines Windows 10, Office 365, and the Enterprise Mobility + Security products to enable a fundamental shift in the way the company designs, builds, and delivers apps that address customer needs for businesses of all sizes.

  • All three layers of the cloud pyramid: Azure provides SaaS, PaaS, and IaaS capabilities to enable end-to-end cloud solutions supporting its customers’ business goals. Azure states that it serves 90% of Fortune 500 companies, and Microsoft confidently scored $21 billion in annual revenue from its cloud products for the fiscal year ending June 30, 2018.


  • Holistic on-prem and public cloud suite. As quoted above, Satya Nadella emphasizes his vision of delivering an integral service in which cloud and on-prem offerings are interrelated parts of a logical continuum centered on what customers WANT to achieve and what they NEED to turn their ideas into reality.

  • Microsoft uses its 5,000-person AI R&D department to deliver AI-powered solutions built on HoloLens that revolutionize the way its customers (like car giant Ford) build, test, and deliver new products. And this is only the beginning of a long and lucrative endeavor.

This long-term commitment and vision have helped restructure and reinvigorate Microsoft, making it an undoubted leader in the software development and delivery world. The result of this devotion is a $250 billion increase in Microsoft’s market cap over Nadella’s first 3.5 years as CEO.

New AWS Tech and Achievements in 2017-2018

While Microsoft steadily holds the lead in software development and delivery as a whole, AWS is the undoubted leader in enterprise cloud computing services adoption.


We have briefly outlined the new AWS tech and services introduced at the AWS re:Invent 2017 event last November:

  • AWS Fargate, a managed engine that runs containers without requiring you to provision or manage servers, was announced alongside Amazon EKS, the managed Kubernetes-as-a-Service product that lets customers run Kubernetes clusters for their apps without looking under the hood or configuring anything. These long-awaited additions to the AWS product range put EKS on par with Azure’s AKS.

  • The other well-covered feature was Alexa and the improved ways Amazon uses it to capture and understand various types of data in order to make our households more comfortable. According to the latest reports, more than 4,000 smart home appliances from 1,200 different vendors and brands currently support Alexa. Amazon expected rapid Alexa adoption growth back in 2017, managed to exceed its ambitious expectations, and is going to double down in 2018.

  • Yet another astonishing new feature presented was Amazon SageMaker, a comprehensive AWS platform for developing and training AI algorithms that can be used by people without a PhD in computer science. According to Wired, the Amazon AI development team is led by Alex Smola, an AI researcher with more than 90,000 academic citations. SageMaker provides AWS customers with a simple and affordable way to design, train, and deploy ML models at scale using the AWS infrastructure.

“There are only a few thousand people worldwide capable of designing complex machine learning models.”
— Sundar Pichai, CEO of Google

That said, Amazon SageMaker can turn out to be an excellent platform for business, allowing companies to implement various AI-powered solutions without investing too heavily in the underlying infrastructure or hiring experts with six-figure salaries.

Much more tech was discussed during AWS Summit 2018 London, including new developments for products like Aurora, S3/Glacier Select, and AWS security features. AWS is also placing heavy emphasis on AI, ML, and IoT development, as illustrated by a Jaguar Land Rover case study on connected cars. As Amazon CTO Dr. Werner Vogels puts it, “Amazon has made $22 billion this fiscal year and we have much more ambitious plans for 2018.” Knowing Amazon, we are sure they will deliver on their promise.


Google Cloud in 2017-2018: New Horizons and Victories

2017 and the first half of 2018 were a good period for Google Cloud overall. The company reported making around $4 billion annually and was recognized by the Forrester Wave and Gartner as a leader in public cloud for providing innovative Infrastructure-as-a-Service and delivering the best native security in the field.

One of Google’s major successes was Docker’s adoption of Kubernetes as a natively supported container orchestrator. For quite a long time, Docker tried to persuade the community to use Docker Swarm instead, backing the effort with some impressive statistics. However, a huge part of the community kept pressing Docker to provide native support for Kubernetes, and this long-overdue move was announced in late 2017.

This adoption resulted in a rapid increase in GKE usage, thanks to a serious simplification of setup and configuration. As of now, Google Cloud has more than 4 million paying customers and is the fastest-growing public cloud service provider. While sitting in third place, GCP is well positioned to increase its market share.


Google Cloud is a worthy competitor to AWS and Azure because it is notably cost-efficient and intuitive. Google BigQuery rivals the Azure and AWS products for processing big data and enabling machine learning-powered analytics. In addition, per-second billing is standard for many GCP services, covering both standard instances and serverless computing.

There was a setback, however, which Google Cloud was able to turn into a boost. Most enterprises still have concerns about transitioning their workloads to the cloud, and for IaaS, data security and fraud protection top the list.

One widely discussed incident arose when a dissatisfied customer posted an angry review of Google Cloud’s security and fraud protection policy. The company responded respectfully and rectified that particular situation, and the incident pushed it to update GCP account management policies to ensure better customer experience and satisfaction. The general outcome was rather positive, and the community valued that its interests were kept at heart.

The combination of all the aforementioned factors is a powerful incentive to use Google Cloud, and businesses of all sizes are flocking to that banner.

Final Thoughts on the Cloud Advancements of 2017-2018

As you can see, the public cloud industry is evolving rapidly. Its infrastructure layer has largely matured, and Kubernetes has transformed the way workloads are deployed and scaled, resulting in steady growth of cloud service adoption across all industries. All of the Big Three cloud providers are now AI-centered and aim to provide a competitive edge through the use of machine learning for better big data analytics.

We are sure there is more greatness ahead and we will be glad to tell you about new advancements in the cloud industry as they happen. If you think we missed any ground-breaking advancement in the field of cloud computing — please let us know in the comments below!

Original Link

Importance of Cloud Operations and Continuous Monitoring

Organizations across the world are able to provide consistent, reliable, quality services because of the support rendered by IT operations management (ITOM) and IT operations analytics (ITOA). IT operations that include cloud computing, machine-to-machine (M2M) communications, and the Internet of Things (IoT) are strongly shaped by fast-evolving IT trends. As a result, the organization’s IT department follows various procedures and monitors services, including administrative processes and support for hardware and software, serving internal as well as external clients.

A company that primarily offers technology solutions to its clients should be capable of delivering effective services at the desired quality and cost through its IT operations department. Thus, companies need solutions to help them manage software and hardware while covering other forms of IT support, such as network administration, device management, mobile contracting, and helpdesks. Compared to physical storage, cloud operations are highly flexible and offer great storage capacity as well, meaning the cloud requires less hardware and carries a lower upfront cost.

The IT operations tasks and procedures performed to avoid unplanned outages are usually carried out manually in a monotonous, rigid way. As a result, automating IT operations becomes as critical as any shift towards agile infrastructure, where speed is everything. Here a cloud environment is the most suitable option, since its infrastructure is virtual and scalable and users demand instant access to, and full control of, deployed services. A company’s IT operations should have multiple administrators to oversee the efficiency of cloud computing. Cloud service providers support various requirements “as-a-service,” starting from a simple virtual instance.

The acronym “CloudOps,” for cloud operations, is a trending term. CloudOps ensures that the benefits of cloud-based systems are preserved without IT operations losing control. It is the validation of the best practices and processes that allow cloud-based platforms, applications, and locally residing data to perform well over a long period.

Since enterprises rely more on private and public clouds these days, cloud computing has gained even more importance, and it is helping even small businesses enjoy its benefits. Organizations now benefit from cloud services without the stress of infrastructure management issues, and they can provision for future customization that may be needed to support growth or implement newer technologies. Instead of conventional approaches to operations, CloudOps brings in operational procedures and methods suited to the capabilities of the cloud. The prominent features of cloud operations and management are cloud infrastructure and capacity planning, business process management, performance monitoring, infrastructure monitoring, and incident management.

With CloudOps, timely updates and continuous monitoring save a lot of time and manual effort, as the software handles monitoring and server provisioning by itself. Continuous infrastructure monitoring becomes a routine job that prevents service disruptions like performance bottlenecks, application failures, and system outages.

If you need assistance in assessing your cloud requirements or monitoring your cloud operations, you are in the right place. Our cloud and infrastructure experts are just a call away. Call now for a free consultation.

Original Link

How Developers and Containers Are Accelerating the Cloud Agenda

Jeff Chou is CEO of Diamanti, a company focused on bare metal container infrastructure, with an executive team out of Cisco and Veritas. For DZone’s focus this month on cloud trends for developers, we asked Chou to share his take on how developer requirements are forcing enterprises to rethink infrastructure and embrace the cloud. He also discusses the challenges ahead of IT as container infrastructure is tasked to bridge on-prem and cloud.

DZone:  Tell us about the new pressures that developers and containers are creating for IT infrastructure.

Chou:  We see an obvious pressure on developers to introduce new applications and shorten innovation cycles. It’s creating friction between IT and application owners because the application owner is being measured on time to market, but IT is still being measured on availability, uptime, and governance metrics like compliance and scalability.  

And [since] all of the new projects and technologies are driven by Docker and Kubernetes, microservices and scale-out architectures are the new developer tools that shorten innovation cycles. But they add to that pressure on IT, because it’s all open source, and while it’s easy for developers to build the new class of applications on laptops, it’s completely different when they want to move these applications into production.

DZone:  Are containers accelerating cloud modernization?

Chou: Absolutely. Developers go to the cloud because they hate waiting. It’s a whole lot easier for them to swipe their credit cards and start developing their apps using the Kubernetes service at Amazon than to have to go through IT internally.  Frankly, developers don’t care at all about infrastructure. They hate IT processes and hate waiting for IT. They don’t care if it’s on a VM or on bare metal. And to some degree, they don’t care if it’s in the cloud or on their laptop. They want to develop using their tools, their workflows and they want to develop in containers because it’s easier for them to do bug fixes and releases and testing.

So there’s that contingency of enterprise developers that are starting in the cloud. But then you also have the camp that’s heavily invested in on-prem, who have made the leap into containers, and whose container adoption is leading them on a slower creep towards the cloud. For the on-prem container adopters, we see it as a three-phase journey.

Their day one challenge is how to even get started. What are all of the different container technologies and which ones are they going to use? How are they going to stand up clusters and configure the networking and storage? It’s very difficult and generally takes teams of engineers nine to 18 months. The day two challenge, once they have the infrastructure, is how do they actually run containerized applications in production? They need infrastructure services, they need SLAs, and they need full stack support, all the way up to the open source.

And it’s really at day three that we start to see enterprises expand to the cloud. They want to decouple their applications from the underlying infrastructure, they want to run across multiple geographies, either in private or public cloud. And they increasingly want to do this on bare metal, without the VMware overhead.

DZone:  How much traction does the concept of hybrid cloud have with developers, from your point of view?

Chou: When we ask 10 different companies what hybrid cloud means, we get 15 different answers, so there’s a lot of definition confusion. For a lot of enterprises, when they talk about having a “hybrid cloud solution,” in reality some of their applications run on Amazon, some run on-prem — but there really isn’t coordination between the two.

I think the reality is that the market is still very early on hybrid and even multi-cloud architecture adoption.

And there are a lot of network and storage challenges with multi-cloud and hybrid cloud. How do you do multi-zoning on your networks? How do you do VXLAN? How do you do VPN? How do you do data protection? How do you do data backup? How do you synchronize your data when you’re running a single Kubernetes cluster across 50 miles? These are infrastructure-layer problems that Amazon and the other cloud providers don’t really solve.

Original Link

Six Ways to Ensure You’re Conserving Your Cloud Resources

“I didn’t realize we were wasting so much time and money.”

That is the most likely reaction to your next cloud audit, particularly if it has been more than a few months since your last cloud-spending review. If your management checks aren’t happening at cloud speed, it’s likely you can squeeze more cycles out of whatever amount you’ve budgeted for cloud services.

Keep these six tips in mind when prepping for your next cloud cost accounting to maximize your benefits and minimize waste without increasing the risks to your valuable data resources.

1. Don’t Let the Cloud’s Simplicity Become a Governance Trap.

As Computerworld‘s John Edwards writes, “It’s dead simple to provision infrastructure resources in the cloud, and just as easy to lose sight of…policy, security, and cost.”

Edwards cites cloud infrastructure consultant Chris Hansen’s advice to apply governance from the get-go by relying on small iterations that are focused on automation. Doing so allows problems related to monitoring/management, security, and finance to surface and be remedied quickly. Hansen states that an important component of cost control is being prepared: you have to make it crystal clear who in the organization is responsible for cloud security, backups, and business continuity.

2. Update Your TCO Analysis for Cloud-based Management.

A mistake that’s easy to make when switching to cloud infrastructure is applying the same total cost of ownership metrics for cloud spending that you used when planning the budget for your in-house data center. For example, a single server running 24/7 in a data center won’t affect the facility’s utility bill much, but paying for a virtual cloud server’s idle time can triple your cloud bill, according to Edwards.

Subscription fees dominate cloud total-cost-of-ownership calculations, while TCO for on-premises IT is based almost entirely on such ongoing costs as maintenance, downtime, performance tuning, and hardware/software upgrades. Source: 451 Research, via Digitalist

A related problem is the belief that a “lift and shift” migration is the least expensive approach to cloud adoption. In fact, the resulting cloud infrastructure wastes so many resources that you end up losing the efficiency and scalability benefits that motivated the transition in the first place. The wiser approach is to invest a little time and money up front to redesign your apps to take advantage of the cloud’s cost-saving potential.

3. Monitor Cloud Utilization to Right-Size Your Instances.

Determining the optimal size of the instances you create when you port in-house servers to the cloud doesn’t have to be a guessing game. Robert Green explains in an article on TechTarget how to use steady state average utilization to capture server usage over a set period. Doing so lets you track the current use of server CPU, memory, disk, and network.

Size instances based on average use over 30 to 90 days, correlated to user sessions or another key metric. Any spikes in utilization can be accommodated via autoscaling, which is key to realizing cloud efficiencies. Once you’ve found the appropriate instance sizes, classify your instances as either dedicated (running 720 to 750 hours each month) or spot (not time-sensitive and activated based on demand).

The former, also called reserved instances, may qualify for steep discounts from the cloud provider if you can commit to running them for at least one year. The latter, which are appropriate only for specific use cases, can be purchased by bidding on unused AWS instances, for example. If your bid is highest, your workload will run until the spot price exceeds your bid.
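As a rough illustration of capturing that steady-state figure, the sketch below pulls 30 days of average CPU for a single instance (assuming boto3; the instance ID is a placeholder):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# One average-CPU datapoint per day for the last 30 days; feed the
# result into your instance-sizing decision.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=30),
    EndTime=datetime.datetime.utcnow(),
    Period=86400,
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints:
    steady_state = sum(p["Average"] for p in datapoints) / len(datapoints)
    print(f"30-day steady-state CPU: {steady_state:.1f}%")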

You can take the remaining guesswork out of instance sizing by using a unified orchestration platform that automates instance optimization via real-time cloud brokerage. A clear, comprehensive management console lets you set custom tiers and pricing for the instances you provision, and all costs from public cloud providers are visible instantly, allowing your users to balance cost, capacity, and performance with ease.

4. Be Realistic About Cloud Infrastructure Cost Savings.

Making the business case for migrating to the cloud requires collecting and analyzing a great deal of information about your existing IT setup. In addition to auditing servers, components, and applications, you must also monitor closely your peak and average demand for CPU and memory resources.

Because in-house systems are designed to accommodate peak loads, data-center utilization can be as low as 5 to 15 percent at some organizations, according to AWS senior consultant Mario Thomas, quoted by Steve Ranger. Even when operating at the industry average of 45 percent utilization, companies see the switch to cloud services as an opportunity to reduce their infrastructure costs.

Nearly all organizations choose a hybrid cloud approach that keeps some key applications and systems in-house. However, even a down-sized data center will continue to incur such expenses as leased lines, physical and virtual servers, CPUs, RAM, and storage, whether SAN, NAS or direct attached. Without an accurate analysis of ongoing in-house IT costs, you may overestimate the savings you’ll realize from adopting cloud infrastructure.

Intel’s financial model for assessing the relative variable costs of public, private, and hybrid cloud setups found that hybrid clouds not only save businesses money, they let companies deliver new products faster and reallocate resources more quickly to meet changing demand.

5. Confirm the Accuracy of Your Cloud Cost Accounting.

Making the best decisions about how your cloud budget is spent requires the highest-quality usage data you can get your hands on. Network World contributor Chris Churchey points out the importance of basing the profiles of your performance, capacity, and availability requirements on hard historical data. One year’s worth of records on your actual resource consumption captures sufficient fluctuations in demand, according to Churchey.

Comparing the costs of various cloud services is complicated in large part because each vendor uses a unique pricing structure. Among the options they may present are paying one fixed price, paying by the gigabyte, and paying for CPU and network bandwidth “as you go.” Prices also vary based on the use of multi-tenant or dedicated services, performance requirements, and security needs.

Keep in mind that services may not mention latency in their quote. If your high-storage, high-transaction apps require 2 milliseconds or less of latency, make sure the service’s agreement doesn’t allow latency as high as 5 milliseconds, for example. Such resource-intensive applications may require more-expensive dedicated services rather than hosting in a multi-tenant environment.

6. Run Your Numbers Through the “CloudOps” Calculator.

The obvious shortcoming of basing your future cost estimates on historical data is the failure to account for changes. Anyone who hasn’t slept through the past decade knows the ability to anticipate changes has never been more important. To address this conundrum, InfoWorld‘s David Linthicum has devised a “back-of-the-napkin” CloudOps calculator that factors in future changes in technology costs, and the cost of adding and deleting workloads on the public cloud.

Start with the number of workloads (NW); then rate their complexity (CW) on a scale from 1.01 to 2.0. Next, rate your security requirements (SR) from 100 to 500, then your monitoring requirements (MR) from 100 to 500, and finally apply your CloudOps multiplier (CM), from 1,000 to 10,000, based on people costs, service costs, and other required resources.

Here are a typical calculation and a typical use case:

Using the CloudOps calculator, you can create an accurate forecast of overall cloud costs based on workload number and complexity, security, monitoring, and overall scope.

In the above example, the use case totals $9.8 million: $8.75 million for workload number/complexity using a median multiplier of 5,000; $612,500 for security using a multiplier of 350; and $437,500 for monitoring using a multiplier of 250. Because you’re starting with speculative data, the original calculation will be a rough estimate that you can refine over time as more accurate cloud-usage data becomes available.
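For readers who want to play with the numbers, here is one way to code the calculation (the combining formula is inferred from the worked example above, with NW = 1,000 and CW = 1.75 as assumed inputs; treat it as illustrative rather than Linthicum’s exact method):

def cloudops_cost(nw, cw, sr, mr, cm):
    # Inferred combination: weighted workloads (nw * cw) times the
    # CloudOps multiplier, plus the security and monitoring ratings.
    weighted = nw * cw
    return weighted * cm + weighted * sr + weighted * mr

# Reproduces the $9.8M use case: 1,000 workloads at complexity 1.75,
# security rating 350, monitoring rating 250, CloudOps multiplier 5,000.
total = cloudops_cost(nw=1000, cw=1.75, sr=350, mr=250, cm=5000)
print(f"${total:,.0f}")  # $9,800,000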

The greatest benefit the CloudOps calculator provides is in opening a window into the actual costs associated with ongoing cloud operations rather than merely the startup costs. The “reality check” offered by the calculator goes a long way toward ensuring you won’t make the critical mistake of underestimating the cost of your cloud operations.

Get a jump on cloud optimization via Morpheus’s end-to-end lifecycle management, which lets you initiate requests in ServiceNow and automate your shutdown, power scheduling, and service expiration configurations. Morpheus Intelligent Analytics make it simple to track services from creation to deletion by setting approval and expiration policies, and pausing services in off hours.

Sign up for a Morpheus Demo to learn how companies are saving money by keeping their sprawling multi-cloud infrastructures under control.

Original Link

DevOps and Cloud — A Symbiotic Relationship Aimed at Business Success

DevOps and cloud computing are not mutually exclusive, though the relationship between them has often seemed uncertain and confusing from a layman’s perspective. While cloud computing relates to technology and services, DevOps is more about processes and process improvement. Both, however, are vital to every organization’s journey towards an effective digital transformation.

Cloud-Centric DevOps Automation

Cloud computing’s centralized nature enables a standard and centralized platform for DevOps automation. Usage of the cloud platform resolves issues in centralized software deployment caused by the distributed nature of enterprise systems.

DevOps objectives, including continuous integration and continuous delivery, are supported by most private and public cloud service providers. This ensures centralized governance and control, in addition to reducing the costs associated with on-premises DevOps automation.

DevOps utilizes orchestration systems that can proactively monitor cloud data sets and application workloads. End-to-end automation is a prerequisite for deriving the full advantage of the cloud. By expanding the concept of continuous improvement to cloud-based platforms, DevOps ensures fewer system vulnerabilities and greater security.

DevOps Best Practices for Effective Cloud-Based Computing

DevOps provides several core capabilities that assist organizations in effectively handling cloud challenges. These include:

  • Infrastructure as Code (IaC): IaC comprises the provisioning of servers and the installation of application code, the core components of the system architecture (a minimal sketch follows this list).
  • Automated Application Deployments: DevOps practices facilitate the creation of a fully-automated application deployment pipeline that is of great importance in the development lifecycle.
  • Application Lifecycle Management (ALM): DevOps principles such as continuous integration and delivery help in effective ALM in the cloud.
  • Continuous Quality Assurance (QA) and Testing: DevOps capabilities address the challenges for QA and testing in the cloud by provisioning multiple test environments using cloud-based resources that result in enhanced quality and productivity.
  • Knowledge Sharing with the Cloud Provider: DevOps promotes knowledge sharing between organizations and the cloud providers’ SMEs throughout the application lifecycle, using online communities focused on fostering communication and collaboration, thereby ensuring that most risks are mitigated effectively.
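Declarative tools such as Terraform or AWS CloudFormation are the usual vehicles for IaC. As the minimal Python sketch promised in the IaC item above (assuming boto3; the AMI ID and tag are placeholders), the core idea is that infrastructure lives in version-controlled, repeatable code instead of manual console clicks:

import boto3

ec2 = boto3.client("ec2")

# Declare the server we want in code: the same script, kept in version
# control, yields the same infrastructure every time it is run.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-1"}],
    }],
)
print(response["Instances"][0]["InstanceId"])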

The Significance of Cloud Computing vis-a-vis DevOps

Cloud computing complements DevOps in several ways. Application-specific infrastructure in the cloud enables developers to own more components easily and enhances their productivity and efficiency. It allows developers to create development environments quickly, without needing IT operations to provision resources and infrastructure.

Cloud computing ensures flexible IT infrastructure in addition to enabling greater business agility. It propels IT transformation by helping organizations streamline and embed DevOps processes.

Cloud tools and DevOps automation services eliminate human error and help in establishing repeatability by automating and streamlining the process of building and managing code.

IT Transformation Enabled by DevOps Together With Cloud Technology

Cloud computing coupled with DevOps ensures that scalability becomes an integral part of application development, thereby contributing to reduced infrastructure costs and increased global reach.

Cloud-based operations increase the availability and failover ability of applications in addition to eliminating downtime, which leads to improved business reliability and increased customer satisfaction.

DevOps automation and Infrastructure as Code (IaC) help in reducing the complexity of the cloud and improving system maintenance. Additionally, the automated repeatable processes increase security controls.

Process streamlining and faster access to development environments ensure improved product time to market (TTM).

In Conclusion

By recognizing the importance of DevOps in the cloud and prioritizing its use, organizations can derive a huge number of potential benefits, including enhanced agility and reduced operational costs.

Original Link

Up and Running with Alibaba Cloud Container Registry

Let’s say you are a container microservices developer. You have a lot of container images, each with multiple versions, and all you are looking for is a fast, reliable, secure, and private container registry. You also want to instantly upload and retrieve images and deploy them as part of the continuous integration and continuous delivery of your services. Well, look no more! This article is for you.

This article introduces the Alibaba Cloud Container Registry service and its abundant features. You can use it to build images in the cloud and deploy them to your Alibaba Cloud Docker cluster or on your premises. After reading this article, you should be able to deploy your own Alibaba Cloud Container Registry.

What is Alibaba Cloud Container Registry?

Alibaba Cloud Container Registry (ACR) is a scalable server application that builds and stores container images and enables you to distribute Docker images. With ACR, you have full control over your stored images. ACR offers a number of features, including integration with GitHub, Bitbucket, and self-built GitLab. It can also automatically build new images from source code after compilation and testing.

In this tutorial, we will build and deploy container images using Alibaba Cloud Container Registry.

Step 1: Activating Alibaba Cloud Container Registry

You should have an Alibaba Cloud account set up. If you don’t have one, you can sign up for an account and try over 40 products for free. Read this tutorial to learn more.

The first thing you need to do is to activate the Alibaba Cloud Container Registry. Go to the product page and click on Get it Free.

It will take you to the Container Registry Console where you can configure and deploy the service.

Step 2: Configuring Alibaba Cloud Container Registry

Create a Namespace

A namespace is a collection of repositories, and a repository is a collection of images. I recommend creating one namespace for each application and one repository for each service image.
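
These names map directly onto the image references you will push and pull later. Using the addresses from this tutorial as a template:

registry-intl.ap-southeast-1.aliyuncs.com/<namespace>/<repository>:<tag>
# for example: registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest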

Picture1

After creating a namespace, you can set it up as public read or private in the settings.

Create and Upload a Local Repository

A repository (repo) is a collection of images; I suggest you collect all versions of the image of one service in one repository. Click Create Repo and fill out the information on the page. Select Local Repository. After a short while, a new repository will be created with its own repository URL. You can see it on the image list page.

You can now upload your locally built image to this repository.

Picture2

Step 3: Connecting to Container Registry with Docker Client

In order to connect to any container registry from the Docker client, you first need to set a Docker login password in the ACR console. You will use this password on your Docker client to log in to the registry.

Picture3

Next, on the Image List page, click Admin in front of the repository you want to connect to. Here you can find all the information and commands needed to let the Docker client access the repository: the image name, image type, and the internet and intranet addresses of the repository. You can use the internet address to access the repository from anywhere in the world. If you are using the repository with your Alibaba Cloud container cluster, use the intranet address instead, because it will be much faster.

Copy the login, push, and pull commands. You will need them later.

Picture4

Start up the Docker client on your local machine. You can refer to docker.io to install a Docker client on your computer. On macOS, run the Docker.app application to start the Docker client.

Log in as your user from the Docker client:

docker login --username=random_name@163.com registry-intl.ap-southeast-1.aliyuncs.com

Note: Replace random_name with the actual username.

You will see a login successful message after you enter the password and hit Enter. At this point, you are authenticated and connected to the Alibaba Cloud Container Registry.

Picture5

Step 4: Building an Image Locally and Pushing to ACR

Let’s write a Dockerfile to build an image. The following is a sample Dockerfile; you can also choose to write your own Dockerfile:

######################
# This is the first image for the static site.
######################
FROM nginx
# A name can be given to a new build stage by adding AS <name> to the FROM instruction.
# ARG VERSION=0.0.0
LABEL name="static-nginx-image" start_time="2018.03.10" for="Alibaba Community" author="Fouad"
LABEL description="This image is built for a static site on Docker"
LABEL version="0.0.0"
# RUN mkdir -p /var/www/
# Copy the static site into the default Nginx web root. The base nginx image
# already starts Nginx through its CMD, so no RUN instruction is needed here.
ADD public/ /usr/share/nginx/html/
EXPOSE 80

Run the docker build command to build the image. In order to later push the image to the repository, tag the new image with the registry address (the same address you logged in to):

docker build -t registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image .

Picture6

Once the build is complete, the image is already tagged with the repository name. You can see the new image by using the command:

docker image ls

Picture7

Push the image to the ACR repository with the command:

docker push registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest

Picture8

To verify that the image was pushed successfully, check the Container Registry console: click Admin in front of the repository name and then click Image Version.

Picture9

Pull the image and create a container. Run the docker pull command:

docker pull registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image:latest

Picture10

Since I have already pulled the image to my local computer, the message says the image is up to date.

Create a new container using this image:

docker run -ti -p 80:80 registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image bash

Picture11
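
Note that appending bash to the docker run command overrides the image's default Nginx process, so the command above drops you into a shell inside the container rather than serving the site. To actually serve the static site in the background, omit bash and run the container detached:

docker run -d -p 80:80 registry-intl.ap-southeast-1.aliyuncs.com/fouad-space/ati-image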

Step 5: Building an Image Repo with GitHub

With Alibaba Cloud Container Registry, you can build images in the cloud and push them directly to the registry. Besides this, Container Registry supports automatically triggering a build when the code changes.

If Automatically create an image when the code changes is selected in Build Settings, the image is automatically built after you submit the code, without requiring you to trigger the build manually. This saves manual work and keeps the images up to date.

Create a GitHub repo and upload your Dockerfile to the repo.

Picture12
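
Later, once this GitHub repo is bound to a registry repo and the auto-build setting described above is enabled, a routine code change is all it takes to produce a fresh image. A minimal sketch (file names and branch are illustrative):

git add Dockerfile public/
git commit -m "Update static site"
git push origin master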

Then, return to the Container Registry console to create a repo. This time, select the GitHub repo path and complete the repository creation steps.

Picture13

Once the repository is created, go to Image List, click Admin in front of the repo name, click Build, and finally click Build Now.

You can see the build progress in the menu and the complete logs of the build process.

Picture14

You can also see all the build logs. Isn’t it neat?

Picture15

Once the build is complete, your image is ready to be deployed. You can pull it to the local Docker engine or deploy this image on Alibaba Cloud Container Service.

Step 6: Creating a Webhook Trigger

A webhook is a type of trigger. If you configure one, it pushes a notification whenever an image is built, which lets you set up a continuous integration pipeline.

How does it work? Well, suppose you have set a Container Service trigger of the webhook type. When an image is built or rebuilt, the applications in Container Service are automatically triggered to pull the latest image and redeploy.
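
Under the hood, this is a plain HTTP callback: when a build finishes, the registry sends a POST request to the trigger URL you configure. You can even exercise a hook manually; the URL below is a made-up placeholder, so copy the real one from your Container Service console:

curl -X POST 'https://cs.example.aliyuncs.com/hooks/trigger/abc123'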

To create a webhook, you first need to go to Container Service and get the application's trigger URL.

Picture16

Now use this URL to configure a hook. Every time the image in the container registry is updated, the application will be redeployed with the new image. Be careful though: an incorrect setup can bring down the whole application. Fortunately, rollback is possible in Container Service, so mistakes are recoverable.

Picture17

Summary

In this article, you learned the following:

  • What the Alibaba Cloud Container Registry service is and how to implement it.
  • How to create a namespace and a repository to host Docker images.
  • How to build a Docker image locally and push it to ACR.
  • How to pull a Docker image from ACR and instantiate a new container from it.
  • How to build an image in Container Registry from GitHub source code.
  • How to automatically trigger a pull of the latest image and redeploy the service.

Original Link