Five Evolutions of Cloud Computing

Cloud Computing is essentially the automation of IT service provisioning through software that replaces human tasks. It is currently evolving in multiple directions. Free Software and Open Hardware will eventually lead to massive price cuts compared to current price levels in the USA or Asia. Edge Computing will reduce the need for data centres whenever network latency or availability becomes critical. Service Workers will eliminate the need for servers, both at the edge and in the data centre, in 90% of cases. Big Servers will eventually replace smaller servers as the standard fabric of the data centre.
  • Last Update: 2018-11-01
  • Version: 001
  • Language: en

To many, Cloud Computing is just about storing data online (iCloud, Google Drive) or renting virtual machines (AWS, OVH). But it is actually much more: it is in essence the complete automation of IT service provisioning through software that replaces human tasks. Cloud Computing is to IT what mechanisation has been to agriculture or robots to mechanical industries.

It therefore does not matter whether cloud is provided from a few central data centres, from millions of technical rooms or from billions of individual on-premise devices. What matters is full automation of the complete life cycle of an IT service. And the core value in Cloud Computing is the software that is used to produce Cloud Computing, not the hardware infrastructure.

This understanding of Cloud Computing was first formalised in 2010 and presented at the IEEE International Conference on Services Computing in 2011 (see "SlapOS: A Multi-Purpose Distributed Cloud Operating System Based on an ERP Billing Model"). SlapOS demonstrates that any cloud computing system requires at least two components, and can in fact be built from only two simple software components, as sketched below:

  • an ERP that automates ordering, delivering, accounting and billing a service;
  • devops software that automates building, provisioning, monitoring and orchestrating a service.
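
The following is a minimal sketch of this two-component split, written in TypeScript for illustration only: the interface and method names are ours and do not correspond to actual SlapOS APIs.

```typescript
// Hypothetical model of the two components of a cloud computing system.
// Names are illustrative only, not actual SlapOS interfaces.

interface ErpComponent {
  order(serviceType: string, parameters: Record<string, string>): string; // returns an order id
  deliver(orderId: string): void;              // mark the service as delivered
  account(orderId: string): number;            // measure resource usage for the period
  bill(orderId: string, usage: number): void;  // issue the invoice
}

interface DevopsComponent {
  build(serviceType: string): string;          // assemble the software release
  provision(releaseId: string, parameters: Record<string, string>): string; // instantiate it
  monitor(instanceId: string): boolean;        // check that the instance is healthy
  orchestrate(instanceId: string): void;       // reconcile desired and actual state
}

// A cloud operator is then simply the composition of the two components.
class CloudOperator {
  constructor(private erp: ErpComponent, private devops: DevopsComponent) {}

  requestService(serviceType: string, parameters: Record<string, string>): void {
    const orderId = this.erp.order(serviceType, parameters);
    const releaseId = this.devops.build(serviceType);
    const instanceId = this.devops.provision(releaseId, parameters);
    if (this.devops.monitor(instanceId)) {
      this.erp.deliver(orderId);
      this.erp.bill(orderId, this.erp.account(orderId));
    }
  }
}
```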

SlapOS has been deployed commercially to provide Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Disaster Recovery as a Service (DRaaS), Content Delivery Network (CDN), IoT Management System (IMS), etc. Each time SlapOS plays the same role: automate what IT engineers used to do.

In 2011, the idea of IT service automation was extended to IT consulting service automation as part of a Eurostars project: Cloud Consulting. The PhD thesis of Klaus Woelfel demonstrated in 2014 that automation of complex IT consulting tasks was possible through machine learning or Artificial Intelligence (see "Automating ERP Package Configuration for Small Businesses"). This type of IT automation differs, however, from IT service life cycle automation, since it consists primarily of automating the tasks that lead to the specification of the configuration parameters of an IT service.

We will thus distinguish in this article two categories of automation:

  • life cycle automation of already specified IT services (Cloud Computing);
  • specification automation of IT services (AI Consulting).

We will focus only on the first case (Cloud Computing) and introduce five major evolutions that are currently happening.

Evolution 1: Free Software Cloud that really works (not OpenStack)

The first notable evolution of Cloud Computing is that there is today no economic rationale to depend on proprietary software and platforms created by Amazon, Google, Microsoft, Qingcloud, Alicloud, UCloud, etc. Anyone willing to become a cloud operator can achieve this goal in a couple of hours or days thanks to SlapOS. The process is entirely documented with rapid.space, including tutorial, billing and business plan. A company can save up to 90% of costs by operating its own cloud.

There has always been competition between proprietary cloud and Free Software cloud. VMWare PC virtualisation was released in 1998 as proprietary software; Qemu PC virtualisation was released as Free Software in 2003.

The SlapOS bare metal nano-container provisioning and orchestration platform was released as Free Software in 2010 by Nexedi's subsidiary VIFIB (see "ViFiB Wants You to Host Cloud Computing At Home"). The proprietary equivalent, dotCloud, was an internal project in 2010 that later became the Free Software called Docker in 2013, with orchestration available after 2015.

Free Software has very often been ahead in implementing simple solutions to problems that had previously been solved by complex proprietary software. Level 3 routing and indirection (re6st) is an example of a simpler solution compared to traditional level 2 switching (Open vSwitch, virtual switches), which is hard to scale, unlike loop-free routing protocols (see p. 8 of "Protocoles Réseau V" and the Bellman-Ford algorithm).
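
To illustrate why distance-vector routing of this kind scales with simple arithmetic, here is a minimal Bellman-Ford sketch; the graph encoding and names are purely illustrative and unrelated to the actual re6st code base.

```typescript
// Minimal Bellman-Ford sketch: shortest path costs from one source node.
type Edge = { from: string; to: string; cost: number };

function bellmanFord(nodes: string[], edges: Edge[], source: string): Map<string, number> {
  const distance = new Map<string, number>();
  nodes.forEach(n => distance.set(n, Infinity));
  distance.set(source, 0);

  // Relax every edge |nodes| - 1 times, enough to converge on loop-free shortest paths.
  for (let i = 0; i < nodes.length - 1; i++) {
    for (const { from, to, cost } of edges) {
      const candidate = distance.get(from)! + cost;
      if (candidate < distance.get(to)!) {
        distance.set(to, candidate);
      }
    }
  }
  return distance;
}

// Example: three routers, costs in arbitrary latency units.
const routes = bellmanFord(
  ["A", "B", "C"],
  [{ from: "A", to: "B", cost: 1 }, { from: "B", to: "C", cost: 2 }, { from: "A", to: "C", cost: 5 }],
  "A"
);
console.log(routes); // Map { "A" => 0, "B" => 1, "C" => 3 }
```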

There is thus no significant innovation in proprietary cloud platforms compared to Free Software based cloud, with one exception: OpenStack. OpenStack was designed as a giant, committee-driven software suite based mostly on requirements expressed by enterprise users (a bit like OSF/1 in the 90s). It is thus a kind of inferior copy of what VMWare does (or what NiftyName did). And because its design is based on a message passing architecture that ignores the fundamental "promise theory" and autonomous computing concepts of Mark Burgess, it is still not stable despite billions of dollars spent, and tends to reboot quite often. This fact is now well known by CIOs of large companies who failed at creating their own cloud platform. Proprietary cloud vendors even use OpenStack in their marketing as proof that "open source cloud does not work" and as a way to convince prospects to migrate to their proprietary platforms.

So, if one selects Free Software for one's cloud, one should ignore OpenStack as a solution and consider the numerous other Free Software solutions that exist, most of which share some of its technical components. Such solutions include SlapOS and, to some extent, Proxmox. Container frameworks (Kubernetes, Docker) might also be considered in very specific cases where the risk of instability can be mitigated (see "Are Linux containers stable enough for production and why Nexedi uses SlapOS nano-containers instead?").

Evolution 2: Open Hardware not only at Facebook

A second evolution of Cloud Computing is the growing adoption of hardware self-designed by big platforms, some of which has been released under open hardware licenses.

Previously, Cloud Computing infrastructure was based on servers from brands such as IBM, HP, Dell, Supermicro, etc. Supermicro was actually the No. 1 choice of most cloud companies because it provided excellent features at a very competitive price. It did not have the "extra layer" of marketing and non-standard features targeted at the enterprise market, which is actually useless for cloud infrastructure. It had the right performance, based on simple-to-grasp standards.

But Supermicro was still too expensive and sometimes not secure enough (just like the other big brands, actually). Supermicro servers stick to the 19-inch rack standard, which requires fairly expensive data centre infrastructure (cabinets, power, air conditioning, etc.). And some companies such as Apple had to destroy 45,000 Supermicro servers for security reasons back in 2015.

Under the leadership of Facebook, a community of hardware designers has created a new way to deploy servers and data centres in a cost-efficient way: the Open Compute Project (OCP). Rather than following the traditional approach of 19-inch racks and computers with a closed enclosure, OCP considers the data centre itself as the enclosure, and the servers in the data centre as mere components that require the most efficient air flow, networking and electricity at the lowest cost.

An OCP data centre is about 20% to 75% cheaper to deploy and operate.

OCP also opens the door to BIOS replacement with non-proprietary firmware: LinuxBoot. Thanks to LinuxBoot, the boot process of the server can be controlled more precisely, and many errors or security holes found in proprietary firmware can be fixed. LinuxBoot is now adopted by Google, which incidentally joined OCP after designing its own hardware.

Evolution 3: Edge Computing for low latency and resiliency

Edge Computing - also known as Fog Computing - is the third evolution of Cloud Computing. It is a kind of Cloud without a data centre. It is based on the same automation tools as traditional Cloud Computing, but instead of having those tools deployed on servers located in a central location, they are deployed on servers located closer to the user.

From a business point of view, Edge Computing can be considered as an opportunity for new players, especially the telecommunication industry, to take significant market share from the existing Cloud giants (AWS, Azure, Aliyun, OVH, UCloud, etc.).

Nexedi is one of the inventors of Edge Computing, which it introduced with SlapOS about 10 years ago. Nexedi's primary goal at that time was resiliency against terrorist attacks that could easily hit a few data centres and cause a massive collapse of IT infrastructure. The first commercial deployment of SlapOS was made with a French telecommunication company.

Nowadays, Edge Computing can be useful to provide Cloud services for applications that require any of the following:

  • low latency;
  • resiliency (especially to network outage);
  • low cost Internet transit.

Edge Computing servers can be located in different locations:

  1. at the regional core network of a telecommunication company: < 10 ms latency;
  2. at the antenna (LTE/NR) or central office (FTTH): < 5 ms latency;
  3. on premise: < 1 ms latency;
  4. at the sensor or at the client side: 0 ms latency.
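
The latency figures above can be read as a simple placement rule: pick the least constrained (and cheapest) tier that still meets the application's latency budget. The following toy sketch makes that rule explicit; the tier names and numbers are taken from the list above, nothing more is implied.

```typescript
// Toy placement rule derived from the latency tiers listed above.
const tiers = [
  { name: "regional core network", maxLatencyMs: 10 },
  { name: "antenna / central office", maxLatencyMs: 5 },
  { name: "on premise", maxLatencyMs: 1 },
  { name: "sensor / client side", maxLatencyMs: 0 },
];

// Return the first (least constrained) tier whose latency bound fits the budget.
function placeWorkload(latencyBudgetMs: number): string {
  const candidates = tiers.filter(t => t.maxLatencyMs <= latencyBudgetMs);
  return candidates.length > 0 ? candidates[0].name : "sensor / client side";
}

console.log(placeWorkload(10)); // "regional core network" - enough for most applications
console.log(placeWorkload(2));  // "on premise"
```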

For most applications, 10 ms is enough, and the regional core network seems the most efficient place to deploy an edge architecture. A country such as Germany could be covered with 4 locations per telecom operator; in China, this would be about 100 locations per telecom operator.

For applications that require low latency or network resiliency, on-premise deployment of edge servers should be the solution. Applications such as content delivery, buffering or resilient routing could be implemented by a small gateway, while more complex applications such as hard real-time control of actuators may require bigger on-premise servers.

And for applications that require no latency at all, direct deployment into the sensor seems to be the only way.

The case of deployment at the antenna (more precisely, at the eNodeB / gNodeB) is less clear. With solutions such as Saguna, it is now possible for a 4G/5G network operator to redirect part of the S1 traffic from a nearby eNodeB/gNodeB to an on-premise network without having to modify its core network architecture (this is what AT&T does). This "S1 redirection" is sufficient to provide latency of less than 5 ms to nearby campuses without installing any server at the antenna.

Another form of Edge deployment to consider in the future consists of central offices at a sub-regional level with software defined radio (PHY) emulation hosted on powerful servers. This type of architecture is equivalent to the regional core network approach but with more sites. Although it is cost efficient and handles densification much better, it is still uncertain when and how it will be adopted, because it departs from the current architectures of telecommunication companies. If it is adopted, it would make sense to merge the concepts of Edge Computing and Network Management System (NMS), as introduced by Nexedi at MWC 2018: the SDR PHY becomes an orchestrated service, just like any other service of a Cloud architecture.

Evolution 4: Service Workers eating 90% of Cloud use cases

Service Workers are a feature available since 2014 in modern HTML5 Web browsers (but not on iOS as of November 2018). A Service Worker virtually turns a Web Browser into a Web Server. It thus becomes possible to implement complex applications that run entirely offline or whose dynamic code is executed entirely in the Web Browser.
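
As a minimal illustration (the cache name and file list below are placeholders, not part of any particular product), a Service Worker that serves an application shell from a local cache, and therefore keeps working offline, can be as short as this:

```typescript
// sw.ts - minimal offline-first Service Worker sketch (placeholder cache name and assets).
const CACHE_NAME = "app-shell-v1";
const APP_SHELL = ["/", "/index.html", "/app.js", "/app.css"]; // hypothetical assets

self.addEventListener("install", (event: any) => {
  // Pre-cache the application shell when the Service Worker is installed.
  event.waitUntil(caches.open(CACHE_NAME).then(cache => cache.addAll(APP_SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Answer from the cache first; fall back to the network when no offline copy exists.
  event.respondWith(
    caches.match(event.request).then(cached => cached ?? fetch(event.request))
  );
});

// Registration from the page, e.g. in the application's main script:
// navigator.serviceWorker.register("/sw.js");
```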

Service Workers are a form of implementation of Edge Computing (case 4, "at the sensor or at the client side"). With Service Workers, servers and traditional cloud computing in general become virtually useless in 90% of use cases.

If one looks at the OfficeJS suite, and especially at its word processor based on OnlyOffice code, one realises that it executes entirely inside the web browser (with the exception of format conversion, for now). This means that OfficeJS can serve millions of users at virtually no cost for Nexedi (the maintainer of OfficeJS). This should be compared to Google Drive or Microsoft Office Online, which both require huge data centres and cloud infrastructures to operate because, in their case, code is executed mostly on the server side.

Service Workers should thus be considered as the fourth evolution of cloud computing. They will eventually kill many server-based business models and replace them with client (Web Browser) based approaches. They will let small companies such as Nexedi compete against big Cloud players without having to depend on costly infrastructure.

One of the few things that a Service Worker application cannot handle is processing large data with both low latency and consistency. A Service Worker can process small data with low latency (see "OfficeJS Iodide: Ubiquitous Data Science Notebooks for Business and Education") by keeping a copy of all data in RAM. A Service Worker could process big data with high latency through some form of distributed computing such as WebTorrent or (possibly) the Tron blockchain. But the combination of low latency and big data is impossible to implement consistently using only distributed computing, in our opinion.

Evolution 5: low cost Big Computing

Based on the assumption that Service Workers cannot implement both low latency and big data, and that everything else can be implemented with Service Workers, an immediate consequence is that the servers with the highest growth will likely be big servers (e.g. 256 GB RAM, 20 cores) rather than small servers (e.g. 2 GB, 2 cores) or mid-size servers (e.g. 32 GB, 6 cores). Server based infrastructure at the data centre or at the edge (regional, on premise) will thus likely look increasingly similar to the OCP hardware used by Facebook for its own big data infrastructure.

This is the reason why Rapid.Space focuses on big servers with low cost and high performance, and will likely provide even bigger performance to serve market needs with GPUs, TPUs, NVMe disks, etc., which are now becoming essential parts of any infrastructure.

Conclusion: The Future of Cloud

Cloud Computing has little or no future for any use case that can be implemented with Service Workers. Any use case that Service Workers can not cover will remain on the Cloud.

Here is a tentative list of use cases that will remain on the Cloud:

  • read oriented applications that analyse large, randomly accessed data and require low latency (e.g. finite elements, OLAP reporting);
  • write oriented applications with high concurrency and a high number of transactions per second (e.g. big accounting, inventory management).

We could also add some cases where networking is the blocking factor:

  • low latency push notification (whenever distributed algorithm consumes too much bandwidth or is incompatible with the design of smartphone OS);
  • content distribution (whenever WebTorrent protocols are blocked by legacy infrastructure or DPI);
  • IoT buffering (whenever no buffering gateway is available on premise).

Last, any application where Big Computing must be located close enough to "sensors / actuators" to handle hard real-time constraints will still require a form of Edge Computing:

  • software defined radio (SDR) where physical virtualisation is implemented by a generic PC;
  • software defined industrial automation (SDIA) where physical virtualisation is implemented by a generic PC.

In all those use cases that cannot be implemented with Service Workers or distributed algorithms, a stable cloud operating system such as SlapOS will be needed to automate ordering, delivering, building, provisioning, monitoring, orchestration, accounting and billing of the services deployed on the infrastructure, in a way that unifies all use cases under the same paradigm, no matter whether services are deployed in a data centre, on premise or in a sensor.

Contact

  • Jean-Paul Smets
  • jp (at) rapid (dot) space
  • Jean-Paul Smets is the founder and CEO of Nexedi. After graduating in mathematics and computer science at ENS (Paris), he started his career as a civil servant at the French Ministry of Economy. He then left government to start a small company called “Nexedi”, where he developed his first Free Software, an Enterprise Resource Planning (ERP) system designed to manage the production of swimsuits in the not-so-warm but friendly north of France. ERP5 was born. In parallel, he led with Hartmut Pilch (FFII) the successful campaign to protect software innovation against the dangers of software patents. The campaign eventually succeeded by rallying more than 100,000 supporters and thousands of CEOs of European software companies (both open source and proprietary). The proposed directive on the patentability of computer-implemented inventions was rejected on 6 July 2005 by the European Parliament by an overwhelming majority of 648 to 14 votes, showing how small companies in Europe can together defeat the powerful lobbying of large corporations. Since then, he has helped Nexedi to grow, either organically or by investing in new ventures led by bright entrepreneurs.