The growing number of smart devices in the wake of Industry 4.0 has given rise to Edge Computing – a decentralised network architecture in which data is processed closer to its source. Could this mean that conventional Cloud data centres will disappear?
Times Are A-Changin’
In 2016, tech investor Peter Levine predicted that Cloud Computing – by now more or less well established as a business innovation driver – would be replaced by Edge Computing in the near future. Many in the industry echo this view, arguing that computing will become decentralised and that the need for the centralised Cloud will therefore vanish. Will it, though?
Heads in the Cloud
For the past 12 years, decision makers have been switching from company-owned hardware and software to the service-based models of the Cloud, storing data and developing apps within it. Cloud computing means that system resources, such as data storage and computing power, are made available to users via data centres accessed through the internet.
The Cloud allows you to use only as much processing power as you really need (paying as you go), resulting in high scalability and greatly facilitating innovation. This highly successful model proved that shared resources and outsourced maintenance were not only cheaper, but also allowed for greater flexibility.
Living on the Edge
Both hardware and IT-usage have considerably evolved since the Cloud was introduced in 2006, and we now find ourselves in the midst of the rapid evolution of the Internet of Things (IoT) and Industrial Internet of Things (IIoT).
This ever-increasing number of smart devices and connected services means that the sheer amount of data sent to data centres is simply too large to allow for the seamless, real-time experience that users have come to expect. This data overload also prevents applications from running as smoothly as they otherwise could.
Enter Edge Computing: a paradigm that sees business logic and computing power shift from the centre of the network to its “edge”, i.e. closer to where the data is generated in the first place. Data will no longer have to travel to a remote data centre for processing, but instead it will be analysed in either an edge data centre (sometimes called “cloudlet”) or on the device itself. This proves particularly useful for use cases that involve the need for very short reaction times.
In a talk entitled “Return to the Edge and the End of Cloud Computing”, Peter Levine predicts that the rise of the Edge will impact networking, security, storage, and the management of IT solutions in the same way that the Cloud changed the IT landscape in the early 2000s. Back then, we saw a transition from the decentralised Client-Server model to the centralised Cloud model. Similarly, the change from the Cloud to the Edge will change the way computing is done in the near future.
Getting an Edge
All that talk about new paradigms and shifting focus can be rather vague. So, let’s look at three concrete examples of Edge Computing in action.
Smart manufacturing is characterised by a high degree of computer control and adaptability. On the one hand, IT is a fundamental part of the product lifecycle: sensors and processors gather and analyse data and relay it back to the factory through a Cloud service.
On the other hand, and far more commonly, elements of production are interconnected so that manufacturing can be optimised to the highest possible degree. By way of example, let’s look at how a German press manufacturer has implemented smart manufacturing.
Press lines are equipped with 30 industrial computers that ensure the automatic and secure transport of manufactured parts from one pressing phase to another. Individual presses, blanking lines and other automated components are also interconnected.
Before a part is transported to the next phase, the maximum speed at which the blank can be formed must be calculated. This is where simulations of the process provide the info needed to optimise the entire line.
Long before the first tool sets are actually mounted, a virtual image of the press is already producing part after part. Thanks to the simulation of the presses, blanking lines, and other components, the time it takes for one part to be transported to the next phase can be minimised, for example.
Such a connected factory can produce up to 3 petabytes of data per day. That’s 3 quadrillion bytes! Sending this amount of data to a remote data centre would create pretty bad latencies (to say the least). So the collected data is analysed at the factory itself, allowing for instantaneous tweaks and implementations. An old-school data centre will generally be used for long-term data storage and more processing-heavy data analysis.
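To get a feel for why shipping that much data off-site is a non-starter, here is a quick back-of-envelope calculation. The 10 Gbit/s uplink is an assumed figure for illustration, not a number from the factory described above:

```python
# Rough transfer-time estimate for one day's worth of factory data.
# The 10 Gbit/s uplink speed is an illustrative assumption.
DATA_PER_DAY_BYTES = 3e15          # 3 petabytes, as quoted above
UPLINK_BITS_PER_SECOND = 10e9      # assumed 10 Gbit/s connection

seconds = DATA_PER_DAY_BYTES * 8 / UPLINK_BITS_PER_SECOND
print(f"Transfer time: {seconds / 86400:.1f} days")  # ~27.8 days
```

In other words, even over a fast dedicated link, uploading a single day of production data would take weeks – which is why the analysis has to happen on site.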
Another typical sector in which this technology is used is the automotive industry.
A self-driving car can generate up to 4 terabytes of data per day. Typically, it will have more than 200 sensors responsible for a range of important functions – from parking assistance through blind-spot detection to collision avoidance.
A number of these sensors are in charge of detecting pedestrians in front of the car and making sure the emergency brake is triggered when necessary. The sensors not only have to determine the shape of the pedestrian, but also instantly cross-reference their data with that of several other sensors in the car that constantly gather info about weather, road conditions, surrounding vehicles, and so on.
This critical data can’t simply be sent to the Cloud for processing – a round trip to the Cloud takes roughly 100 milliseconds – and even a few milliseconds can make the difference when it comes to road safety.
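That 100 ms figure becomes concrete once you convert it into distance travelled. A minimal sketch, assuming a motorway speed of 100 km/h (an illustrative value, not from the source above):

```python
# Distance a car covers while waiting for a 100 ms Cloud round trip.
# The 100 km/h speed is an illustrative assumption.
ROUND_TRIP_SECONDS = 0.1           # ~100 ms to and from the Cloud
SPEED_KMH = 100                    # assumed motorway speed

speed_ms = SPEED_KMH / 3.6         # convert km/h to m/s
distance = speed_ms * ROUND_TRIP_SECONDS
print(f"Distance travelled while waiting: {distance:.1f} m")  # ~2.8 m
```

Nearly three metres of blind travel per decision is the difference between braking in time and not braking at all – hence the need to decide on the device itself.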
Additionally, an internet connection isn’t always guaranteed – even new technology like 5G still has a long way to go before becoming a stable and reliable connectivity tool. Using Edge Computing, smart cars can always stay connected thanks to the more immediate network at their disposal.
For a car to meet road safety requirements, its sensors and cameras must work together flawlessly. This has sparked many innovations in the field of interconnectivity. One could even argue that smart cars – due, among other factors, to their relevance and media attention – are driving the development of Edge Computing.
Today’s smart cars already have the computing power on board to deal with these huge data volumes. They gather, analyse, and utilise this data while driving, and at the end of the day, the data is sent to the Cloud for storage and more advanced data analysis.
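The pattern described above – react in real time on the device, upload the bulk of the data to the Cloud later – can be sketched in a few lines. All class, function, and field names here are hypothetical, chosen purely for illustration; this is not any vendor’s actual API:

```python
# Hedged sketch of the edge-first pattern: latency-critical readings are
# handled on the device itself, while the full data set is buffered for a
# later bulk upload to the Cloud. All names are hypothetical.

class EdgeNode:
    def __init__(self):
        self.buffer = []  # readings awaiting the end-of-day upload

    def handle_reading(self, reading):
        # React locally and immediately -- no network round trip involved.
        if reading["obstacle_distance_m"] < 5.0:
            self.trigger_brake()
        self.buffer.append(reading)  # keep everything for offline analysis

    def trigger_brake(self):
        print("Emergency brake triggered at the edge")

    def end_of_day_sync(self, upload):
        # Bulk-upload the buffered data to the Cloud for long-term storage
        # and more advanced analysis, then clear the local buffer.
        upload(self.buffer)
        self.buffer = []

node = EdgeNode()
node.handle_reading({"obstacle_distance_m": 3.2})   # handled locally
node.end_of_day_sync(lambda data: print(f"Uploaded {len(data)} readings"))
```

The design point is the split itself: the time-critical branch never leaves the device, while the Cloud only ever sees batched, non-urgent traffic.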
The future is now. You might have heard of smart homes, in which things such as temperature, humidity, household appliances, and alarm systems are all monitored by interconnected sensors and can be controlled by the homeowner from anywhere.
Now imagine this on a bigger scale: a city in which public transport, traffic lights, street lights, and air quality control can be monitored and directed by city officials. You’d know exactly when the next bus would arrive at your stop, and how good the air is today. But, does such a place really exist?
The Republic of Singapore is considered the most advanced smart city on Earth. It was given the Smart City 2018 award at the Smart City Expo World Congress for its outstanding use of cutting-edge tech to improve people’s lives in the city – from real-time parent-teacher portals to dynamic bus routing algorithms.
Let’s take a closer look at Singapore’s water management systems. Among other innovative system enhancements, the Public Utilities Board (PUB) – Singapore’s national water agency – is developing an in-house prototype for the detection of microbes in water. This portable device uses Artificial Intelligence and split-second imaging to do its job in real time, and it’s linked to a mobile app and chatbot. It performs around the clock, responding to commands and sending live image reports, thereby triggering alerts when anomalies are detected.
Scheduled for large-scale deployment by the end of 2020, this device will be another addition to the already dense network of sensors, cameras, and autonomous devices that help run the city efficiently.
Now you can see where this is going: the sheer amount of data and the need for it to be available 24/7 require a more agile and responsive solution than the centralised Cloud can provide. So-called micro and mini data centres are modular data processing units that can be deployed closer to the source of the data, greatly reducing those pesky latencies.
Can There Be Only One?
According to some outlooks, global data usage will rise to a whopping 44 zettabytes (1 ZB is a billion terabytes) by 2025, with the number of connected devices reaching 80 billion. Edge Computing is an organic response to this trend and, at the same time, accelerates it.
There is a shift towards more dynamic and agile networks that enable quick, tailored solutions. The apparent trade-off is simple: the raw processing power of centralised architectures is exchanged for the agility of decentralised ones.
Ever smaller and cheaper components will facilitate the creation of smaller, modular Edge data centres, where many data processes will find their new home.
But, contrary to what Levine predicted, the Cloud itself is unlikely to be replaced. Rather, it will complement the Edge Computing concept, acting as a hub for storage, backup, coordination, and Machine Learning. The more demanding data processing tasks will still require Cloud infrastructure.
Edge Computing is not a stand-alone solution. It will result in an increasingly complex and diverse network that is built around larger centres that serve as the backbone for the “cloudlets”. While concepts like connected cars – essentially driving data centres – indicate a trend towards greater independence from the Cloud, key aspects such as data security, network stability, and processing power will continue to drive the need for Big Data centres.
The Edge is not replacing the Cloud. If you need to store data at scale and run heavyweight processes, virtual servers will still be the way to go. But if you want to build a responsive, low-latency solution, supplement it with Edge processing to make it faster and more reliable.