In its basic form, edge computing is the placement of computational and storage servers closer to the devices that generate data and request services, instead of at a distant, central cloud data center.3 Think of a traditional back-end cloud service as a hub-and-spoke model, like a bicycle wheel: the center is the large, central cloud data center, and requests come in from devices at the ends of the spokes. In edge computing, processing power is distributed out along the spokes so that it sits closer to the devices.
Edge computing was developed to address the latency and bandwidth problems created by the exponential growth of smart devices that require internet connections: to receive or process information in the cloud, to deliver data to the cloud, or to support computationally sensitive use cases that demand low latency and prompt data processing.
Edge computing servers can be placed almost anywhere, as long as they are close to where the computation or data processing is needed: in cabinets along roadsides, on rooftops, at the base of cell towers, and even in commercial establishments. Companies such as Vertiv are building prefabricated micro data centers complete with chassis and infrastructure (power, cooling, monitoring, racks). These look like small shipping containers, which allows them to be readily transported and placed onto concrete slabs.4 They are designed to function in a wide variety of environmental conditions.
Another form of edge computing is processing data on the device itself. Some companies are equipping their devices with chips that perform on-board computing that was previously done at a central data center. For example, Apple's face recognition was originally performed in a data center but now runs on the device, and Google and Amazon are giving their smart speakers some on-board processing capabilities as well.
With computational capacity closer to (or on) the device, a request such as "what's the weather?" does not have to travel hundreds or thousands of miles to be processed, shortening the response time. This also frees up computing resources and bandwidth in the cloud for workloads that require more complex data processing.
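The effect of distance on response time can be sketched with a back-of-envelope calculation. The figures below are illustrative assumptions, not measurements: light travels through optical fiber at roughly 200,000 km/s, and real-world latency adds routing, queuing, and processing overhead on top of this best-case propagation delay.

```python
# Best-case round-trip propagation delay over fiber.
# Assumption: signal speed in fiber is ~200,000 km/s (about 2/3 the
# speed of light in a vacuum); distances below are hypothetical.

FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a given distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

cloud_rtt = round_trip_ms(2000)  # request to a distant central data center
edge_rtt = round_trip_ms(20)     # request to a nearby edge server

print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.2f} ms")
# → cloud: 20.0 ms, edge: 0.20 ms
```

Even in this idealized sketch, moving the processing from 2,000 km away to 20 km away cuts the wire-time by a factor of 100, which is why latency-sensitive applications benefit from computation at the edge.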
The major benefits of edge computing are faster data processing, which reduces latency, and data storage closer to the device. Data processing for applications such as virtual and augmented reality, self-driving cars, and smart cities5 can be performed closer to the source or device, with faster response times.
Per Stephan Blum, CTO and co-founder of PubNub, Google, Apple, and Amazon have invested millions in their edge infrastructure so that their AI (artificial intelligence) technology can answer requests quickly.6 The more processing their smart speakers can perform locally (close to or on the device), the less they need to rely on the cloud, and the quicker the response.