Edge Computing: Is It Revolution Or Evolution Knocking On Our Doors?
It is strange to say that just as we are linking up to cloud computing power, the cloud era is already drawing to a close. What? you are going to say. Well, yes: the industry is so disruptive that major changes happen on the fly. We are now heading into edge computing.
We have to give it to big companies such as Amazon, Microsoft and certainly Google. These names have stepped up their games and proven that we can, and should, trust them with our personal data. Now the situation has flipped: we are asked to extend that same trust by giving them complete control over our computers, toasters, fridges, cars and so on. That is why Microsoft is working on Azure Sphere, which combines a managed Linux OS, a certified microcontroller, and a cloud service. The idea is that your toaster should be as difficult to hack, and as centrally updated and managed, as your Xbox.
Somehow, the Edge is the new buzzword. We still hear a lot of “IoT” and “Cloud”, but these are making way for what I call the “Edge”. Edge means everything and nothing at the same time.
Wikipedia offers this definition: edge computing is a method of optimizing applications or cloud computing systems by taking some portion of an application, its data, or services away from one or more central nodes (the “core”) to the other logical extreme (the “edge”) of the Internet, which makes contact with the physical world or end users.
That is still difficult to process. Does this mean we have already moved far from the usual understanding?
So, let us look at the reality. Once upon a time we had one gigantic computer. Then came the Unix plethora, and we learned how to connect to that computer using dumb terminals. After that came personal computers (PCs).
In 2018, we’re firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that’s in the cloud — as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might’ve enjoyed in the personal computing era.
Understand this: most of the new opportunities for the “Cloud” now lie at the “Edge”.
Edge computing allows data produced by internet of things (IoT) devices to be processed closer to where it is created instead of sending it across long routes to data centers or clouds.
Doing this computing closer to the edge of the network lets organizations analyze important data in near real-time – a need of organizations across many industries, including manufacturing, health care, telecommunications and finance.
As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.
Amazon, the largest by far of these “public cloud” providers (as opposed to the “private clouds” that companies like Apple, Facebook, and Dropbox host themselves) had 47 percent of the market in 2017.
The advent of edge computing as a buzzword you should pay attention to is the realization by these companies that there isn’t much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the “cloud” lie at the “edge.”
Edge computing terms and definitions
Like most technology areas, edge computing has its own lexicon. Here are brief definitions of some of the more commonly used terms:
- Edge devices: Any device that produces or collects data, such as sensors, industrial machines or other instruments.
- Edge: What the edge is depends on the use case. In a telecommunications field, perhaps the edge is a cell phone or maybe it’s a cell tower. In an automotive scenario, the edge of the network could be a car. In manufacturing, it could be a machine on a shop floor; in enterprise IT, the edge could be a laptop.
- Edge gateway: A gateway is the buffer between where edge computing processing is done and the broader fog network. The gateway is the window into the larger environment beyond the edge of the network.
- Fat client: Software that can do some data processing in edge devices. This is opposed to a thin client, which would merely transfer data.
- Edge computing equipment: Edge computing uses a range of existing and new equipment. Many devices, sensors and machines can be outfitted to work in an edge computing environment by simply making them Internet-accessible. Cisco and other hardware vendors have a line of ruggedized network equipment that has hardened exteriors meant to be used in field environments. A range of compute servers, converged systems and even storage-based hardware systems like Amazon Web Service’s Snowball can be used in edge computing deployments.
- Mobile edge computing: This refers to the buildout of edge computing systems in telecommunications systems, particularly 5G scenarios.
But what exactly is edge computing?
Edge computing is a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet,” according to research firm IDC.
It is typically referred to in IoT use cases, where edge devices would collect data – sometimes massive amounts of it – and send it all to a data center or cloud for processing. Edge computing triages the data locally so some of it is processed locally, reducing the backhaul traffic to the central repository.
Typically, this is done by the IoT devices transferring the data to a local device that includes compute, storage and network connectivity in a small form factor. Data is processed at the edge, and all or a portion of it is sent to the central processing or storage repository in a corporate data center, co-location facility or IaaS cloud.
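The triage pattern described above can be sketched in a few lines. This is a hypothetical illustration, not a real edge framework: the threshold, field names and summary shape are assumptions chosen to show the idea of processing raw readings locally and forwarding only a compact summary plus anomalies over the backhaul link.

```python
import statistics

# Illustrative anomaly threshold (e.g. temperature in degrees Celsius).
ANOMALY_THRESHOLD = 90.0

def triage(readings):
    """Split raw sensor readings into a compact local summary and the
    small subset worth sending upstream to the data center or cloud."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }
    return summary, anomalies

readings = [71.2, 69.8, 70.5, 95.3, 70.1]
summary, anomalies = triage(readings)
# Only `summary` and the one anomalous reading leave the edge;
# the rest of the raw data stays (or is stored) locally.
```

The point is the ratio: five raw samples in, one summary and one anomaly out, and the savings grow with the volume of telemetry.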
The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.
One important driver for edge computing is the speed of light. If Computer A needs to ask Computer B, half a globe away, before it can do anything, the user of Computer A perceives this delay as latency. The brief moment after you click a link, before your web browser starts to actually show anything, is in large part due to the speed of light. Multiplayer video games implement numerous elaborate techniques to mitigate true and perceived delay between you shooting at someone and you knowing, for certain, that you missed.
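A back-of-the-envelope calculation makes the physics concrete. The distances and the fiber slowdown factor below are rough assumptions, and the result ignores routing hops and server processing entirely, so real latencies are higher:

```python
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 1.5        # light travels roughly 1.5x slower in optical fiber
HALF_GLOBE_KM = 20_000    # roughly half of Earth's circumference

def round_trip_ms(distance_km):
    """Minimum round-trip time over fiber, ignoring routing and processing."""
    one_way_s = distance_km * FIBER_FACTOR / C_VACUUM_KM_S
    return 2 * one_way_s * 1000

# A request to a server half a globe away costs ~200 ms before any
# computation happens; a nearby edge node 50 km away costs well under 1 ms.
print(round_trip_ms(HALF_GLOBE_KM))
print(round_trip_ms(50))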
Voice assistants typically need to resolve your requests in the cloud, and the round-trip time can be very noticeable. Your Echo has to process your speech, send a compressed representation of it to the cloud, the cloud has to uncompress that representation and process it — which may involve pinging another API somewhere, to figure out the weather, and adding more speed-of-light-bound delay — and then the cloud sends your Echo the answer, and finally you can learn that today you should expect a high of 85 and a low of 42, so you give up on dressing appropriately for the weather.
A very recent rumor that Amazon is working on its own AI chips for Alexa should come as no surprise. The more processing Amazon can do on your local Echo device, the less your Echo has to rely on the cloud. It means you get quicker replies, Amazon’s server costs are lower, and conceivably, if enough of the work is done locally, you could end up with more privacy.
The security and privacy features of an iPhone are well accepted as an example of edge computing. Simply by doing encryption and storing biometric information on the device, Apple offloads a ton of security concerns from the centralized cloud to its diasporic users’ devices.
The management aspect of edge computing is hugely important for security. Think of how much pain and suffering consumers have experienced with poorly managed Internet of Things devices.
Think about it: you could probably tell me which version of Windows you’re running. But do you know which version of Chrome you have? Edge computing will be more like Chrome, less like Windows.
Google also is getting smarter at combining local AI features for the purpose of privacy and bandwidth savings. For instance, Google Clips keeps all your data local by default and does its magical AI inference locally. It doesn’t work very well at its stated purpose of capturing cool moments from your life. But, conceptually, it’s quintessential edge computing.
Self-driving cars are the ultimate example of edge computing. Due to latency, privacy, and bandwidth constraints, you can’t feed all the numerous sensors of a self-driving car up to the cloud and wait for a response. Your trip can’t survive that kind of latency, and even if it could, the cellular network is too inconsistent to rely on for this kind of work.
But cars also represent a full shift away from user responsibility for the software they run on their devices. A self-driving car almost has to be managed centrally. It needs to get updates from the manufacturer automatically, it needs to send processed data back to the cloud to improve the algorithm, and the nightmare scenario of a self-driving car botnet makes the toaster and dishwasher botnet we’ve been worried about look like a Disney movie.
Why does edge computing matter?
Edge computing deployments are ideal in a variety of circumstances. One is when IoT devices have poor connectivity and it’s not efficient for them to be constantly connected to a central cloud.
Other use cases have to do with latency-sensitive processing of information. Edge computing reduces latency because data does not have to traverse over a network to a data center or cloud for processing. This is ideal for situations where latencies of milliseconds can be untenable, such as in financial services or manufacturing.
Here’s an example of an edge computing deployment: An oil rig in the ocean that has thousands of sensors producing large amounts of data, most of which could be inconsequential; perhaps it is data that confirms systems are working properly.
That data doesn’t necessarily need to be sent over a network as soon as it’s produced, so instead the local edge computing system compiles the data and sends daily reports to a central data center or cloud for long-term storage. By only sending important data over the network, the edge computing system reduces the data traversing the network.
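The oil-rig pattern can be sketched as a local aggregation step. This is an illustrative assumption of what such a daily report might look like (the reading format, sensor names and summary fields are invented for the example):

```python
from collections import defaultdict

def daily_report(readings):
    """Compile raw readings, given as (timestamp, sensor_id, value) tuples,
    into one compact per-sensor summary for the day. Only this summary,
    not the thousands of raw samples, is shipped to the central cloud."""
    by_sensor = defaultdict(list)
    for _ts, sensor_id, value in readings:
        by_sensor[sensor_id].append(value)
    return {
        sensor: {
            "samples": len(vals),
            "min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals),
        }
        for sensor, vals in by_sensor.items()
    }

readings = [
    ("2018-06-01T00:00:00", "pump-1", 4.1),
    ("2018-06-01T00:01:00", "pump-1", 4.3),
    ("2018-06-01T00:00:00", "valve-7", 1.0),
]
report = daily_report(readings)
# send_upstream(report)  # hypothetical upload: one small payload per day
```

Whether the summary is a daily min/max/mean or something richer, the design choice is the same: raw data stays on the rig, and only the distilled report traverses the expensive link.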
Another use case for edge computing has been the buildout of next-gen 5G cellular networks by telecommunication companies. Kelly Quinn, research manager at IDC who studies edge computing, predicts that as telecom providers build 5G into their wireless networks they will increasingly add micro-data centers that are either integrated into or located adjacent to 5G towers. Business customers would be able to own or rent space in these micro-data centers to do edge computing, then have direct access to a gateway into the telecom provider’s broader network, which could connect to a public IaaS cloud provider.
But what are we giving up?
I have some fears about edge computing that are hard to articulate, and possibly unfounded, so I won’t dive into them completely.
But the big picture is that the companies who do it the best will control even more of your life experiences than they do right now.
When the devices in your home and garage are managed by Google Amazon Microsoft Apple, you don’t have to worry about security. You don’t have to worry about updates. You don’t have to worry about functionality. You don’t have to worry about capabilities. You’ll just take what you’re given and use it the best you can.
In this worst-case world, you wake up in the morning and ask Alexa Siri Cortana Assistant what features your corporate overlords have pushed to your toaster, dishwasher, car, and phone overnight. In the personal computer era you would “install” software. In the edge computing era, you’ll only use it.
It’s up to the big companies to decide how much control they want to gain over their users’ lives. But, it might also be up to us users to decide if there’s another way to build the future. Yes, it’s kind of a relief to take your hands off the steering wheel and let Larry Page guide you. But what if you don’t like where he’s going?
N.B: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of the African Academic Network on Internet Policy.