Over the past few weeks, we have covered the latest in AI, Quantum Computing, and Machine Learning. Now, what if I told you that instead of leaping forward into Star Trek, computing is coming full circle, back to the clunky Windows desktop machines we all owned. Maybe not so clunky after all. There is one more piece of emerging tech waiting just around the corner – and it’s called the Edge.
Nowadays we all have some smart device constantly feeding us information – whether it’s going to rain later today, or whether we have enough milk to last the week. It is easy to assume that all the processing happens in the cloud, but sometimes it is far easier to process all this information locally before sending it to the cloud. Sending information over the internet to a cloud warehouse, having it processed there, and shipping it back takes precious time – something we never really have enough of.
According to Wikipedia, edge computing is a computing design that promotes processing closer to the source of the data rather than on a remote system. With the increase in data sources – smart devices, smart sensors, smartphones, and the like – the need to manage and process vast amounts of data has grown exponentially faster than network technology itself. There are two primary reasons for the shift towards edge computing:
Speed: The ability to process data right next to where it was generated offers very low latency and makes new information available almost instantly. Consider a self-driving car moving at 60 mph whose sensors detect debris on the road ahead – it is significantly faster for an edge device to apply the brakes immediately than to wait for that data to be uploaded to the cloud, analyzed, and a response sent back to apply the brakes.
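To put rough numbers on the car example above, here is a quick back-of-the-envelope sketch. The latency figures are assumptions chosen purely for illustration – roughly 10 ms for on-device inference versus roughly 150 ms for a cloud round trip – not measurements from any real system.

```python
# How far does a car travel while waiting for a braking decision?
MPH_TO_MPS = 0.44704  # miles per hour -> metres per second

def distance_travelled(speed_mph: float, latency_ms: float) -> float:
    """Metres covered while the decision is still in flight."""
    return speed_mph * MPH_TO_MPS * latency_ms / 1000

# Assumed latencies, for illustration only:
on_edge = distance_travelled(60, 10)    # ~10 ms local inference
via_cloud = distance_travelled(60, 150)  # ~150 ms cloud round trip
```

Under these assumptions the car travels roughly a quarter of a metre before a local decision lands, but over four metres while waiting on the cloud – easily the difference between stopping short of the debris and hitting it.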
Network Limitations: As more and more applications move to the cloud, demand for cloud resources has gone up, and providers have not been able to offer latencies as low as some applications demand. Additionally, the level of security certain information requires can’t easily be guaranteed if the data is sent from the source over the internet, processed, and then sent back over the internet to the source. Consider the fact that the facial/fingerprint recognition on your iPhone works even without the internet, and at lightning speed. This is only possible because all the information is processed locally – which also saves Apple a trip to court if the Feds ever come asking for your data.
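The pattern running through both reasons above can be sketched in a few lines: react to each reading on the device itself, and let only a compact summary ever leave for the cloud. Everything here – the class, method names, and the 0.9 threshold – is hypothetical, just to make the shape of the design concrete.

```python
from statistics import mean

class EdgeNode:
    """Toy edge device: decides locally, uploads only aggregates."""

    def __init__(self) -> None:
        self.readings: list[float] = []

    def on_sensor_reading(self, value: float) -> str:
        # No network round trip: the decision happens on-device.
        self.readings.append(value)
        return "brake" if value > 0.9 else "cruise"

    def summary_for_cloud(self) -> dict:
        # Only the aggregate leaves the device, not the raw stream.
        return {"count": len(self.readings), "mean": mean(self.readings)}

node = EdgeNode()
for v in (0.2, 0.95, 0.4):
    node.on_sensor_reading(v)
```

The raw readings never cross the internet, which is exactly why the latency and privacy concerns above shrink: there is less to intercept and nothing to wait for.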
While we are transitioning back to the old system of processing data locally, don’t worry – we won’t go back to the ugly old desktops we remember from the 90s. We are moving towards sleeker-looking devices for personal use, and local content delivery networks that you will probably never even see.
Fount of wisdom, insufferable know-it-all, and “make it go away” are just some of the phrases used to describe Melwyn. When he is not at his consulting job, he spends his time reading about technology and current affairs.