
Edge Computing: Let’s Look at the Up-Side!


Posted by RF Code | Mon, May 20, 2019

The market is moving toward edge computing, there’s no question about that. According to Gartner, within the next four years 75% of enterprise-generated data will be processed at the edge, up from just 10% today, and Intel predicts more than 200 billion devices will be connected by 2020. But what’s the draw? Computing at the edge provides a technical advantage, which in turn drives business value. The move from centralized to distributed processing puts compute and storage in close proximity to the locations that need them, delivering three major benefits: decreased latency, reduced network traffic, and improved scalability.


Latency: Every time a request for data or activity is made to a central data center, it takes time for the requested content to travel back to the end user: perhaps just a few milliseconds, but sometimes as much as several seconds. While this may not seem like much, it adds up to a significant delay when you consider the massive amounts of data an individual network sends and receives. For example, the average web page needs to load over 100 resources (such as graphics, files, and scripts). Forrester Research suggests that online shoppers will wait only about 3 seconds for a page to load before leaving a website, so even a small delay in serving each resource can add up to significant load times and possibly lost business.
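To see how quickly per-request delays compound, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions, not measurements: an 80 ms round trip to a distant central data center, 100 resources per page, and a common browser limit of 6 parallel connections per host.

```python
ROUND_TRIP_MS = 80        # assumed round trip to a distant central data center
RESOURCES = 100           # resource count for an average web page (per the text)
PARALLEL_CONNECTIONS = 6  # common browser limit on connections per host

# Requests go out in sequential batches of parallel connections,
# so each batch costs roughly one full round trip.
batches = -(-RESOURCES // PARALLEL_CONNECTIONS)  # ceiling division
total_ms = batches * ROUND_TRIP_MS

print(f"{batches} batches x {ROUND_TRIP_MS} ms = {total_ms} ms")
```

Under these assumptions the round trips alone push the page past one second, well on the way to Forrester’s 3-second abandonment threshold; serving content from an edge location cuts every one of those round trips.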

Traffic: Edge deployments allow data to be processed and stored at the edge, dramatically reducing network traffic load. This computational offloading (to an IoT device or local server, for example) helps increase energy efficiency, lowers operating costs in the central data center or cloud, and reduces traffic costs, especially when paying by volume or connection. For example, edge data centers allow retailers or banks to handle transactions locally: point-of-sale systems, ATMs, and bank branch offices can complete transactions rapidly regardless of network conditions outside the branch or store, transmitting the necessary data to central sales and accounting systems later, perhaps after business hours.
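The retail example above is the classic store-and-forward pattern. The sketch below is a hypothetical illustration, not any particular vendor’s API: transactions complete against edge-local storage immediately, and a separate step drains the queue to the central system whenever it is convenient.

```python
import json
import queue
import time

# Stands in for edge-local durable storage of completed transactions.
local_queue = queue.Queue()

def process_transaction(txn):
    """Complete the transaction locally and queue it for later upload."""
    txn["processed_at"] = time.time()
    local_queue.put(txn)   # stored at the edge; no WAN round trip needed
    return "approved"      # the customer is never kept waiting on the network

def sync_to_central(upload):
    """Drain the local queue to the central system (e.g., after hours)."""
    sent = 0
    while not local_queue.empty():
        upload(json.dumps(local_queue.get()))
        sent += 1
    return sent
```

The key design point is that `process_transaction` never touches the wide-area network, so the point of sale keeps working through an outage, while `sync_to_central` batches the traffic into a single off-peak transfer.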

Scalability: Edge networks provide on-demand resources that store and process data locally, expanding the available network when needed and eliminating the need to overbuild centralized data centers. An edge topology with small data centers at the fringes of the network lets businesses add capacity at unmanned (aka lights-out) edge locations rather than expanding centralized facilities, without increasing either the footprint or the staffing requirements of a core data center.

Reduced latency, localized service availability, and greatly improved data center scalability make the benefits and value of edge deployments clear. Yet, as companies move to adopt edge computing, the need to manage and monitor valuable assets in remote and often unstaffed locations can grow exponentially. In our next entry in this series we’ll take a brief look at how edge computing deployments present IT professionals with unique and significant challenges, particularly when it comes to managing critical assets, physical security, and environmental conditions.

Want to learn more about the benefits -- and challenges -- of edge computing? Download our white paper "Stand at The Edge? Look Before You Leap" today.

Download "Standing at The Edge" Now

Topics: edge management