With artificial intelligence requiring so much power, and its usage growing so sharply, it is becoming clear that some AI functions need to move out of the data center and run instead at the edge of the network.
Drew Robb, writing for TechRepublic Premium, explains why many workloads need to move outward, what AI at the edge is, the benefits, the challenges, and how it can be accomplished.
Featured text from the download:
WHY EDGE AI?
There are many reasons why AI needs to achieve some level of decentralization:
Power: AI consumes more power than can easily be delivered to and distributed within large, centralized data centers. Either there isn’t enough power on the local grid to meet the needs of an AI data center, or the data center lacks the underlying power infrastructure to support full-blown AI applications.
Cooling: Even if enough power can be brought in to satisfy AI applications, many existing data centers would be unable to cool the servers and processors, making outages from overheating inevitable. Liquid cooling has been proposed as the solution, but many data centers either lack the room to retrofit it, lack the skilled staff to support it, or can’t justify it economically.
Latency: If you send all data to a central point and perform all analysis there, you introduce round-trip latency. If the data center is hundreds or thousands of miles away, valuable time is lost while the data travels and the numbers are crunched. This is particularly a factor for real-time applications. Imagine a self-driving vehicle incurring a one-second lag on every decision: at anything over about 20 miles per hour (32 kilometers per hour), crashes would be commonplace. The rough calculation after this list shows why.
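To make the latency argument concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes signals travel through optical fiber at roughly two-thirds the speed of light (~200,000 km/s) and ignores processing and queuing time, so the figures are best-case illustrations only; the function names and the 1,500 km example distance are ours, not from the download.

```python
# Back-of-the-envelope check on round-trip latency and reaction distance.
# Assumption: signals in optical fiber travel at roughly 200,000 km/s
# (about 2/3 the speed of light in a vacuum); figures are illustrative.

FIBER_SPEED_KM_S = 200_000  # approximate signal speed in fiber

def round_trip_ms(distance_km: float) -> float:
    """Theoretical best-case round trip to a data center distance_km away."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

def blind_distance_m(speed_mph: float, lag_s: float) -> float:
    """Meters a vehicle travels while waiting lag_s seconds for a decision."""
    speed_m_s = speed_mph * 1609.34 / 3600  # convert mph to m/s
    return speed_m_s * lag_s

# A data center 1,500 km away adds ~15 ms of travel time before any
# processing even begins.
print(f"Round trip to 1,500 km: {round_trip_ms(1500):.0f} ms")

# At 20 mph, a one-second lag means the vehicle covers ~9 m "blind".
print(f"Blind distance at 20 mph, 1 s lag: {blind_distance_m(20, 1):.1f} m")
```

Even this optimistic model shows a vehicle traveling roughly nine meters with no updated decision during a one-second lag, which is why real-time inference is pushed to the edge rather than round-tripped to a distant data center.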
Enhance your tech knowledge with our in-depth 10-page PDF. This is available for download at just $9. Alternatively, enjoy complimentary access with a Premium annual subscription.
TIME SAVED: Crafting this content required 20 hours of dedicated writing, editing, research, and design.