Rethinking Your Infrastructure for Enterprise AI

Published by IBM

IDC strongly believes that the days of homogeneous compute, in which a single architecture dominates all compute in the datacenter, are over. This truth has become increasingly evident as more and more businesses have started to launch artificial intelligence (AI) initiatives. Many of them are in an experimental stage with AI and a few have reached production readiness, but all of them are cycling unusually fast through infrastructure options to run their newly developed AI applications and services on.

The main reason for this constant overhauling of infrastructure is that the standard infrastructure used in the datacenter for the bulk of workloads is poorly suited to the extremely data-intensive nature of AI. Not only are the performance and I/O of a typical server lacking for deep learning (DL), but the data lakes that serve as the breeding grounds for AI model development are also unequipped for this critical task. These data lakes are slow monocultures built on traditional schemas that take weeks, if not months, to prepare for AI modeling. They are also treated as noncritical to the business, yet once AI development begins on them, they become hypercritical.
AI has thus become the lead actor in a play that tells the evolving story of emerging processor diversity in the datacenter: a diversity that manifests itself not only in the increasing presence of GPUs, FPGAs, many-core processors, and ASICs for specific workloads but also in a shift to other host processors and to better links between host and accelerator. While accelerators can alleviate much of the performance lag on their own, it is in their interplay with the host processor that truly outstanding performance for a workload such as AI is achieved.
