Intel, which has been working with Facebook on chips for AI, will soon introduce the world’s first family of purpose-built Artificial Intelligence (AI) processors, the “Nervana Neural Network Processor” (NNP).
According to Intel, Facebook advised on the design of the chips, and the collaboration has delivered “faster training time for deep learning models”.
Intel had earlier said it would improve deep learning training speeds 100-fold by 2020, although the company has not disclosed how far it has got so far.
“Providers will be able to deliver a more personalised experience to their customers and offer more targeted reach to their advertisers.”
– Intel
Intel CEO Brian Krzanich said he believes these chips will enable a “new class of AI applications” that help transform health care, social media, automobiles and weather forecasting, among other fields.
The new processors build on technology from Nervana Systems, the deep learning startup Intel acquired in 2016 and whose name the chip carries, rather than from its purchase of Mobileye in March.
Intel has designed a new chip for deep learning, called the Intel® Nervana™ Neural Network Processor (Intel® Nervana™ NNP).
Machine learning and deep learning are quickly emerging as the most important computational workloads of our time, and their demands have forced new thinking about the hardware required to support AI. Intel® Xeon® Scalable processors and Intel data center accelerators power the vast majority of general-purpose machine learning and inference workloads for businesses today.
The Intel Nervana NNP is a purpose-built architecture for deep learning. Intel designed the Intel Nervana NNP to free us from the limitations imposed by existing hardware, which wasn’t explicitly designed for AI.
The goal of this new architecture is to provide the flexibility needed to support all deep learning primitives while making the core hardware components as efficient as possible.
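To give a sense of what those “deep learning primitives” are, the sketch below shows one of the most common ones, a fully connected layer whose matrix multiplication dominates training and inference cost. This is purely an illustration in NumPy running on a CPU; it is not Intel code and does not use any Intel NNP API, and the layer sizes are hypothetical.

```python
# Illustrative only: a dense-layer forward pass, one of the deep learning
# "primitives" (matrix multiply plus activation) that purpose-built
# hardware such as the Intel Nervana NNP is designed to accelerate.
import numpy as np

def dense_forward(x, weights, bias):
    """Fully connected layer followed by a ReLU activation."""
    z = x @ weights + bias      # matrix multiply: the dominant cost
    return np.maximum(z, 0.0)   # ReLU non-linearity

# Hypothetical shapes: a batch of 64 inputs with 1024 features each,
# projected down to 512 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 1024)).astype(np.float32)
w = rng.standard_normal((1024, 512)).astype(np.float32)
b = np.zeros(512, dtype=np.float32)

activations = dense_forward(x, w, b)
print(activations.shape)  # (64, 512)
```

Training a model repeats operations like this across millions of parameters and many passes over the data, which is why a chip built specifically for such tensor workloads can cut training time so sharply.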