Nvidia, Arm and Intel, three major companies in the global technology industry, have established a common format to facilitate the development of artificial intelligence (AI). The companies released a white paper specifying an 8-bit floating-point (FP8) format that optimizes the memory and processing used by AI algorithms.
“FP8 minimizes deviations from IEEE 754 floating point formats with a good balance of hardware and software to accelerate adoption and improve developer productivity,” wrote Shar Narasimhan, Director of Marketing for Data Center AI and GPU Training Products at Nvidia.
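The white paper describes two FP8 encodings, E4M3 and E5M2; the E5M2 variant follows IEEE 754 conventions for infinities, NaNs and subnormals. As a rough illustration of how close it stays to the familiar IEEE layout, here is a minimal decoding sketch (an assumption-laden reading of the format, not code from the white paper):

```python
def decode_e5m2(byte: int) -> float:
    """Decode one FP8 E5M2 value (1 sign, 5 exponent, 2 mantissa bits).

    A sketch assuming the standard IEEE 754 conventions with exponent
    bias 15; see the joint white paper for the authoritative rules.
    """
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 2) & 0x1F   # 5 exponent bits
    mant = byte & 0x03         # 2 mantissa bits
    if exp == 0b11111:         # IEEE-style: inf if mantissa is 0, else NaN
        return sign * float("inf") if mant == 0 else float("nan")
    if exp == 0:               # subnormal: no implicit leading 1
        return sign * (mant / 4) * 2.0 ** (1 - 15)
    return sign * (1 + mant / 4) * 2.0 ** (exp - 15)

print(decode_e5m2(0x3C))  # exponent field 15, mantissa 0 -> 1.0
```

With only two mantissa bits, E5M2 trades precision for the wider dynamic range that gradient values tend to need, which is why the paper pairs it with the more precise E4M3.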
Artificial Intelligence industry standard
The specification is implemented natively in Nvidia’s GH100 Hopper architecture and in Intel’s Gaudi2 AI training chip. According to Narasimhan, FP8 can in some cases reach performance comparable to 16-bit precision.
A common 8-bit format would also benefit other companies in the technology industry, such as SambaNova, AMD, Groq, IBM, Graphcore and Cerebras, which have tested or adopted FP8-based solutions in their own systems.
Using fewer bits reduces the memory needed to train and run the algorithms, lowers bandwidth and energy usage, and speeds up calculations. However, some more complex datasets for AI training still require 32-bit, and occasionally 64-bit, precision.
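The memory savings scale directly with the bit width. The back-of-the-envelope sketch below compares the storage needed for the parameters of a hypothetical 7-billion-parameter model at different precisions (an illustration only, not tied to any specific framework):

```python
def model_memory_gb(num_params: int, bits_per_param: int) -> float:
    """Approximate storage for model parameters at a given precision."""
    return num_params * bits_per_param / 8 / 1024**3

params = 7_000_000_000  # a hypothetical 7B-parameter model
for name, bits in [("FP32", 32), ("FP16", 16), ("FP8", 8)]:
    print(f"{name}: {model_memory_gb(params, bits):.1f} GB")
# FP32: 26.1 GB, FP16: 13.0 GB, FP8: 6.5 GB
```

Halving the bits from FP16 to FP8 halves both the memory footprint and the bytes moved per parameter, which is where the bandwidth and energy savings come from.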