
AMD Instinct MI100 accelerator, world's fastest GPU launched - Check key features

AMD Instinct MI100 accelerators are expected by the end of the year in systems from major OEM and ODM partners in the enterprise markets.


New Delhi: AMD has launched the Instinct MI100 accelerator, touted as the world's fastest HPC GPU and the first x86 server GPU for scientific research.

"Supported by new accelerated compute platforms from Dell, GIGABYTE, HPE, and Supermicro, the MI100, combined with AMD EPYC™ CPUs and the ROCm 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era," a company statement said.


Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. 

The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD’s prior generation accelerators.
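
As a rough back-of-the-envelope check of those figures, the sketch below recomputes the quoted peaks from the MI100's shader count and clock. The 7,680 stream processors (120 compute units) and ~1,502 MHz peak clock are assumptions taken from publicly reported spec sheets and are not stated in this article.

```python
# Rough sanity check of the quoted peak-TFLOPS figures.
# Assumed (not from the article): 7,680 stream processors and a ~1,502 MHz peak clock.
STREAM_PROCESSORS = 120 * 64      # 120 CUs x 64 shaders = 7,680
PEAK_CLOCK_GHZ = 1.502

fp32_vector = STREAM_PROCESSORS * 2 * PEAK_CLOCK_GHZ / 1000   # FMA counts as 2 FLOPs/cycle
fp64_vector = fp32_vector / 2                                  # FP64 runs at half the FP32 rate
fp32_matrix = fp32_vector * 2                                  # Matrix Cores double FP32 throughput

print(f"FP32 ~{fp32_vector:.1f} TFLOPS, FP64 ~{fp64_vector:.1f} TFLOPS, "
      f"FP32 Matrix ~{fp32_matrix:.1f} TFLOPS")
# FP32 ~23.1 TFLOPS, FP64 ~11.5 TFLOPS, FP32 Matrix ~46.1 TFLOPS
```

Under those assumptions the arithmetic lines up with the 11.5, 23.1 and 46.1 TFLOPS figures quoted here and in the capability list below.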

The card's ultra-fast HBM2 memory comprises 32GB of high-bandwidth HBM2 clocked at 1.2 GHz and delivers an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory, the company said.
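
The 1.23 TB/s figure can be roughly reconstructed from the quoted 1.2 GHz clock, as in the sketch below; the 4,096-bit memory interface (four 1,024-bit HBM2 stacks) is an assumption from public MI100 specifications, since the article gives only the clock and capacity.

```python
# Rough check of the 1.23 TB/s HBM2 bandwidth figure.
# Assumed (not from the article): a 4,096-bit memory interface.
BUS_WIDTH_BITS = 4096
MEM_CLOCK_GHZ = 1.2
DDR_FACTOR = 2                     # HBM2 transfers data on both clock edges

bandwidth_tbps = BUS_WIDTH_BITS * MEM_CLOCK_GHZ * DDR_FACTOR / 8 / 1000  # bits -> bytes, GB -> TB
print(f"~{bandwidth_tbps:.2f} TB/s")   # ~1.23 TB/s
```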

Key capabilities of the AMD Instinct MI100 accelerator

  • Engineered to power AMD GPUs for the exascale era, the AMD CDNA architecture at the heart of the MI100 accelerator offers exceptional performance and power efficiency.
     
  • Delivers industry-leading 11.5 TFLOPS peak FP64 performance and 23.1 TFLOPS peak FP32 performance, enabling scientists and researchers across the globe to accelerate discoveries in industries including life sciences, energy, finance, academics, government, defense and more.
     
  • Supercharged performance for a full range of single- and mixed-precision matrix operations, such as FP32, FP16, bFloat16, Int8 and Int4, engineered to boost the convergence of HPC and AI.
     
  • The Instinct MI100 provides ~2x the peer-to-peer (P2P) peak I/O bandwidth of PCIe® 4.0, with up to 340 GB/s of aggregate bandwidth per card over three AMD Infinity Fabric Links. In a server, MI100 GPUs can be configured with up to two fully connected quad-GPU hives, each providing up to 552 GB/s of P2P I/O bandwidth for fast data sharing.
     
  • Features 32GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz, delivering an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory.
     
  • Designed with the latest PCIe Gen 4.0 technology support, providing up to 64 GB/s of peak theoretical transport data bandwidth from CPU to GPU (a rough arithmetic check of this figure follows this list).
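
The sketch below shows how the 64 GB/s PCIe Gen 4.0 figure from the last bullet can be reproduced; the x16 link width and the convention of summing both transfer directions are assumptions, since the article states only the headline number.

```python
# Back-of-the-envelope PCIe Gen 4.0 bandwidth check.
# Assumed (not from the article): an x16 link and a "peak" figure that sums both directions.
TRANSFER_RATE_GT_S = 16    # PCIe 4.0 raw transfer rate per lane
LANES = 16                 # assumed x16 slot
DIRECTIONS = 2             # vendor peak figures often count both directions

raw_gb_s = TRANSFER_RATE_GT_S * LANES * DIRECTIONS / 8   # GB/s, ignoring line encoding
effective_gb_s = raw_gb_s * 128 / 130                    # after 128b/130b encoding overhead

print(f"raw: {raw_gb_s:.0f} GB/s, effective: {effective_gb_s:.1f} GB/s")
# raw: 64 GB/s, effective: 63.0 GB/s -- consistent with the quoted 64 GB/s peak
```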

The AMD Instinct MI100 accelerators are expected by the end of the year in systems from major OEM and ODM partners in the enterprise markets.

