A New Block Floating Point Arithmetic Unit for AI/ML Workloads

Learn About the New Block Floating Point Arithmetic Unit for Processing AI/ML Workloads in Speedster®7t FPGA

Block floating point (BFP) is a hybrid of floating-point and fixed-point arithmetic in which a block of data shares a common exponent. Learn about the only FPGA with machine learning processors (MLPs) that deliver native BFP support, offering higher performance and lower power consumption than traditional FPGA DSP blocks.
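To make the idea concrete, here is a minimal sketch of BFP quantization in Python/NumPy. The block size, mantissa width, and function name are illustrative assumptions for explanation only, not the actual number format used by the Speedster7t MLP.

```python
# A minimal sketch of block floating point (BFP) quantization.
# Block size and mantissa width are illustrative assumptions.
import numpy as np

def bfp_quantize(values, block_size=16, mantissa_bits=8):
    """Quantize a 1-D array so each block of values shares one exponent."""
    out = np.empty_like(values, dtype=np.float64)
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        # Shared exponent: chosen so the largest magnitude in the block
        # fits within the signed mantissa range.
        max_mag = np.max(np.abs(block))
        exp = int(np.floor(np.log2(max_mag))) + 1 if max_mag > 0 else 0
        scale = 2.0 ** (exp - (mantissa_bits - 1))
        # Fixed-point mantissas relative to the shared exponent.
        mantissas = np.clip(np.round(block / scale),
                            -(2 ** (mantissa_bits - 1)),
                            2 ** (mantissa_bits - 1) - 1)
        out[start:start + block_size] = mantissas * scale
    return out

x = np.random.randn(64)
x_bfp = bfp_quantize(x)
print("max abs quantization error:", np.max(np.abs(x - x_bfp)))
```

Because the exponent is stored once per block rather than once per value, the per-value storage and multiply hardware approach fixed-point cost while retaining much of floating point's dynamic range.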

  • Learn what block floating point is and why it's being used for AI/ML applications
  • Understand how the new MLP in Speedster7t FPGAs is optimized for BFP and AI/ML workloads
  • Get 8x the performance with similar power consumption using BFP vs. floating-point and fixed-point arithmetic


Presented by:

Mike Fitton, PhD - Sr. Director of Strategy and Planning at Achronix

Dr. Fitton has 25+ years of experience in the signal processing domain, spanning system architecture, algorithm development, and semiconductors, with work across wireless operators, network infrastructure, and most recently machine learning.