[TRACK B] Models & Data:
Reduce Your Speech Transcription Costs by 90%

Deep neural networks (DNNs), a subset of machine learning (ML), provide a foundation for automating conversational artificial intelligence (CAI) applications. FPGAs provide the hardware acceleration that enables high-density, low-latency CAI. In this presentation, we will give an overview of CAI and its data center use cases, describe the traditional compute model and its limitations, and show how an ML compute engine integrated into the Achronix FPGA can reduce speech transcription costs by 90%.

Come see our technology

Join Achronix at our booth! We'll show our latest FPGA-based solutions for high-bandwidth, compute-intensive, and real-time processing applications built for AI.

  • Speedster®7t FPGAs: high-performance FPGAs with a 2D network-on-chip, delivering ASIC-level performance with the full programmability of FPGAs.
  • Speedcore™ eFPGA IP: 15+ million eFPGA IP cores shipped, bringing the performance and flexibility of programmable logic to ASICs and SoCs.
  • VectorPath® Accelerator Cards: PCIe accelerator cards for rapid prototyping and production, offering 400G and 200G Ethernet interfaces and 4 Tbps of GDDR6 memory bandwidth.

Could your next project benefit from an FPGA or eFPGA IP solution? Meet with our team.