NMAX™ is coming October 31st at the Linley Processor Conference

Listen to Flex Logix Co-Founder Cheng Wang’s talk, “A High Performance Reconfigurable Neural Accelerator with Low DRAM Bandwidth,” on Wednesday morning, October 31st, at the Linley Fall Processor Conference. More information HERE.

FPGAs have been used extensively for neural network acceleration by Microsoft, Harvard (using EFLX eFPGA), and others. But FPGAs are not optimized for AI: the multipliers are large, spread out, and too few relative to the LUTs. NMAX is a new architecture that combines eFPGA with Flex Logix’ novel interconnect technologies to deliver a modular, scalable neural network inferencing solution well suited to high-performance edge applications.

NMAX achieves high TOPS (trillions of operations per second) at batch size = 1 with much less DRAM bandwidth than existing solutions: less DRAM bandwidth means a smaller footprint, lower cost, and lower power.
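To see why DRAM bandwidth is the pressure point at batch size = 1, here is a minimal back-of-the-envelope sketch. It is not based on NMAX internals; the model size, inference rate, and on-chip fraction below are hypothetical, chosen only to show how weight traffic scales when a single inference cannot share weight fetches across a batch.

```python
# Back-of-the-envelope DRAM bandwidth estimate for inference.
# All numbers are hypothetical and illustrative only; they do not
# describe NMAX or any specific accelerator.

def dram_bandwidth_gb_s(weight_bytes, inferences_per_s, batch_size, on_chip_fraction=0.0):
    """Weight traffic fetched from DRAM per second, in GB/s.

    weight_bytes      -- total model weight size in bytes
    inferences_per_s  -- target inference throughput
    batch_size        -- inferences that share one weight fetch
    on_chip_fraction  -- fraction of weights held on chip (never re-fetched)
    """
    fetched = weight_bytes * (1.0 - on_chip_fraction)
    # Each fetch of the weights is amortized across the whole batch.
    return fetched * inferences_per_s / batch_size / 1e9

model = 25e6  # 25 MB of INT8 weights (hypothetical)
rate = 1000   # 1,000 inferences/second (hypothetical)

print(dram_bandwidth_gb_s(model, rate, batch_size=1))                        # ~25 GB/s
print(dram_bandwidth_gb_s(model, rate, batch_size=16))                       # ~1.6 GB/s
print(dram_bandwidth_gb_s(model, rate, batch_size=1, on_chip_fraction=0.9))  # ~2.5 GB/s
```

The last line illustrates the general idea behind the claim: keeping more data on chip can cut DRAM traffic at batch size = 1 without the latency penalty of batching. How NMAX actually achieves this will be covered in the November 1st details.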

Return to this page on November 1st for details on NMAX.