HARVARD SELECTS EFLX FOR DEEP LEARNING

Another Leader Adopts EFLX to Update RTL In-System at Any Time

Harvard University is a leader in Deep Learning research. Their recent ISSCC 2017 paper is available on their website.

Professor Gu-Yeon Wei, Harvard University

Harvard approached Flex Logix in February 2017 asking to use EFLX in their new deep learning chip. Deep learning algorithms evolve quickly: by the time a hard-wired chip appears in silicon, its algorithms are already out of date, which slows the pace of learning. With embedded FPGA, some of the algorithms can be implemented in reconfigurable logic so they can be updated and iterated in real time, leading to a faster pace of innovation.

Harvard selected EFLX both as the best embedded FPGA and because it was available in TSMC16FFC, the process they had selected for their next-generation deep learning chip.

Deep Learning (also known as AI, artificial intelligence, or machine learning) has applications in data centers, mobile, and IoT, and reconfigurable logic can improve the chips used in each of these applications.

In less than three months, Harvard integrated a 16K-LUT, 2x2 array of Gen2 EFLX4K IP cores (a mix of Logic and DSP cores) in TSMC16FFC into their new deep learning chip, which taped out in May 2017.

After evaluating their new chip, Harvard expects to publish their results, including the application and benefits of embedded FPGA, in a paper at a future conference.

The press release can be found HERE.
