Flex Logix

InferX software makes AI inference easy: it delivers more throughput on tough models at lower cost and lower power.

AI Inference Acceleration

Top throughput on tough models.
More throughput for less $ & fewer watts.

Learn More

eFPGA

Accelerate workloads
& make your SoC flexible for changing needs.

eFPGA proven on 6/7, 12, 16, 22, 28, 40 & 180nm.

Learn More

Inference Events

Edge Computing World - Europe - Virtual

Edge Computing World is the premier series of events for the edge computing space, recognized for bringing together the entire edge market for learning and networking. It is a forum for pioneering thought leaders and influencers in edge-native applications and infrastructure, working to accelerate the edge market.

Flex Logix™ Technology

PROGRAMMABLE INTERCONNECT

Inference and eFPGA are both data-flow architectures. A single inference layer can require over a billion multiply-accumulate operations. Our Reconfigurable Tensor Processor reconfigures its 64 TPUs and RAM resources to implement each layer with a full-bandwidth, dedicated data path, like an ASIC, then repeats this layer by layer. Flex Logix uses a breakthrough interconnect architecture that requires less than half the silicon area of a traditional mesh interconnect and fewer metal layers, while delivering higher utilization and higher performance. The ISSCC 2014 paper detailing this technology won the ISSCC Lewis Winner Award for Outstanding Paper. The interconnect continues to be improved, resulting in new patents.
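
As a loose illustration of that layer-by-layer flow, here is a minimal Python sketch of a reconfigure-then-execute loop. Everything in it (Layer, ReconfigurableArray, the toy operations) is hypothetical and stands in for proprietary InferX machinery; it models the idea, not the hardware.

```python
# Illustrative model only: all names are hypothetical. It mimics the
# reconfigure-then-execute flow described above, not the real hardware.
from dataclasses import dataclass
from typing import Callable, List

TPU_COUNT = 64  # one-dimensional tensor processors, per the description above

@dataclass
class Layer:
    name: str
    op: Callable[[List[float]], List[float]]  # stands in for a conv/matmul layer

class ReconfigurableArray:
    """Toy stand-in for the reconfigurable TPU + RAM fabric."""
    def configure(self, layer: Layer) -> None:
        # In hardware this would set up a dedicated, full-bandwidth
        # data path for the layer; here we just record the active op.
        self.active_op = layer.op

    def execute(self, activations: List[float]) -> List[float]:
        return self.active_op(activations)

def run_model(layers: List[Layer], activations: List[float]) -> List[float]:
    array = ReconfigurableArray()
    for layer in layers:        # reconfigure, execute, repeat layer by layer
        array.configure(layer)
        activations = array.execute(activations)
    return activations

# Example: two trivial "layers"
model = [Layer("scale", lambda x: [2 * v for v in x]),
         Layer("relu",  lambda x: [max(0.0, v) for v in x])]
print(run_model(model, [-1.0, 0.5, 3.0]))  # -> [0.0, 1.0, 6.0]
```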

SUPERIOR SCALABILITY

Our Inference and eFPGA architectures scale easily to deliver compute capacity of any size. Flex Logix achieves this with a patented tiling architecture: interconnects at the edges of the tiles automatically join neighboring tiles into a larger array.

TIGHTLY COUPLED SRAM AND COMPUTE

SRAM is closely coupled to our compute tiles through another patented interconnect. This coupling is key to inference efficiency: accessing local SRAM is roughly 100x more energy efficient than accessing external DRAM. The same interconnect is also useful for many eFPGA applications.
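
To put that efficiency claim in perspective, here is a back-of-envelope comparison using ballpark per-access energies widely cited in the literature (on the order of 5 pJ per 32-bit on-chip SRAM read versus roughly 640 pJ per 32-bit DRAM read, per Horowitz's ISSCC 2014 survey). These figures are assumptions for illustration, not Flex Logix measurements.

```python
# Back-of-envelope only. Energy-per-access figures are ballpark values
# from the literature (Horowitz, ISSCC 2014), not Flex Logix data.
SRAM_PJ_PER_32B_READ = 5.0     # on-chip SRAM, ~pJ per 32-bit read
DRAM_PJ_PER_32B_READ = 640.0   # off-chip DRAM, ~pJ per 32-bit read

reads = 1_000_000_000  # e.g., a billion operand fetches for a large layer
sram_mj = reads * SRAM_PJ_PER_32B_READ * 1e-12 * 1e3   # joules -> millijoules
dram_mj = reads * DRAM_PJ_PER_32B_READ * 1e-12 * 1e3

print(f"SRAM: {sram_mj:.1f} mJ, DRAM: {dram_mj:.1f} mJ "
      f"({DRAM_PJ_PER_32B_READ / SRAM_PJ_PER_32B_READ:.0f}x)")
# -> SRAM: 5.0 mJ, DRAM: 640.0 mJ (128x), consistent with the ~100x claim
```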

DYNAMIC TENSOR PROCESSOR

Our Dynamic Tensor Processor features 64 one-dimensional tensor processors closely coupled with SRAM. The tensor processors are dynamically reconfigurable at runtime through our proprietary interconnect, enabling the multi-dimensional tensor operations required by each layer of a neural network model and resulting in high utilization and high throughput.
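
As a loose analogy for how one-dimensional processors compose into a multi-dimensional operation, the sketch below expresses a matrix-vector product as independent 1-D dot products, one per lane. The lane mapping shown is illustrative only; the actual InferX scheduling is proprietary.

```python
# Loose analogy only: a 2-D matrix-vector product expressed as many
# independent 1-D dot products, one per "lane". The real InferX mapping
# of layers onto its 64 tensor processors is proprietary.
LANES = 64

def dot(row, vec):                       # the 1-D primitive a lane performs
    return sum(a * b for a, b in zip(row, vec))

def matvec(matrix, vec):
    # Each row is an independent 1-D job; a hardware scheduler would
    # spread these jobs across the 64 lanes and run them in parallel.
    return [dot(row, vec) for row in matrix]

A = [[1, 2], [3, 4], [5, 6]]
x = [10, 1]
print(matvec(A, x))  # -> [12, 34, 56]
```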

SOFTWARE

Unlike solutions designed around AI model development and training, our Inference toolchain starts with a trained ML model, typically in ONNX format, and generates a program that runs on our InferX accelerators.
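
That flow (trained ONNX model in, accelerator program out) can be pictured with the open-source onnx Python package, as in the sketch below. The onnx calls are real; the inferx_compile step and the model filename are hypothetical placeholders, since the actual InferX compiler interface is not described here.

```python
# Sketch of the compile flow described above. The `onnx` calls are real
# (pip install onnx); `inferx_compile` is a hypothetical placeholder for
# the vendor toolchain, whose actual API is not documented here.
import onnx

model = onnx.load("resnet50.onnx")       # start from a trained ONNX model
onnx.checker.check_model(model)          # validate the graph before compiling

# Inspect the layers the accelerator would be reconfigured for:
for node in model.graph.node:
    print(node.op_type, node.name)

def inferx_compile(model: onnx.ModelProto) -> bytes:
    """Hypothetical stand-in for the InferX compiler: ONNX in, program out."""
    raise NotImplementedError("vendor toolchain not modeled here")

# program = inferx_compile(model)  # would yield a program for the X1
```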

Our eFPGA compiler has been in use by dozens of customers for several years. Software drivers will be available for common server operating systems and for real-time operating systems on MCUs and FPGAs.

INFERX PCI EXPRESS AND M.2 OFFERINGS

The InferX X1 processor is in production and is available now in PCI Express (HHHL), M.2 (M+B key), and chip-level offerings.

SUPERIOR LOW-POWER DESIGN METHODOLOGY

Flex Logix has numerous architecture and circuit design technologies to deliver the highest throughput at the lowest power.

Featured Articles

Silicon Catalyst welcomes Flex Logix as an In-Kind Partner

Silicon Catalyst, the world’s only incubator focused exclusively on accelerating semiconductor solutions, is pleased to announce that Flex Logix® has joined as the newest member of its In-Kind Partner program (IKP). Portfolio companies in the Silicon Catalyst Incubator will have access to Flex Logix’s innovative embedded FPGA (eFPGA) IP and software, enabling silicon reconfigurability for use in their chip designs.

Flex Logix Partners With Roboflow to Enable Specialized AI Models for Computer Vision Applications

The availability of AI models optimized for the Flex Logix InferX accelerator enables edge device manufacturers to get to market quickly, reliably and affordably.

Speeding Up AI Algorithms

AI at the edge is very different from AI in the cloud. Salvador Alvarez, solution architect director at Flex Logix, talks about why a specialized inferencing chip with built-in programmability is more efficient and scalable than a general-purpose processor, why high-performance models are essential for getting accurate real-time results, and how low power and ambient temperatures can affect the performance and life expectancy of these devices.
