Abstract coming soon...
![](https://petcareinnovationusa.com/sites/default/files/styles/panopoly_image_square/public/speakers/jeremy_roberson_headshot_1.jpg?itok=dUBSU9-D&c=64a16bd42da57b9f1e43ade8f3cfd45f)
Jeremy Roberson
Director of Inference Software at Flex Logix. Jeremy earned his BSEE, MSEE, and PhD EE degrees from UC Davis, specializing in signal processing algorithms. He has worked on algorithms and hardware accelerator architectures for machine learning and signal processing in domains such as automatic speech recognition, object detection for biomedicine, and capacitive sensing systems, and holds several patents and publications in these areas. He has spent the last six years working on inference software for AI accelerators, first at Intel and now at Flex Logix.
Flex Logix
Website: http://www.flex-logix.com
Flex Logix's NMAX inference accelerators provide very high throughput and hardware utilization even at batch size = 1, with low system cost and low system power.
NMAX MAC utilization is over 50% even at batch size = 1. For most inference engines, utilization drops dramatically from batch 10 to batch 1 because they stall while loading weights. High MAC utilization means less silicon area is needed for the same throughput.
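To see why utilization drives silicon area, here is a back-of-envelope sketch. All specific numbers (target throughput, clock rate, the 10% figure for a stalled engine) are illustrative assumptions, not Flex Logix data; only the 50% batch-1 utilization figure comes from the text above.

```python
# Sketch: how MAC utilization translates into the number of MAC units
# (and hence silicon area) needed for a fixed throughput target.
# TARGET, CLOCK, and the 10% "stalled engine" figure are assumptions.

def required_macs(target_ops_per_s, clock_hz, utilization):
    """MAC units needed to sustain target_ops_per_s.

    Each MAC performs 2 ops (multiply + add) per cycle when active;
    utilization is the fraction of cycles a MAC does useful work.
    """
    ops_per_mac = 2 * clock_hz * utilization
    return target_ops_per_s / ops_per_mac

TARGET = 4e12   # 4 TOPS target throughput (assumed)
CLOCK = 1e9     # 1 GHz clock (assumed)

macs_high = required_macs(TARGET, CLOCK, 0.50)  # 50% utilization at batch 1
macs_low = required_macs(TARGET, CLOCK, 0.10)   # 10%: a weight-stalled engine

print(f"MACs needed at 50% utilization: {macs_high:,.0f}")
print(f"MACs needed at 10% utilization: {macs_low:,.0f}")
print(f"Area ratio (10% vs 50%): {macs_low / macs_high:.0f}x")
```

Under these assumptions, a design stalling down to 10% utilization needs 5x the MAC array of one sustaining 50%, which is the area argument the paragraph makes.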
NMAX uses proprietary Flex Logix interconnect technology to exploit local, distributed SRAM very efficiently, generating very high local bandwidth and reducing DRAM bandwidth requirements to that of 1 or 2 LPDDR4 DRAMs, even for YOLOv3 at 30 frames/second. This lowers both power and cost.
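A rough estimate shows why DRAM bandwidth is the pressure point for a model like YOLOv3. The parameter count and INT8 assumption below are illustrative public figures, not Flex Logix data; the sketch only bounds the weight traffic if weights had to be streamed from DRAM every frame.

```python
# Back-of-envelope: DRAM weight traffic if an accelerator re-reads all
# YOLOv3 weights from DRAM once per frame at 30 fps.
# Parameter count and INT8 weights are assumptions for illustration.

YOLO_V3_PARAMS = 62e6    # ~62M parameters (approximate public figure)
BYTES_PER_WEIGHT = 1     # INT8 inference assumed
FPS = 30

weight_bw = YOLO_V3_PARAMS * BYTES_PER_WEIGHT * FPS  # bytes/second
print(f"Weight traffic alone: {weight_bw / 1e9:.2f} GB/s")

# Activation traffic comes on top of this, and sustained DRAM bandwidth
# is well below the theoretical peak. Serving most of this traffic from
# distributed on-chip SRAM is what lets a design get by with only
# 1 or 2 LPDDR4 DRAMs.
```

The point of the sketch: even the weight stream alone is on the order of 2 GB/s at 30 fps, so keeping weights and intermediate activations in on-chip SRAM is what makes a 1-2 DRAM system feasible.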
NMAX will be available in TSMC 16FFC/12FFC in mid-2019.