PRO TALK (AI): Skip the Zeros – Increase Your Deep Network Performance by over 10x

AI DevWorld -- PRO Stage 4

Lawrence Spracklen
Numenta, Director of Machine Learning Architecture

Dr. Lawrence Spracklen is a seasoned leader with over two decades of experience developing and delivering cutting-edge solutions. At Numenta, Lawrence leads the machine learning architecture team, which focuses on the intersection of AI and hardware. Prior to joining Numenta, he led research and development teams at several other AI startups: RSquared, SupportLogic, Alpine Data and Ayasdi. Before that, he spent over a decade at Sun Microsystems, Nvidia and VMware, where he led teams focused on hardware architecture, software performance and scalability. Lawrence holds a Ph.D. in Electronics Engineering from the University of Aberdeen and a B.Sc. in Computational Physics from the University of York, and has been issued over 65 US patents.


In recent years, interest in sparse neural networks has steadily increased, accelerated by NVIDIA’s inclusion of dedicated hardware support in its recent Ampere GPUs. Sparse networks feature both limited interconnections between neurons and restrictions on the number of neurons permitted to become active at once. Introducing this weight and activation sparsity significantly simplifies the computations required to both train and run the network. Sparse networks can match the accuracy of their traditional ‘dense’ counterparts while having the potential to outperform them by an order of magnitude or more. In this presentation, we start by discussing the opportunity associated with sparse networks and provide an overview of the state-of-the-art techniques used to create them. We conclude by presenting new software algorithms that unlock the full potential of sparsity on current hardware platforms, highlighting 100X speedups on FPGAs and 20X speedups on CPUs and GPUs.
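To make the two forms of sparsity concrete, the following is a minimal NumPy sketch of one sparse layer: a fixed binary mask zeroes most of the weights (weight sparsity), and a k-winners step keeps only the k largest activations (activation sparsity). The layer sizes, sparsity levels, and function names are illustrative assumptions, not Numenta's implementation.

import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(x, weights, mask, k):
    # Masked matmul: `mask` zeroes most connections (weight sparsity).
    # A true sparse kernel would skip these zeros; NumPy still computes them.
    z = x @ (weights * mask)
    # k-winners: keep only the k largest pre-activations (activation sparsity),
    # then apply a ReLU so surviving units are non-negative.
    kth = np.partition(z, -k)[-k]
    return np.where(z >= kth, np.maximum(z, 0.0), 0.0)

n_in, n_out = 512, 512                    # illustrative sizes (assumptions)
weights = 0.05 * rng.standard_normal((n_in, n_out))
mask = (rng.random((n_in, n_out)) < 0.1).astype(weights.dtype)  # ~90% weight sparsity
x = rng.standard_normal(n_in)

y = sparse_layer(x, weights, mask, k=26)  # roughly 5% of units active
print(f"nonzero activations: {np.count_nonzero(y)} of {n_out}")

Note that on dense hardware this masked multiply still performs the full computation; the speedups quoted in the talk come from kernels and hardware that actually skip the zeroed weights and inactive units.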