6:30 - 7:00 PM Networking & Refreshments
The event is free; refreshments are $5.
Speaker: Hai "Helen" Li
Bio: Hai "Helen" Li received the B.S. and M.S. degrees from Tsinghua University, Beijing, China, and the Ph.D. degree from the Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, in 2004. She is currently the Clare Boothe Luce Associate Professor in the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. She was previously with Qualcomm Inc., San Diego, CA, USA; Intel Corporation, Santa Clara, CA, USA; Seagate Technology, Bloomington, MN, USA; the Polytechnic Institute of New York University, Brooklyn, NY, USA; and the University of Pittsburgh, Pittsburgh, PA, USA. She has authored or co-authored over 200 technical papers published in peer-reviewed journals and conferences and holds 70+ granted U.S. patents. She authored a book entitled Nonvolatile Memory Design: Magnetic, Resistive, and Phase Changing (CRC Press, 2011). Her current research interests include memory design and architecture, neuromorphic architecture for brain-inspired computing systems, and architecture/circuit/device cross-layer optimization for low power and high performance. Dr. Li serves as an Associate Editor of TVLSI, TCAD, TODAES, TMSCS, TECS, CEM, and IET Cyber-Physical Systems: Theory & Applications. She has served as an organizing committee and technical program committee member for over 30 international conference series. She received the NSF CAREER Award (2012), the DARPA YFA Award (2013), and the TUM-IAS Hans Fischer Fellowship (2017), along with seven best paper awards and seven additional best paper nominations. Dr. Li is a Senior Member of IEEE and a Distinguished Member of ACM.
Abstract: Following technology advances in high-performance computing systems and the rapid growth of data acquisition, machine learning, especially deep learning, has achieved remarkable success in many research areas and applications. This success is enabled, to a great extent, by large-scale deep neural networks (DNNs) that learn from huge volumes of data. Deploying such big models, however, is both computation-intensive and memory-intensive. Although hardware acceleration for neural networks has been extensively studied, progress in hardware development still falls far behind the upscaling of DNN models at the software level. We envision that hardware/software co-design is necessary for accelerating deep neural network performance. In this talk, I will start with the trends of machine learning research in academia and industry, followed by our study on how to run sparse and low-precision neural networks, demonstrating the interactive play between software and hardware.