Spiking Neural Networks (SNNs) have recently emerged as an alternative to conventional deep learning because of their substantial energy-efficiency benefits on neuromorphic hardware. In this presentation, I will discuss important techniques for training SNNs that yield large gains in latency, accuracy, and even robustness. We will first delve into a recently proposed method, Batch Normalization Through Time (BNTT), that allows us to train SNNs from scratch with very low latency and enables us to target interesting applications like video segmentation, as well as scenarios beyond traditional learning, such as federated training. Then, I will discuss novel architectures with temporal feedback connections, discovered for SNNs via neural architecture search, that further lower latency, improve energy efficiency, and point to interesting temporal effects. Finally, I will delve into the hardware perspective of SNNs when implemented on standard CMOS and compute-in-memory accelerators, using our recently proposed SATA and SpikeSim tools. It turns out that the multi-timestep computation in SNNs can lead to extra memory overhead and repeated DRAM accesses that negate all the compute-sparsity advantages. I will highlight techniques such as membrane-potential sharing and early time-step exit that exploit the temporal dimension in SNNs to reduce this overhead.
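To make the BNTT idea concrete, the sketch below shows a leaky integrate-and-fire (LIF) layer where each timestep gets its own batch-normalization statistics and learnable scale, rather than sharing one set across time. This is a minimal conceptual illustration, not the speaker's implementation; the threshold, leak factor, and timestep count are illustrative assumptions.

```python
import numpy as np

T = 4            # number of timesteps (illustrative)
BATCH, DIM = 8, 16
THRESHOLD = 1.0  # firing threshold (assumed value)
LEAK = 0.9       # membrane leak factor (assumed value)

rng = np.random.default_rng(0)
x = rng.normal(size=(T, BATCH, DIM))  # per-timestep input currents
gamma = np.ones((T, DIM))             # BNTT: a separate learnable scale
                                      # for every timestep t

def bntt_lif_forward(x, gamma, eps=1e-5):
    """Forward pass of a LIF layer with timestep-specific batch norm."""
    mem = np.zeros((BATCH, DIM))      # membrane potential
    spikes = []
    for t in range(T):
        # Normalize this timestep's pre-activation with its own
        # batch statistics and its own scale gamma[t] (the core of BNTT).
        mu = x[t].mean(axis=0)
        var = x[t].var(axis=0)
        xhat = gamma[t] * (x[t] - mu) / np.sqrt(var + eps)
        # Leaky integrate-and-fire dynamics.
        mem = LEAK * mem + xhat
        spk = (mem >= THRESHOLD).astype(float)
        mem = mem * (1.0 - spk)       # hard reset after a spike
        spikes.append(spk)
    return np.stack(spikes)           # (T, BATCH, DIM) binary spike trains

out = bntt_lif_forward(x, gamma)
```

Because the per-timestep scales can shrink toward zero at late timesteps, they also offer a signal for when inference can stop early, which connects to the latency reductions mentioned above.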
Priya Panda is an assistant professor in the electrical engineering department at Yale University, USA. She received her B.E. and Master's degrees from BITS Pilani, India in 2013 and her PhD from Purdue University, USA in 2019. During her PhD, she interned at Intel Labs, where she developed large-scale spiking neural network algorithms for benchmarking the Loihi chip. She is the recipient of the 2019 Amazon Research Award, the 2022 Google Research Scholar Award, and the 2022 DARPA Riser Award. Her research interests lie in neuromorphic computing, energy-efficient accelerators, and in-memory processing.
The Applied AI/ML Seminar Series is presented with the goal of increasing communication and collaboration between scientists at the three facilities (the Advanced Photon Source (APS), the Argonne Tandem Linac Accelerator System (ATLAS), and the Center for Nanoscale Materials (CNM)) and providing a resource for both new and experienced AI/ML practitioners at Argonne National Laboratory. We plan to host a monthly seminar and tutorial series. Please see the group website for scheduling and resources: https://appliedai-anl.github.io.
Microsoft Teams meeting
Meeting ID: 210 677 756 299
Or call in (audio only)
+1 630-556-7958,312050584# United States, Big Rock
Phone Conference ID: 312 050 584#