Talk: Dr. Anup Das "Next Generation Neuromorphic Computing: Memory-Efficient Bio-Inspired Scalable Architectures"
Short Speaker Bio: Dr. Anup Das is an Associate Professor in the Department of Electrical and Computer
Engineering at Drexel University, where he also serves as Associate Department Head for Graduate Affairs. He received a Ph.D. in
Embedded Systems from the National University of Singapore in 2014. Following his Ph.D., he was a
postdoctoral fellow at the University of Southampton and a researcher at IMEC, Netherlands. He received
the National Science Foundation CAREER Award in 2020 and the Department of Energy Early Career
Award in 2021 to investigate the reliability and security of neuromorphic hardware. His research interests are
neuromorphic computing and architectural exploration.
Abstract: Spiking Neural Networks (SNNs) are biologically inspired neural networks that mimic brain
computations. SNN accelerators (known as neuromorphic hardware) combine spike-based neural
computations and synaptic weight storage in the same physical location, addressing traditional
computing’s memory bandwidth limitations. In this talk, we will present solutions to two key challenges in
neuromorphic hardware design and discuss how they open future research directions, including in robotics.
Challenge #1 (Memory Interface): While neuromorphic hardware achieves energy efficiency through
discrete spike processing and weight pruning, naive storage of sparse matrices leads to significant
memory waste, with approximately 60% of memory resources allocated to zero-weight synapses. We
introduce a novel sparse hashing approach that optimizes nonzero weight storage and retrieval, reducing
memory over-provisioning by 3.2x, memory access latency by 77%, and data bus bandwidth waste by
68% compared to traditional dense storage, with minimal architectural changes.
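To make the storage idea concrete, the sketch below keeps only nonzero synapses in a hash map keyed by (pre-neuron, post-neuron) index pairs, rather than a dense weight matrix. This is a minimal software illustration of the general principle behind hash-based sparse storage; it is not the hardware scheme presented in the talk, and all names and numbers are illustrative.

```python
def dense_to_sparse(weights):
    """Store only the nonzero entries of a dense weight matrix in a hash map."""
    sparse = {}
    for pre, row in enumerate(weights):
        for post, w in enumerate(row):
            if w != 0:
                sparse[(pre, post)] = w
    return sparse

def lookup(sparse, pre, post):
    """Expected constant-time retrieval of a synaptic weight (0 if pruned)."""
    return sparse.get((pre, post), 0)

# Example: a heavily pruned 4x4 weight matrix.
dense = [
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.2],
    [0.0, 0.0, 0.0, 0.0],
    [0.9, 0.0, 0.0, 0.0],
]
sparse = dense_to_sparse(dense)
print(len(sparse))           # 3 entries stored instead of 16
print(lookup(sparse, 3, 0))  # 0.9
```

With pruning ratios like the ~60% zero-weight figure quoted above, storing only the nonzero entries (plus their keys) is what eliminates the memory over-provisioning; the hardware contribution lies in doing the hashing and retrieval efficiently in the memory interface.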
Challenge #2 (Interconnect Bottleneck): Current tile-based neuromorphic architectures use a mesh-
based Network-on-Chip (NoC) for inter-tile communication, leading to congestion that increases energy
consumption and degrades accuracy. Our Dynamic Segmented Bus interconnect partitions bus lanes into
segments, connected via novel three-way segmentation switches. Combined with our intelligent
workload-aware compiler, this interconnect reduces switch area by 20x, interconnect energy by 6.2x, and
latency by 23% compared to a mesh NoC.
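The intuition behind segmentation can be sketched with a toy energy model: on a monolithic bus every transfer drives the full wire length, while on a segmented bus a transfer only activates the segments between source and destination tile. The model and numbers below are illustrative assumptions, not the talk's evaluation methodology.

```python
def full_bus_energy(transfers, n_segments, e_per_segment=1.0):
    """Monolithic bus: every transfer toggles all segments of the bus."""
    return len(transfers) * n_segments * e_per_segment

def segmented_bus_energy(transfers, e_per_segment=1.0):
    """Segmented bus: each transfer only activates the segments it traverses."""
    return sum(abs(dst - src) * e_per_segment for src, dst in transfers)

# Tiles 0..7 on an 8-segment bus, with mostly nearest-neighbor traffic,
# as a workload-aware compiler would try to arrange.
transfers = [(0, 1), (2, 3), (5, 4), (6, 7), (1, 3)]
print(full_bus_energy(transfers, n_segments=8))  # 40.0
print(segmented_bus_energy(transfers))           # 6.0
```

The gap between the two figures grows with traffic locality, which is why pairing the segmented interconnect with a workload-aware compiler (one that maps heavily communicating neuron groups to nearby tiles) matters.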
Future Outlook: While current SNNs primarily use integrate-and-fire neurons, we explore two advances:
dendritic computations for solving classical computer science problems like dynamic programming, and
astrocyte-inspired designs for fault-tolerant computing. These developments could enable more robust
and large-scale neuromorphic computing in key areas such as robotics, enabling parallel sensory
processing and natural responses to the environment, similar to biological systems.
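For readers unfamiliar with the integrate-and-fire neurons mentioned above, the sketch below is the textbook discrete-time leaky integrate-and-fire (LIF) model: the membrane potential leaks, integrates input current, and emits a spike (then resets) on crossing a threshold. Parameter values are arbitrary illustrations, not those of any model from the talk.

```python
def lif_simulate(input_current, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Return the spike times produced by a sequence of input currents."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in   # leaky integration of the input current
        if v >= v_thresh:     # threshold crossing produces a spike
            spikes.append(t)
            v = v_reset       # reset the membrane potential
    return spikes

# A constant input drives periodic spiking.
print(lif_simulate([0.4] * 10))  # [2, 5, 8]
```

Dendritic and astrocyte-inspired extensions add computation beyond this point-neuron model: dendrites contribute local nonlinear processing per branch, while astrocyte-inspired mechanisms monitor and repair faulty neural activity.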