Date of Award
Doctor of Philosophy (PhD)
Mechanical Engineering & Applied Mechanics
Human brains demonstrate how simple computational primitives, combined in massively parallel fashion, can produce networks capable of identifying complicated patterns in sensory data. In contrast, electronic computers adopt hardware architectures that process information serially, incurring higher latency and power consumption when implementing intrinsically parallel algorithms such as neural networks. This software-hardware architectural mismatch has attracted greater attention with the widespread adoption of large neural networks and has encouraged the prospect of specialized neuromorphic computers. There is great interest in low-latency analog neuromorphic designs that use passive crossbar arrays to accomplish the dual tasks of storing synaptic weights and computing dot products. Although this compute-in-memory paradigm promises high circuit density and 3D integrability, prevalent implementations pair such crossbars with crossbar-incompatible CMOS neurons, a pairing that impedes overall system scalability. This thesis addresses this bottleneck by developing fully crossbar-compatible neuromorphic architectures based on passive circuit embodiments of neurons and synapses. We demonstrate via SPICE circuit simulations how a shallow network of diode-resistor passive neurons and resistive voltage summers, despite its inherent inability to buffer, amplify, or invert signals, can recognize MNIST digits with 95.4% accuracy. We introduce weight-to-conductance mappings that enable negative weights to be implemented in hardware without excessive memory overhead. The performance impacts of nanoscale defects are evaluated, and methods to boost fault tolerance are proposed. Compared with conventional implementations, all-passive neuromorphic hardware promises higher speed, smaller footprints, and improved vertical scalability.
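The crossbar idea above can be illustrated numerically: column currents in a resistive array realize a dot product via Ohm's and Kirchhoff's laws, and a shift-based affine mapping lets signed weights be stored as non-negative conductances with only one extra bias term. This is a minimal sketch for intuition only; the affine shift scheme, the conductance range, and all function names are illustrative assumptions, not the dissertation's actual mapping.

```python
import numpy as np

# Assumed device conductance range (siemens) -- illustrative values.
G_MIN, G_MAX = 1e-6, 1e-4

def weights_to_conductances(W):
    """Affine-map signed weights W into the physical range [G_MIN, G_MAX].

    Adding |W|.max() shifts all weights to be non-negative; the shift's
    contribution is removed later using a single known correction term.
    """
    w_abs = np.abs(W).max()
    scale = (G_MAX - G_MIN) / (2 * w_abs)
    G = G_MIN + scale * (W + w_abs)
    return G, scale, w_abs

def crossbar_dot(V, G, scale, w_abs):
    """Model ideal column currents I = G^T V, then undo the affine shift.

    In a physical crossbar, the current summed along each output column
    is the conductance-weighted sum of the input voltages.
    """
    I = G.T @ V                                   # physical column currents
    shift = (G_MIN + scale * w_abs) * V.sum()     # uniform-shift contribution
    return (I - shift) / scale                    # recovered signed dot product
```

In this toy model the recovered output equals `W.T @ V` exactly; real devices add nonidealities (wire resistance, device variation) that the dissertation's defect analysis addresses.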
As a validation of our proposition, we showcase a non-programmable all-passive XOR classifier built using graphitic synapses and AZO-PEDOT:PSS p-n junction diodes. In addition to achieving 100% classification accuracy, this physical instantiation highlights how simple all-passive hardware design enables facile fabrication methods. As an important step toward learning on the fly, we developed a water-based conductive-bridge random-access memory that showed good non-volatility (1.5 hours), cyclability (100 cycles), and a large ON-to-OFF resistance ratio (~100). By demonstrating the prospect of entirely dissipative and crossbar-compatible neuromorphic hardware, this thesis lays the foundation for the design of “mostly-passive” deep learning hardware and fascinating wet approaches to realizing brain-like networks.
Ananthakrishnan, Akshay, "All-Passive Hardware Architectures For Neuromorphic Computation" (2020). Publicly Accessible Penn Dissertations. 3952.