Description
Modern machine learning (ML) algorithms are power-hungry. In many cases, the outputs of neural network models are computed through costly matrix-vector multiplications. As datasets grow, ML algorithms must perform increasingly complex operations, further driving up power consumption. This strains both the hardware and the supporting power grid, potentially shortening their lifetimes and increasing how often they must be cooled or repaired. Consequently, demand for low-power neural networks has soared.
To address this problem, our team is implementing a neural network classifier using analog circuit components. Performing vector-matrix multiplications in the analog domain requires a Vector-Matrix Multiplier (VMM) circuit. The VMM uses Kirchhoff's Current Law (KCL) and current mirrors to implement scalar addition and multiplication: summing currents at a node via KCL consumes no additional power, and the operations are inherently parallel. The current mirrors will operate in the subthreshold region to achieve ultra-low power consumption. We will also implement a Winner-Take-All (WTA) neuromorphic circuit that functions as a current comparator, classifying the input signal into one of k classes. The VMM+WTA architecture will be implemented on the System-on-Chip Field Programmable Analog Array (SoC FPAA), a platform for rapidly programming and prototyping analog systems on modular hardware. The team will evaluate the performance of the VMM+WTA by classifying input audio signals into pre-defined categories.
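To make the computation concrete, below is a minimal behavioral sketch in Python/NumPy of what the VMM+WTA pipeline computes mathematically: a weighted current summation followed by a largest-current comparison. It is not a circuit simulation, and the weight matrix W, input vector x, and k = 3 class count are hypothetical placeholders, not values from the project.

```python
import numpy as np

def vmm(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Behavioral VMM: each output is a KCL-style sum of mirror-scaled
    input currents, i.e., an ordinary matrix-vector product."""
    return weights @ x

def winner_take_all(currents: np.ndarray) -> int:
    """Behavioral WTA: acting as a current comparator, the class whose
    output current is largest wins."""
    return int(np.argmax(currents))

# Hypothetical example: classify a 4-element input into k = 3 classes.
rng = np.random.default_rng(0)
W = rng.uniform(size=(3, 4))          # placeholder weight matrix
x = np.array([0.2, 0.7, 0.1, 0.4])    # placeholder input "currents"
print(f"predicted class: {winner_take_all(vmm(W, x))}")
```

In the analog implementation, the matrix-vector product above is realized physically: current mirrors scale the input currents (multiplication) and KCL sums them at each output node (addition), so the WTA stage only has to compare the resulting output currents.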