Facial Recognition Using Electronic Synapses: A New Discovery by Huaqiang Wu from Oasis Publishers
New York, NY, November 21, 2018 --(PR.com)-- Machine learning is evolving rapidly and ushering in a new wave of technological change. Memristor-enabled neuromorphic computing machines are seen as an alternative to conventional computing technology.
Cognitive computing lets machines perform a multitude of tasks that require intelligence, ranging from real-time big-data analytics and visual recognition to navigating city streets in a self-driving car. At present these tasks are handled by conventional central processing units, graphics processing units and other application-specific integrated hardware. These are based on the von Neumann computing architecture, which comprises separate memory and computing units. Because of this inherent constraint, such machines generally require large amounts of energy and incur high latency to complete data-centric intelligent tasks.
Huaqiang Wu and his team set out to resolve the low computing efficiency of conventional machines by using a novel device, resistive random-access memory (RRAM). The RRAM device is a promising electronic synapse: it emulates synaptic behaviour by continuously increasing or decreasing its conductance in response to voltage stimuli. RRAM-based crossbars enable a neuromorphic computing paradigm in which computation happens where the data is stored, much as the human brain stores and computes information in the same place, at its synapses. In this way, RRAM-enabled neuromorphic computing performs vector-matrix multiplication efficiently, without frequent data shuttling, and has the potential to greatly reduce the energy consumed by recognition tasks.
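To make the in-memory vector-matrix multiplication concrete, here is a minimal sketch (not the authors' code): input voltages applied to the rows of a crossbar, cell conductances acting as the matrix, and currents summing on each column to form the output. The array organisation and all values are illustrative assumptions.

```python
# Sketch of analogue vector-matrix multiplication in an RRAM crossbar.
# Each cell contributes a current I = G * V (Ohm's law); currents summing
# on a column (Kirchhoff's current law) give one output element, so the
# whole multiplication is done in a single read step.
import numpy as np

rows, cols = 128, 8                                # one possible 1,024-cell organisation (assumed)
G = np.random.uniform(1e-6, 1e-4, (rows, cols))    # cell conductances (siemens)
V = np.random.uniform(0.0, 0.2, rows)              # read voltages on the rows (volts)

I = V @ G                                          # column output currents (amperes), shape (cols,)
print(I)
```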
Wu's team further noted that, compared with conventional complementary metal-oxide-semiconductor (CMOS) technology, the RRAM device is an energy- and area-efficient electronic device. On-chip weight storage based on static random-access memory generally consumes a large unit area, and off-chip weight storage incurs more than 100 times the power consumption of on-chip memory. By contrast, a single RRAM device can act as a weight unit and promises to scale down to a 2 nm feature size. With 3D integration of RRAM devices it may even be possible to reach the density of biological synapses in the human brain. A large number of synaptic weights is essential for solving complex problems.
At the same time, RRAM-based neuromorphic computing faces severe challenges. Device variance makes it difficult to fabricate a uniform crossbar with reliable bidirectional, continuous conductance tuning. In addition, non-ideal device factors have restricted demonstrations of the new computing paradigm to easy tasks on relatively small crossbars.
The team of scientists therefore needed a suitable device structure to address these issues. They optimized the RRAM stack through device engineering with CMOS fab-friendly materials: a conductive metal-oxide layer was inserted to improve bidirectional analog switching, and the resistive switching layer was changed into a laminate structure. A one-transistor-one-RRAM (1T1R) cell was ultimately adopted as the basic synaptic weight to suppress sneak paths and improve conductance modulation. A 1,024-cell electronic-synapse crossbar array was then fabricated with reliable bidirectional analog switching behaviour, paving the way for monolithic integration of larger RRAM crossbars for real-world applications.
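The role of a 1T1R cell as a tunable weight can be illustrated with a small sketch. This models bidirectional conductance tuning with identical potentiation and depression pulses; the exponential update model and every constant are assumptions for illustration, not the fabricated device's measured characteristics.

```python
# Illustrative model of pulse-programmed conductance in a 1T1R synaptic cell.
# Potentiation (SET) pulses push the conductance toward G_MAX, depression
# (RESET) pulses push it toward G_MIN; each pulse gives a smaller step as
# the boundary is approached (a simple nonlinearity assumption).
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4    # assumed conductance range (siemens)
NONLINEARITY = 3.0           # assumed update nonlinearity factor
STEP = 1 - np.exp(-1.0 / NONLINEARITY)

def potentiate(g, n_pulses=1):
    """Apply SET pulses: conductance rises toward G_MAX."""
    for _ in range(n_pulses):
        g += (G_MAX - g) * STEP
    return g

def depress(g, n_pulses=1):
    """Apply RESET pulses: conductance falls toward G_MIN."""
    for _ in range(n_pulses):
        g -= (g - G_MIN) * STEP
    return g

g = G_MIN
g = potentiate(g, n_pulses=20)   # programme the weight up
g = depress(g, n_pulses=5)       # then partially back down
print(f"tuned conductance: {g:.2e} S")
```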
With the fabricated 1,024-cell array, the team then successfully demonstrated a fully connected artificial neural network that recognizes face images from the Yale Face Database. Two programming schemes were proposed to realize different learning rules. The grey-scale face classification experiment achieved software-comparable accuracy with parallel online training, and even when tested with unseen face images containing up to 31.25% noise, the accuracy was approximately equivalent to that of a standard computing system. This was a notable success.
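The structure of such a network can be sketched in a few lines. This is a software illustration under stated assumptions, not the on-chip experiment: it uses synthetic data in place of the Yale face images and a simple delta-rule update in place of the paper's programming schemes, just to show how one crossbar of weights classifies grey-scale input vectors.

```python
# Minimal single-layer fully connected classifier of the kind that can be
# mapped onto a 1,024-cell weight array (e.g. 320 pixel inputs x 3 classes,
# an assumed organisation). Synthetic data stands in for face images.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 320, 3
X = rng.random((90, n_pixels))             # stand-in for grey-scale face vectors
y = rng.integers(0, n_classes, 90)         # stand-in class labels
T = np.eye(n_classes)[y]                   # one-hot targets

W = rng.normal(0, 0.01, (n_pixels, n_classes))   # synaptic weight matrix
lr = 0.05
for epoch in range(50):
    out = np.tanh(X @ W)                   # forward pass: one crossbar multiplication + neuron
    W += lr * X.T @ (T - out)              # delta-rule weight update (illustrative learning rule)

pred = np.tanh(X @ W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```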
Apart from the high recognition accuracy, the group also evaluated the energy cost. The energy consumed within the analogue synapses per iteration is about 1,000 times (20 times) lower than that of an implementation on an Intel Xeon Phi processor with off-chip memory (with hypothetical on-chip digital resistive random-access memory).
The exceptional performance of the neuromorphic network resulted mainly from the reliable RRAM crossbar array. This experimental prototype is a convincing example of a neuromorphic computing system based on an RRAM electronic-synapse array, and it should be possible to integrate RRAM crossbars with conventional CMOS circuits to handle intelligent recognition tasks using more complex deep neural networks.
Prof. Huaqiang Wu is the corresponding author of this work, and PhD student Peng Yao is the first author. Summarising the study, they and their team noted that some technical issues still require attention before a larger network can achieve classification accuracy comparable to the state of the art.
The research paper is bound to ignite curious minds. Several companies already use face classification, and research in this field was much needed. The work also opens new challenges and should prompt further discussion of face classification in relation to machine learning.
For more information:
doi: 10.1038/ncomms15199 (2017).
Contact
Oasis Publishers
Huaqiang Wu
+1-646-751-8810
www.oasispub.org