The human brain is one of the most complex and efficient systems nature has created. It holds seemingly limitless potential and is responsible for every innovation and discovery humankind prides itself on. Its ability to support rich cognition at low energy consumption and high computational speed has inspired scientists for years. Imagine a working simulation of a brain: the potential it could unlock, its efficiency, its range of applications. This may sound like a distant dream, but in reality it is not far away. Neuromorphic computing is a field with the potential to open the door to many of these possibilities.
What Is Neuromorphic Computing?
Neuromorphic computing uses neural networks to parallel how the brain performs its functions, including making decisions and memorizing information. “Neuromorphic” can be translated as “taking the form of the brain”: a neuromorphic device ideally has functions analogous to parts of the human brain.
AI traditionally uses rules and logic to draw reasoned conclusions within a specific, well-defined domain. This first-generation AI proved its worth in monitoring, reading data, and performing predefined tasks reliably. The second generation integrated deep learning, which can sense, analyse, and perceive.
Abstraction is the next big leap for AI. Interpretation and adaptation, alongside finding context and building understanding, will open up a whole new world of possibilities. Neuromorphic computing uses probabilistic computing to deal with the uncertainty and ambiguity involved in real-time learning and decision making.
Origin of Neuromorphic Computing:
Surprisingly, the concept of a neuromorphic chip goes back to 1990, when Caltech professor Carver Mead suggested that analog chips could mimic the electrical activity of neurons and synapses in the brain. Conventional chips keep transmissions at a fixed voltage, and power dissipation becomes a serious problem when today’s systems perform complex machine learning tasks. Neuromorphic chips, in comparison, would have far lower energy consumption: to achieve this, the chip would be event driven, operating only when needed.
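To make the “only operate when needed” idea concrete, here is a toy Python sketch (an illustration only, not the design of any actual chip): a clock-driven update visits every neuron on every tick, while an event-driven update only does work where a spike actually arrives. The network layout and numbers are made-up assumptions.

```python
# Toy comparison of clock-driven vs. event-driven updates.
# The network layout and numbers are illustrative assumptions only.
NUM_NEURONS = 8
SYNAPSES = {0: [3, 7], 1: [2], 2: [5, 6], 3: [], 4: [0], 5: [], 6: [1], 7: []}

def clock_driven_work(spiking_neurons):
    """Conventional style: every neuron is visited on every tick,
    whether or not anything happened."""
    return NUM_NEURONS

def event_driven_work(spiking_neurons):
    """Neuromorphic style: work is done only along synapses that
    actually carry a spike on this tick."""
    return sum(len(SYNAPSES[src]) for src in spiking_neurons)

spiking_now = [0, 2]  # only two neurons fire on this tick
print("clock-driven updates:", clock_driven_work(spiking_now))  # 8
print("event-driven updates:", event_driven_work(spiking_now))  # 4
```

The sparser the activity, the bigger the gap between the two, which is where the energy savings of an event-driven chip come from.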
Several companies have invested in this brain-inspired computing. Qualcomm made strides in the field in 2014 by demonstrating a robot driven by a neuromorphic chip, which performed tasks that would otherwise have required a specially programmed computer. IBM’s SyNAPSE chip, also introduced in 2014, has a neuromorphic-style architecture and consumes an incredibly low 70 mW of power. These companies continue to explore the technology for research purposes.
In 2012, Intel proposed a design in which magnetic tunnel junctions act like the cell body of the neuron and domain wall magnets act as the synapses. A CMOS detection and transmission unit plays the role of the biological neuron’s axon, which transmits electrical signals.
Advantages:
Aside from low power consumption, neuromorphic devices are better at tasks that need pattern matching, such as those in self-driving cars, and they cater to applications that require the imitation of ‘thinking’. The architecture has a high degree of parallelism and can handle deep memory hierarchies in a uniform way. Today, parallelizing a task across a cluster to tackle more complicated neural network problems typically scales out to only a few hundred nodes; neuromorphic computing would unlock a much higher degree of scalability.
Research Focuses
The key challenge in research is matching the brain’s flexibility and ability to learn while approaching its energy efficiency. Spiking neural networks (SNNs) are a viable model that mimics the natural neural networks in the human brain, with the components of a neuromorphic system logically analogous to neurons. By encoding information in the signals themselves and in their timing, SNNs simulate the learning process by dynamically remapping synapses.
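As a rough illustration of how a spiking neuron carries information in the timing of its output, here is a minimal leaky integrate-and-fire simulation in Python. All constants (time step, membrane time constant, threshold, input current) are illustrative assumptions, not parameters of any real neuromorphic hardware.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates incoming current, leaks back toward rest, and emits a spike
# whenever it crosses a threshold. All constants are illustrative.
dt = 1.0        # time step (ms)
tau_m = 20.0    # membrane time constant (ms)
v_rest = 0.0    # resting potential (arbitrary units)
v_thresh = 1.0  # spike threshold
v_reset = 0.0   # potential right after a spike

rng = np.random.default_rng(0)
steps = 200
input_current = 0.06 + 0.02 * rng.standard_normal(steps)  # noisy drive

v = v_rest
spike_times = []
for t in range(steps):
    # Leak toward rest and integrate the input current
    v += (-(v - v_rest) + input_current[t] * tau_m) * (dt / tau_m)
    if v >= v_thresh:
        spike_times.append(t)  # the information lives in these spike times
        v = v_reset            # reset after firing

print(f"{len(spike_times)} spikes at time steps {spike_times}")
```

A full SNN wires many such neurons together and adjusts the synaptic weights based on when spikes arrive relative to one another, which is the “dynamic remapping” described above.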
Multi-Disciplinary Computing:
Neuromorphic computing sits at the intersection of diverse disciplines such as computational neuroscience, machine learning, microelectronics, and computer architecture, spanning everything from experimentation and development to identifying applications that solve real-world problems.
The uncertainty and noise inherent in natural data are a key challenge for the advancement of AI. Algorithms must become adept at tasks based on natural data, which humans manage intuitively but computer systems struggle with.
Having the capability to understand and compute with uncertainty will enable intelligent applications in diverse AI domains. In medical imaging, for example, uncertainty measures can prioritize which images a radiologist should review first, with specific regions of interest highlighted.
The current state of AI enables systems to recognize and respond to their surroundings and to help avoid collisions. For fully autonomous driving, however, the algorithms must account for uncertainty as well: decision making must involve understanding the environment and predicting future events, and the perception and understanding tasks need to be aware of the uncertainty inherent in them.
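As a simple sketch of what “computing with uncertainty” can look like in practice (a toy example, not a production pipeline; the case names and probabilities below are made up), predictions can be ranked by the entropy of their class probabilities so that the most ambiguous cases are sent for human review first:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a class-probability vector; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical model outputs (class probabilities) for four scans.
cases = {
    "scan_A": np.array([0.98, 0.01, 0.01]),  # confident prediction
    "scan_B": np.array([0.40, 0.35, 0.25]),  # ambiguous
    "scan_C": np.array([0.70, 0.20, 0.10]),
    "scan_D": np.array([0.34, 0.33, 0.33]),  # nearly uniform, most uncertain
}

# Most uncertain cases go to the top of the human review queue.
review_queue = sorted(cases, key=lambda name: predictive_entropy(cases[name]),
                      reverse=True)
print(review_queue)  # ['scan_D', 'scan_B', 'scan_C', 'scan_A']
```

The same idea carries over to driving: a system that knows how unsure it is can slow down, defer a decision, or hand control back to a human.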
Examples of Neuromorphic Engineering Projects:
Today, there are several academic and commercial experiments under way to produce working, reproducible neuromorphic models, including:
SpiNNaker is a low-grade supercomputer whose job is to simulate cortical microcircuits. In August 2018, SpiNNaker ran the largest neural network simulation to date, involving around 80,000 neurons connected by 300 million synapses.
Neuromorphic computing is an emerging field that offers a plethora of opportunities for the future. If this article has spiked your interest, go ahead and delve deeper into the subject. You could even consider this field as a career choice and become a pioneer of a new phase in artificial intelligence!