Author: Varsha Raghuram
Editor: Dr. Rimjhim Agrawal
What are artificial neural networks?
An artificial neural network (ANN) is a computing system based on the biological neural networks (BNNs) of animal brains. It is inspired by the way neural circuits in the brain interconnect to form large-scale networks, though it is rarely an exact copy of its biological basis.
How do they work?
An ANN is a collection of connected nodes called artificial neurons, loosely based on the neurons of an actual brain. Each connection between nodes functions like a synapse and can transmit a signal from one ‘neuron’ to another. An artificial neuron receives a signal, processes it, and then signals the other neurons connected to it.
These neurons are organised into layers, and each neuron applies its own function to the inputs it receives. Signals travel from layer to layer, passing through every layer in sequence on their way from the input to the output.
Neural networks have parameters attached to their inputs, known as ‘weights’ and a ‘bias’ (or threshold). A weight can be thought of as a deciding factor. Suppose you are deciding whether to go to a restaurant, and three things matter: whether you like the menu, whether you like the service, and whether your friend is willing to come along. Each of these factors can be represented as a binary variable, 1 or 0, based on the answer. A weight is then assigned to each factor - say the menu matters most to you, so we’d assign it a weight of 5, while your friend’s company and the service each get a weight of 2.
Now the bias comes into play, functioning as the minimum ‘threshold’ the weighted sum needs to meet. If the menu eclipses everything else, choosing a bias of 5 ensures that your friend’s company and the service won’t matter as long as the menu is good. Conversely, even if your friend came along and the service were great, it would make no difference if ‘good menu’ came out as 0, since the other weights only add up to 4.
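This decision rule can be written out directly. The sketch below implements the restaurant example as a single artificial neuron, using the illustrative weights and threshold from the paragraph above:

```python
# The restaurant decision as a single artificial neuron, using the
# illustrative weights (menu = 5, friend = 2, service = 2) and
# threshold (5) from the example above.

def decide(good_menu: int, friend_coming: int, good_service: int) -> bool:
    """Return True ('go') if the weighted sum of inputs meets the threshold."""
    weights = {"menu": 5, "friend": 2, "service": 2}
    threshold = 5  # the 'bias' acting as a minimum bar
    total = (weights["menu"] * good_menu
             + weights["friend"] * friend_coming
             + weights["service"] * good_service)
    return total >= threshold

print(decide(1, 0, 0))  # True: a good menu alone clears the threshold
print(decide(0, 1, 1))  # False: friend + service only add up to 4
```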
The general structure of a neural network, and the way weight and bias work, is highlighted in fig. 1.
Neural networks need to be trained in order to calibrate the correct weights and biases for various data sets. They start with random values for both parameters, then, by processing samples of training data, adjust their weights and biases according to how far their output falls from the expected result. This error-driven adjustment is called ‘backpropagation’, and it helps the network learn to reach the right conclusions when performing the same operation on very different data.
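The loop below is a toy illustration of that calibration process on a single sigmoid neuron, with a made-up two-sample data set; real networks repeat the same idea across many layers and many more parameters:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

random.seed(0)
w = random.uniform(-1, 1)  # weight starts at a random value...
b = random.uniform(-1, 1)  # ...and so does the bias
data = [(0.0, 0.0), (1.0, 1.0)]  # (input, expected output) pairs
lr = 0.5  # learning rate: how big each corrective nudge is

for _ in range(1000):
    for x, target in data:
        out = sigmoid(w * x + b)
        grad = (out - target) * out * (1 - out)  # derivative of squared error
        w -= lr * grad * x  # nudge the weight toward the target...
        b -= lr * grad      # ...and the bias too

# The outputs drift toward the expected 0 and 1 as training proceeds.
print(round(sigmoid(b), 3), round(sigmoid(w + b), 3))
```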
Image credit: Jayesh Bapu Ahire, dzone.com
What are the strengths and limitations of ANNs? How are they different from biological neural networks?
ANNs are based on a feed-forward strategy, in which information passes through the nodes of one layer and on to the next until it reaches the output. They can process data at extremely high speeds - far faster than biological neural networks - since they are computing systems with memory always within immediate reach. They also tend to have a central control unit that oversees all the processes of the network.
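A feed-forward pass is easy to sketch. The example below pushes an input through a tiny 2-3-1 network with made-up weights; the point is simply that the signal moves strictly from one layer to the next, never backwards:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: weighted sum per neuron, then a nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]  # input layer: two features
hidden = layer(x, [[0.1, 0.4], [0.7, -0.2], [0.3, 0.9]], [0.0, 0.1, -0.1])
output = layer(hidden, [[0.6, -0.4, 0.2]], [0.05])
print(output)  # a single value, produced layer by layer
```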
These networks tend to struggle with parallel inputs and processing. This is likely because they are only multilayer networks, whereas the brain can be considered a multidimensional network - an advanced type of multilayer network in which nodes on different layers can interconnect and share data, making parallel processing far easier. Such networks can also interlink and extrapolate by themselves. An ANN’s nodes, however, can only connect to nodes on adjacent layers; they lack this interdimensional capability and are restricted to processing layer by layer.
ANNs also face a lot of difficulty in making predictions. They have advanced pattern recognition and can extrapolate from existing patterns, but they struggle to predict how a pattern might change in the future. This can probably be explained by the fact that neural networks originated with the perceptron (fig. 2), which was created to solve one type of problem: the linearly separable problem (a property of two sets of points such that there exists a line in the Euclidean plane with each set lying wholly on one side of it). Having been created with this kind of specification, neural networks find it hard to predict how the specification could change.
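Linear separability is easy to see with logic gates: AND is separable (a single line splits its one 1 from its three 0s), while XOR is not. A single perceptron trained with the classic perceptron rule should learn the first, and provably can never learn the second:

```python
import random

random.seed(42)

def fit(samples, epochs=200, lr=0.1):
    """Train a single perceptron and return it as a prediction function."""
    w1, w2, b = (random.uniform(-1, 1) for _ in range(3))
    for _ in range(epochs):
        for (x1, x2), t in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            w1 += lr * (t - out) * x1  # classic perceptron update
            w2 += lr * (t - out) * x2
            b += lr * (t - out)
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

f = fit(AND)
print([f(x1, x2) for (x1, x2), _ in AND])  # should match [0, 0, 0, 1]
g = fit(XOR)
print([g(x1, x2) for (x1, x2), _ in XOR])  # never matches all four targets
```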
Image credit: Gerry Saporito, towardsdatascience.com
So why can’t neural networks work exactly like the human brain?
The human brain can be considered a black box: we are largely unaware of its inner workings and features. It does not operate digitally but via electro-chemical signaling, a process refined over millions of years of evolution. Simulating this exact structure is impossible with our current knowledge of how the brain works, so neural networks are limited, for now, to being a rough approximation of what we do understand.
In his 2015 paper ‘Are Neural Networks Imitations of Mind?’, Dr. Gaetano Licata highlights an interesting point: neural networks attempt to imitate the brain, not the mind. We have some idea of how the physical brain works and connects, but we are in the dark about our consciousness, so the most we can do is try to simulate the network ‘hardware’ at our disposal. Dr. Licata stresses that our lack of a clear theory of thought, consciousness, perception and action makes it much harder to bridge the gap between brain and mind.
He mentions that the brain can work as both a feed-forward and a feedback network: sometimes the activation returns to the very neuron that caused it, an example of its feedback quality. Artificial neural networks with both feed-forward and feedback mechanisms - recurrent neural networks (fig. 3) - have trouble producing a stable output for a given input, and can fluctuate a lot.
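The sketch below shows a single recurrent update step with made-up weights. Because part of each output is fed back in as input to the next step, the state can keep echoing an old input instead of settling, which is one source of the instability described above:

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=1.2, b=0.0):
    """One recurrent update: the new state depends on the input AND the old state."""
    return math.tanh(w_in * x + w_rec * h + b)

h = 0.0
for t, x in enumerate([1.0, 0.0, 0.0, 0.0, 0.0]):
    h = rnn_step(x, h)
    print(t, round(h, 4))  # the state stays near 0.65 long after the input is gone
```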
Image credit: Niklas Donges, builtin.com
Neural networks also learn very differently from the human brain. While we can stably change the structure of the connections between our neurons and have a flexible neural architecture, ANNs focus on generating the right output for a given input. Once the correct output is generated, the network is considered to have ‘learned’, and its nodes and connections remain the same.
How can neural networks mimic the brain more closely?
There is a long way to go before we can create a neural network that perfectly imitates every aspect of the human brain. As they stand, ANNs don’t aspire to act as sentient brains; they are advanced computing systems seeking to mimic the brain’s processing power via digital signaling.
The output-focused approach of neural networks is both broadening and limiting. By concentrating on, and tweaking, what they do rather than how they do it, we can fine-tune them to keep achieving similar results and to strengthen their connections. However, this limits them to processing only what they have been taught.
A 2017 paper, ‘Towards deep learning with segregated dendrites’ (Jordan Guerguiev et al.), aims to change this approach. The authors suggest improving the functionality of a neural network to better mimic the human brain by drawing on the dendritic structure of real neurons.
A key point of the paper is the credit assignment problem (fig. 4): working out how much each connection contributed to an error. Assigning credit in multilayer networks is difficult because the behavioural impact of neurons in early layers depends on the synaptic connections downstream of them. The popular Hebbian learning theory claims that synaptic efficacy arises from the presynaptic cell’s repeated stimulation of the postsynaptic cell; most models of sensory learning are based on this theory, but it does not solve the credit assignment problem.
The most common solution to the credit assignment problem in AI has been the ‘backpropagation of error’ algorithm. The problem with this algorithm is that it requires non-local transmission of synaptic weight information between layers - so-called ‘weight transport’ - which is biologically unrealistic and makes scientists skeptical that deep learning of this kind actually occurs in the human brain (fig. 4).
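The snippet below marks where the implausibility shows up in code. In standard backpropagation, the error routed back to a hidden layer is computed with the transpose of the same forward weight matrix, so the backward pathway needs an exact copy of the forward synapses (all shapes and values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W2 = rng.normal(size=(1, 3))        # forward weights, hidden -> output

hidden = np.array([0.2, 0.7, 0.1])  # some hidden-layer (sigmoid) activity
delta_out = np.array([0.35])        # error signal at the output layer

# The 'weight transport' step: the backward pass reuses W2 (transposed),
# a copy of the forward weights that real synapses have no known way to share.
delta_hidden = (W2.T @ delta_out) * hidden * (1 - hidden)  # sigmoid derivative term
print(delta_hidden)
```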
Image credit: Guerguiev et al., elifesciences.org
Research has shown that the credit assignment problem can be solved while avoiding weight transport, by using feedback signals that convey enough information about credit to calculate local error signals in the hidden layers. However, this research assumes that a separate feedback pathway exists to transmit that information: the error signal depends on the difference between the feedback and feed-forward signals, so such a pathway is necessary to keep them apart. This is not feasible using single-compartment neurons.
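One published approach of this kind, known as feedback alignment, replaces the transposed forward weights with a fixed random matrix, so no weight information has to travel backwards. A minimal sketch, with made-up shapes and values:

```python
import numpy as np

rng = np.random.default_rng(1)
W2 = rng.normal(size=(1, 3))   # forward weights (these get trained)
B = rng.normal(size=(3, 1))    # fixed random feedback weights (never trained)

hidden = np.array([0.2, 0.7, 0.1])
delta_out = np.array([0.35])

# The local error is computed WITHOUT any knowledge of W2; learning still
# works because the forward weights gradually 'align' with the feedback.
delta_hidden = (B @ delta_out) * hidden * (1 - hidden)
print(delta_hidden)
```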
Separating the feed-forward and feedback signals does not require a separate pathway if we imitate the structure of real neurons. In the primary sensory areas of the neocortex, feedback from higher-order areas arrives at the distal apical dendrites of pyramidal neurons. These are far from the basal dendrites that receive feed-forward sensory information, providing exactly the segregation of information needed to calculate local error signals and perform credit assignment biologically.
Jordan Guerguiev and his team designed artificial neurons with two compartments, similar to the apical and basal dendritic compartments of real neurons, and used them to integrate feedback and feed-forward signals separately. This let them build a local error signal for each layer, ensuring appropriate credit assignment. The network recognised handwritten digits easily when it had these extra layers: even with random synaptic weights for the feedback into the apical compartment, the algorithm classified the MNIST dataset of handwritten digits at a level superior to that of single-layer networks.
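As a loose sketch of the idea (not the authors’ exact spiking model; all shapes and weights below are made up), each hidden neuron keeps its feed-forward drive in a ‘basal’ variable and its feedback drive in a separate ‘apical’ variable, and the local error is the difference between the apical signal with and without the target:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.random(784)                              # one flattened 28x28 digit image
W_ff = rng.normal(scale=0.05, size=(100, 784))   # feed-forward (basal) weights
W_out = rng.normal(scale=0.05, size=(10, 100))   # hidden -> output weights
B = rng.normal(scale=0.05, size=(100, 10))       # random feedback (apical) weights

hidden = np.tanh(W_ff @ x)        # basal compartment drives the neuron's rate
output = np.tanh(W_out @ hidden)

target = np.zeros(10)
target[3] = 1.0                   # suppose the image is a '3'

apical_forward = B @ output       # apical drive during the forward pass
apical_target = B @ target        # apical drive when the target is presented

# The difference between the two apical signals acts as the layer's LOCAL
# error, so W_ff can be updated without transporting W_out backwards.
local_error = apical_target - apical_forward
W_ff += 0.01 * np.outer(local_error * (1 - hidden**2), x)
```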
Imitating real neurons down to their exact biological structure is a promising route to neural networks that genuinely mimic the brain. It is also an interesting avenue for implementing deep learning in a biologically feasible manner, and for exploring the theory that deep learning occurs in the mammalian brain too.
Citations
Oleinik, A. (2019). What are neural networks not good at? On artificial creativity. Big Data & Society. Retrieved from https://doi.org/10.1177/2053951719839433
Baeldung (2022, June 22). Advantages and Disadvantages of Neural Networks. Retrieved from https://www.baeldung.com/cs/neural-net-advantages-disadvantages
Juneja, M. (2020, December 14). Difference between ANN and BNN. Retrieved from https://www.geeksforgeeks.org/difference-between-ann-and-bnn/
Neural circuit. (2022, June 26). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Neural_circuit&oldid=1095168178
Artificial neural network. (2022, August 17). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Artificial_neural_network&oldid=1104951751
Kedia, P. (2021). How Perceptrons solve the linearly separable problems. Medium.com. Retrieved from https://medium.com/mlearning-ai/how-perceptrons-solve-the-linearly-separable-problems-b8a623055550
Nielsen, M. A. (2015). Neural Networks and Deep Learning. Determination Press. Retrieved from http://neuralnetworksanddeeplearning.com/chap1.html
Licata, G. (2015). Are neural networks imitations of mind? Journal of Computer Science & Systems Biology. Retrieved from https://www.hilarispublisher.com/abstract/are-neural-networks-imitations-of-mind-35381.html
Bhatia, R. (2018, October 31). Neural networks do not work like human brains – let's debunk the myth. Analytics India Magazine. Retrieved from https://analyticsindiamag.com/neural-networks-not-work-like-human-brains-lets-debunk-myth/
Sharma, V. (2017, November 6). How do neural networks mimic the human brain? How do Neural networks mimic the human brain? | USC Marshall. Retrieved from https://www.marshall.usc.edu/blog/how-do-neural-networks-mimic-human-brain
Guerguiev, J., Lillicrap, T.P., Richards, B.A. (2017). Towards deep learning with segregated dendrites. eLife. Retrieved from https://doi.org/10.7554/eLife.22901
M, S. (2021, July 10). Let's understand the problems with recurrent neural networks. Analytics Vidhya. Retrieved from https://www.analyticsvidhya.com/blog/2021/07/lets-understand-the-problems-with-recurrent-neural-networks/
Sagar, R. (2019, October 31). What are the challenges of training recurrent neural networks? Analytics India Magazine. Retrieved from https://analyticsindiamag.com/what-are-the-challenges-of-training-recurrent-neural-networks/