American Journal of Intelligent Systems
p-ISSN: 2165-8978 e-ISSN: 2165-8994
2014; 4(5): 159-195
doi:10.5923/j.ajis.20140405.01
Luca Marchese
Genova, Italy
Correspondence to: Luca Marchese, Genova, Italy.
Copyright © 2014 Scientific & Academic Publishing. All Rights Reserved.
This paper describes a bio-inspired spiking neural network that is proposed as a model of a cortical area network and is designed to be the building block of a modular framework for constructing self-organizing neurocognitive networks. This study originated in engineering research on the development of a cortical processor that can be implemented efficiently with common digital devices. The model is therefore presented as a hypothetical candidate for emulating a cortical area network of the cerebral cortex in the human brain. The neuron models are biologically inspired but not biologically plausible.
Keywords: Sparse Distributed Representation, Local Cortical Area Network, Cortical Minicolumn, Cortical Macrocolumn, ALs, Hebb, Winner Takes All, Pattern Recognition, Insect Olfactory System, Spiking Neuron, Rank Order Coding, Agnostic Resonance, Restricted Coulomb Energy, Radial Basis Function, Deep Learning, Probabilistic Neural Network, Evidence Accumulation, Pulsed Neural Network, STDP, Cognitive Systems, Autonomous Machine Learning, Global Workspace, Concept Cells, Neurocognitive Networks
Cite this paper: Luca Marchese, SHARP (Systolic Hebb - Agnostic Resonance - Perceptron): A Bio-Inspired Spiking Neural Network Model that can be simulated Very Efficiently on Von Neumann Architectures, American Journal of Intelligent Systems, Vol. 4 No. 5, 2014, pp. 159-195. doi: 10.5923/j.ajis.20140405.01.
Figure 1. Simulated X-ray view of the Mushroom Body, Lateral Horn and Antennal Lobe, which are parts of the Drosophila olfactory system
Figure 2. Simplified interconnection scheme of neurons in the Drosophila olfactory system
(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
Figure 3. The mathematical concepts of the L1 distance and LSUP distance
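The L1 and LSUP distances illustrated in Figure 3 are the standard Manhattan and Chebyshev metrics; a minimal sketch of both:

```python
def l1_distance(x, w):
    """L1 (Manhattan) distance: the sum of the absolute
    component-wise differences between pattern x and prototype w."""
    return sum(abs(xi - wi) for xi, wi in zip(x, w))

def lsup_distance(x, w):
    """LSUP (Chebyshev) distance: the largest absolute
    component-wise difference."""
    return max(abs(xi - wi) for xi, wi in zip(x, w))

x, w = [3, 0, 2], [1, 1, 1]
print(l1_distance(x, w))    # 2 + 1 + 1 = 4
print(lsup_distance(x, w))  # max(2, 1, 1) = 2
```

Geometrically, the LSUP influence field of a prototype is a hypercube while the L1 field is a cross-polytope, which is why the two modes produce differently shaped prototype regions.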
(9)
(10)
(11)
(12)
(13)
(14)
(15)
(16)
(17)
(18)
(19)
(20)
(21)
(22)
(23)
(24)
(25)
(26)
(27)
(28)
(29)
(30)
(31)
(32)
These synapses (excitatory, fixed) enable supervised learning. They transport spikes from the SL neuron, which is activated by spikes generated outside of the network, to the neurons in the feature clusters that are associated with the category, enabling them to fire and disabling the WTA behavior. The synapses between resonating neurons in different clusters can be updated because these neurons fire as a result of the contribution of action potentials from the SL. When these synapses become sufficiently conductive, unsupervised reinforcement learning through STDP can occur without the contribution of the SL neuron (Unified Learning). Spike timing in the systolic structure of the clusters reflects the sequence of the features. The spike of the category neuron can occur before or after the spike of the postsynaptic IRF neuron, but within a range that is limited by the STF (Systolic Time Frame). The STF is the total time required for the complete propagation of spikes through all of the feature clusters. Thus, (29) is reformulated as follows:

(33)
(34)
(35)

(36)
(37)
(38)
(39)
(40)
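The STF-bounded plasticity described above can be sketched with a simple additive STDP rule in which the weight changes only when the pre/post spike interval falls inside the Systolic Time Frame; the rule and its constants below are illustrative assumptions, not the paper's exact formulation:

```python
def stdp_update(w, t_pre, t_post, stf, a_plus=0.05, a_minus=0.05,
                w_min=0.0, w_max=1.0):
    """Additive STDP gated by the Systolic Time Frame (STF).

    The weight changes only if the pre/post interval falls inside
    the STF window; the sign follows the spike order. Constants
    a_plus/a_minus and the additive form are assumptions for
    illustration only.
    """
    dt = t_post - t_pre
    if abs(dt) > stf:      # outside the systolic window: no plasticity
        return w
    if dt >= 0:            # pre before (or with) post: potentiate
        w += a_plus
    else:                  # post before pre: depress
        w -= a_minus
    return min(max(w, w_min), w_max)   # keep the weight in range

w = stdp_update(0.5, t_pre=2.0, t_post=5.0, stf=10.0)  # potentiation
```

Note that, unlike classical STDP, both spike orders can potentiate supervised learning here as long as the interval stays within the STF, which is what (33) expresses.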
The IF neurons used in SHARP do not leak membrane potential, unlike the more biologically plausible LIF (Leaky Integrate and Fire) model, and must be reset at the right time if the neuron does not fire. A detailed explanation of the neuron models in the next section will clarify this concept. The reset of the neurons that remain in a transition state is guaranteed by a PLL (Phase Locked Loop) circuit that locks onto the phase of the action potentials from the neurons of the first feature cluster. The neural PLL circuit is composed of a chain of neurons that reflects the systolic sequence of the feature clusters. The last neuron in the chain produces an inhibitory activity on all of the neurons in the network. This feedback makes the neural network work as an oscillator with the STF as its period. In this framework, we define local cortical area networks as neural oscillators that behave asynchronously within their own network. The synapses in the NPLL circuit are excitatory and inhibitory but fixed: the learning process does not affect the behavior of the circuit at all (Figs. 11 and 12). Next, we must define the matrix of synapses that are involved in the NPLL circuit.

(41)
(42)
(43)
(44)
(45)
(46)
(47)
which enables or disables the influence of the SL neuron depending on the amount of past plastic activity of the synapses. If the density of the activated synapses (k>0) exceeds the threshold, then the action potential from the SL neuron cannot trigger learning. Here,
is the excitatory synapse between the SL neuron and the CC neuron, and its plastic activity is almost on-off; it is driven by an integrative process of the plastic activity of the synapses that are connected to the IRF neurons.

(48)
(49)
(50)
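The gating of the SL neuron by past plastic activity, described above for (48)-(50), can be sketched as a density test over the plastic synapses; the function name and the list representation of the weights are assumptions made for illustration:

```python
def sl_learning_enabled(weights, density_threshold):
    """Return True while the SL (supervised learning) neuron can still
    trigger learning: the fraction of already-activated synapses
    (k > 0) must not exceed the threshold. Sketch of the on-off
    plastic activity of the SL-to-CC synapse."""
    k_active = sum(1 for k in weights if k > 0)
    return k_active / len(weights) <= density_threshold

assert sl_learning_enabled([0, 0, 0.3, 0], 0.5)          # sparse: SL still acts
assert not sl_learning_enabled([0.2, 0.4, 0.3, 0], 0.5)  # dense: SL gated off
```

Once the density exceeds the threshold, recognition proceeds through the already-conductive synapses alone, which is the transition to Unified Learning described earlier.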
Figure 15. A very simplified example of PCS. The example does not account for the fact that the concept is, in reality, built by the cooperation and/or hierarchy of multiple streams
• A threshold function
• A delay for each synapse

In spiking NNs, the rate-versus-timing debate has been framed as the question of whether all of the information about the stimulus is carried by the firing rate (or by the spike count in some specified time window) or whether the timing of individual spikes within this window also correlates with the variations in the stimulus [9]. This debate cannot have a single answer because spikes can play different roles in different contexts, and thus they can encode information in multiple appropriate manners. The basic consideration is that spikes are probably the most appropriate method for transmitting information within biological matter; here we use the term "appropriate" in reference to physical issues, not to information theory. From the point of view of information theory, Maass demonstrated that the spiking neural code can carry more entropy. Entropy, as defined in thermodynamics and statistical mechanics, is a measure of "variability" or "available information". Mathematically, we can define entropy as the logarithm of the number of possible states that the system can assume:

(51)
(52)
(53)
(54)
(55)
(56)
(57)
(58)
(59)
(60)
(61)
(62)
(63)
(64)
(65)
(66)
(67)
and is given as follows:

(68)
(69)
(70)
(71)
(72)
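The entropy definition above (the logarithm of the number of possible states) makes the rate-versus-timing comparison concrete. Assuming n spiking neurons read out either by spike count alone or by their firing order, as in the rank order coding of Gautrais and Thorpe [10]:

```python
import math

def entropy_count_code(n):
    """n binary-firing neurons read out by total spike count alone
    can signal n + 1 distinguishable states (0..n spikes)."""
    return math.log2(n + 1)

def entropy_rank_order_code(n):
    """If the firing order of the n neurons carries the information,
    there are n! distinguishable states (rank order coding [10])."""
    return math.log2(math.factorial(n))

print(entropy_count_code(8))       # log2(9)  ~ 3.17 bits
print(entropy_rank_order_code(8))  # log2(8!) ~ 15.30 bits
```

Even for a small population, the timing-based readout carries several times more entropy than the count readout, which is the sense in which Maass's result is cited above.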
Figure 23. Two Concept Associative Nodes: inter-stream interconnections between category neurons of different streams
Figure 24a. A multimodal concept network (HiCAN) created through STDP between two category neurons in the two cortical area networks
Figure 24b. In the second hypothetical circuit, a multimodal concept network (HiCAN) is created through STDP between the category neurons and the SL neurons in the two cortical area networks
Figure 25. The representation of the XOR problem
Figure 26. The spike raster related to the XOR problem
TEST_2: 5000 training examples + 5000 validation with
TEST_3: 5000 training examples + 5000 validation with
Here
is the maximum noise added to any component of the vector.

(73)
(74)
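For TEST_2 and TEST_3 above, the noisy validation vectors can be generated by perturbing each component with bounded uniform noise; the integer component range and the uniform noise model are assumptions (the text only specifies the maximum noise bound):

```python
import random

def add_bounded_noise(vector, n_max, lo=0, hi=255, seed=None):
    """Add uniform integer noise in [-n_max, n_max] to every component,
    clipping the result to the component range [lo, hi].
    The range [0, 255] is an assumed 8-bit feature encoding."""
    rng = random.Random(seed)
    return [min(hi, max(lo, v + rng.randint(-n_max, n_max))) for v in vector]

pattern = [10, 200, 128, 255]
noisy = add_bounded_noise(pattern, n_max=5, seed=42)
assert all(abs(a - b) <= 5 for a, b in zip(pattern, noisy))
```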
Figure 29. The SOM representation of the complex database generated with the tool. All patterns have been distributed among eight classes, with similarities between pairs of classes
(75)
Figure 30. The result of the "Circle in the square" test with SHARP. Top: entangled prototypes (200 examples learned). Bottom: 10,000 points recognized (yellow/green = uncertainty/error; black, except the contour, = not identified)
Figure 31. The prototypes for the "Circle in the square" test of an RCE neural network in L_SUP mode
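The "Circle in the square" benchmark of Figs. 30 and 31 asks the classifier to separate uniform points in the unit square by membership in a centered circle covering half of its area; a generator sketch (seeding and sample counts are arbitrary choices):

```python
import math, random

def circle_in_square(n_points, seed=0):
    """Label uniform points in the unit square by membership in a
    centered circle of area 0.5 (radius sqrt(0.5/pi)): the classic
    two-class benchmark shown in Figs. 30-31."""
    r = math.sqrt(0.5 / math.pi)
    rng = random.Random(seed)
    data = []
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        inside = (x - 0.5) ** 2 + (y - 0.5) ** 2 <= r ** 2
        data.append(((x, y), 1 if inside else 0))
    return data

samples = circle_in_square(10000)
frac_inside = sum(label for _, label in samples) / len(samples)
# By construction, roughly half of the points fall inside the circle.
```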
Figure 32. The recognition time required by SHARP and that required by an RCE on a Von Neumann computer
Figure 33. The growth of clock frequency and of RAM capacity over the last twenty years. The growth of memory capacity has been 1000 times the growth of clock frequency
(76)
(77)
(78)
(79)
(80)
Figure 34. The hardware implementation with stages composed of a CPLD and a flash memory. In this picture, each stage manages one feature; thus, parallelism and speed performance are maximized
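The stage organization of Figure 34 can be modeled as a systolic pipeline in which each stage's flash memory acts as a lookup table for one feature and the CPLD accumulates the carried partial result; the tables below (per-feature L1 contributions toward a prototype) are a hypothetical example, not the actual memory layout:

```python
def make_stage(table):
    """One hardware stage modeled as a lookup table (the flash memory)
    applied to that stage's feature, plus an accumulator carried
    along the pipeline (the CPLD logic)."""
    def stage(feature_value, carried_sum):
        return carried_sum + table[feature_value]
    return stage

# A toy 3-stage pipeline: each table maps an 8-bit feature value to a
# precomputed per-feature L1 contribution toward a stored prototype.
tables = [[abs(v - proto) for v in range(256)] for proto in (10, 100, 200)]
pipeline = [make_stage(t) for t in tables]

def run_pipeline(pattern):
    """Pass a pattern through the stages; each stage handles one
    feature, so all stages can work on different patterns in parallel."""
    acc = 0
    for stage, feature in zip(pipeline, pattern):
        acc = stage(feature, acc)
    return acc

print(run_pipeline([10, 100, 200]))  # exact prototype match -> distance 0
```

Because each stage depends only on its own table and the carried sum, throughput scales with the number of stages, which is the point of the systolic organization [32].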
| [1] | Nowotny T, Huerta R, Abarbanel HD, Rabinovich MI (2005). Self-organization in the olfactory system: one shot odor recognition in insects. Biol Cybern 93(6):436-46. |
| [2] | Sachse, S. and C. G. Galizia (2002). "Role of Inhibition for Temporal and Spatial Odor Representation in Olfactory Output Neurons: A Calcium Imaging Study." J Neurophysiol 87(2): 1106-1117. |
| [3] | Paolo Arena, Luca Patané, Pietro Savio Termini (2012): Learning expectation in insects: A recurrent spiking neural model for spatio-temporal representation. Neural Networks 32: 35-45 (2012). |
| [4] | Izhikevich E.M. (2003) Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks, 14:1569- 1572. |
| [5] | Izhikevich E.M. (2001) Resonate-and-Fire Neurons. Neural Networks, 14:883-894. |
| [6] | Llinas RR (1988). The intrinsic electrophysiological properties of mammalian neurons: insights into central nervous system function. Science 242(4886):1654-1664. |
| [7] | W. Maass (1996). On the computational power of noisy spiking neurons. In Advances in Neural Information Processing Systems, D. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, volume 8, pages 211-217. MIT Press (Cambridge), 1996. |
| [8] | W. Maass. (1997) Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10:1659-1671, 1997. |
| [9] | Fred Rieke, David Warland, Rob de Ruyter van Steveninck, William Bialek (1999) -Spikes: Exploring the Neural Code - Bradford Book. |
| [10] | J. Gautrais, S. Thorpe (1998). Rate coding versus temporal order coding: a theoretical approach. Biosystems 48(1):57-65. |
| [11] | Bressler S.L. (1995). Large-scale cortical networks and cognition. Brain Res Rev 20:288-304. |
| [12] | Bressler S.L. (2002) Understanding cognition through large-scale cortical networks. Curr Dir Psychol Sci 11:58-61. |
| [13] | Bressler S.L., Tognoli E. (2006) Operational principles of neurocognitive networks. Int J Psychophysiol 60:139-148. |
| [14] | Cerf et al. (2010). On-line, voluntary control of human temporal lobe neurons. Nature, October 28, 2010. doi:10.1038/nature09510. |
| [15] | Gelbard-Sagiv et al. (2008). Internally generated reactivation of single neurons in human hippocampus during free recall. Science 322(5898):96-101. doi:10.1126/science.1164685. |
| [16] | Kreiman, Koch, Fried (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience 3:946-953. doi:10.1038/78868. |
| [17] | Quian Quiroga, R., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435:1102–1107. |
| [18] | Quian Quiroga, R., Kreiman, G., Koch, C. & Fried, I. (2008). Sparse but not “Grandmother-cell” coding in the medial temporal lobe. Trends in Cognitive Science, 12, 3, 87–94. |
| [19] | Quian Quiroga, R., Kraskov, A., Koch, C., & Fried, I. (2009). Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain. Current Biology, 19, 1308–1313. |
| [20] | Quian Quiroga, R. & Kreiman, G. (2010a). Measuring sparseness in the brain: Comment on Bowers (2009). Psychological Review, 117, 1, 291–297. |
| [21] | Quian Quiroga, R. & Kreiman, G. (2010b). Postscript: About Grandmother Cells and Jennifer Aniston Neurons. Psychological Review, 117, 1, 297–299. |
| [22] | Viskontas, I., Quian Quiroga, R. & Fried, I. (2009). Human medial temporal lobe neurons respond preferentially to personally relevant images. Proceedings of the National Academy Sciences, 106, 50, 21329-21334. |
| [23] | Asim Roy (2012). Discovery of Concept Cells in the Human Brain – Could It Change Our Science? Natural Intelligence Vol. 1 Issue 1, INNS magazine. |
| [24] | Barlow, H. (1972). Single units and sensation: A neuron doctrine for perceptual psychology. Perception, 1, 371–394. |
| [25] | Barlow, H. (1995). The neuron doctrine in perception. In The cognitive neurosciences, M. Gazzaniga ed., 415–436. MIT Press, Cambridge, MA. |
| [26] | Gross, C. (2002). Genealogy of the grandmother cell. The Neuroscientist, 8, 512–518. |
| [27] | Baars, Bernard J. (1988), A Cognitive Theory of Consciousness (Cambridge, MA: Cambridge University Press). |
| [28] | Baars, Bernard J. (1997), In the Theater of Consciousness (New York, NY: Oxford University Press). |
| [29] | Baars, Bernard J. (2002) The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences, 6 (1), 47-52. |
| [30] | J.A. Reggia (2013). The rise of machine consciousness: Studying consciousness with computational models. Neural Networks, Elsevier. |
| [31] | Baars, B. J., & Franklin, S. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA. Neural Networks (2007), doi:10.1016/j.neunet.2007.09.013. |
| [32] | H.T. Kung (1982). Why systolic architectures? IEEE Computer, January 1982, p. 37. |