A biologically inspired computational model of insect perception

Sashank Pisupati

Motivation

Much of our current understanding of perception involves reducing the dimensionality of the stimulus and then learning features and behaviours of these lower-dimensional units, which may help simulate certain aspects of the physical world.

One well-studied example is how bees use optic flow information from their visual field as a heuristic to avoid obstacles, modulate flight speed, and control landing. Implementing such a model in a biologically plausible neural network may yield insight into perceptual processes in higher visual systems such as the human cortex.

Background

Much is understood about the neural architecture underlying perceptual processes such as vision. In the visual system, for example, edge detection is known to be achieved through a hierarchical scheme: individual neurons are first tuned to small regions of the visual field, adjacent neurons are linked together to signal a rudimentary edge, and groups of such units form neurons tuned to specific edge orientations. These responses can be further abstracted into basic "physics" models such as rotations (a gradual change in orientation) and translations (the same orientation spreading its activation across the visual field). Computational models of these circuits have been built, such as those of the orientation columns in the monkey striate cortex (Hubel & Wiesel).
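To make this hierarchy concrete, the sketch below (my own illustration, not part of any published model) builds orientation-tuned responses by applying tiny oriented difference kernels to small patches of a toy image and letting the strongest orientation win at each location; the kernel shapes and the winner-take-all pooling are simplifying assumptions rather than a model of actual cortical circuitry.

```python
import numpy as np

def oriented_kernels():
    """Tiny difference-of-neighbours kernels, one per preferred orientation.
    These stand in for orientation-tuned receptive fields (simplified assumption)."""
    horiz = np.array([[-1, -1], [1, 1]], dtype=float)   # responds to horizontal edges
    vert = horiz.T                                      # responds to vertical edges
    diag = np.array([[0, -1], [1, 0]], dtype=float)     # responds to one diagonal
    return {"horizontal": horiz, "vertical": vert, "diagonal": diag}

def local_responses(image, kernel):
    """Slide a 2x2 kernel over the image: each output unit sees a small patch,
    analogous to a neuron tuned to a small region of the visual field."""
    h, w = image.shape
    out = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            out[i, j] = np.sum(image[i:i + 2, j:j + 2] * kernel)
    return np.abs(out)

def orientation_map(image):
    """Pool local responses: for each patch, report which orientation wins,
    a crude analogue of linking adjacent tuned units into an edge detector."""
    kernels = oriented_kernels()
    names = list(kernels)
    stacks = np.stack([local_responses(image, kernels[n]) for n in names])
    return np.argmax(stacks, axis=0), names

if __name__ == "__main__":
    # Toy image: a bright square on a dark background.
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0
    winners, names = orientation_map(img)
    print(names)
    print(winners)
```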

Meanwhile, considerable progress has been made on biologically relevant neural networks with architectures resembling that of the cortex. Classical neural networks, both feedforward perceptrons based on the McCulloch-Pitts neuron and recurrent networks such as Hopfield networks, face several issues: they require large training sets, and the methods proposed to improve error correction, such as the backpropagation algorithm, are widely regarded as biologically implausible.
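For reference, here is a minimal sketch of the classical building blocks mentioned above: a McCulloch-Pitts-style threshold unit and a tiny Hopfield network with Hebbian weights. The stored pattern, network size, and update schedule are illustrative choices only.

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    """Classical threshold unit: fires (1) if the weighted sum reaches threshold."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

def hopfield_train(patterns):
    """Hebbian weight matrix for a Hopfield network storing +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def hopfield_recall(W, state, steps=10):
    """Asynchronous threshold updates drive a noisy state toward a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if np.dot(W[i], state) >= 0 else -1
    return state

if __name__ == "__main__":
    # An AND gate realised by a single threshold unit.
    print(mcculloch_pitts([1, 1], [1, 1], threshold=2))
    # Store one 6-bit pattern and recover it from a corrupted copy.
    stored = np.array([[1, -1, 1, -1, 1, -1]], dtype=float)
    W = hopfield_train(stored)
    noisy = np.array([1, -1, -1, -1, 1, -1], dtype=float)
    print(hopfield_recall(W, noisy))
```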

To address this, some recent approaches fall under so-called "deep learning", which sidesteps these problems with backpropagation; two of the most important are Long Short-Term Memory (LSTM) and multiple non-linear levels built from Restricted Boltzmann Machines (RBMs). The first approach, LSTM (Hochreiter & Schmidhuber), uses gates that determine whether an input is significant enough to remember. This is reminiscent of the perceptual salience landscape in animals, in which neuromodulators bias a network towards or against remembering an input. The second approach (Hinton et al.), built from RBMs, uses feature detectors at multiple levels (e.g. pixels, edges, orientations); all levels learn regularities in the environment in an unsupervised manner, which lets the network learn supervised training sets very quickly using top-down passes. This is reminiscent of the hierarchical columns in the visual cortex that process the stimulus at different levels, with backpropagation taking the form of a top-down bias. These approaches have been very successful at tasks such as handwriting recognition and appear to be good explanatory models.
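As a concrete illustration of the gating idea in LSTM, the sketch below implements a single forward step of a standard LSTM cell; the weight initialization, dimensions, and input sequence are arbitrary assumptions, and a corresponding sketch could equally be written for layer-wise RBM training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """One forward step of a standard LSTM cell.

    The input, forget, and output gates decide how much of the current input is
    written to the cell state, how much of the old state is kept, and how much
    of the state is exposed -- the "is this worth remembering?" mechanism
    referred to above.
    """
    W, U, b = params["W"], params["U"], params["b"]   # shapes: (4H, D), (4H, H), (4H,)
    z = W @ x + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate memory
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, H = 3, 4                                   # arbitrary illustrative sizes
    params = {"W": rng.normal(size=(4 * H, D)) * 0.1,
              "U": rng.normal(size=(4 * H, H)) * 0.1,
              "b": np.zeros(4 * H)}
    h, c = np.zeros(H), np.zeros(H)
    for t in range(5):                            # feed a short random sequence
        h, c = lstm_step(rng.normal(size=D), h, c, params)
    print(h)
```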

In parallel, simpler vision modules, such as optic flow and motion detection in lower animals like insects, have been well studied. Bees, for example, use a simple optic-flow strategy to avoid obstacles: by keeping the optic flow in their left and right visual fields balanced, they can navigate narrow gaps quite well, and by holding the frontal optic flow constant they can modulate their landing speed (Srinivasan et al.).
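This centring strategy reduces to a very simple control rule. The sketch below is my own illustration with made-up gains and optic-flow values (the literature does not prescribe a specific implementation): the left/right flow imbalance is turned into a steering command, and frontal flow is held near a target so that forward speed tapers off during landing.

```python
def steering_command(flow_left, flow_right, gain=1.0):
    """Turn away from the side with higher optic flow (the nearer wall), so that
    left and right flow stay balanced -- the bees' centring response.
    Positive output means steer left, negative means steer right."""
    return gain * (flow_right - flow_left)

def landing_speed(forward_speed, flow_front, target_flow=1.0, gain=0.5):
    """Hold frontal optic flow near a constant target: as the surface approaches,
    the same flow target forces forward speed down, giving a smooth landing."""
    return max(0.0, forward_speed - gain * (flow_front - target_flow))

if __name__ == "__main__":
    # Illustrative numbers only: flow is higher on the right, so we steer left.
    print(steering_command(flow_left=0.4, flow_right=0.9))
    # Frontal flow is above target, so the commanded speed drops.
    print(landing_speed(forward_speed=2.0, flow_front=1.6))
```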

Proposal

Given these developments, I propose to implement a simple perceptual system, such as the bees' optic-flow behaviour, in a biologically plausible network such as a deep belief net or an LSTM network. Since insect models have been characterized in great detail at the level of the neural architecture itself, such an implementation would help assess the explanatory merit of deep-learning approaches and shed light on the computations possibly occurring in the actual organism. High-level models of primate or human visual systems, while very successful at tasks, are difficult to relate back to the biological networks themselves; insect circuits, by contrast, can be dissected, so the models can yield predictions that are directly testable.
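One possible, entirely hypothetical, shape for such an implementation is sketched below: an optic-flow front end summarizes the left, frontal, and right visual fields, and a placeholder linear layer stands in for the deep belief net or LSTM that would actually be trained to map those summaries to steering and speed commands.

```python
import numpy as np

def optic_flow_summary(flow_field):
    """Collapse a (rows x cols) optic-flow magnitude field into three scalars:
    mean flow in the left, frontal, and right thirds of the visual field."""
    cols = flow_field.shape[1]
    third = cols // 3
    return np.array([flow_field[:, :third].mean(),
                     flow_field[:, third:2 * third].mean(),
                     flow_field[:, 2 * third:].mean()])

def placeholder_policy(summary, W, b):
    """Stand-in for the trained network: a single linear layer mapping the flow
    summary to (steering, speed) commands. In the proposed system this would be
    replaced by a deep belief net or LSTM trained on flight data."""
    return W @ summary + b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    flow = rng.random((10, 30))            # hypothetical optic-flow magnitudes
    summary = optic_flow_summary(flow)     # [left, front, right] mean flow
    W, b = rng.normal(size=(2, 3)) * 0.1, np.zeros(2)
    steering, speed = placeholder_policy(summary, W, b)
    print(summary, steering, speed)
```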

References

Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554.

Bengio, Yoshua, et al. "Greedy layer-wise training of deep networks." Advances in neural information processing systems 19 (2007): 153.

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.

Srinivasan, M. V. "Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics." Physiological Reviews 91 (2011): 389-411.

Barth, F. G., J. A. C. Humphrey, and M. V. Srinivasan, eds. Frontiers in Sensing. Springer-Verlag, Berlin, Heidelberg (in press).

Doya, Kenji. "Metalearning and neuromodulation." Neural Networks 15.4 (2002): 495-506.