TY - JOUR
T1 - Bio-inspired networks of visual sensors, neurons, and oscillators
AU - Ghosh, Bijoy K.
AU - Polpitiya, Ashoka D.
AU - Wang, Wenxue
N1 - Funding Information:
Manuscript received September 6, 2005; revised August 31, 2006. This work was supported in part by the National Science Foundation (NSF) under Grant EIA-0218186 and in part by the NSF under Grant ECS-0323693. B. K. Ghosh and W. Wang are with the Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409-1042 USA (e-mail: bijoy.ghosh@ttu.edu; wenxue.wang@ttu.edu). A. D. Polpitiya is with Pacific Northwest National Laboratory, Richland, WA 99352 USA (e-mail: ashoka.polpitiya@pnl.gov).
PY - 2007/1
Y1 - 2007/1
AB - Animals routinely rely on their eyes to localize fixed and moving targets. Such a localization process might include predicting a future target location, recalling a sequence of previously visited places or, for the motor control circuit, actuating a successful movement. Typically, target localization is carried out by fusing images from two eyes, in the case of binocular vision, wherein the challenge is to have the images calibrated before fusion. In the field of machine vision, a typical problem of interest is to localize the position and orientation of a network of mobile cameras (sensor network) that are distributed in space and are simultaneously tracking a target. Inspired by the animal visual circuit, we study the problem of binocular image fusion for the purpose of localizing an unknown target in space. Guided by the dynamics of "eye rotation", we introduce control strategies that could be used to build machines with multiple sensors. In particular, we address the problem of how a group of visual sensors can be optimally controlled in a formation. We also address how images from multiple sensors are encoded using a set of basis functions, choosing a "larger than minimum" number of basis functions so that the resulting code that represents the image is sparse. We address the problem of how a sparsely encoded visual data stream is internally represented by a pattern of neural activity. In addition to the control mechanism, the synaptic interaction between cells is also subjected to "adaptation", which enables the activity waves to respond with greater sensitivity to visual input. We study how the rat hippocampal place cells are used to form a cognitive map of the environment so that the animal's location can be determined from its place cell activity. Finally, we study the problem of "decoding" the location of moving targets from the neural activity wave in the cortex.
KW - Cortical waves
KW - Eye movement
KW - Formation sensing
KW - Gaze control
KW - Hebbian and anti-Hebbian adaptation
KW - Kuramoto model
KW - Listing's law
KW - Localization
KW - Neural network
KW - Oscillator network
KW - Place cells
KW - Sensor network
KW - Sparse coding
KW - Theta phase precession
UR - http://www.scopus.com/inward/record.url?scp=39549095440&partnerID=8YFLogxK
U2 - 10.1109/JPROC.2006.887320
DO - 10.1109/JPROC.2006.887320
M3 - Article
AN - SCOPUS:39549095440
SN - 0018-9219
VL - 95
SP - 188
EP - 214
JO - Proceedings of the IEEE
JF - Proceedings of the IEEE
IS - 1
M1 - 4118464
ER -