Background: Learning algorithms come in three orders of complexity: zeroth-order (perturbation), first-order (gradient descent), and second-order (e.g., quasi-Newton).
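To make the distinction concrete, here is a minimal sketch (mine, not from the paper) contrasting the three orders on a toy quadratic loss; all variable names and constants are illustrative.

```python
# Illustrative sketch (not from the paper): the three orders of learning
# on a toy quadratic loss L(w) = 0.5 * ||A w - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
loss = lambda w: 0.5 * np.sum((A @ w - b) ** 2)

def zeroth_order_step(w, lr=0.01, sigma=0.1):
    """Weight perturbation: probe the loss with random noise; no gradient needed."""
    noise = rng.normal(size=w.shape) * sigma
    delta = loss(w + noise) - loss(w)            # scalar feedback only
    return w - lr * delta * noise / sigma**2     # move against unhelpful noise

def first_order_step(w, lr=0.01):
    """Gradient descent: uses the analytic gradient A^T (A w - b)."""
    return w - lr * A.T @ (A @ w - b)

def second_order_step(w):
    """Newton / quasi-Newton: also uses curvature (here the exact Hessian A^T A)."""
    grad, hess = A.T @ (A @ w - b), A.T @ A
    return w - np.linalg.solve(hess, grad)
```

The sketch only contrasts the update rules: lower-order methods need less information per step but generally more steps.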
Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses.
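As a rough illustration of the second device, here is a minimal feedback-alignment sketch for a two-layer network, assuming a fixed random feedback matrix B in place of the transposed forward weights; it sketches the general idea, not the authors' algorithm, and all sizes and rates are made up.

```python
# Minimal feedback-alignment sketch (illustrative, not the authors' code):
# errors are sent backward through a fixed random matrix B instead of W2.T,
# so forward and feedback synapses never need to be coordinated.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
B  = rng.normal(scale=0.5, size=(n_hid, n_out))   # fixed random feedback weights

def train_step(x, target, lr=0.01):
    h = np.tanh(W1 @ x)             # hidden activity
    y = W2 @ h                      # network output
    e = y - target                  # output error
    dh = (B @ e) * (1 - h**2)       # feedback alignment: B replaces W2.T
    W2[:] -= lr * np.outer(e, h)
    W1[:] -= lr * np.outer(dh, x)
    return 0.5 * float(e @ e)

x, target = rng.normal(size=n_in), np.array([1.0, -1.0])
losses = [train_step(x, target) for _ in range(200)]
print(losses[0], losses[-1])        # the squared error on this example shrinks
```

The surprising property reported in the feedback-alignment literature is that the forward weights tend to align with B during training, so the random feedback still delivers useful error signals.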
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream.
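For reference, the blame-assignment step described here is the standard backpropagation recursion (textbook form, not quoted from the paper), in which each layer's error signal is the downstream error multiplied back through the downstream weights:

```latex
% Textbook backpropagation recursion: \delta^{(l)} is the error signal at layer l,
% W^{(l+1)} the downstream weights, f' the activation derivative, z^{(l)} the
% pre-activations, a^{(l-1)} the layer's inputs, and L the loss.
\[
  \delta^{(l)} = \bigl(W^{(l+1)}\bigr)^{\top} \delta^{(l+1)} \odot f'\bigl(z^{(l)}\bigr),
  \qquad
  \frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \bigl(a^{(l-1)}\bigr)^{\top}.
\]
```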
While sensorimotor adaptation to prisms that displace the visual field takes minutes, adapting to an inversion of the visual field takes weeks. Despite a long history of study, the basis of this profound difference remains poorly understood. Here, we describe the computational issue that underpins this phenomenon and present experiments designed to explore the mechanisms involved.
Many neural control systems are at least roughly optimized, but how is optimal control learned? There are algorithms for this purpose, but in their current forms, they are not suited for biological neural networks because they rely on a type of communication that is not available in the brain, namely weight transport: transmitting the strengths, or "weights," of individual synapses to other synapses and neurons. Here we show how optimal control can be learned without weight transport. Our method involves a set of simple mechanisms that can compensate for the absence of weight transport in the brain and so may be useful for neural computation generally.
When we learn something new, our brain may store the information in synapses or in reverberating loops of electrical activity, but current theories of motor learning focus almost entirely on the synapses. Here we show that loops could also play a role and would bring advantages: loop-based algorithms can learn complex control tasks faster, with exponentially fewer neurons, and avoid the problem of weight transport. They do all this at a cost: in the presence of long feedback delays, loop algorithms cannot control very fast movements. In that case, loop and synaptic mechanisms can complement each other, with mixed systems quickly learning to make accurate but not very fast motions and then gradually speeding up.
Background: To learn, a motor system needs to know its sensitivity derivatives, which quantify how its neural commands affect motor error. But are these derivatives themselves learned, or is this knowledge solely innate? Here we test a recent theory that the brain's estimates of sensitivity derivatives are revisable based on sensory feedback. In its simplest form, the theory says that each control system has a single, adjustable estimate of its sensitivity derivatives which affects all aspects of its task.
To learn effectively, an adaptive controller needs to know its sensitivity derivatives--the variables that quantify how system performance depends on the commands from the controller. In the case of biological sensorimotor control, no one has explained how those derivatives themselves might be learned, and some authors suggest they are not learned at all but are known innately. Here we show that this knowledge cannot be solely innate, given the adaptive flexibility of neural systems.
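As an illustration of what "learning the sensitivity derivatives" could look like, the sketch below pairs a gradient-style command update with an estimator that revises the derivative estimate from observed command and error changes; it is a hedged toy example, not the authors' model, and every constant in it is made up.

```python
# Illustrative sketch (not the authors' algorithm): a controller with a revisable
# estimate S_hat of its sensitivity derivative dE/du, updated from sensory
# feedback, i.e. from observed pairs (command change du, error change dE).

true_S = -2.0             # hidden plant property: how command changes map to error changes
S_hat = 1.0               # the controller's estimate, deliberately wrong (even wrong-signed)
E = 5.0                   # current motor error
eta_S, eta_u = 0.2, 0.05  # learning rates for the estimate and the command

for _ in range(200):
    du = -eta_u * S_hat * E                  # gradient-style command change using the estimate
    dE = true_S * du                         # sensory feedback: how the error actually changed
    # normalized LMS revision of the estimate from the observed (du, dE) pair
    S_hat += eta_S * (dE - S_hat * du) * du / (du * du + 1e-8)
    E += dE

print(S_hat, E)   # S_hat ends close to the true value (-2.0) and the error near zero
```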
Purpose: Skew deviation is a vertical strabismus caused by damage to the otolithic-ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing's law, which specifies the eye's torsional angle as a function of its horizontal and vertical position.
Methods: Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied.
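For readers unfamiliar with how Listing's law is tested, the sketch below shows the usual analysis in outline: represent eye orientations as rotation vectors, regress torsion on vertical and horizontal position, and treat the residual torsional scatter as the "thickness" of Listing's plane. It is a generic illustration, not this study's analysis code.

```python
# Illustrative sketch (not the study's analysis code): test Listing's law by
# fitting a plane to eye-orientation rotation vectors and measuring the scatter
# of torsion about that plane.
import numpy as np

def listing_plane_thickness(rot_vecs):
    """rot_vecs: (N, 3) rotation vectors with columns (torsion, vertical, horizontal)."""
    t, v, h = rot_vecs[:, 0], rot_vecs[:, 1], rot_vecs[:, 2]
    # least-squares fit: torsion ~ a + b*vertical + c*horizontal
    X = np.column_stack([np.ones_like(v), v, h])
    coeffs, *_ = np.linalg.lstsq(X, t, rcond=None)
    residual_torsion = t - X @ coeffs
    return coeffs, residual_torsion.std()     # small std = eye obeys Listing's law

# toy data: torsion is near zero regardless of gaze direction (a Listing-obeying eye)
rng = np.random.default_rng(2)
vh = rng.uniform(-0.3, 0.3, size=(500, 2))    # vertical, horizontal components (radians)
tor = 0.01 * rng.normal(size=500)             # small torsional noise
coeffs, thickness = listing_plane_thickness(np.column_stack([tor, vh]))
print(thickness)                              # ~0.01 rad for this toy eye
```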
Most studies of neural control have looked at constrained tasks, with only a few degrees of freedom, but real sensorimotor systems are high dimensional--e.g. gaze-control systems that coordinate the head and two eyes have to work with 12 degrees of freedom in all.
It is known that people misperceive scenes they see during rapid eye movements called saccades. It has been suggested that some of these misperceptions could be an artifact of neurophysiological processes related to the internal remapping of spatial coordinates during saccades. Alternatively, we have recently suggested, based on a computational model, that transsaccadic misperceptions result from optimal inference.
The theoretical horopter is an interesting qualitative tool for conceptualizing binocular correspondence, but its quantitative applications have been limited because they have ignored ocular kinematics and vertical binocular sensory fusion. Here we extend the mathematical definition of the horopter to a full surface over visual space, and we use this extended horopter to quantify binocular alignment and visualize its dependence on eye position. We reproduce the deformation of the theoretical horopter into a spiral shape in tertiary gaze as first described by Helmholtz (1867).
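For context, the classical construction being extended here can be stated in one line (textbook geometry, not the paper's generalized definition):

```latex
% Classical longitudinal horopter, ignoring torsion and vertical fusion: a point P
% lies on it when it subtends the same binocular parallax at the two nodal points
% N_L, N_R as the fixation point F does,
\[
  \angle\, N_L P N_R \;=\; \angle\, N_L F N_R ,
\]
% which by the inscribed-angle theorem is the Vieth--Mueller circle through
% N_L, N_R and F.
```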
Here we examined the level of the lateral occipital (LO) area within the processing stream of the ventral visual cortex. An important determinant of an area's level of processing is whether it codes visual elements on both sides of the visual field, as do higher visual areas, or prefers those in the contralateral visual field, as do early visual areas. The former would suggest that LO, on one side, combines bilateral visual elements into a whole, while the latter suggests that it codes only the parts of forms.
Palsy of a nerve might be expected to lower vestibulo-ocular reflex (VOR) responses in its fields of motion, but the effects of peripheral neuromuscular disease were unknown. We recorded the VOR during sinusoidal head rotations in yaw, pitch, and roll at 0.5-2 Hz, and measured static torsional gain, in 43 patients with unilateral nerve palsies.
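For reference, the gain being measured is the standard one (a general definition, not specific to this study): the ratio of eye velocity to head velocity during the sinusoidal rotation, with an ideal compensatory reflex giving a gain near 1.

```latex
% Standard VOR gain for sinusoidal head rotation at frequency f:
\[
  g = \frac{\lvert \dot{E}_{\mathrm{peak}} \rvert}{\lvert \dot{H}_{\mathrm{peak}} \rvert},
  \qquad
  \dot{H}(t) = \dot{H}_{\mathrm{peak}} \sin(2\pi f t),
\]
% where \dot{E} and \dot{H} are eye and head angular velocity; an ideal
% compensatory reflex gives g near 1, with the eye moving opposite to the head.
```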
J Neurophysiol, October 2003
Static head roll about the naso-occipital axis is known to produce an opposite ocular counterroll with a gain of approximately 10%, but the purpose and neural mechanism of this response remain obscure. In theory counterroll could be maintained either by direct tonic vestibular inputs to motoneurons, or by a neurally integrated pulse, as observed in the saccade generator and vestibulo-ocular reflex. When simulated together with ocular drift related to torsional integrator failure, the direct tonic input model predicted that the pattern of drift would shift torsionally as in ordinary counterroll, but the integrated pulse model predicted that the equilibrium position of torsional drift would be unaffected by head roll.
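The contrast between the two models can be sketched with a one-line leaky-integrator simulation; this is an illustrative toy based on my reading of the two hypotheses, with made-up time constants, not the paper's simulation.

```python
# Illustrative sketch (not the paper's simulation): a leaky torsional neural
# integrator with time constant tau drifts toward its null position. In the
# "direct tonic input" model the counterroll signal is added downstream of the
# integrator, so the drift equilibrium shifts with head roll; in the
# "integrated pulse" model counterroll is stored in the leaky integrator itself,
# so the drift equilibrium returns to the integrator's null.

def simulate(model, head_roll_deg=30.0, counterroll_gain=0.1,
             tau=5.0, dt=0.01, t_end=60.0):
    counterroll = -counterroll_gain * head_roll_deg          # ~10% gain, opposite to head roll
    n = counterroll if model == "integrated_pulse" else 0.0  # integrator state (deg)
    for _ in range(int(t_end / dt)):
        n += -n / tau * dt                                   # leaky integrator drifts toward its null (0)
    tonic = counterroll if model == "direct_tonic" else 0.0
    return n + tonic                                         # equilibrium eye torsion after the drift

print(simulate("direct_tonic"))      # equilibrium shifted by the counterroll (about -3 deg)
print(simulate("integrated_pulse"))  # equilibrium back at the integrator null (about 0 deg)
```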
As we move through space, stationary objects around us show motion parallax: their directions relative to us change at different rates, depending on their distance. Does the brain incorporate parallax when it updates its stored representations of space? We had subjects fixate a distant target and then we flashed lights, at different distances, onto the retinal periphery. Subjects translated sideways while keeping their gaze on the distant target, and then they looked to the remembered location of the flash.
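The geometry behind this is standard motion parallax (not the paper's model):

```latex
% Standard motion parallax: an observer translating sideways at speed v sees a
% stationary point at distance d, at angle \theta from the translation direction,
% change direction at rate
\[
  \dot{\theta} = \frac{v \sin\theta}{d},
\]
% so nearer points sweep across the visual field faster than farther ones.
```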
When we move our eyes, why does the world look stable even as its image flows across our retinas, and why do afterimages, which are stationary on the retinas, appear to move? Current theories say this is because we perceive motion by summation: if an object slips across the retina at r degrees/s while the eye turns at e degrees/s, the object's perceived velocity in space should be r + e. We show that activity in MT+, the visual-motion complex in human cortex, does reflect a mix of r and e rather than r alone. But we show also that, for optimal perception, r and e should not summate; rather, the signals coding e interact multiplicatively with the spatial gradient of illumination.
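One standard way to see why the eye-velocity signal should enter multiplicatively rather than by summation, under the usual brightness-constancy assumption (an illustration, not necessarily the paper's derivation):

```latex
% Brightness-constancy sketch: if retinal intensity I(x, t) is conserved, the
% retinal slip r satisfies
\[
  \frac{\partial I}{\partial t} = -\,\nabla I \cdot r .
\]
% Writing the object's velocity in space as s = r + e (retinal slip plus eye
% velocity) and rearranging gives
\[
  \nabla I \cdot s = -\,\frac{\partial I}{\partial t} + \nabla I \cdot e ,
\]
% so e enters through the product \nabla I \cdot e -- multiplicatively with the
% spatial gradient of illumination -- rather than by summation with a separately
% computed r.
```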
In animals with binocular depth vision, or stereopsis, the visual fields of the two eyes overlap, shrinking the overall field of view. Eye movements increase the field of view, but they also complicate the first stage of stereopsis: the search for corresponding images on the two retinas. If the eyes were stationary in the head, corresponding images would always lie on retina-fixed bands called epipolar lines.
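In computer-vision terms (a standard formulation, not the paper's), the correspondence search is governed by the epipolar constraint, and eye rotations change that constraint:

```latex
% Standard epipolar constraint for two views related by rotation R and
% translation t: corresponding homogeneous image points x_L, x_R satisfy
\[
  x_R^{\top} E \, x_L = 0, \qquad E = [\,t\,]_{\times} R ,
\]
% so each left-eye point confines its match to a single line in the right eye.
% When the eyes rotate, R (and hence E and the epipolar lines) changes, which is
% why the lines are retina-fixed only if the eyes are stationary in the head.
```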
We scan our surroundings with quick eye movements called saccades, and from the resulting sequence of images we build a unified percept by a process known as transsaccadic integration. This integration is often said to be flawed, because around the time of saccades, our perception is distorted and we show saccadic suppression of displacement (SSD): we fail to notice if objects change location during the eye movement. Here we show that transsaccadic integration works by optimal inference.
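A minimal sketch of what "optimal inference" can mean here, assuming Gaussian noise and a prior that objects tend to stay put during saccades (an illustration of the general idea, not the paper's model):

```python
# Illustrative sketch (not the paper's model): optimal inference about an
# intrasaccadic displacement, combining a prior that objects tend to stay put
# (mean 0) with a noisy transsaccadic measurement of the displacement.
def inferred_displacement(measured, meas_var, prior_var):
    """Posterior mean of the displacement under a Gaussian prior N(0, prior_var)."""
    w = prior_var / (prior_var + meas_var)   # weight on the measurement
    return w * measured

# Around a saccade, transsaccadic measurements are unreliable (large meas_var),
# so the zero-displacement prior dominates and a real 1-deg jump is mostly
# discounted: saccadic suppression of displacement.
print(inferred_displacement(measured=1.0, meas_var=4.0, prior_var=1.0))  # 0.2 deg
# With fixation-quality measurements the same jump is mostly perceived:
print(inferred_displacement(measured=1.0, meas_var=0.1, prior_var=1.0))  # ~0.91 deg
```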
Chronic medical conditions drastically affect the lives of those afflicted, leading to pain, disability, and disruption. Comorbid depression can exacerbate the effects of medical illness and may be an independent source of suffering and disability. Data from the Epidemiological Follow-Up Study (NHEFS) of the first National Health and Nutrition Examination Survey (NHANES I) are used to examine the manner in which depression and comorbid medical conditions interact to affect health-related quality of life (HRQOL).
The effects of fourth nerve palsy on the vestibulo-ocular reflex (VOR) had not been systematically investigated. We used the magnetic scleral search coil technique to study the VOR in patients with unilateral fourth nerve palsy during sinusoidal head rotations in yaw, pitch and roll at different frequencies. In darkness, VOR gains are reduced during incyclotorsion, depression and abduction of the paretic eye, as anticipated from paresis of the superior oblique muscle.
Objective: To detect and determine the magnitude of vertical deviation in patients with unilateral sixth nerve palsy.
Design: Prospective consecutive comparative case series.
Participants: Twenty patients with unilateral peripheral sixth nerve palsy, 7 patients with central palsy caused by brainstem lesions, and 10 normal subjects.
Invest Ophthalmol Vis Sci, June 2002
Purpose: During fixation and saccades, human eye movements obey Listing's law, which specifies the eye's torsional angle as a function of its horizontal and vertical position. Torsion of the eye is in part controlled by the fourth nerve. This study investigates whether the brain adapts to defective torsional control after fourth nerve palsy.