Optic Flow Research Papers - Academia.edu
Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations.
Since existing ground truth datasets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
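The correlation layer is the distinctive piece of the second architecture: for every pixel, it scores how well a feature vector in one image matches feature vectors at nearby displacements in the other. Below is a minimal, naive sketch of that idea, assuming PyTorch tensors; the published layer is an optimized, batched version of the same dot-product matching, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def correlation(f1, f2, max_disp=4):
    """Naive correlation volume between two feature maps.
    f1, f2: (B, C, H, W) tensors; returns (B, (2*max_disp+1)**2, H, W),
    one channel per candidate displacement (dx, dy)."""
    b, c, h, w = f1.shape
    padded = F.pad(f2, [max_disp] * 4)  # zero-pad so shifted crops stay in bounds
    channels = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            # Dot product of feature vectors, normalized by channel count
            channels.append((f1 * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(channels, dim=1)
```

Each output channel measures match quality for one displacement; later layers can turn the best-supported displacement at each pixel into a flow estimate.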
In our ongoing project on the autonomous guidance of Micro-Air Vehicles (MAVs) in confined indoor and outdoor environments, we have developed a bio-inspired optic flow based autopilot enabling a hovercraft to travel safely and avoid the walls of a corridor. The hovercraft is an air vehicle endowed with natural roll and pitch stabilization characteristics, in which planar flight control can be developed conveniently. It travels at a constant ground height (~2 mm) and senses the environment by means of two lateral eyes that measure the right and left optic flows (OFs). The visuomotor feedback loop, called LORA(1) (Lateral Optic flow Regulation Autopilot, Mark 1), consists of a lateral OF regulator that adjusts the hovercraft's yaw velocity so as to keep the lateral OF measured on one wall equal to an OF set-point. Simulations have shown that the hovercraft manages to navigate in a corridor at a "pre-set" groundspeed (1 m/s) without requiring a supervisor to switch abruptly between control laws for behaviours such as automatic wall-following, automatic centring, and automatic reaction to an opening encountered in a wall. The passive visual sensors and the simple control system used here are suitable for use on MAVs with an avionic payload of only a few grams.
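As a rough illustration of the OF-regulation principle (not the published controller: the gains, sign conventions, and the rule of regulating the larger of the two flows are assumptions made for this sketch), a yaw command can be derived from the error between the measured lateral OF and the set-point:

```python
def lora_yaw_command(of_left, of_right, of_setpoint, k_yaw=1.0):
    """Toy lateral optic-flow regulator in the spirit of LORA(1).
    At a given groundspeed, the nearer wall produces the larger OF,
    so holding the larger flow at the set-point maintains a safe
    clearance from that wall. Gains and signs are illustrative."""
    if of_right >= of_left:
        # Right wall is nearer: OF above the set-point means too close, so yaw left
        return -k_yaw * (of_right - of_setpoint)
    else:
        # Left wall is nearer: yaw right when its OF exceeds the set-point
        return k_yaw * (of_left - of_setpoint)
```

When both flows sit below the set-point (a wide corridor or an opening in one wall), the command steers gently toward the nearer wall, so wall-following at a set-point-determined clearance emerges without any explicit behaviour switching.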
- by Julien R Serres and +1 • Biomimetics, Bionics, Optic Flow, Motion Detection
- by Cathy Craig and +1 • Engineering, Decision Making, Time Series, Motion perception
Optic flow provides visual self-motion information and has been shown to modulate gait and provoke postural reactions. We have previously reported an increased reliance on the visual, as opposed to the somatosensory-based, egocentric frame of reference (FoR) for spatial orientation with age. In this study, we evaluated FoR reliance for self-motion perception with respect to the ground surface. We examined how the effects of ground optic flow direction on posture may be enhanced by intermittent podal contact with the ground, by reliance on the visual FoR, and by aging. Young, middle-aged and old adults stood quietly (QS) or stepped in place (SIP) for 30 s under static stimulation, approaching and receding optic flow on the ground, and a control condition. We calculated center-of-pressure (COP) translation, and optic flow sensitivity was defined as the ratio of COP translation velocity to absolute optic flow velocity: the visual self-motion quotient (VSQ). COP translation was more influenced by ...
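A minimal sketch of the VSQ as the abstract defines it, assuming a 1-D array of COP position samples and consistent units (the study's actual filtering and averaging steps are not specified here):

```python
import numpy as np

def visual_self_motion_quotient(cop_positions, dt, optic_flow_velocity):
    """VSQ: mean COP translation speed divided by the absolute optic
    flow velocity. cop_positions: 1-D array of COP samples (m);
    dt: sampling interval (s); optic_flow_velocity: stimulus speed (m/s).
    Units and the use of a simple mean are illustrative assumptions."""
    cop_speed = np.mean(np.abs(np.diff(cop_positions)) / dt)
    return cop_speed / abs(optic_flow_velocity)
```

A VSQ near zero indicates posture unaffected by the stimulus; larger values indicate stronger visually driven sway per unit of optic flow.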
- by Franck Ruffier • Robotics, Zoology, Optic Flow, Animals
The impact of a central or peripheral visual field loss on the vision strategy used to guide walking was determined by measuring the walking paths of visually impaired participants. An immersive virtual environment was used to dissociate the expected paths of the optic-flow and egocentric-direction strategies by offsetting the walker's point of view from the actual direction of walking. Environments consisted ...
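A toy simulation of that dissociation logic, with all names and parameters hypothetical: when the visual point of view is rotated away from the true walking direction, a walker using the egocentric-direction strategy (keeping the goal visually straight ahead) traces a curved path, while a walker using the optic-flow strategy (aligning the flow's focus of expansion with the goal) cancels the offset and walks straight.

```python
import numpy as np

def simulate_paths(offset_deg=10.0, steps=400, speed=0.01, goal=(0.0, 5.0)):
    """Predicted walking paths under a visual offset (toy model).
    Egocentric-direction strategy: heading = bearing-to-goal minus the
    offset, so the path curves and is continually re-corrected.
    Optic-flow strategy: heading = bearing-to-goal, a straight path."""
    offset = np.radians(offset_deg)
    gx, gy = goal
    paths = {}
    for name, use_flow in (("egocentric", False), ("optic_flow", True)):
        x, y, pts = 0.0, 0.0, [(0.0, 0.0)]
        for _ in range(steps):
            bearing = np.arctan2(gx - x, gy - y)  # angle to goal from +y axis
            heading = bearing if use_flow else bearing - offset
            x += speed * np.sin(heading)
            y += speed * np.cos(heading)
            pts.append((x, y))
        paths[name] = np.array(pts)
    return paths
```

Comparing participants' measured paths against these two predictions is what allows the study to classify which strategy dominates under each type of field loss.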
In eight experiments, we examined the ability to judge heading during tracking eye movements. To assess the use of retinal-image and extra-retinal information in this task, we compared heading judgments with executed as opposed to simulated eye movements. In general, judgments were much more accurate during executed eye movements. Observers in the simulated eye movement condition misperceived their self-motion as curvilinear translation rather than the linear translation plus eye rotation that was simulated. There were some experimental conditions in which observers could judge heading reasonably accurately during simulated eye movements; these included conditions in which eye movement velocities were 1 deg/s or less and conditions that made available a horizon cue that exists for locomotion parallel to a ground plane with a visible horizon. Overall, our results imply that extra-retinal, eye-velocity signals are used in determining heading under many, perhaps most, viewing conditions.
- by Mobin Rastgar Agah and +1 • Biomedical Engineering, Humans, Movement, Cues
During locomotion humans can judge where they are heading relative to the scene and the movement of objects within the scene. Both judgments rely on identifying global components of optic flow. What is the relationship between the perception of heading and the identification of object movement during self-movement? Do they rely on a shared mechanism? One way to address these questions is to compare performance on the two tasks. We designed stimuli that allowed direct comparison of the precision of heading and object movement judgments. Across a series of experiments, we found that precision was typically higher when judging scene-relative object movement than when judging heading. We also found that manipulations of the content of the visual scene can change the relative precision of the two judgments. These results demonstrate that the ability to judge scene-relative object movement during self-movement is not limited by, or yoked to, the ability to judge the direction of self-movement.
The visual responsiveness and spatial tuning of frontal eye field (FEF) neurons were determined using a delayed memory saccade task. Neurons with visual responses were then tested for direction selectivity using moving random dot patterns centered in the visual receptive field. The preferred axis of motion showed a significant tendency to be aligned with the receptive-field location so as to favor motion toward or away from the center of gaze. Centrifugal (outward) motion was preferred over centripetal motion. Motion-sensitive neurons in FEF thus appear to have a direction bias at the population level. This bias may facilitate the detection or discrimination of expanding optic flow patterns. The direction bias is similar to that seen in visual area MT and in posterior parietal cortex, from which FEF receives afferent projections. The outward motion bias may explain asymmetries in saccades made to moving targets. A representation of optic flow in FEF might be useful for planning eye movements.
We presented optic flow simulating eight directions of self-movement in the ground plane while monkeys performed delayed match-to-sample tasks, and we recorded dorsal medial superior temporal (MSTd) neuronal activity. Randomly selected sample headings yield smaller test responses to the neuron's preferred heading when it is near the sample's heading direction and larger test responses to the preferred heading when it is far from the sample's heading. Limiting test stimuli to matching or opposite headings suppresses responses to preferred stimuli in both test conditions, whereas focusing on each neuron's preferred vs. antipreferred stimuli enhances responses to the antipreferred stimulus. Match vs. opposite paradigms create bimodal heading profiles shaped by interactions with late delay-period activity. We conclude that task contingencies, determining the prior probabilities of specific stimuli, interact with the monkeys' perceptual strategy for optic flow analysis.
Sensory conflict theories predict that adding simulated viewpoint oscillation to self-motion displays should generate significant and sustained visual-vestibular conflict and reduce the likelihood of illusory self-motion (vection). However, research shows that viewpoint oscillation enhances vection in upright observers. This study examined whether the oscillation advantage for vection depends on head orientation with respect to gravity. Displays that simulated forward/backward self-motion with/without horizontal and vertical viewpoint oscillation were presented to observers in upright (seated and standing) and lying (supine, prone, and left side down) body postures. Viewpoint oscillation was found to enhance vection for all of the body postures tested. Vection also tended to be stronger in upright postures than in lying postures. Changing the orientation of the head with respect to gravity was expected to alter the degree/saliency of the sensory conflict, which may explain the overall ...
We examined the effect of observer motion on perceived time-to-contact (TTC) with an approaching target simulated on a wide-field display. In Experiment 1, we compared observer motion (OM) alone with target motion (TM) alone, and found that TTC estimates were significantly shorter for OM even though the value of τ was the same in both cases. In Experiment 2, we compared TTC estimates for different combinations of OM and TM, and found that estimated TTC decreased as the proportion of OM was increased. The present data are further evidence that TTC estimates are not based solely on visual cues (e.g., τ) associated with the rate of optical expansion of a target, and further suggest that, for more complex imagery, display-related information such as optical flow can contribute independently to perceived TTC.
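For reference, the τ mentioned above is the classic first-order time-to-contact variable: a target's optical angle divided by its rate of expansion, which equals distance over closing speed when that speed is constant. A worked check under assumed geometry (all numbers illustrative):

```python
import numpy as np

def tau(theta, theta_dot):
    """First-order time-to-contact estimate: tau = theta / (d theta / dt).
    Exact for constant closing speed and small optical angles."""
    return theta / theta_dot

# Illustrative geometry: target of radius r at distance d, closing speed v
r, d, v = 0.5, 20.0, 5.0                 # metres, metres, metres/second
theta = 2 * np.arctan(r / d)             # optical angle subtended (rad)
theta_dot = 2 * r * v / (d**2 + r**2)    # expansion rate for this geometry
print(tau(theta, theta_dot))             # ~4.0 s, matching d / v = 4 s
```

The experiments show that observers do not rely on this optical variable alone: identical τ values yielded different TTC estimates depending on whether the display simulated observer motion or target motion.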
The robot navigation task presented in this paper is to drive through the center of a corridor, based on a sequence of images from an on-board camera. Our measurements of the system state, the distance to the wall and the orientation of the wall, are derived from the optic flow. Whereas the structure of the environment is usually computed from the spatial derivatives of the optic flow, we used the structure contained in the temporal derivatives of the optic flow to compute the environment structure and hence the system state. The algorithm is used to control a 'remote brain' robot, and results on the accuracy of the state estimates are presented.
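A common baseline for corridor centring (a plain left/right flow balance rule, not the temporal-derivative state estimator this paper describes) can be sketched in a few lines; the gain and sign conventions are assumptions:

```python
def centering_steer_command(of_left, of_right, k=1.0):
    """Toy corridor-centring rule: steer toward the side with the
    smaller lateral optic flow. At a fixed forward speed, lateral OF
    is inversely proportional to wall distance, so equalizing the two
    flows drives the robot toward the corridor's midline."""
    return k * (of_left - of_right)  # positive command: yaw right, away from the nearer left wall
```

The paper's estimator goes further, recovering distance and orientation to the wall explicitly, which permits a proper state-feedback controller rather than this reactive balance rule.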