Current Search: Vision
-
-
Title
-
Visual cues in active monocular vision for autonomous navigation.
-
Creator
-
Yang, Lingdi., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
-
Abstract/Description
-
In this dissertation, visual cues obtained from an active monocular camera for autonomous vehicle navigation are investigated. A number of visual cues suitable for this objective are proposed, and effective methods to extract them are developed. Unique features of these visual cues include: (1) there is no need to reconstruct the 3D scene; (2) they utilize short image sequences taken by a monocular camera; and (3) they operate on local image brightness information. Because of these features, the algorithms developed are computationally efficient, and simulation and experimental studies confirm their efficacy. The major contribution of this dissertation is the extraction of visual information suitable for autonomous navigation from an active monocular camera, without 3D reconstruction, using only local image information. The first visual cue is related to camera focusing parameters. An objective function relating focusing parameters to local image brightness is proposed, and a theoretical development shows that by maximizing this objective function one can successfully focus the camera by choosing the focusing parameters. As a result, a dense distance map between the camera and the scene in front of it can be estimated without using the Gaussian spread function. The second visual cue, the clearance invariant (first proposed by Raviv (97)), is extended here to include arbitrary translational motion of the camera. It is shown that the angle between the optical axis and the camera's direction of motion can be estimated by minimizing the relevant estimated error residual. This method needs only one image projection of a 3D surface point at an arbitrary time instant. The third issue discussed in this dissertation is extracting the looming and the magnitude of rotation using a new visual cue, designated the rotation invariant, under camera fixation. An algorithm to extract the looming is proposed using the image information available from only one 3D surface point at an arbitrary time instant. A further algorithm is proposed to estimate the magnitude of the camera's rotational velocity using the image projections of only two 3D surface points measured over two time instants. Finally, a method is presented to extract the focus of expansion robustly without using image brightness derivatives. It decomposes an image projection trajectory into two independent linear models and applies Kalman filters to estimate the focus of expansion.
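The final step described in the abstract (modeling each image trajectory with independent linear models and Kalman-filtering them to locate the focus of expansion) can be sketched along these lines. This is a minimal illustration, not the dissertation's actual formulation: the constant-velocity state model, the noise parameters, and the least-squares line-intersection step are all assumptions made here for the sketch.

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for one image coordinate.

    zs is the x (or y) track of a single image point over time; the filter
    returns the sequence of [position, velocity] state estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x = np.array([zs[0], zs[1] - zs[0]])    # init velocity from first difference
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

def estimate_foe(tracks):
    """Least-squares intersection of the filtered flow lines.

    tracks: list of (xs, ys) coordinate sequences, one per image point.
    Under pure translation each point moves along a line through the focus
    of expansion, so the FOE minimizes the distance to all flow lines."""
    A, b = [], []
    for xs, ys in tracks:
        px, vx = kalman_cv(xs)[-1]
        py, vy = kalman_cv(ys)[-1]
        # line through (px, py) with direction (vx, vy):
        # normal n = (-vy, vx), and n . foe = n . p for any p on the line
        A.append([-vy, vx])
        b.append(-vy * px + vx * py)
    foe, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return foe
```

With noise-free radial tracks, the least-squares step recovers the expansion point that all flow lines pass through; with noisy tracks, the Kalman smoothing is what keeps the line directions stable.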
-
Date Issued
-
1997
-
PURL
-
http://purl.flvc.org/fcla/dt/12527
-
Subject Headings
-
Computer vision, Robot vision
-
Format
-
Document (PDF)
-
-
Title
-
Visual threat cues for autonomous navigation.
-
Creator
-
Kundur, Sridhar Reddy, Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
-
Abstract/Description
-
This dissertation deals with novel vision-based motion cues called Visual Threat Cues (VTCs), suitable for autonomous navigation tasks such as collision avoidance and maintenance of clearance. The VTCs are time-based and provide a measure of the relative change in range as well as clearance between a 3D surface and a moving observer. They are independent of the 3D environment around the observer and need almost no a priori knowledge about it. Each VTC presented in this dissertation has a corresponding visual field associated with it. Each visual field constitutes a family of imaginary 3D surfaces attached to the moving observer, and all points that lie on a particular imaginary 3D surface produce the same value of the VTC. These visual fields can be used to demarcate the space around the moving observer into safe and danger zones of varying degree. Several approaches to extract the VTCs from a sequence of monocular images are suggested. A practical method to extract the VTCs from a sequence of images of 3D textured surfaces, obtained by a visually fixating, fixed-focus moving camera, is also presented. This approach is based on the extraction of a global image dissimilarity measure called the Image Quality Measure (IQM), which is computed directly from the raw gray-level image data. The VTCs are extracted from the relative variations of the measured IQM. This practical approach needs no 3D reconstruction, depth information, optical flow, or feature tracking. The algorithm was tested on several indoor as well as outdoor real image sequences. Two vision-based closed-loop control schemes for autonomous navigation tasks were implemented in a priori unknown textured environments, using one of the VTCs as the relevant sensory feedback. They are based on a set of IF-THEN fuzzy rules and need almost no a priori information about the vehicle's dynamics, speed, direction of motion, etc. They were implemented in real time using a camera mounted on a six-degree-of-freedom flight simulator.
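The IF-THEN fuzzy control idea can be illustrated with a toy rule base. This is a hedged sketch, not the dissertation's controller: the normalized threat range, the triangular membership functions, and the three rules are invented for illustration, and a Sugeno-style weighted average stands in for whatever defuzzification the original scheme used.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def speed_command(vtc):
    """Map a (hypothetical, normalized 0..1) threat cue to a speed in [0, 1].

    Three IF-THEN rules, each pairing a fuzzy antecedent with a crisp
    speed consequent:
      IF threat is low    THEN speed is fast   (0.9)
      IF threat is medium THEN speed is cruise (0.5)
      IF threat is high   THEN speed is slow   (0.1)
    The output is the firing-strength-weighted average of the consequents."""
    rules = [
        (tri(vtc, -0.5, 0.0, 0.5), 0.9),   # low threat  -> fast
        (tri(vtc, 0.0, 0.5, 1.0), 0.5),    # medium      -> cruise
        (tri(vtc, 0.5, 1.0, 1.5), 0.1),    # high threat -> slow
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Overlapping membership functions make the command vary smoothly with the cue, which is what lets such a controller run without a model of the vehicle dynamics.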
-
Date Issued
-
1996
-
PURL
-
http://purl.flvc.org/fcla/dt/12476
-
Subject Headings
-
Computer vision, Robot vision, Visual perception
-
Format
-
Document (PDF)
-
-
Title
-
Performance analysis of compression algorithms for noisy multispectral underwater images of small targets.
-
Creator
-
Schmalz, Mark S., Ritter, G. X., Caimi, F. M., Harbor Branch Oceanographic Institute
-
Date Issued
-
1997
-
PURL
-
http://purl.flvc.org/FCLA/DT/3180413
-
Subject Headings
-
Image compression, Underwater vision
-
Format
-
Document (PDF)
-
-
Title
-
Effects of adaptation on the perception of motion: The influence of competing mechanisms.
-
Creator
-
Espinoza, Jessica K., Florida Atlantic University, Hock, Howard S.
-
Abstract/Description
-
The effects of adaptation on motion were investigated using a modified apparent motion display. Unlike the classical apparent motion display, a BRLC (background relative luminance contrast) apparent motion display consists of two visible dots, each of a different luminance, which remain in the same position but exchange luminances on successive frames. This forms a bistable stimulus; either stationarity (flicker) or motion may be perceived, depending on the value of the BRLC. There was a significant interaction between condition (baseline or adaptation) and BRLC when testing motion perception following adaptation to a moving stimulus, a flickering stimulus, and a static stimulus. Additionally, adaptation to flicker decreased motion perception at high BRLC values and increased it at low BRLC values. Our results reflected the presence of strong inhibitory competition between the mechanisms concerned with the perception of motion and stationarity, which restricted adaptation effects to certain values of BRLC.
-
Date Issued
-
1998
-
PURL
-
http://purl.flvc.org/fcla/dt/15574
-
Subject Headings
-
Luminescence, Motion perception (Vision)
-
Format
-
Document (PDF)
-
-
Title
-
On the perception of relational motion.
-
Creator
-
Field, Linda C., Florida Atlantic University, Hock, Howard S.
-
Abstract/Description
-
Six experiments were performed to examine the adequacy of detection/computation models for understanding the perception of relational motion, and in particular, the perception of three-dimensional motion in two-dimensional displays. The stimuli were a pair of dots that moved relationally (i.e., the relative location of the dots changed). Three-dimensional motion was seen when a contraction of the stimulus preceded an expansion (i.e., the dot separation first decreased, then increased), the angular difference between the pattern orientation and the direction of movement was small, and the spatial separation between the dots was small. Neither the activation of higher-order relational feature detectors, nor the construction/computation of relational motion from the detected motion of individual dots, can adequately explain the perception of three-dimensional motion.
-
Date Issued
-
1990
-
PURL
-
http://purl.flvc.org/fcla/dt/14630
-
Subject Headings
-
Motion perception (Vision)
-
Format
-
Document (PDF)
-
-
Title
-
Vision in the hyperiid amphipod Scina crassicornis.
-
Creator
-
Cohen, Jonathan H., Frank, Tamara M., Harbor Branch Oceanographic Institute
-
Date Issued
-
2007
-
PURL
-
http://purl.flvc.org/fau/fd/FA00007160
-
Subject Headings
-
Amphipoda, Hyperiidae, Vision, Light microscopy
-
Format
-
Document (PDF)
-
-
Title
-
Computer vision techniques for quantifying, tracking, and identifying bioluminescent plankton.
-
Creator
-
Kocak, D. M., da Vitoria Lobo, N., Widder, Edith A., Harbor Branch Oceanographic Institute
-
Date Issued
-
1999
-
PURL
-
http://purl.flvc.org/FCLA/DT/3183711
-
Subject Headings
-
Underwater imaging systems, Computer vision
-
Format
-
Document (PDF)
-
-
Title
-
Characterization of A Stereo Vision System For Object Classification For USV Navigation.
-
Creator
-
Kaplowitz, Chad, Dhanak, Manhar, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
-
Abstract/Description
-
This experiment used different methodologies and comparisons to help determine the direction of future research on water-based, stereo-vision perception systems for unmanned surface vehicle (USV) platforms. Presented in this work is real-time object color and shape classification in the maritime environment, using the HSV color space to define and detect different thresholds. The algorithm was then calibrated and executed to configure the depth, color, and shape accuracies. The approach entails the characterization of a stereo-vision camera and mount, designed with 8.5° horizontal viewing increments and mounted on the WAMV. This characterization covers depth, color, and shape object detection and classification. Buoys of assorted colors and shapes were used for testing. The main library used was OpenCV, with Gaussian blurring, morphological operators, and Canny edge detection, integrated with ROS. The code uses the area and the number of contours detected on the shape to judge successful detection. In summary, this thesis covers the installation and characterization of the stereo-vision system on the WAMV-USV, providing specific inputs to the high-level controller.
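The HSV thresholding step can be sketched in a few lines. This is a minimal, hypothetical illustration (using Python's standard colorsys rather than OpenCV): the hue ranges and the saturation/value floors are invented placeholders, not the calibrated thresholds from the thesis.

```python
import colorsys

# Hypothetical hue intervals in degrees for the buoy colors; real thresholds
# would be calibrated against the camera, as the thesis describes.
HUE_RANGES = {
    "red": (330.0, 30.0),    # wraps around 0 degrees
    "green": (90.0, 150.0),
    "blue": (200.0, 260.0),
}

def classify_rgb(r, g, b, min_sat=0.3, min_val=0.2):
    """Classify an 8-bit RGB pixel by its HSV hue.

    Converts to HSV, rejects washed-out or dark pixels (low saturation or
    value), then checks which hue interval the pixel falls into. Red's
    interval wraps around 0 degrees, so it needs an OR test."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue = h * 360.0
    if s < min_sat or v < min_val:
        return None   # too gray or too dark to trust the hue
    for name, (lo, hi) in HUE_RANGES.items():
        inside = lo <= hue <= hi if lo <= hi else (hue >= lo or hue <= hi)
        if inside:
            return name
    return None
```

Thresholding in HSV rather than RGB is what makes the color decision resilient to brightness changes on the water: illumination mostly moves the value channel, leaving hue nearly fixed.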
-
Date Issued
-
2022
-
PURL
-
http://purl.flvc.org/fau/fd/FA00014035
-
Subject Headings
-
Computer vision, Unmanned surface vehicles
-
Format
-
Document (PDF)
-
-
Title
-
Crepuscular and nocturnal illumination and its effects on color perception in the nocturnal hawkmoth Deilephila elpenor.
-
Creator
-
Johnsen, Sonke, Kelber, A., Warrant, E., Sweeney, A. M., Widder, Edith A., Lee, Raymond L. Jr., Hernandez-Andres, J., Harbor Branch Oceanographic Institute
-
Date Issued
-
2006
-
PURL
-
http://purl.flvc.org/fau/fd/FA00007078
-
Subject Headings
-
Crepuscule, Hawkmoths, Sphingidae, Color Perception, Color vision
-
Format
-
Document (PDF)
-
-
Title
-
Temporal resolution and spectral sensitivity of the visual system of three coastal shark species from different light environments.
-
Creator
-
McComb, Dawn Michelle, Frank, Tamara M., Hueter, R. E., Kajiura, Stephen M., Harbor Branch Oceanographic Institute
-
Date Issued
-
2010
-
PURL
-
http://purl.flvc.org/fau/fd/FA00007091
-
Subject Headings
-
Sharks, Visual system, Spectral sensitivity, Night Vision
-
Format
-
Document (PDF)
-
-
Title
-
A Deep Learning Approach To Target Recognition In Side-Scan Sonar Imagery.
-
Creator
-
Einsidler, Dylan, Dhanak, Manhar R., Florida Atlantic University, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
-
Abstract/Description
-
Automatic target recognition in autonomous underwater vehicles has been a daunting task, largely due to the noisy nature of sonar imagery and the lack of publicly available sonar data. Machine learning techniques have made great strides in tackling this task, although not much research has been done on deep learning techniques for side-scan sonar imagery. Here, a state-of-the-art deep learning object detection method is adapted for side-scan sonar imagery, with results supporting a simple yet robust method to detect objects and anomalies along the seabed. A systematic transfer-learning procedure was applied to a pre-trained convolutional neural network to learn the pixel-intensity-based features of seafloor anomalies in sonar images. Using this process, newly trained convolutional neural network models were produced from relatively small training datasets and tested, showing reasonably accurate anomaly detection and classification with few to no false alarms.
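The transfer-learning split described above (freeze the pre-trained feature layers, train only a new classification head on the small dataset) can be illustrated in miniature. This is not the thesis's CNN pipeline: a fixed random projection stands in for the frozen pre-trained backbone, and a logistic-regression head stands in for the detector's classifier, purely to show which parameters are and are not updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" feature extractor: a fixed random projection.
# In the thesis this role is played by a CNN's pre-trained convolutional
# layers; here it is only a placeholder for the frozen backbone.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """Frozen backbone: never updated during fine-tuning."""
    return np.maximum(x @ W_frozen, 0.0)   # ReLU projection

def train_head(x, y, lr=0.5, epochs=300):
    """Train only the new head (logistic regression) on the small dataset."""
    f = features(x)                        # backbone outputs, treated as fixed
    w = np.zeros(f.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))   # sigmoid
        grad = p - y                             # dLoss/dlogit for log-loss
        w -= lr * f.T @ grad / len(y)            # update head weights only
        b -= lr * grad.mean()
    return w, b

def predict(x, w, b):
    return (features(x) @ w + b > 0).astype(int)
```

Because only the small head is trained, a modest labeled dataset suffices, which is the same economy that makes transfer learning attractive when public sonar data is scarce.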
-
Date Issued
-
2018
-
PURL
-
http://purl.flvc.org/fau/fd/FA00013025
-
Subject Headings
-
Deep learning, Sidescan sonar, Underwater vision
-
Format
-
Document (PDF)
-
-
Title
-
Comparative studies of retinal design among sea turtles: Histological and behavioral correlates of the visual streak.
-
Creator
-
DeCarlo, Lisa Joy., Florida Atlantic University, Salmon, Michael, Wyneken, Jeanette
-
Abstract/Description
-
We understand very little about the relationships between eye anatomy and visual ecology in sea turtles. Sea turtles use visual information in important contexts, such as selecting habitats, detecting predators, or locating mates or food. This study represents an effort to clarify the form/function relationship between retinal morphology and the behavioral ecology of sea turtle hatchlings, and is thus an important first step in relating sea turtle eye anatomy to visual ecology, and relating both to sea turtle natural history. Some organisms possess retinas that contain morphologically specialized cellular areas. The "visual streak" is one such area: receptor cells and associated interneurons are concentrated in a horizontal band in the retina. Three species of sea turtles (Chelonia mydas, Caretta caretta, and Dermochelys coriacea) possess a visual streak located along the horizontal mid-line of the retina, although they differ in streak development. These differences in streak development can be related to their ecology.
-
Date Issued
-
1998
-
PURL
-
http://purl.flvc.org/fcla/dt/15548
-
Subject Headings
-
Sea turtles, Eye--Anatomy, Vision
-
Format
-
Document (PDF)
-
-
Title
-
Cooperative self-organization in the perception of coherent motion.
-
Creator
-
Balz, Gunther William, Florida Atlantic University, Hock, Howard S.
-
Abstract/Description
-
A row of dots is presented in a series of alternating frames; dots in each frame are located at the midpoints between the dots of the preceding frame. Although the perceived frame-to-frame direction of motion could vary randomly, cooperativity is indicated by the emergence of two coherent motion patterns, one unidirectional, the other oscillatory. Small increases in the time between frames are sufficient for the bias that maintains the previously established motion direction (unidirectional motion) to be reversed, becoming a bias that inhibits that direction (oscillatory motion). Unidirectional motion, which predominates for small dot separations, and oscillatory motion, which predominates for large separations, are associated with short-range and long-range motion (Braddick, 1974) by manipulating the shape of the dots, their luminance, and the luminance of the inter-frame blank field. Pulsing/flicker emerges as a third perceptual state that competes with unidirectional motion at very small dot separations.
-
Date Issued
-
1991
-
PURL
-
http://purl.flvc.org/fcla/dt/14712
-
Subject Headings
-
Motion perception (Vision), Perceptual-motor learning
-
Format
-
Document (PDF)
-
-
Title
-
A visual rotation invariant in fixated motion.
-
Creator
-
Ozery, Nissim Jossef., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
-
Abstract/Description
-
This thesis studies a 2-D-based visual invariant that exists during relative motion between a camera and a 3-D object. We show that during fixation there is a measurable nonlinear function of optical flow that produces the same value for all points of a stationary environment, regardless of the 3-D shape of the environment. During fixated camera motion relative to a rigid object, e.g., a stationary environment, the projection of the fixated point remains (by definition) at the same location in the image, and all other points on the 3-D rigid object can only rotate relative to that 3-D fixation point. This rotation rate is invariant for all points that lie in the particular environment, and it is measurable from a sequence of images. The new invariant is obtained from a set of monocular images and is expressed explicitly as a closed-form solution.
-
Date Issued
-
1994
-
PURL
-
http://purl.flvc.org/fcla/dt/15095
-
Subject Headings
-
Invariants, Visual perception, Motion perception (Vision)
-
Format
-
Document (PDF)
-
-
Title
-
Spontaneous pattern changes for bistable apparent motion stimuli: Perceptual satiation or memory attraction?.
-
Creator
-
Voss, Audrey A., Florida Atlantic University, Hock, Howard S.
-
Abstract/Description
-
Subjects judge motion direction for an apparent motion stimulus with competing perceptual organizations: vertical vs. horizontal motion. The two patterns are coupled: when one is perceptually instantiated, the other remains active in memory, resulting in sudden changes in perceived motion direction under constant stimulus conditions. The probability of a change from an initially horizontal to a vertical pattern remains constant over time, showing that perceptual satiation is insufficient to explain the occurrence of spontaneous perceptual changes. It is proposed that spontaneous changes also occur because the pattern active in memory attracts the percept away from the currently instantiated pattern. The attraction hypothesis specifies that the activation of the memory pattern (and hence its attractive strength) increases as a result of previous experience. It is supported by evidence that the likelihood of changing, say, from horizontal to vertical motion is increased if the motion pattern was previously vertical.
-
Date Issued
-
1991
-
PURL
-
http://purl.flvc.org/fcla/dt/14721
-
Subject Headings
-
Motion perception (Vision), Perceptual-motor learning
-
Format
-
Document (PDF)
-
-
Title
-
Selective texture characterization using Gabor filters.
-
Creator
-
Boutros, George., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
-
Abstract/Description
-
The objective of this dissertation is to develop effective algorithms for texture characterization, segmentation, and labeling that operate selectively to label image textures, using the Gabor representation of signals. These representations are an analog of the spatial-frequency tuning characteristics of visual cortex cells. Of all spatial/spectral signal representations, the Gabor function provides optimal joint resolution between the two domains. A discussion of spatial/spectral representations focuses on the Gabor function and the biological analog that exists between it and the simple cells of the striate cortex. A simulation generates examples of the use of the Gabor filter as a line detector with synthetic data. Simulations are then presented using Gabor filters for real texture characterization. The Gabor filter's spatial and spectral attributes are selectively chosen, based on information from a scale-space image, in order to maximize the resolution of the characterization process. A variation of probabilistic relaxation that exploits the Gabor filter's spatial and spectral attributes is devised and used to force a consensus of the filter responses for texture characterization. We then perform segmentation of the image using the concept of isolation of low-energy states within the image. This iterative smoothing algorithm, operating as a Gabor filter post-processing stage, depends on a line-process discontinuity threshold, which is selected from the modes of the histogram of the relaxed Gabor filter responses, using probabilistic relaxation to detect the significant modes. We test our algorithm on simple synthetic and real textures, then use a more complex natural texture image to test the entire algorithm. Limitations on textural resolution are noted, as well as limitations on the resolution of the image segmentation process.
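The kind of filter discussed above can be written down compactly. The sketch below is generic, not the dissertation's implementation: it builds an even-symmetric Gabor kernel (a cosine carrier under a Gaussian envelope) with arbitrary example parameters, plus a brute-force correlation to apply it. The Gaussian sigma fixes the spatial support while the carrier wavelength fixes the tuned frequency, which is the space/frequency trade-off the text refers to.

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0, psi=0.0):
    """Real (even) 2-D Gabor kernel: a sinusoid at orientation theta
    modulated by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into filter frame
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + psi)
    k = envelope * carrier
    return k - k.mean()   # zero mean, so flat regions give no response

def filter_response(image, kernel):
    """Valid-mode 2-D correlation of a grayscale image with the kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A grating whose period matches the carrier wavelength drives the filter strongly, while a uniform region produces essentially zero response because the kernel is zero-mean; a filter bank over several orientations and wavelengths is what a texture-characterization stage would actually use.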
-
Date Issued
-
1993
-
PURL
-
http://purl.flvc.org/fcla/dt/12342
-
Subject Headings
-
Image processing--Digital techniques, Computer vision
-
Format
-
Document (PDF)
-
-
Title
-
Synthesis of vision-based robot calibration using moving cameras.
-
Creator
-
Wang, Kuanchih., Florida Atlantic University, Roth, Zvi S., Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
-
Abstract/Description
-
Robot calibration using a vision system and moving cameras is the focus of this dissertation. The dissertation contributes in the areas of robot modeling, kinematic identification, and calibration measurement. The effects of perspective distortion of circular camera calibration points are analyzed. A new modified complete and parametrically continuous robot kinematic model, an evolution of the complete and parametrically continuous (CPC) model, is proposed. It is shown that the model's error-model can be developed easily, as the structure of this new model is very simple and similar to the Denavit-Hartenberg model. The derivation procedure of the error-model follows a systematic method that can be applied to any kind of robot arm. Pose measurement is the most crucial step in robot calibration. The use of stereo as well as mono mobile-camera measurement systems for collecting pose data of the robot end-effector is investigated. The simulated annealing technique is applied to the problem of optimal measurement configuration selection; joint travel limits can be included in the cost function. It is shown that trapping in local minimum points can be effectively avoided by properly choosing an initial point and a temperature schedule. The concept of simultaneous calibration of camera and robot is developed and implemented as an automated process that determines the system model parameters using only the system's internal sensors. This process uses a unified mathematical model for the entire robot/camera system. The results of kinematic identification, optimal configuration selection, and simultaneous calibration of robot and camera using the PUMA 560 robot arm have demonstrated that the modified complete and parametrically continuous model is a viable and simple modeling tool that can achieve the desired accuracy. The systematic way of modeling and performing different kinds of vision-based robot applications demonstrated in this dissertation will pave the way for industrial standardization of robot calibration performed by the robot user on the manufacturing floor.
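The simulated-annealing search with a joint-limit penalty can be illustrated generically. The sketch below is a toy stand-in, not the dissertation's formulation: the single-joint "pose error" cost, the travel-limit penalty, and all annealing parameters are invented for illustration.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95,
                        steps=1000, seed=0):
    """Generic simulated annealing: always accept improvements, accept
    worse moves with probability exp(-delta/T), and cool T geometrically.
    The temperature schedule is what lets the search escape local minima
    early on, as the abstract notes."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

LIMITS = (-3.0, 3.0)   # hypothetical joint travel limits (radians)

def cost(q):
    """Toy 'pose error' with a large penalty for violating joint limits."""
    penalty = 0.0 if LIMITS[0] <= q <= LIMITS[1] else 100.0
    return (q - 2.0) ** 2 + penalty

def neighbor(q, rng):
    """Random local move in joint space."""
    return q + rng.uniform(-0.5, 0.5)

best_q, best_cost = simulated_annealing(cost, neighbor, x0=0.0)
```

In the actual calibration problem, the state would be a set of measurement configurations and the cost an identifiability measure of the error-model; the penalty term shows how joint travel limits fold into the same objective.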
-
Date Issued
-
1993
-
PURL
-
http://purl.flvc.org/fcla/dt/12339
-
Subject Headings
-
Robot vision, Robot cameras--Calibration
-
Format
-
Document (PDF)
-
-
Title
-
Competition between opposing motion directions in the perception of apparent motion: A new look at an old stimulus.
-
Creator
-
Huisman, Avia, Florida Atlantic University, Hock, Howard S.
-
Abstract/Description
-
This study tested the hypothesis that the perception of 2-flash apparent motion (points of light briefly presented in succession at nearby locations) is the outcome of competition between two opposing motion directions activated by the stimulus. Experiment 1 replicated previous results obtained using 2-flash stimuli; motion was optimal for a non-zero inter-frame interval (Kolers, 1972; Wertheimer, 1912). In Experiment 2, stimuli were pared down to a single luminance change toward the background at one location and a single luminance change away from the background at another. Results were consistent with apparent motion being specified by the counter-changing luminance; motion was optimal for a non-zero inter-frame interval. A subtractive model based on counter-change stimulating opposing motion directions did not account for the results of the 2-flash experiment. An alternative model based on the combined transient responses of biphasic detectors is discussed.
-
Date Issued
-
2005
-
PURL
-
http://purl.flvc.org/fcla/dt/13209
-
Subject Headings
-
Contrast sensitivity (Vision), Visual perception, Motion perception (Vision), Movement, Psychology of
-
Format
-
Document (PDF)
-
-
Title
-
Temporal resolution in mesopelagic crustaceans.
-
Creator
-
Frank, Tamara M., Harbor Branch Oceanographic Institute
-
Date Issued
-
2000
-
PURL
-
http://purl.flvc.org/fau/fd/FA00007155
-
Subject Headings
-
Crustaceans, Vision, Deep-sea animals, Flicker fusion
-
Format
-
Document (PDF)
-
-
Title
-
Underwater applications of solid-state laser technology.
-
Creator
-
Tusting, Robert F., Harbor Branch Oceanographic Institute
-
Date Issued
-
1995
-
PURL
-
http://purl.flvc.org/FCLA/DT/3338513
-
Subject Headings
-
Solid-state lasers, Semiconductor lasers, Underwater vision
-
Format
-
Document (PDF)