Visual cues in active monocular vision for autonomous navigation
Title: Visual cues in active monocular vision for autonomous navigation.
Name(s): Yang, Lingdi.
Florida Atlantic University, Degree grantor
Raviv, Daniel, Thesis advisor
College of Engineering and Computer Science
Department of Computer and Electrical Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Issuance: monographic
Date Issued: 1997
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 205 p.
Language(s): English
Summary: In this dissertation, visual cues obtained from an active monocular camera for autonomous vehicle navigation are investigated. A number of visual cues suited to this objective are proposed, and effective methods to extract them are developed. Unique features of these visual cues include: (1) there is no need to reconstruct the 3D scene; (2) they utilize short image sequences taken by a monocular camera; and (3) they operate on local image brightness information. Owing to these features, the algorithms developed are computationally efficient, and simulation and experimental studies confirm their efficacy. The major contribution of this dissertation is the extraction of visual information suitable for autonomous navigation with an active monocular camera, without 3D reconstruction, using only local image information. The first visual cue is related to camera focusing parameters. An objective function relating focusing parameters to local image brightness is proposed, and a theoretical development shows that by maximizing this objective function one can successfully focus the camera through the choice of focusing parameters. As a result, a dense distance map between the camera and the scene in front of it can be estimated without using the Gaussian spread function. The second visual cue, the clearance invariant (first proposed by Raviv (97)), is extended here to include arbitrary translational motion of the camera. It is shown that the angle between the optical axis and the direction of camera motion can be estimated by minimizing the relevant estimated error residual. This method needs only one image projection of a 3D surface point at an arbitrary time instant. The third issue discussed in this dissertation is the extraction of looming and of the magnitude of rotation using a new visual cue, designated the rotation invariant, under camera fixation.
An algorithm to extract the looming is proposed using the image information available from only one 3D surface point at an arbitrary time instant. A further algorithm is proposed to estimate the magnitude of the camera's rotational velocity using the image projections of only two 3D surface points measured over two time instants. Finally, a method is presented to robustly extract the focus of expansion without using image brightness derivatives. It decomposes an image projection trajectory into two independent linear models and applies Kalman filters to estimate the focus of expansion.
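The first cue described above selects focusing parameters by maximizing an objective function of local image brightness. The dissertation's actual objective function is not reproduced in this record; the sketch below stands in a common sharpness measure (sum of squared Laplacian responses) as an assumption, to illustrate the maximize-over-focus-settings idea: the setting that maximizes the objective for a local patch is the one that brings that patch into focus, and repeating this per patch yields a dense depth map without a Gaussian spread model.

```python
import numpy as np

def sharpness(patch):
    # Stand-in objective: sum of squared discrete Laplacian responses
    # over a local patch (assumption -- the dissertation defines its own
    # objective in terms of local image brightness).
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(np.sum(lap ** 2))

def best_focus(patches_by_setting):
    # patches_by_setting: {focus setting: image patch captured at that
    # setting}. The maximizing setting is the one at which the patch is
    # in focus; a lens equation (not shown) would map it to a distance.
    return max(patches_by_setting, key=lambda s: sharpness(patches_by_setting[s]))
```

A defocused patch is smoothed, so its Laplacian energy drops; the in-focus setting therefore wins the maximization.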
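The focus-of-expansion method described above decomposes each image projection trajectory into two independent linear models and applies Kalman filters. The sketch below is a minimal illustration under stated assumptions, not the dissertation's implementation: each image coordinate is filtered by a constant-velocity Kalman filter (an assumed linear model), and since under pure translation every trajectory line passes through the FOE, the filtered positions and velocities give stacked line equations solved in the least-squares sense.

```python
import numpy as np

def kalman_smooth_1d(zs, dt=1.0, q=1e-4, r=1e-2):
    # Minimal constant-velocity Kalman filter over one image coordinate
    # (assumed linear model). State: [position, velocity]; zs: the
    # measured coordinate over time. Returns the final filtered state.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    for z in zs[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x  # [position, velocity]

def estimate_foe(tracks):
    # tracks: list of (xs, ys) image trajectories, one per 3D point.
    # Each filtered track yields a point (px, py) and direction (vx, vy);
    # the line through that point with that direction satisfies
    #   vy * X - vx * Y = vy * px - vx * py,
    # and the FOE (X, Y) solves the stacked system in least squares.
    A, b = [], []
    for xs, ys in tracks:
        px, vx = kalman_smooth_1d(np.asarray(xs, dtype=float))
        py, vy = kalman_smooth_1d(np.asarray(ys, dtype=float))
        A.append([vy, -vx])
        b.append(vy * px - vx * py)
    foe, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return foe  # (x, y) of the focus of expansion
```

Filtering x(t) and y(t) separately reflects the "two independent linear models" decomposition; at least two tracks with non-parallel directions are needed for the system to be full rank.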
Identifier: 9780591597042 (isbn), 12527 (digitool), FADT12527 (IID), fau:9418 (fedora)
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): College of Engineering and Computer Science
Thesis (Ph.D.)--Florida Atlantic University, 1997.
Subject(s): Computer vision
Robot vision
Held by: Florida Atlantic University Libraries
Persistent Link to This Record: http://purl.flvc.org/fcla/dt/12527
Sublocation: Digital Library
Use and Reproduction: Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.