Visual threat cues for autonomous navigation

Title: Visual threat cues for autonomous navigation.
Name(s): Kundur, Sridhar Reddy
Florida Atlantic University, Degree grantor
Raviv, Daniel, Thesis advisor
College of Engineering and Computer Science
Department of Computer and Electrical Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Issuance: monographic
Date Issued: 1996
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 304 p.
Language(s): English
Summary: This dissertation deals with novel vision-based motion cues called Visual Threat Cues (VTCs), suitable for autonomous navigation tasks such as collision avoidance and maintenance of clearance. The VTCs are time-based and provide a measure of the relative change in range and clearance between a 3D surface and a moving observer. They are independent of the 3D environment around the observer and need almost no a priori knowledge about it. Each VTC presented in this dissertation has a corresponding visual field associated with it. Each visual field constitutes a family of imaginary 3D surfaces attached to the moving observer; all points that lie on a particular imaginary 3D surface produce the same value of the VTC. These visual fields can be used to demarcate the space around the moving observer into safe and danger zones of varying degree. Several approaches to extracting the VTCs from a sequence of monocular images are suggested. A practical method to extract the VTCs from a sequence of images of 3D textured surfaces, obtained by a visually fixating, fixed-focus moving camera, is also presented. This approach is based on the extraction of a global image dissimilarity measure called the Image Quality Measure (IQM), which is computed directly from the raw gray-level images. The VTCs are then extracted from the relative variations of the measured IQM (an illustrative sketch appears at the end of this record). This practical approach requires no 3D reconstruction, depth information, optical flow, or feature tracking, and the extraction algorithm was tested on several indoor as well as outdoor real image sequences. Two vision-based closed-loop control schemes for autonomous navigation tasks were implemented in a priori unknown textured environments using one of the VTCs as the relevant sensory feedback. They are based on a set of IF-THEN fuzzy rules and need almost no a priori information about the vehicle's dynamics, speed, or direction of motion. They were implemented in real time using a camera mounted on a six-degree-of-freedom flight simulator.
Identifier: 9780591122633 (isbn), 12476 (digitool), FADT12476 (IID), fau:9369 (fedora)
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): College of Engineering and Computer Science
Thesis (Ph.D.)--Florida Atlantic University, 1996.
Subject(s): Computer vision
Robot vision
Visual perception
Held by: Florida Atlantic University Libraries
Persistent Link to This Record: http://purl.flvc.org/fcla/dt/12476
Sublocation: Digital Library
Use and Reproduction: Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.
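
Illustrative sketch: The abstract above does not reproduce the IQM's exact formula or the dissertation's fuzzy rule base, so the following Python sketch is only a hedged illustration of the pipeline it describes. It assumes the IQM behaves like a global neighborhood gray-level dissimilarity measure computed on the raw image, and it treats the relative temporal variation of the IQM as a stand-in for a VTC; the function names, thresholds, and fuzzy membership functions below are hypothetical, not the author's actual method.

import numpy as np

def image_quality_measure(img, radius=2):
    # Hypothetical IQM-style measure: mean absolute gray-level difference
    # between each pixel and its neighbors within `radius`, computed
    # directly on the raw image (no features, optical flow, or depth).
    img = img.astype(np.float64)
    h, w = img.shape
    total, count = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # Compare only the overlapping regions to avoid border artifacts.
            a = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            total += np.abs(a - b).sum()
            count += a.size
    return total / count

def vtc_from_iqm(iqm_prev, iqm_curr, dt):
    # Hypothetical VTC proxy: relative temporal variation of the IQM over
    # one frame interval dt; assumes iqm_prev > 0. A positive value is read
    # here as a growing visual threat (e.g., closing range).
    return (iqm_curr - iqm_prev) / (iqm_prev * dt)

def fuzzy_speed_command(vtc):
    # Toy IF-THEN fuzzy controller (hypothetical rule base and memberships)
    # mapping the VTC to a normalized speed command in [0, 1]:
    # IF vtc is SAFE THEN fast; IF CAUTION THEN slow; IF DANGER THEN stop.
    def tri(x, a, b, c):  # triangular membership function
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
    memberships = [tri(vtc, -1.0, -0.5, 0.0),   # SAFE
                   tri(vtc, -0.25, 0.0, 0.25),  # CAUTION
                   tri(vtc, 0.0, 0.5, 1.0)]     # DANGER
    commands = [1.0, 0.4, 0.0]                  # fast, slow, stop
    total = sum(memberships)
    # Weighted-average defuzzification; default to "slow" if no rule fires.
    return sum(m * c for m, c in zip(memberships, commands)) / total if total else 0.4

For a pair of consecutive gray-level frames prev and curr captured 1/30 s apart, one closed-loop step under these assumptions would be: vtc = vtc_from_iqm(image_quality_measure(prev), image_quality_measure(curr), 1/30.0), then speed = fuzzy_speed_command(vtc).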