Autonomous landing and road following using two-dimensional visual cues

Title: Autonomous landing and road following using two-dimensional visual cues.
Name(s): Yakali, Huseyin Hakan.
Florida Atlantic University, Degree grantor
Raviv, Daniel, Thesis advisor
College of Engineering and Computer Science
Department of Computer and Electrical Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Issuance: monographic
Date Issued: 1994
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 269 p.
Language(s): English
Summary: This dissertation deals with vision-based perception-action closed-loop control systems based on 2-D visual cues. These cues are used to compute the control signals required for autonomous landing and road following. For the landing task, it is shown that nine 2-D visual cues can be extracted from a single image of the runway; seven of these can be used to accomplish the parallel-flight and glideslope-tracking phases of landing. For the road-following task, three algorithms based on two different 2-D visual cues are developed, one of which generates both steering and velocity commands for the vehicle. Glideslope tracking has been implemented in real time on a six-degree-of-freedom flight simulator, demonstrating that the information computed from 2-D visual cues is robust and reliable for the landing task. The road-following algorithms were tested successfully at speeds up to 50 km/h on a US Army High Mobility Multipurpose Wheeled Vehicle (HMMWV) equipped with a vision system, and on a Denning mobile robot. The algorithms were also tested successfully using PC-based software simulation programs.
Identifier: 12365 (digitool), FADT12365 (IID), fau:9266 (fedora)
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): College of Engineering and Computer Science
Thesis (Ph.D.)--Florida Atlantic University, 1994.
Subject(s): Visual perception
Landing aids (Aeronautics)
Held by: Florida Atlantic University Libraries
Persistent Link to This Record: http://purl.flvc.org/fcla/dt/12365
Sublocation: Digital Library
Use and Reproduction: Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.