Current Search: Image processing
- Title
- Estimation of motion parameters of a planar body using binocular camera configuration.
- Creator
- Haliyur, Padma., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- This thesis is concerned with the estimation of motion parameters of planar-object surfaces viewed with a binocular camera configuration. Possible applications of this method include autonomous guidance of a moving platform (AGVS) via imaging, and segmentation of moving objects using information about motion and structure. The brightness constraint equation is obtained by assuming the brightness of a moving patch to be almost invariant. This equation is solved for the single-camera case as well as the binocular-camera case, either with known values of the surface normal or by determining it iteratively from the estimates of the motion parameters. For this value of the surface normal, rotational and translational motion components are determined over the entire image using a least-squares algorithm. The algorithm is tested on simulated as well as real images for both single-camera and binocular-camera situations. (Abstract shortened with permission of author.)
- Date Issued
- 1991
- PURL
- http://purl.flvc.org/fcla/dt/14692
- Subject Headings
- Motion, Image processing
- Format
- Document (PDF)
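The brightness-constancy assumption in the abstract above yields, per pixel, the constraint Ix·u + Iy·v + It ≈ 0, which a least-squares solve over a patch turns into a motion estimate. A minimal single-camera sketch in the Lucas-Kanade style, not the thesis' binocular formulation; the synthetic image and its analytic gradients are assumptions for the demo:

```python
import numpy as np

def lk_flow(Ix, Iy, It):
    """Least-squares solution of the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 stacked over all pixels of a patch."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 gradient matrix
    b = -It.ravel()
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)       # solves min ||A uv - b||
    return uv

# Synthetic patch moving with true motion (u, v) = (1, 0); gradients are
# the analytic derivatives of I(x, y) = sin(0.5 x) + 0.3 y.
x = np.arange(8.0)
X, Y = np.meshgrid(x, x)
Ix = 0.5 * np.cos(0.5 * X)     # dI/dx
Iy = 0.3 * np.ones_like(X)     # dI/dy
It = -Ix * 1.0 - Iy * 0.0      # brightness constancy: It = -(Ix*u + Iy*v)
u, v = lk_flow(Ix, Iy, It)
```

Because the synthetic data satisfy the constraint exactly, the recovered motion matches the true (1, 0) up to numerical precision.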
- Title
- Image classification and image resolution issues for DOQQ analysis.
- Creator
- Boruff, Bryan Jeffery., Florida Atlantic University, Roberts, Charles
- Abstract/Description
- High-resolution imagery is becoming readily available to the public. Private firms and government organizations are using high-resolution images but are running into problems with storage space and processing time. High-resolution images are extremely large files and have proven cumbersome to work with and manage. By resampling fine-resolution imagery to a lower resolution, storage and processing requirements can be dramatically reduced. Fine-resolution imagery is not needed to map most features, and resampled high-resolution imagery can in some cases replace low-resolution satellite imagery. The effects of resampling on the spectral quality of a high-resolution image can be demonstrated by answering the following questions: (1) Is the quality of spectral information on a color infrared DOQQ comparable to SPOT and Landsat TM satellite imagery for the purpose of digital image classification? (2) What is the appropriate resolution for mapping surface features using high-resolution imagery for spectral categories of information? (3) What is the appropriate resolution for mapping surface features using high-resolution imagery for land-use/land-cover information?
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/15787
- Subject Headings
- Remote sensing, Image processing, Image analysis
- Format
- Document (PDF)
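Resampling fine-resolution imagery to a coarser grid, as the abstract above discusses, can be as simple as averaging non-overlapping pixel blocks. A minimal sketch; block averaging is only one of several resampling choices, and the tiny 4x4 array is an invented stand-in for a DOQQ band:

```python
import numpy as np

def block_average(img, factor):
    """Resample a fine-resolution raster to a coarser grid by averaging
    non-overlapping factor x factor blocks (edge rows/cols are cropped)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]
    return img.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)   # hypothetical fine-resolution band
coarse = block_average(fine, 2)        # each output pixel averages a 2x2 block
```

Each coarse pixel is the mean of its 2x2 source block, so a factor-2 resample cuts storage by 4x at the cost of spatial detail.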
- Title
- A review of recent underwater imaging methods and advancements.
- Creator
- Caimi, F. M., Harbor Branch Oceanographic Institute
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/FCLA/DT/3351972
- Subject Headings
- Underwater imaging systems, Photogrammetry, Image compression, Image processing, Review
- Format
- Document (PDF)
- Title
- Perceptual methods for video coding.
- Creator
- Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work toward an optimal video compression algorithm based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding (HEVC/H.265) reference implementation are up to 45%.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004074
- Subject Headings
- Algorithms, Coding theory, Digital coding -- Data processing, Imaging systems -- Image quality, Perception, Video processing -- Data processing
- Format
- Document (PDF)
- Title
- Underwater laser serial imaging using compressive sensing and digital mirror device.
- Creator
- Ouyang, Bing, Dalgleish, Fraser R., Caimi, F. M., Giddings, T. E., Shirron, J. J., Vuorenkoski, Anni K., Nootz, G., Britton, W. B., Ramos, Brian
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FCLA/DT/3340793
- Subject Headings
- Underwater imaging systems, Image compression, Lasers, Signal processing
- Format
- Document (PDF)
- Title
- Two-dimensional feature tracking algorithm for motion analysis.
- Creator
- Krishnan, Srivatsan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis we describe a local-neighborhood-pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked or the relative motion between the camera and the 3-D objects. The features to be tracked are selected by the algorithm and correspond to the peaks of a '2-D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence to be analyzed. Any kind of motion, i.e., 6-DOF (translation and rotation), can be tolerated, subject to the pixels-per-frame motion limitations. No subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm tolerates a slow, continuous change in the D.C. brightness level of the feature's pixels. Another important aspect of the algorithm is the use of an adaptive feature-matching threshold that accounts for changes in the relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test the accuracy of the tracking, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and compute the time-to-contact using real image sequences of unstructured, unknown environments. In both applications, information from multiple frames is used.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15030
- Subject Headings
- Algorithms, Image transmission, Motion perception (Vision), Image processing
- Format
- Document (PDF)
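The predictive matching step described above can be sketched generically as a search for the best patch match in a small neighborhood around the predicted location. This is an illustrative stand-in using a sum-of-squared-differences score, not the thesis' correlation-surface algorithm; the synthetic frames and the (2, 1) shift are assumptions:

```python
import numpy as np

def match_feature(patch, frame, top_left_guess, radius):
    """Search the (2*radius+1)^2 neighborhood around a predicted top-left
    position for the minimum sum-of-squared-differences match of `patch`."""
    ph, pw = patch.shape
    y0, x0 = top_left_guess
    best, best_pos = np.inf, top_left_guess
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + ph > frame.shape[0] or x + pw > frame.shape[1]:
                continue                      # candidate window off the image
            cand = frame[y:y + ph, x:x + pw]
            ssd = np.sum((cand - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
patch = frame0[10:15, 12:17]                   # feature selected in frame 0
frame1 = np.roll(frame0, (2, 1), axis=(0, 1))  # whole scene shifts by (2, 1)
pos = match_feature(patch, frame1, (10, 12), 4)
```

In a tracker, `top_left_guess` would come from the temporal-continuity prediction, keeping the search radius (and cost) small.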
- Title
- A broadband signal processor for acoustic imaging using ambient noise.
- Creator
- Olivieri, Marc P., Florida Atlantic University, Glegg, Stewart A. L., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- Buckingham et al. (Nature, Vol. 356, p. 327) first introduced the concept of acoustic imaging using ambient noise as a method for passively detecting objects in the ocean. Several analytical studies followed, showing that a two-dimensional acoustic image could be obtained with this approach and that at least 900 pixels are necessary to resolve the details of spherical objects placed in an underwater sound channel. The alternative approach described in this paper is based on a signal processing method that uses the broadband nature of ambient noise in the ocean, and therefore optimizes the use of the available sound energy scattered by the object. Images with thousands of pixels can be obtained using a relatively small number of transducers. The method has been validated in simple experiments in air, scaled to represent an ocean application, and results showing images of various objects are presented.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15065
- Subject Headings
- Acoustic imaging, Signal processing, Underwater acoustics
- Format
- Document (PDF)
- Title
- HVS-based wavelet color image coding.
- Creator
- Guo, Linfeng., Florida Atlantic University, Glenn, William E., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This work is an attempt to incorporate the latest advances in vision research and signal processing into the field of image coding. The scope of the dissertation is twofold. First, it sets up a framework for a wavelet color image coder and optimizes its performance. Second, it investigates human vision models and implements human visual properties in the wavelet color image coder. A wavelet image coding framework consisting of image decomposition, coefficient quantization, data representation, and entropy coding is first set up; then several unsolved issues of wavelet image coding are studied, and the consequent optimization schemes are presented and applied to the basic framework. These issues include best wavelet basis selection, quantizer optimization, adaptive probability estimation in arithmetic coding, and the explicit transmission of the significance map of the wavelet data. Based on the established wavelet image coding framework, a human visual system (HVS) based adaptive color image coding scheme is proposed. Compared with non-HVS-based coding methods, our method achieves superior performance without the cost of additional side information. As the foundation of the proposed HVS-based coding scheme, the visual properties of the early stages of human vision are investigated first, especially contrast sensitivity, luminance adaptation, and the complicated simultaneous-masking and crossed-masking effects. To implement these visual properties in wavelet image coding, suitable estimation of local background luminance and contrast in the wavelet domain is also re-investigated. Building on this prerequisite work, the effects of contrast sensitivity weighting and luminance adaptation are incorporated into our coding scheme. Furthermore, the mechanisms of the various masking effects in color images, e.g., self-masking, neighbor masking, cross-band masking, and luminance-chrominance crossed masking, are also studied and utilized in the coding scheme through an adaptive quantization scheme. Owing to the careful arrangement and integration of the different parts of the perception-based quantization scheme, the coefficient-dependent adaptive quantization step size can be losslessly restored during decoding without any overhead of side information.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11941
- Subject Headings
- Wavelets (Mathematics), Image processing--Digital techniques
- Format
- Document (PDF)
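The perceptually adaptive quantization discussed above comes down to choosing a quantizer step size per subband or coefficient according to visibility: coarse steps where the HVS is insensitive, fine steps where it is not. A minimal sketch with invented, CSF-style per-subband steps; the dissertation derives coefficient-dependent steps, and these constants are purely illustrative:

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization with a perceptually chosen step size."""
    return np.round(coeffs / step) * step

# Hypothetical visibility-weighted steps: the high-frequency HH band is
# least visible, so it tolerates the coarsest quantizer.
subband_steps = {"LL": 2.0, "LH": 4.0, "HL": 4.0, "HH": 8.0}

coeffs = np.array([13.0, -6.2, 3.9])       # sample HH-band coefficients
q = quantize(coeffs, subband_steps["HH"])  # coarse step -> many zeros
```

Driving small high-frequency coefficients to zero is where most of the bitrate saving comes from, with little visible distortion.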
- Title
- Selective texture characterization using Gabor filters.
- Creator
- Boutros, George., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The objective of this dissertation is to develop effective algorithms for texture characterization, segmentation, and labeling that operate selectively to label image textures, using the Gabor representation of signals. These representations are an analog of the spatial-frequency tuning characteristics of visual cortex cells. Of all spatial/spectral signal representations, the Gabor function provides optimal joint resolution between the two domains. A discussion of spatial/spectral representations focuses on the Gabor function and the biological analogy between it and the simple cells of the striate cortex. A simulation generates examples of the use of the Gabor filter as a line detector with synthetic data. Simulations are then presented using Gabor filters for real texture characterization. The Gabor filter's spatial and spectral attributes are selectively chosen based on information from a scale-space image in order to maximize the resolution of the characterization process. A variation of probabilistic relaxation that exploits the Gabor filter's spatial and spectral attributes is devised and used to force a consensus of the filter responses for texture characterization. We then perform segmentation of the image using the concept of isolation of low-energy states within an image. This iterative smoothing algorithm, operating as a Gabor filter post-processing stage, depends on a line-process discontinuity threshold. The discontinuity threshold is selected from the modes of the histogram of the relaxed Gabor filter responses, using probabilistic relaxation to detect the significant modes. We test our algorithm on simple synthetic and real textures, then use a more complex natural texture image to test the entire algorithm. Limitations on textural resolution are noted, as well as on the resolution of the image segmentation process.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12342
- Subject Headings
- Image processing--Digital techniques, Computer vision
- Format
- Document (PDF)
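A Gabor filter of the kind used above is a Gaussian envelope modulating a sinusoidal grating, which is what gives it joint spatial/spectral localization and orientation tuning. A minimal sketch showing orientation selectivity on a synthetic grating; the complex-exponential carrier makes the magnitude response phase-invariant, and all parameter values are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength):
    """Complex Gabor kernel: Gaussian envelope times a complex exponential
    grating oriented at angle theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

# A vertical grating (varying along x) drives the theta = 0 filter strongly
# and the orthogonal theta = pi/2 filter weakly.
x = np.arange(21.0)
grating = np.cos(2 * np.pi * x / 8.0)[None, :].repeat(21, axis=0)
g0 = gabor_kernel(21, 4.0, 0.0, 8.0)
g90 = gabor_kernel(21, 4.0, np.pi / 2, 8.0)
r0 = abs(np.sum(grating * g0))    # matched orientation: large magnitude
r90 = abs(np.sum(grating * g90))  # orthogonal orientation: near zero
```

A bank of such kernels at several orientations and wavelengths yields the per-pixel response vectors that texture characterization schemes operate on.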
- Title
- BEHAVIORAL ANALYSIS OF DEEP CONVOLUTIONAL NEURAL NETWORKS FOR IMAGE CLASSIFICATION.
- Creator
- Clark, James Alex, Barenholtz, Elan, Florida Atlantic University, Center for Complex Systems and Brain Sciences, Charles E. Schmidt College of Science
- Abstract/Description
- Within deep CNNs there is great excitement over breakthroughs in network performance on benchmark datasets such as ImageNet. Around the world, competitive teams work on new ways to innovate and modify existing networks, or create new ones that can reach ever-higher accuracy levels. We believe that this important research must be supplemented with research into the computational dynamics of the networks themselves. We present research into network behavior as it is affected by: variations in the number of filters per layer, pruning filters during and after training, collapsing the weight space of the trained network using basic quantization, and the effect of image size and input-layer stride on training time and test accuracy. We provide insights into how the total number of updatable parameters can affect training time and accuracy, and how “time per epoch” and “number of epochs” affect network training time. We conclude with statistically significant models that allow us to predict training time as a function of the total number of updatable parameters in the network.
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00013940
- Subject Headings
- Neural networks (Computer science), Image processing
- Format
- Document (PDF)
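Counting "updatable parameters", the predictor variable studied above, is simple arithmetic over the layer shapes: each convolutional filter contributes its kernel weights plus a bias, and dense layers do likewise. A sketch for a hypothetical toy network; the layer sizes are invented for illustration, not taken from the dissertation:

```python
def conv2d_params(c_in, c_out, k):
    """Updatable parameters in a k x k convolution layer:
    c_out filters, each with c_in*k*k weights plus one bias."""
    return c_out * (c_in * k * k + 1)

def dense_params(n_in, n_out):
    """Fully connected layer: weight matrix plus one bias per output."""
    return n_out * (n_in + 1)

# Hypothetical network: two 3x3 conv layers on an RGB input, then a
# 10-way classifier head over a flattened 32 x 8 x 8 feature map.
total = (conv2d_params(3, 16, 3)        # 16 * (3*9 + 1)    = 448
         + conv2d_params(16, 32, 3)     # 32 * (16*9 + 1)   = 4640
         + dense_params(32 * 8 * 8, 10))  # 10 * (2048 + 1) = 20490
```

Note how the single dense head dominates the count, which is why filter pruning in the conv stack and head design interact differently with training time.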
- Title
- Using color image processing techniques to improve the performance of content-based image retrieval systems.
- Creator
- Costa, Fabio Morais., Florida Atlantic University, Furht, Borko
- Abstract/Description
- A Content-Based Image Retrieval (CBIR) system is a mechanism intended to retrieve a particular image from a large image repository without resorting to any additional information about the image. Query-by-example (QBE) is a technique used by CBIR systems in which an image is retrieved from the database based on an example given by the user. The effectiveness of a CBIR system can be measured by two main indicators: how close the retrieved results are to the desired image, and how quickly those results are obtained. In this thesis, we implement some classical image processing operations in order to improve the average rank of the desired image, and we also implement two object recognition techniques to improve the subjective quality of the best-ranked images. Experimental results show that the proposed system outperforms an equivalent CBIR system in QBE mode, in terms of both precision and recall.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12870
- Subject Headings
- Image processing--Digital techniques, Imaging systems--Image quality, Information storage and retrieval systems
- Format
- Document (PDF)
- Title
- Image retrieval using visual attention.
- Creator
- Mayron, Liam M., College of Engineering and Computer Science, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The retrieval of digital images is hindered by the semantic gap: the disparity between a user's high-level interpretation of an image and the information that can be extracted from an image's physical properties. Content-based image retrieval systems are particularly vulnerable to the semantic gap due to their reliance on low-level visual features for describing image content. The semantic gap can be narrowed by including high-level, user-generated information. High-level descriptions of images are more capable of capturing the semantic meaning of image content, but it is not always practical to collect this information. Thus, both content-based and human-generated information are considered in this work. A content-based method of retrieving images using a computational model of visual attention was proposed, implemented, and evaluated. This work is based on a study of contemporary research in the field of vision science, particularly computational models of bottom-up visual attention. The use of computational models of visual attention to detect salient-by-design regions of interest in images is investigated. The method is then refined to detect objects of interest in broad image databases that are not necessarily salient by design. An interface for image retrieval, organization, and annotation that is compatible with the attention-based retrieval method has also been implemented. It incorporates the ability to simultaneously execute querying by image content, keyword, and collaborative filtering. The user is central to the design and evaluation of the system. A game was developed to evaluate the entire system, which includes the user, the user interface, and the retrieval methods.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fcla/flaent/EN00154040/68_1/98p0137i.pdf, http://purl.flvc.org/FAU/58006
- Subject Headings
- Image processing, Digital techniques, Database systems, Cluster analysis, Multimedia systems
- Format
- Document (PDF)
- Title
- Artificial Intelligence Based Electrical Impedance Tomography for Local Tissue.
- Creator
- Rao, Manasa, Pandya, Abhijit S., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research proposes the use of Electrical Impedance Tomography (EIT), a non-invasive technique that makes it possible to measure the two- or three-dimensional impedance of living local tissue in the human body, applied to the medical diagnosis of diseases. To achieve this, electrodes are attached to part of the human body, and an image of the conductivity or permittivity of the living tissue is deduced from the surface electrode measurements. In this thesis we have worked toward alleviating drawbacks of EIT, such as estimating parameters by incorporating them in an electrode structure and determining a solution for the spatial distribution of bio-impedance to close proximity. We address the issues of initial parameter estimation and the spatial-resolution accuracy of an electrode structure by using an arrangement called a "divided electrode" for the measurement of bio-impedance in a cross-section of local tissue. Its capability is examined by computer simulations, in which a distributed equivalent circuit is used as a model for the tissue cross-section. Further, a novel hybrid model is derived that combines an artificial-intelligence-based, gradient-free optimization technique with numerical integration in order to estimate parameters. This improves the spatial resolution of the equivalent circuit model to the closest accuracy.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012544
- Subject Headings
- Electrical impedance tomography, Diagnostic imaging--Data processing, Computational intelligence
- Format
- Document (PDF)
- Title
- Densely-centered uniform P-search: A fast motion estimation algorithm.
- Creator
- Greenberg, Joshua H., Florida Atlantic University, Furht, Borko
- Abstract/Description
- Video compression technology promises to be the key to the transmission of motion video. A number of techniques have been introduced in the past few years, particularly those developed by the Moving Picture Experts Group (MPEG). The MPEG algorithm uses motion estimation to reduce the amount of data that is stored for each frame. Motion estimation uses a reference frame as a codebook for a modified vector quantization process. Because an exhaustive search for motion estimation vectors is time-consuming, various fast search algorithms have been developed. These techniques are surveyed, and the theoretical framework for a new search algorithm is developed: Densely-Centered Uniform P-Search. The time complexity of Densely-Centered Uniform P-Search is comparable to that of other popular motion estimation techniques, and it shows superior results on a variety of motion video sources.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15286
- Subject Headings
- Image processing--Digital techniques, Data compression (Telecommunication)
- Format
- Document (PDF)
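Fast searches such as the one named above approximate the exhaustive block-matching baseline: minimize a distortion measure over every candidate displacement of a macroblock within a search window of the reference frame. A minimal sketch of that exhaustive baseline using the sum of absolute differences (SAD); the frame data and window sizes are assumptions, and the P-search candidate pattern itself is not reproduced here:

```python
import numpy as np

def best_motion_vector(block, ref, by, bx, search):
    """Exhaustive block matching: return the displacement (dy, dx) within
    +/- `search` pixels that minimizes SAD against the reference frame."""
    n = block.shape[0]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue                      # candidate window off the frame
            sad = np.abs(ref[y:y + n, x:x + n] - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
block = ref[11:19, 6:14]        # current-frame block = reference content
mv = best_motion_vector(block, ref, 8, 8, 4)  # displaced from (8, 8)
```

Fast algorithms aim to visit only a fraction of the (2·search+1)² candidates this baseline evaluates while usually finding the same vector.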
- Title
- Automatic extraction and tracking of eye features from facial image sequences.
- Creator
- Xie, Xangdong., Florida Atlantic University, Sudhakar, Raghavan, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme is different from conventional intrusive eye movement measuring system and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented for which a new algorithm is forwarded for each of the function blocks in the...
Show moreThe dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme is different from conventional intrusive eye movement measuring system and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented for which a new algorithm is forwarded for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner using both the gray level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over the existing corner detectors is that it is able to improve the reliability of detection and localization by imposing criteria related to both the gray level data and the corner structure. The extraction of eye features is performed by using an improved method of deformable templates which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized value of the energy term. Thus the weighting schedule of the modified method does not demand any expert knowledge for the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. 
Based on a geometrical model which describes the characteristics of the eye, the measurement equations are formulated to relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, while taking into account the measurement noise. The small processing time allows this tracking algorithm to be used in real-time applications. This tracking algorithm is suitable for an automated, non-intrusive and inexpensive system as the algorithm is capable of measuring the time profiles of the eye movements. The issue of compensating head movements during the tracking of eye movements is also discussed. An appropriate measurement model was established to describe the effects of head movements. Based on this model, a Kalman filter structure was formulated to carry out the compensation. The whole tracking scheme which cascades two Kalman filters is constructed to track the iris movement, while compensating the head movement. The presence of the eye blink is also taken into account and its detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, non-intrusive and inexpensive system which provides accurate time profile of eye movements tracking from video image frames.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12377
- Subject Headings
- Kalman filtering, Eye--Movements, Algorithms, Image processing
- Format
- Document (PDF)
- Title
- Radar cross section of an open-ended rectangular waveguide cavity: A massively parallel implementation applied to high-resolution radar cross section imaging.
- Creator
- Vann, Laura Dominick., Florida Atlantic University, Helmken, Henry, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis is concerned with adapting a sequential code that calculates the Radar Cross Section (RCS) of an open-ended rectangular waveguide cavity to a massively parallel computational platform. The primary motivation for doing this is to obtain wideband data over a large range of incident angles in order to generate a two-dimensional radar cross section image. Images generated from measured and computed data are compared to evaluate program performance. The computer used in this implementation is a MasPar MP-1, a single-instruction, multiple-data (SIMD) massively parallel computer consisting of 4,096 processors arranged in a two-dimensional mesh. The algorithm uses the mode matching method of analysis to match fields over the cavity aperture to obtain an expression for the scattered far field.
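The imaging step mentioned in the abstract (forming a two-dimensional image from wideband data collected over a range of incident angles) is conventionally carried out with a two-dimensional inverse FFT. The sketch below is a toy illustration of that principle on synthetic point-scatterer data; it is not the thesis's mode-matching code, and the grid sizes are arbitrary assumptions.

```python
import numpy as np

def form_rcs_image(scattered_field):
    """Form a 2-D down-range/cross-range image from complex scattered-field
    samples on a frequency x incident-angle grid via an inverse 2-D FFT."""
    return np.abs(np.fft.ifft2(scattered_field))

# Synthetic data: a single point scatterer contributes a phase that is
# linear in both the frequency and angle indices (propagation delay),
# so its image is a single bright cell.
M, N = 32, 32            # frequency and angle sample counts (arbitrary)
p, q = 5, 9              # cell where the toy scatterer should appear
m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
field = np.exp(-2j * np.pi * (m * p / M + n * q / N))
image = form_rcs_image(field)
```

The peak of `image` lands exactly at cell `(p, q)`, which is the discrete analogue of focusing a scatterer at its down-range/cross-range position.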
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14984
- Subject Headings
- Radar cross sections, Algorithms--Data processing, Imaging systems
- Format
- Document (PDF)
- Title
- The human face recognition problem: A solution based on third-order synthetic neural networks and isodensity analysis.
- Creator
- Uwechue, Okechukwu A., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Third-order synthetic neural networks are applied to the recognition of isodensity facial images extracted from digitized grayscale facial images. A key property of neural networks is their ability to recognize invariances and extract essential parameters from complex high-dimensional data. In pattern recognition an input image must be recognized regardless of its position, size, and angular orientation. In order to achieve this, the neural network needs to learn the relationships between the input pixels. Pattern recognition requires the nonlinear subdivision of the pattern space into subsets representing the objects to be identified. Single-layer neural networks can only perform linear discrimination. However, multilayer first-order networks and high-order neural networks can both achieve this. The most significant advantage of a higher-order net over a traditional multilayer perceptron is that invariances to 2-dimensional geometric transformations can be incorporated into the network and need not be learned through prolonged training with an extensive family of exemplars. It is shown that a third-order network can be used to achieve translation-, scale-, and rotation-invariant recognition with a significant reduction in training time over other neural net paradigms such as the multilayer perceptron. A model based on an enhanced version of the Widrow-Hoff training algorithm and a new momentum paradigm are introduced and applied to the complex problem of human face recognition under varying facial expressions. Arguments for the use of isodensity information in the recognition algorithm are put forth and it is shown how the technique of coarse-coding is applied to reduce the memory required for computer simulations. The combination of isodensity information and neural networks for image recognition is described and its merits over other image recognition methods are explained.
It is shown that isodensity information coupled with the use of an "adaptive threshold strategy" (ATS) yields a system that is relatively impervious to image contrast noise. The new momentum paradigm produces much faster convergence rates than ordinary momentum and renders the network behaviour independent of its training parameters over a broad range of parameter values.
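The Widrow-Hoff (LMS) rule with a momentum term, which this dissertation's training model builds upon, can be sketched as follows. This is the classical formulation applied to a toy linearly separable problem; the dissertation's enhanced Widrow-Hoff variant and its new momentum paradigm are not reproduced here.

```python
import numpy as np

def lms_momentum(X, t, lr=0.02, beta=0.8, epochs=500):
    """Widrow-Hoff (LMS) training of a linear unit with a classical
    momentum term.

    X : input patterns, one per row (first column can be a bias of 1)
    t : target outputs in {-1, +1}
    """
    rng = np.random.default_rng(1)
    w = rng.normal(0.0, 0.1, X.shape[1])   # small random initial weights
    v = np.zeros_like(w)                   # momentum ("velocity") term
    for _ in range(epochs):
        for x, target in zip(X, t):
            err = target - x @ w           # LMS error on this pattern
            v = beta * v + lr * err * x    # momentum-smoothed update
            w = w + v
    return w
```

On a linearly separable task such as logical AND (with a bias input), the trained weights reproduce the correct output signs; momentum smooths successive updates so convergence is faster than with plain LMS at the same learning rate.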
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12464
- Subject Headings
- Image processing, Face perception, Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- Video and Image Analysis using Statistical and Machine Learning Techniques.
- Creator
- Luo, Qiming, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Digital videos and images are effective media for capturing spatial and temporal information in the real world. The rapid growth of digital videos has motivated research aimed at developing effective algorithms, with the objective of obtaining useful information for a variety of application areas, such as security, commerce, medicine, and geography. This dissertation presents innovative and practical techniques, based on statistics and machine learning, that address key research problems in video and image analysis, including video stabilization, object classification, image segmentation, and video indexing. A novel unsupervised multi-scale color image segmentation algorithm is proposed. The basic idea is to apply mean shift clustering to obtain an over-segmentation, and then merge regions at multiple scales to minimize the MDL criterion. Its performance on the Berkeley segmentation benchmark compares favorably with existing approaches. The algorithm can also operate on one-dimensional feature vectors representing each frame in ocean survey videos, which yields a novel framework for building a hierarchical video index. The advantage is that the user can browse the videos at arbitrary levels of detail, making it more efficient to search a long video for interesting information based on the hierarchical index. Also, an empirical study on the classification of ships in surveillance videos is presented, including a comparative performance study of three classification algorithms. Based on this study, an effective feature extraction and classification algorithm for classifying ships in coastline surveillance videos is proposed. Finally, an empirical study on video stabilization is presented, which includes a comparative performance study of four motion estimation methods and three motion correction methods.
Based on this study, an effective real-time video stabilization algorithm for coastline surveillance is proposed, which involves a novel approach to reduce error accumulation.
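One standard global-motion estimator of the kind compared in such stabilization studies is phase correlation, sketched below on synthetic frames. It is shown as an illustration of the general technique, not as the specific method the dissertation selects, and the wrap-around handling assumes purely translational motion.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the global translation taking frame_b to frame_a by phase
    correlation: the inverse FFT of the normalized cross-power spectrum
    peaks at the displacement."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12        # whiten to sharpen the peak
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                       # map wrap-around indices
        dy -= h                           # to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A stabilizer would subtract the estimated shift from each frame; damping the accumulated shift toward zero is one simple way to keep small estimation errors from accumulating over a long sequence.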
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012574
- Subject Headings
- Image processing--Digital techniques, Electronic surveillance, Computational learning theory
- Format
- Document (PDF)
- Title
- Low-level and high-level correlation for image registration.
- Creator
- Mandalia, Anil Dhirajlal., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The fundamental goal of a machine vision system in the inspection of an assembled printed circuit board is to locate the integrated circuit (IC) components. These components are then checked for their position and orientation with respect to a given position and orientation of the model, in order to detect deviations. To this end, a method based on a modified two-level correlation scheme is presented in this thesis. In the first level, Low-Level correlation, a modified two-stage template matching method is proposed. It makes use of random search techniques, better known as the Monte Carlo method, to speed up the matching process on binarized versions of the images. Due to the random search, there is uncertainty about the locations where matches are found. In the second level, High-Level correlation, an evidence scheme based on the Dempster-Shafer formalism is presented to resolve this uncertainty. Experimental results on a printed circuit board with mounted integrated components are also presented to demonstrate the validity of the techniques.
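The first-level random-search idea can be sketched as follows: candidate template positions are sampled at random and scored by binary agreement, trading exhaustive search for speed at the cost of some uncertainty in the reported location (which the second, Dempster-Shafer level is designed to resolve). The function below is an illustrative reconstruction, not the thesis's implementation, and the sample count is an arbitrary assumption.

```python
import numpy as np

def monte_carlo_match(image, template, n_samples=5000, seed=0):
    """First-stage search: score a binary template at randomly sampled
    positions and return the best location (row, col) and its score."""
    rng = np.random.default_rng(seed)
    th, tw = template.shape
    H, W = image.shape
    best_score, best_pos = -1.0, None
    for _ in range(n_samples):
        r = int(rng.integers(0, H - th + 1))
        c = int(rng.integers(0, W - tw + 1))
        # fraction of agreeing pixels (binary correlation measure)
        score = np.mean(image[r:r + th, c:c + tw] == template)
        if score > best_score:
            best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

When the template is actually present, enough random samples will almost surely hit its true position and return a perfect agreement score; fewer samples trade reliability for speed, which is exactly the uncertainty the evidence-combination stage must handle.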
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/14635
- Subject Headings
- Image processing--Digital techniques, Computer vision, Integrated circuits
- Format
- Document (PDF)
- Title
- Bioinformatics-inspired binary image correlation: application to bio-/medical-images, microarrays, finger-prints and signature classifications.
- Creator
- Pappusetty, Deepti, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The efforts addressed in this thesis refer to assaying the extent of local features in 2D images for the purpose of recognition and classification. The approach is based on comparing a test image against a template in binary format. It is a bioinformatics-inspired approach, pursued and presented as the deliverables of this thesis, summarized below: 1. By applying the 'Smith-Waterman (SW) local alignment' and 'Needleman-Wunsch (NW) global alignment' approaches of bioinformatics, a test 2D image in binary format is compared against a reference image so as to recognize the differential features that reside locally in the images being compared. 2. The SW- and NW-based binary comparison involves extending the one-dimensional sequence-alignment procedure (traditionally used for molecular sequence comparison in bioinformatics) to a 2D image matrix. 3. The relevant computational algorithms are implemented as MATLAB codes. 4. The test images considered are real-world bio-/medical images, synthetic images, microarrays, biometric fingerprints (thumb impressions), and handwritten signatures. Based on the results, conclusions are enumerated and inferences are made, with directions for future studies.
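The one-dimensional Smith-Waterman recurrence that this thesis extends to 2D images can be sketched as follows; the scoring parameters are illustrative defaults, and the thesis's 2D extension is not reproduced.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local-alignment score between two sequences
    (e.g. rows of a binary image treated as symbol strings)."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                     # local: never go negative
                          H[i - 1, j - 1] + s,   # match / mismatch
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
            best = max(best, H[i, j])
    return best
```

Identical rows score `match * length`, while shifted rows still recover the score of their best locally matching run, which is what makes the alignment score a useful local-feature similarity measure for binary image comparison.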
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3333052
- Subject Headings
- Bioinformatics--Statistical methods, Diagnostic imaging--Digital techniques, Image processing--Digital techniques, Pattern perception--Data processing, DNA microarrays
- Format
- Document (PDF)