Current Search: Eigenvectors
- Title
- Subspace detection and scale evolutionary eigendecomposition.
- Creator
- Kyperountas, Spyros C., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A measure of the potential of a receiver for detection is detectability. Detectability is a function of the signal and the noise, and once these are given it is fixed. In addition, complete transforms of the signal and noise cannot change detectability. Throughout this work we show that "Subspace methods" as defined here can improve detectability in specific subspaces, resulting in improved receiver operating characteristic (ROC) curves and thus better detection in arbitrary noise environments. Our method is tested and verified on various signals and noises, both simulated and real. The optimum detection of signals in noise requires the computation of the noise eigenvalues and eigenvectors (EVD). This process is neither trivial nor computationally cheap, especially for non-stationary noise, and it can result in numerical instabilities when the covariance matrix is large. This work addresses this problem and provides solutions that exploit the subspace structure through plane rotations to improve on existing EVD algorithms, raising their convergence rate and reducing their computational expense for given thresholds. (See the plane-rotation EVD sketch after this record.)
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11965
- Subject Headings
- Eigenvalues, Eigenvectors, Wavelets (Mathematics)
- Format
- Document (PDF)
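The EVD step this abstract describes is, in outline, the classical Jacobi approach: repeated plane (Givens) rotations that annihilate off-diagonal entries of a symmetric covariance matrix until it is diagonal. Below is a minimal, unoptimized Python sketch of that idea; it is not the dissertation's algorithm, and the sample covariance, tolerance, and cyclic sweep structure are assumptions for illustration. (For context, one common detectability index for a known signal s in Gaussian noise with covariance R is d² = sᵀR⁻¹s, which is one reason the noise EVD matters.)

```python
import numpy as np

def jacobi_evd(C, tol=1e-10, max_sweeps=50):
    """Eigendecomposition of a symmetric matrix C by cyclic Jacobi
    plane rotations. Returns (eigenvalues, eigenvectors); the i-th
    column of the eigenvector matrix pairs with the i-th eigenvalue."""
    A = np.array(C, dtype=float)   # working copy, diagonalized in place
    n = A.shape[0]
    V = np.eye(n)                  # accumulates the plane rotations
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(np.tril(A, -1) ** 2))  # off-diagonal norm
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                # Rotation angle that zeroes A[p, q]:
                # tan(2*theta) = 2*A[p, q] / (A[q, q] - A[p, p])
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J    # similarity transform preserves spectrum
                V = V @ J
    return np.diag(A), V           # eigenvalues are returned unsorted

# Usage: estimate a noise covariance from samples and decompose it.
rng = np.random.default_rng(0)
noise = rng.standard_normal((1000, 6))   # stand-in for recorded noise
C = np.cov(noise, rowvar=False)
evals, evecs = jacobi_evd(C)
```

A production version would apply each rotation to only the two affected rows and columns instead of forming full matrix products; the sketch favors readability over the cost reductions the dissertation targets.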
- Title
- On the Study of the Aizawa System.
- Creator
- Fleurantin, Emmanuel, Mireles-James, Jason D., Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- In this report we study the Aizawa field by first computing a Taylor series expansion for the solution of an initial value problem. We then look for singularities (equilibrium points) of the field and plot the set of solutions which lie in the linear subspace spanned by the eigenvectors. Finally, we use the Parameterization Method to compute one- and two-dimensional stable and unstable manifolds of equilibria for the system. (See the equilibrium and eigenvector sketch after this record.)
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00005994
- Subject Headings
- Series, Mathematics, Eigenvectors, Aizawa field
- Format
- Document (PDF)
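As a companion to the abstract above, here is a minimal numerical sketch of the equilibrium-and-eigenvector step: find a zero of the field and eigendecompose its Jacobian there. The form and parameter values of the Aizawa field used below (a = 0.95, b = 0.7, c = 0.6, d = 3.5, e = 0.25, f = 0.1) are the commonly quoted ones and are an assumption; the report's Taylor-series integrator and Parameterization Method are not reproduced here.

```python
import numpy as np
from scipy.optimize import fsolve

# Commonly quoted form of the Aizawa field (an assumption; the report
# may use a different normalization or parameter set).
a, b, c, d, e, f = 0.95, 0.7, 0.6, 3.5, 0.25, 0.1

def aizawa(v):
    x, y, z = v
    return np.array([
        (z - b) * x - d * y,
        d * x + (z - b) * y,
        c + a * z - z**3 / 3 - (x**2 + y**2) * (1 + e * z) + f * z * x**3,
    ])

def jacobian(v, h=1e-7):
    """Forward-difference Jacobian of the field at v."""
    J = np.zeros((3, 3))
    f0 = aizawa(v)
    for j in range(3):
        dv = np.zeros(3)
        dv[j] = h
        J[:, j] = (aizawa(v + dv) - f0) / h
    return J

# Find the equilibrium on the z-axis near z = 2 and linearize there.
eq = fsolve(aizawa, np.array([0.0, 0.0, 2.0]))
evals, evecs = np.linalg.eig(jacobian(eq))
print("equilibrium:", eq)
print("eigenvalues:", evals)
# With these parameters the x-y pair is a complex conjugate pair with
# positive real part and the z-direction eigenvalue is negative, so the
# equilibrium carries a two-dimensional unstable and a one-dimensional
# stable manifold, matching the manifolds the abstract describes.
```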
- Title
- Generalized Feature Embedding Learning for Clustering and Classification.
- Creator
- Golinko, Eric David, Zhu, Xingquan, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Data comes in many different shapes and sizes. In real-life applications it is common that the data we are studying has features of varied data types, including numerical, categorical, and text. To model this data with machine learning algorithms, it typically must be in numeric form, so data that is not originally numerical must be transformed before it can be used as input to these algorithms. Along with this transformation, the data we study often has many features relative to the number of samples. It is often desirable to reduce the number of features used to train a model, to eliminate noise and reduce training time. This problem of high dimensionality can be approached through feature selection, feature extraction, or feature embedding. Feature selection seeks to identify the most essential variables in a dataset that will lead to a parsimonious model and high-performing results, while feature extraction and embedding are techniques that apply a mathematical transformation of the data into a represented space. As a byproduct of using a new representation, we are able to reduce the dimension greatly without sacrificing performance; oftentimes, by using embedded features we observe a gain in performance. Though extraction and embedding methods may be powerful for isolated machine learning problems, they do not always generalize well. Therefore, we are motivated to illustrate a methodology that can be applied to any data type with little pre-processing. The methods we develop can be applied in unsupervised, supervised, incremental, and deep learning contexts. Using 28 benchmark datasets as examples, which include different data types, we construct a framework that can be applied for general machine learning tasks. The techniques we develop contribute to the field of dimension reduction and feature embedding. Using this framework, we make additional contributions to eigendecomposition by creating an objective matrix that includes three vital components. The first is a class-partitioned row and feature product representation of one-hot encoded data. The second is the derivation of a weighted adjacency matrix based on class label relationships. Finally, by the inner product of these values, we are able to condition the one-hot encoded data generated from the original data prior to eigenvector decomposition. The use of class partitioning and adjacency enables subsequent projections of the data to be trained more effectively when compared side by side with baseline algorithm performance. Along with this improved performance, we can adjust the dimension of the subsequent data arbitrarily. In addition, we show how these dense vectors may be used to order the features of generic data for deep learning. In this dissertation, we examine a general approach to dimension reduction and feature embedding that utilizes a class-partitioned row and feature representation, a weighted approach to instance similarity, and an adjacency representation. This general approach has application to unsupervised, supervised, online, and deep learning. In our experiments on 28 benchmark datasets, we show significant performance gains in clustering, classification, and training time. (See the illustrative embedding sketch after this record.)
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013063
- Subject Headings
- Eigenvectors--Data processing, Algorithms, Cluster analysis
- Format
- Document (PDF)
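The dissertation's exact objective-matrix construction cannot be reconstructed from the abstract alone, so the sketch below only illustrates the general pattern it names: one-hot encode mixed-type data, weight instance relationships by class label, condition a feature-space matrix with that adjacency, and project onto the top eigenvectors. The toy data, the weight values, and all names here are assumptions, not the author's method.

```python
import numpy as np
import pandas as pd

# Toy mixed-type data; any categorical/numeric mix could be
# one-hot encoded or binned the same way.
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "green"],
    "size":  ["S", "M", "M", "L", "S", "L"],
})
y = np.array([0, 0, 1, 1, 0, 1])               # class labels

X = pd.get_dummies(df).to_numpy(dtype=float)   # one-hot encoding

# Class-label-weighted adjacency over instances: same-class pairs
# weighted more heavily than cross-class pairs (weights assumed here).
same = (y[:, None] == y[None, :]).astype(float)
W = 1.0 * same + 0.1 * (1.0 - same)

# Condition a feature-space objective matrix with the adjacency,
# then eigendecompose it (symmetric, so eigh applies).
M = X.T @ W @ X
M = (M + M.T) / 2
evals, evecs = np.linalg.eigh(M)

# The target dimension k is free, echoing the abstract's point that
# the embedded dimension can be adjusted arbitrarily.
k = 2
proj = evecs[:, np.argsort(evals)[::-1][:k]]   # top-k eigenvectors
embedded = X @ proj                            # low-dimensional embedding
print(embedded.shape)                          # (6, 2)
```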