Current Search: Algorithms
- Title
- A novel NN paradigm for the prediction of hematocrit value during blood transfusion.
- Creator
- Thakkar, Jay., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
During the Leukocytapheresis (LCAP) process used to treat patients suffering from acute Ulcerative Colitis, medical practitioners have to continuously monitor the Hematocrit (Ht) level in the blood to ensure it stays within the acceptable range. The work done as part of this thesis attempts to create an early warning system that can be used to predict if and when the Ht values will deviate from the acceptable range. To do this we developed an algorithm based on the Group Method of Data Handling (GMDH) and compared it to other neural network algorithms, in particular the Multi-Layer Perceptron (MLP). The standard GMDH algorithm captures the fluctuations very well, but a time lag produces larger errors than the MLP. To address this drawback we modified the GMDH algorithm to reduce the prediction error and produce more accurate results.
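For context, the standard GMDH idea referenced above builds a network layer by layer: each candidate neuron fits a low-order polynomial of a pair of inputs by least squares and is scored on held-out data, and only the best candidates feed the next layer. The sketch below illustrates one such layer under those assumptions; the function name is hypothetical and the lag-reducing modification of the thesis is not reproduced.

```python
# Minimal sketch of a single GMDH layer (illustrative only, not the modified
# algorithm of the thesis). Each candidate neuron fits a quadratic polynomial
# of one input pair by least squares and is scored on a held-out validation
# split; only the best candidates survive to the next layer.
import itertools
import numpy as np

def fit_gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    candidates = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        def design(X):
            a, b = X[:, i], X[:, j]
            return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])
        coef, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
        val_err = np.mean((design(X_val) @ coef - y_val) ** 2)  # external criterion
        candidates.append((val_err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]      # survivors become inputs of the next layer
```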
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3174078
- Subject Headings
- Neural networks (Computer science), Scientific applications, GMDH algorithms, Pattern recognition systems, Genetic algorithms, Fuzzy logic
- Format
- Document (PDF)
- Title
- Enhanced 1-D chaotic key-based algorithm for image encryption.
- Creator
- Furht, Borko, Socek, Daniel, Magliveras, Spyros S.
- Abstract/Description
-
A recently proposed Chaotic-Key Based Algorithm (CKBA) has been shown to be unavoidably susceptible to chosen/known-plaintext attacks and ciphertext-only attacks. In this paper we enhance the CKBA algorithm three-fold: 1) we change the 1-D chaotic Logistic map to a piecewise linear chaotic map (PWLCM) to improve the balance property, 2) we increase the key size to 128 bits, and 3) we add two more cryptographic primitives and extend the scheme to operate on multiple rounds so that the chosen/known-plaintext attacks are no longer possible. The new cipher has much stronger security and its performance characteristics remain very good.
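As background, the PWLCM mentioned in point 1) is the standard piecewise linear chaotic map with a single control parameter p in (0, 0.5). The sketch below only illustrates iterating that map and masking pixel values with the resulting keystream; it is not the authors' cipher, and the parameter values and function names are assumptions.

```python
# Illustrative sketch of the piecewise linear chaotic map (PWLCM) named in the
# abstract; the key schedule, rounds and other primitives of the actual cipher
# are not reproduced, and this toy masking is not a secure cipher.
def pwlcm(x, p):
    """One iteration of the PWLCM with control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)          # the map is symmetric about x = 0.5

def keystream(x0, p, n):
    """Derive n pseudo-random bytes by iterating the map and quantizing."""
    out, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) & 0xFF)
    return out

# Toy masking of pixel values (illustration only):
pixels = [12, 200, 37, 255]
cipher = [a ^ b for a, b in zip(pixels, keystream(0.345678, 0.27, len(pixels)))]
```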
- Date Issued
- 2004-11-22
- PURL
- http://purl.flvc.org/fcla/dt/358402
- Subject Headings
- Data encryption (Computer science), Computer algorithms, Multimedia systems--Security measures
- Format
- Document (PDF)
- Title
- An Algorithmic Approach to Tran Van Trung's Basic Recursive Construction of t-Designs.
- Creator
- Lopez, Oscar A., Magliveras, Spyros S., Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
It was not until the 20th century that combinatorial design theory was studied as a formal subject. This field has many applications, for example in statistical experimental design, coding theory, authentication codes, and cryptography. Major approaches to the problem of discovering new t-designs rely on (i) the construction of large sets of t-designs, (ii) the use of prescribed automorphism groups, and (iii) recursive construction methods. In 2017 and 2018, Tran Van Trung introduced new recursive techniques to construct t-(v, k, λ) designs. These methods are of a purely combinatorial nature and require using "ingredient" t-designs or resolutions whose parameters satisfy a system of non-linear equations. Even after restricting the range of parameters in this new method, the task is computationally intractable. In this work, we enhance Tran Van Trung's "Basic Construction" with a robust and efficient hybrid computational apparatus which enables us to construct hundreds of thousands of new t-(v, k, λ) designs from previously known ingredient designs. Towards the end of the dissertation we also create a new family of 2-resolutions, which will be infinite if there are infinitely many Sophie Germain primes.
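For readers unfamiliar with the notation, the standard definition and divisibility conditions for a t-(v, k, λ) design are recalled below; this is textbook background, not material taken from the dissertation.

```latex
% Standard definition and necessary (divisibility) conditions for a
% t-(v,k,\lambda) design; included for context only.
\[
\text{A } t\text{-}(v,k,\lambda) \text{ design is a pair } (X,\mathcal{B}),\quad
|X| = v,\quad \mathcal{B} \subseteq \binom{X}{k},
\]
\[
\text{such that every } T \in \binom{X}{t} \text{ is contained in exactly } \lambda \text{ blocks.}
\]
\[
\lambda_i \;=\; \lambda\,\frac{\binom{v-i}{t-i}}{\binom{k-i}{t-i}} \in \mathbb{Z},
\qquad 0 \le i \le t .
\]
```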
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013233
- Subject Headings
- Combinatorial designs and configurations, Algorithms, t-designs
- Format
- Document (PDF)
- Title
- An Algorithmic Approach to The Lattice Structures of Attractors and Lyapunov functions.
- Creator
- Kasti, Dinesh, Kalies, William D., Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
Ban and Kalies [3] proposed an algorithmic approach to compute attractor-repeller pairs and weak Lyapunov functions based on a combinatorial multivalued mapping derived from an underlying dynamical system generated by a continuous map. We propose a more efficient way of computing a Lyapunov function for a Morse decomposition. This combined work with other authors, including Shaun Harker, Arnoud Goulet, and Konstantin Mischaikow, implements a few techniques that make the process of finding a global Lyapunov function for a Morse decomposition very efficient. One of them is the use of highly memory-efficient data structures: the succinct grid and pointer grid data structures. Another technique is the use of Dijkstra's algorithm with Manhattan distance to calculate a distance potential, an essential step in computing a Lyapunov function. Finally, another major technique in achieving a significant improvement in efficiency is the utilization of the lattice structures of the attractors and attracting neighborhoods, as explained in [32]. The lattice structures have made it possible to incorporate only the join-irreducible attractor-repeller pairs in computing a Lyapunov function, rather than having to use all possible attractor-repeller pairs as was originally done in [3]. The distributive lattice structures of attractors and repellers in a dynamical system allow for a general algebraic treatment of global gradient-like dynamics. The separation of these algebraic structures from the underlying topological structure is the basis for the development of algorithms to manipulate those structures [32, 31]. There has been much recent work on developing and implementing general computational algorithms for global dynamics which are capable of computing attracting neighborhoods efficiently. We describe the lifting of sublattices of attractors, which are computationally less accessible, to lattices of forward invariant sets and attracting neighborhoods, which are computationally accessible. We provide necessary and sufficient conditions for such a lift to exist, in a general setting. We also provide algorithms to check whether such conditions are met and to construct the lift when they are met. We illustrate the algorithms with some examples. For this, we have checked and verified these algorithms by implementing them on some non-invertible dynamical systems, including a nonlinear Leslie model.
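The distance-potential step mentioned above can be pictured as a shortest-path computation over the grid cells of an attracting neighborhood. The sketch below is a generic Dijkstra pass with unit (Manhattan) step costs; the succinct and pointer grid data structures and the lattice machinery of the dissertation are not reproduced, and the function name is hypothetical.

```python
# Minimal sketch of a "distance potential": Dijkstra's algorithm over a set of
# grid cells with Manhattan-distance (unit) edge weights.
import heapq

def distance_potential(cells, sources):
    """cells: set of (i, j) grid coordinates; sources: subset where potential = 0."""
    dist = {c: float("inf") for c in cells}
    heap = [(0, s) for s in sources if s in cells]
    for _, s in heap:
        dist[s] = 0
    heapq.heapify(heap)
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist[(i, j)]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in cells and d + 1 < dist[nb]:       # Manhattan step cost = 1
                dist[nb] = d + 1
                heapq.heappush(heap, (d + 1, nb))
    return dist
```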
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004668
- Subject Headings
- Differential equations--Numerical solutions, Differentiable dynamical systems, Algorithms
- Format
- Document (PDF)
- Title
- Distributed Algorithms for Energy-Efficient Data Gathering and Barrier Coverage in Wireless Sensor Networks.
- Creator
- Aranzazu-Suescun, Catalina, Cardei, Mihaela, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Wireless sensor networks (WSNs) provide rapid, untethered access to information, eliminating the barriers of distance, time, and location for many applications in national security, civilian search and rescue operations, surveillance, border monitoring, and more. Sensor nodes are resource constrained in terms of power, bandwidth, memory, and computing capabilities. Sensor nodes are typically battery powered and, depending on the application, it may be impractical or even impossible to recharge them. Thus, it is important to develop energy-efficient mechanisms for WSNs in order to reduce the energy consumption in the network. Energy-efficient algorithms result in an increased network lifetime. Data gathering is an important operation in WSNs, dealing with collecting sensed data or reporting events in a timely and efficient way. There are various scenarios that have to be carefully addressed. In this dissertation we propose energy-efficient algorithms for data gathering. We propose a novel event-based clustering mechanism and several efficient data-gathering algorithms for mobile-sink WSNs and for spatio-temporal events. Border surveillance is an important application of WSNs. Typical border surveillance applications aim to detect intruders attempting to enter or exit the border of a certain region. Deploying a set of sensor nodes on a region of interest where sensors form barriers for intruders is often referred to as the barrier coverage problem. In this dissertation we also propose novel mechanisms for increasing the percentage of events detected successfully. More specifically, we propose an adaptive sensor rotation mechanism, which allows sensors to decide their orientation angle adaptively, based on the location of the incoming events. In addition, we propose an Unmanned Aerial Vehicle (UAV) aided mechanism, where a UAV is used to cover gaps dynamically, resulting in an increased quality of surveillance.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013180
- Subject Headings
- Wireless sensor networks, Distributed algorithms, Wireless sensor nodes
- Format
- Document (PDF)
- Title
- INVESTIGATING MACHINE LEARNING ALGORITHMS WITH IMBALANCED BIG DATA.
- Creator
- Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Recent technological developments have engendered an expeditious production of big data and also enabled machine learning algorithms to produce high-performance models from such data. Nonetheless, class imbalance (in binary classifications) between the majority and minority classes in big data can skew the predictive performance of the classification algorithms toward the majority (negative) class, whereas the minority (positive) class usually holds greater value for the decision makers. Such bias may lead to adverse consequences, some of them even life-threatening, when the existence of false negatives is generally costlier than false positives. The size of the minority class can vary from fair to extraordinarily small, which can lead to different performance scores for machine learning algorithms. Class imbalance is a well-studied area for traditional data, i.e., not big data. However, there is limited research focusing on both rarity and severe class imbalance in big data.
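One widely used family of mitigations in this literature is data sampling; the sketch below shows plain random undersampling of the majority class, offered only as a generic illustration of the idea rather than as the specific techniques evaluated in the dissertation.

```python
# Generic illustration of random undersampling of the majority (negative)
# class for an imbalanced binary classification problem.
import numpy as np

def random_undersample(X, y, ratio=1.0, seed=0):
    """Keep all positives and ratio * n_pos randomly chosen negatives."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=min(len(neg), int(ratio * len(pos))), replace=False)
    idx = np.concatenate([pos, keep_neg])
    rng.shuffle(idx)
    return X[idx], y[idx]
```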
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013316
- Subject Headings
- Algorithms, Machine learning, Big data--Data processing, Big data
- Format
- Document (PDF)
- Title
- Achieving Higher Receiver Satisfaction using Multicast-Favored Bandwidth Allocation Protocols.
- Creator
- Yousefizadeh, Hooman, Zilouchian, Ali, Ilyas, Mohammad, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In recent years, many protocols for efficient multicasting have been proposed. However, many of the Internet Service Providers (ISPs) are reluctant to use multicast-enabled routers in their networks. To provide such incentives, new protocols are needed to improve the quality of their services. The challenge is to find a compromise between allocating Bandwidth (BW) among different flows in a fair manner and favoring multicast sessions over unicast sessions. In addition, an overall higher level of receiver satisfaction should be achieved. In this dissertation, we propose three new innovative protocols to favor multicast sessions over unicast sessions. The Multicast Favored BW Allocation-Logarithmic (MFBA-Log) and Multicast Favored BW Allocation-Linear (MFBA-Lin) protocols allocate BW in proportion to the number of downstream receivers. The proposed Multicast Reserved BW Allocation (MRBA) protocol allocates part of the BW in the links only to multicast sessions. Simulation results show an increase in the overall level of receiver satisfaction in the network.
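The sketch below illustrates the general flavor of receiver-count-based weighting on a single link, with a linear and a logarithmic option echoing the MFBA-Lin/MFBA-Log naming; the actual protocol rules, signaling, and reservation mechanism are not reproduced, and the weighting formulas are assumptions for illustration.

```python
# Illustrative weighting of sessions on one link so that multicast flows with
# more downstream receivers get a larger bandwidth share.
import math

def allocate(link_capacity, sessions, scheme="log"):
    """sessions: list of (name, n_downstream_receivers); unicast has 1 receiver."""
    def weight(r):
        return r if scheme == "linear" else 1.0 + math.log(r)
    total = sum(weight(r) for _, r in sessions)
    return {name: link_capacity * weight(r) / total for name, r in sessions}

# Example: one unicast flow and one multicast flow with 8 downstream receivers.
print(allocate(10.0, [("unicast", 1), ("multicast", 8)]))
```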
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fau/fd/FA00012581
- Subject Headings
- Multicasting (Computer networks), Computer network protocols, Computer algorithms
- Format
- Document (PDF)
- Title
- Quantum Circuits for Symmetric Cryptanalysis.
- Creator
- Langenberg, Brandon Wade, Steinwandt, Rainer, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
Quantum computers and quantum computing are a reality of the near future. Companies such as Google and IBM have already declared that they have built a quantum computer and intend to increase its size and capacity moving forward. Quantum computers have the ability to be exponentially more powerful than classical computers today. With this power, modeling the behavior of atoms or chemical reactions under unusual conditions and improving weather forecasts and traffic conditions become possible. Also, their ability to exponentially speed up some computations makes the security of today's data and items a major concern and interest. In the area of cryptography, some encryption schemes (such as RSA) are already deemed broken by the onset of quantum computing. Some encryption algorithms have already been created to be quantum secure, and still more are being created each day. While the algorithms in use today are considered quantum-safe, not much is known about what a quantum attack on them would look like. Specifically, this work discusses how many quantum bits and quantum gates, and even what depth of gates, would be needed for such an attack. The research below was completed to shed light on these areas and offer some concrete numbers for such an attack.
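As generic context for why such resource counts matter for symmetric ciphers: Grover's algorithm reduces exhaustive search over a k-bit key from roughly 2^k classical trials to about 2^(k/2) quantum oracle calls. The expression below is the standard textbook estimate, not a figure taken from the thesis.

```latex
% Textbook Grover estimate for exhaustive key search (context only):
\[
N_{\text{Grover}} \;\approx\; \left\lfloor \frac{\pi}{4}\sqrt{2^{k}} \right\rfloor ,
\qquad\text{e.g. } k = 128 \;\Rightarrow\; N_{\text{Grover}} \approx \tfrac{\pi}{4}\cdot 2^{64}.
\]
```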
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013010
- Subject Headings
- Quantum computing, Cryptography, Cryptanalysis, Data encryption (Computer science), Computer algorithms
- Format
- Document (PDF)
- Title
- Applications of evolutionary algorithms in mechanical engineering.
- Creator
- Nelson, Kevin M., Florida Atlantic University, Huang, Ming Z., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
Many complex engineering designs have conflicting requirements that must be compromised to effect a successful product. Traditionally, the engineering approach breaks up the complex problem into smaller sub-components in known areas of study. Tradeoffs occur between the conflicting requirements and a sub-optimal design results. A new computational approach based on the evolutionary processes observed in nature is explored in this dissertation. Evolutionary algorithms provide methods to solve complex engineering problems by optimizing the entire system, rather than sub-components of the system. Three standard forms of evolutionary algorithms have been developed: evolutionary programming, genetic algorithms and evolution strategies. Mathematical and algorithmic details are described for each of these methods. In this dissertation, four engineering problems are explored using evolutionary programming and genetic algorithms. Exploiting the inherently parallel nature of evolution, a parallel version of evolutionary programming is developed and implemented on the MasPar MP-1. This parallel version is compared to a serial version of the same algorithm in the solution of a trial set of unimodal and multi-modal functions. The parallel version had significantly improved performance over the serial version of evolutionary programming. An evolutionary programming algorithm is developed for the solution of electronic part placement problems with different assembly devices. The results are compared with previously published results for genetic algorithms and show that evolutionary programming can successfully solve this class of problem using fewer genetic operators. The finite element problem is cast into an optimization problem and an evolutionary programming algorithm is developed to solve 2-D truss problems. A comparison to LU-decomposition showed that evolutionary programming can solve these problems and that it has the capability to solve the more complex nonlinear problems. Finally, ordinary differential equations are discretized using a finite difference representation and an objective function is formulated for the application of evolutionary programming and genetic algorithms. Evolutionary programming and genetic algorithms have the benefit of permitting a problem to be over-constrained while still obtaining a successful solution. In all of these engineering problems, evolutionary algorithms have been shown to offer a new solution method.
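As a concrete reference point for the evolutionary-programming loop described above (mutate every parent, then keep the fittest individuals), the sketch below runs a minimal Gaussian-mutation version on a standard multimodal test function. It is a generic illustration only; the parallel MasPar implementation and the engineering objective functions of the dissertation are not reproduced.

```python
# Minimal sketch of a classical evolutionary-programming loop: Gaussian
# mutation of every parent followed by truncation selection from the union.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def evolutionary_programming(dim=5, pop_size=40, generations=200, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    for _ in range(generations):
        children = pop + rng.normal(0.0, sigma, size=pop.shape)   # mutate every parent
        union = np.vstack([pop, children])
        fitness = np.array([rastrigin(ind) for ind in union])
        pop = union[np.argsort(fitness)[:pop_size]]               # keep the best half
    return pop[0], rastrigin(pop[0])

best, value = evolutionary_programming()
```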
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12514
- Subject Headings
- Mechanical engineering, Genetic algorithms, Evolutionary programming (Computer science)
- Format
- Document (PDF)
- Title
- Asynchronous distributed algorithms for multi-agent supporting systems.
- Creator
- Jin, Kai., Florida Atlantic University, Larrondo-Petrie, Maria M.
- Abstract/Description
-
Based on the multi-agent supporting system (MASS) structures used to investigate synchronous algorithms in my previous work, partially and totally asynchronous distributed algorithms are proposed in this thesis. The stability of discrete MASS with asynchronous distributed algorithms is analyzed. The partially asynchronous algorithms proposed for both 1- and 2-dimensional MASS are proven to be convergent if the vertical disturbances vary sufficiently more slowly than the convergence time of the system. The adjacent error becomes zero when the system converges. It is also proven that in a 1-dimensional MASS using the proposed totally asynchronous algorithm, the maximum of the absolute value of the adjacent error is non-increasing over time. Finally, simulation results for all the above cases are presented to demonstrate the theoretical findings.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15277
- Subject Headings
- Electronic data processing--Distributed processing, Computer algorithms
- Format
- Document (PDF)
- Title
- Design and modeling of hybrid software fault-tolerant systems.
- Creator
- Zhang, Man-xia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Fault tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault tolerant systems. Probability models based on fault trees are developed for the recovery block (RB), N-version programming (NVP), and hybrid schemes which are combinations of RB and NVP. Two heuristic methods are developed to construct hybrid fault tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault tolerant systems.
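To fix ideas about the two base schemes, the idealized textbook reliability expressions below assume independent version failures, a perfect majority voter, and a perfect acceptance test; they are background only and are much simpler than the fault-tree probability models developed in the thesis.

```latex
% Idealized textbook expressions (independent failures, perfect voter and
% acceptance test) -- context only, not the thesis's fault-tree models.
\[
R_{\mathrm{NVP}(3)} \;=\; R^{3} + 3R^{2}(1-R)
\qquad\text{(2-out-of-3 majority voting, version reliability } R\text{)}
\]
\[
R_{\mathrm{RB}} \;=\; R_{p} + (1-R_{p})\,R_{a}
\qquad\text{(primary } R_{p}\text{, alternate } R_{a}\text{)}
\]
```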
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14783
- Subject Headings
- Computer software--Reliability, Fault-tolerant computing, Algorithms
- Format
- Document (PDF)
- Title
- Docking the Ocean Explorer Autonomous Underwater Vehicle using a low-cost acoustic positioning system and a fuzzy logic guidance algorithm.
- Creator
- Kronen, David Mitchell., Florida Atlantic University, Smith, Samuel M., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
Having the ability to dock an Autonomous Underwater Vehicle (AUV) can significantly enhance the operation of such vehicles. In order to dock an AUV, the vehicle's position must be known precisely and a guidance algorithm must be used to drive the AUV to its dock. This thesis will examine and implement a low-cost acoustic positioning system to meet the positioning requirements. At-sea tests will be used as a method of verifying the system's specifications and its proper incorporation into the AUV. Analyses will be run on the results using several methods of interpreting the data. The second portion of this thesis will develop and test a fuzzy logic docking algorithm which will guide the AUV from a location within the range of the sonar system to the docking station. A six-degree-of-freedom model incorporating the Ocean Explorer's hydrodynamic coefficients will be used for the simulation.
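As a small illustration of the kind of fuzzy guidance rule base involved, the sketch below maps a bearing error to a rudder command through triangular membership functions and weighted-average defuzzification. The membership shapes, rule consequents, and function names are assumptions for illustration, not the rule base developed in the thesis.

```python
# Toy fuzzy heading controller: triangular memberships over the bearing error
# to the dock and weighted-average (centroid-style) defuzzification.
def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rudder(bearing_error_deg):
    """Map bearing error (deg, dock relative to heading) to a rudder command (deg)."""
    rules = [   # (rule firing strength, rudder consequent in degrees)
        (tri(bearing_error_deg, -180, -45, 0), -20.0),    # error negative -> turn left
        (tri(bearing_error_deg, -45, 0, 45), 0.0),        # error near zero -> hold course
        (tri(bearing_error_deg, 0, 45, 180), 20.0),       # error positive -> turn right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```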
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15502
- Subject Headings
- Oceanographic submersibles, Acoustical engineering, Underwater acoustics, Fuzzy algorithms
- Format
- Document (PDF)
- Title
- Automatic extraction and tracking of eye features from facial image sequences.
- Creator
- Xie, Xangdong., Florida Atlantic University, Sudhakar, Raghavan, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme is different from conventional intrusive eye movement measuring systems and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented in which a new algorithm is put forward for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner using both the gray level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over existing corner detectors is that it improves the reliability of detection and localization by imposing criteria related to both the gray level data and the corner structure. The extraction of eye features is performed using an improved method of deformable templates which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized value of each energy term. Thus the weighting schedule of the modified method does not demand any expert knowledge from the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. Based on a geometrical model which describes the characteristics of the eye, the measurement equations are formulated to relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, while taking into account the measurement noise. The small processing time allows this tracking algorithm to be used in real-time applications. This tracking algorithm is suitable for an automated, non-intrusive and inexpensive system, as the algorithm is capable of measuring the time profiles of the eye movements. The issue of compensating for head movements during the tracking of eye movements is also discussed. An appropriate measurement model was established to describe the effects of head movements. Based on this model, a Kalman filter structure was formulated to carry out the compensation. The whole tracking scheme, which cascades two Kalman filters, is constructed to track the iris movement while compensating for the head movement. The presence of eye blinks is also taken into account and their detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, non-intrusive and inexpensive system which provides accurate time profiles of eye movements from video image frames.
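For reference, a single predict/update cycle of a discrete Kalman filter of the kind mentioned above is sketched below in generic form; the specific state vector, measurement model, cascaded two-filter structure, and blink handling of the dissertation are not reproduced.

```python
# Generic discrete Kalman filter predict/update step.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle: x state estimate, P covariance, z measurement."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```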
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12377
- Subject Headings
- Kalman filtering, Eye--Movements, Algorithms, Image processing
- Format
- Document (PDF)
- Title
- Binary representation of DNA sequences towards developing useful algorithms in bioinformatic data-mining.
- Creator
- Pandya, Shivani., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis describes research addressing the use of a binary representation of DNA for the purpose of developing useful algorithms for bioinformatics. Pertinent studies address the use of a binary form of the DNA base chemicals on an information-theoretic basis so as to identify symmetry between DNA and complementary DNA (cDNA). This study also refers to "fuzzy" (codon-noncodon) considerations in delineating codon and noncodon regimes in DNA sequences. The research further includes a comparative analysis of the test results of the aforesaid efforts using different statistical metrics such as the Hamming distance, the Kullback-Leibler measure, etc. The observed details support the symmetry between DNA and cDNA strands. The study also demonstrates the capability of identifying noncodon regions in DNA even under diffused (overlapped) fuzzy states.
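A minimal illustration of the basic ingredients (a 2-bit binary encoding of the four bases and a Hamming-distance comparison between a strand and its complement) is sketched below; the particular bit assignment shown is an assumption, not necessarily the encoding used in the thesis.

```python
# Illustrative 2-bit encoding of DNA bases and Hamming-distance comparison
# between a strand and its complement.
BITS = {"A": (0, 0), "T": (1, 1), "C": (0, 1), "G": (1, 0)}   # hypothetical mapping
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def encode(seq):
    return [bit for base in seq for bit in BITS[base]]

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

seq = "ATCGGATC"
cdna = "".join(COMPLEMENT[b] for b in seq)
print(hamming(encode(seq), encode(cdna)))
```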
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13089
- Subject Headings
- Bioinformatics, Data mining, Nucleotide sequence--Databases, Computer algorithms
- Format
- Document (PDF)
- Title
- PRGMDH algorithm for neural network development and its applications.
- Creator
- Tangadpelli, Chetan., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The existing Group Method of Data Handling (GMDH) algorithm has characteristics that are ideal for neural network design. This thesis introduces a new algorithm that applies some of the best characteristics of GMDH to neural network design and develops a Pruning-based Regenerated Network by discarding the neurons in a layer that do not contribute to the creation of neurons in the next layer. Unlike conventional algorithms, which generate a network that is a black box, the new algorithm provides a visualization of the network displaying all of its neurons. The algorithm is general enough that it will accept any number of inputs and a training set of any size. To show the flexibility of the Pruning-based Regenerated Network, the algorithm is used to analyze different combinations of drugs, determine which pathways in these networks interact, and determine the combination of drugs that takes advantage of these interactions to maximize a desired effect on genes.
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/13397
- Subject Headings
- Neural networks (Computer science), GMDH algorithms, Pattern recognition systems
- Format
- Document (PDF)
- Title
- Performance evaluation of blind equalization techniques in the digital cellular environment.
- Creator
- Boccuzzi, Joseph., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
-
This thesis presents simulation results evaluating the performance of blind equalization techniques in the digital cellular environment. A new method based on a simple zero-memory non-linear detector for complex signals is presented for various forms of Fractionally Spaced Equalizers (FSEs). Initial simulations are conducted with Binary Phase Shift Keying (BPSK) to study the characteristics of FSEs. The simulations are then extended to the complex case via π/4-Differential Quaternary Phase Shift Keying (π/4-DQPSK) modulation. The primary focus of this thesis is the performance of this complex case when operating in Additive White Gaussian Noise (AWGN) and Rayleigh multipath fading channels.
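As generic background on blind adaptation of a fractionally spaced equalizer, the sketch below implements the classical constant-modulus (Godard) update, which adapts the taps without a training sequence; it is not the zero-memory non-linear detector proposed in the thesis, and the step size and tap count are arbitrary.

```python
# Classical constant-modulus (CMA) update for a T/2-spaced equalizer,
# shown only as generic background.
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, R2=1.0):
    """x: complex baseband samples at 2 samples/symbol (T/2-spaced)."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                      # center-tap initialization
    y_out = []
    for n in range(num_taps, len(x) - 1, 2):    # advance one symbol (2 samples)
        u = x[n - num_taps:n][::-1]             # T/2-spaced regressor
        y = np.dot(w, u)
        e = y * (np.abs(y) ** 2 - R2)           # constant-modulus error
        w = w - mu * e * np.conj(u)
        y_out.append(y)
    return np.array(y_out), w
```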
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14859
- Subject Headings
- Equalizers (Electronics), Computer algorithms, Data transmission systems, Programming electronic computers
- Format
- Document (PDF)
- Title
- PIREN©: A heuristic algorithm for standard cell placement.
- Creator
- Horvath, Elizabeth Iren., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The placement problem is an important part of the design process of VLSI chips. It is necessary to have a proper placement so that all connections between modules in a chip can be routed in a minimum area without violating any physical or electrical constraints. Current algorithms either do not give optimum solutions, are computationally slow, or are difficult to parallelize. PIREN© is a parallel implementation of a force directed algorithm which seeks to overcome the large amount of computer time associated with solving the placement problem. Each active processor in the massively parallel SIMD machine, the MasPar MP-2.2, can perform in parallel the computation necessary to place cells in an optimum location relative to one another based upon the connectivity between cells. This is due to a salient feature of the serial algorithm which allows multiple permutations to be made simultaneously on all modules in order to minimize the objective function. The serial implementation of PIREN© compares favorably in both run time and layout quality to the simulated-annealing-based algorithm TimberWolf3.2©. The parallel implementation on the MP-2.2 has a speedup of 4.5 to 58.0 over the serial version of PIREN© running on the VAX 6320, while producing layouts for several MCNC benchmarks which are of the same quality as those produced by the serial implementation.
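The core force-directed idea (each cell is pulled toward the cells it connects to) can be sketched as one synchronous sweep in which every cell moves to the connectivity-weighted centroid of its neighbors; this generic sketch omits the SIMD scheduling, overlap removal, and legalization that PIREN© performs, and the function and variable names are hypothetical.

```python
# One generic force-directed placement sweep: each cell moves to the
# connectivity-weighted centroid of the cells it connects to.
def force_directed_sweep(positions, nets):
    """positions: {cell: (x, y)}; nets: list of (cell_a, cell_b, weight) connections."""
    pull = {c: [0.0, 0.0, 0.0] for c in positions}        # sum_x, sum_y, sum_w
    for a, b, w in nets:
        for src, dst in ((a, b), (b, a)):
            x, y = positions[dst]
            pull[src][0] += w * x
            pull[src][1] += w * y
            pull[src][2] += w
    new_positions = {}
    for c, (sx, sy, sw) in pull.items():
        new_positions[c] = (sx / sw, sy / sw) if sw else positions[c]
    return new_positions
```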
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12301
- Subject Headings
- Integrated circuits--Very large scale integration, Algorithms
- Format
- Document (PDF)
- Title
- Novel Techniques in Genetic Programming.
- Creator
- Fernandez, Thomas, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Three major problems make Genetic Programming infeasible or impractical for real world problems. The first is the excessive time complexity. In nature the evolutionary process can take millions of years, a time frame that is clearly not acceptable for the solution of problems on a computer. In order to apply Genetic Programming to real world problems, it is essential that its efficiency be improved. The second is called overfitting (where results are inaccurate outside the training data). In a paper [36] for the Federal Reserve Bank, authors Neely and Weller state "a perennial problem with using flexible, powerful search procedures like Genetic Programming is overfitting, the finding of spurious patterns in the data. Given the well-documented tendency for the genetic program to overfit the data it is necessary to design procedures to mitigate this." The third is the difficulty of determining optimal control parameters for the Genetic Programming process. Control parameters control the evolutionary process. They include settings such as the size of the population and the number of generations to be run. In his book [45], Banzhaf describes this problem: "The bad news is that Genetic Programming is a young field and the effect of using various combinations of parameters is just beginning to be explored." We address these problems by implementing and testing a number of novel techniques and improvements to the Genetic Programming process. We conduct experiments using data sets of various degrees of difficulty to demonstrate success with a high degree of statistical confidence.
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fau/fd/FA00012570
- Subject Headings
- Evolutionary programming (Computer science), Genetic algorithms, Genetic programming (Computer science)
- Format
- Document (PDF)
- Title
- Radar cross section of an open-ended rectangular waveguide cavity: A massively parallel implementation applied to high-resolution radar cross section imaging.
- Creator
- Vann, Laura Dominick., Florida Atlantic University, Helmken, Henry, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis is concerned with adapting a sequential code that calculates the Radar Cross Section (RCS) of an open-ended rectangular waveguide cavity to a massively parallel computational platform. The primary motivation for doing this is to obtain wideband data over a large range of incident angles in order to generate a two-dimensional radar cross section image. Images generated from measured and computed data will be compared to evaluate program performance. The computer used in this implementation is a MasPar MP-1 single-instruction, multiple-data (SIMD) massively parallel computer consisting of 4,096 processors arranged in a two-dimensional mesh. The algorithm uses the mode matching method of analysis to match fields over the cavity aperture to obtain an expression for the scattered far field.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14984
- Subject Headings
- Radar cross sections, Algorithms--Data processing, Imaging systems
- Format
- Document (PDF)
- Title
- Learning in connectionist networks using the Alopex algorithm.
- Creator
- Venugopal, Kootala Pattath., Florida Atlantic University, Pandya, Abhijit S., Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Alopex algorithm is presented as a universal learning algorithm for connectionist models. It is shown that the Alopex procedure can be used efficiently as a supervised learning algorithm for such models. The algorithm is demonstrated successfully on a variety of network architectures, including multilayer perceptrons, time-delay models, asymmetric fully recurrent networks, and memory neuron networks. The learning performance as well as the generalization capability of the Alopex algorithm are compared with those of the backpropagation procedure on a number of benchmark problems, and it is shown that Alopex has specific advantages over backpropagation. Two new architectures (gain layer schemes) are proposed for the on-line, direct adaptive control of dynamical systems using neural networks. The proposed schemes are shown to provide better dynamic response and tracking characteristics than other existing direct control schemes. A velocity reference scheme is introduced to improve the dynamic response of on-line learning controllers. The proposed learning algorithm and architectures are studied on three practical problems: (i) classification of handwritten digits using Fourier descriptors; (ii) recognition of underwater targets from sonar returns, considering temporal dependencies of consecutive returns; and (iii) on-line learning control of autonomous underwater vehicles, starting with random initial conditions. Detailed studies are conducted on the learning control applications. The effect of the network learning rate on the tracking performance and dynamic response of the system is investigated. Also, the ability of the neural network controllers to adapt to slowly and suddenly varying parameter disturbances and measurement noise is studied in detail.
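For reference, the correlation-based update at the heart of Alopex can be sketched as follows: each weight moves by a fixed ±δ, and the sign is chosen stochastically from the correlation between the previous weight change and the previous change in error, scaled by a temperature T. The sketch keeps T fixed and omits the annealing schedule and the network architectures studied in the dissertation; names and constants are illustrative.

```python
# Minimal sketch of the correlation-based Alopex weight update with a fixed
# temperature, applied to a generic error function over a parameter vector.
import numpy as np

def alopex_minimize(error_fn, w, delta=0.01, T=0.1, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    prev_w = w + delta * rng.choice([-1.0, 1.0], size=w.shape)   # arbitrary first move
    prev_E, E = error_fn(prev_w), error_fn(w)
    for _ in range(iters):
        C = (w - prev_w) * (E - prev_E)        # correlation of weight change and error change
        p_plus = 1.0 / (1.0 + np.exp(np.clip(C / T, -50.0, 50.0)))
        step = np.where(rng.random(w.shape) < p_plus, delta, -delta)
        prev_w, prev_E = w, E
        w = w + step
        E = error_fn(w)
    return w, E

# Example: w_opt, err = alopex_minimize(lambda v: np.sum(v**2), np.ones(4))
```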
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12325
- Subject Headings
- Computer algorithms, Computer networks, Neural networks (Computer science), Machine learning
- Format
- Document (PDF)