Current Search: info:fedora/islandora:sp_large_image_cmodel » College of Engineering and Computer Science » Algorithms
- Title
- A VLSI implementable thinning algorithm.
- Creator
- Zhang, Wei, Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Thinning is a very important step in a character recognition system. This thesis evolves a thinning algorithm that can be implemented in hardware to improve the speed of the process. The software thinning algorithm features a simple set of rules that can be applied to both hexagonal and orthogonal character images. The hardware architecture features an SIMD structure, simple processing elements, and near-neighbor communications. The algorithm was simulated against the U.S. Postal Service Character Database. The architecture, evolved with consideration of both the software constraints and the physical layout limitations, was simulated using the VHDL hardware description language. Subsequent to VLSI design and simulations, the chip was fabricated. The project provides a feasibility study in utilizing a parallel processor architecture for the implementation of a parallel image thinning algorithm. It is hoped that such a hardware implementation will speed up the processing and lead eventually to a real-time system.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14837
- Subject Headings
- Optical character recognition devices--Computer simulation, Algorithms, Integrated circuits--Very large scale integration
- Format
- Document (PDF)
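The thesis's exact rule set is not reproduced in the abstract above. As a hedged illustration of the kind of 3x3 neighborhood rule such thinning algorithms apply on an orthogonal grid, here is one sub-iteration of the classic Zhang-Suen thinning pass (the thesis's own rules, and its hexagonal-grid variant, may differ):

```python
import numpy as np

def thinning_subiteration(img):
    """One sub-iteration of Zhang-Suen thinning on a binary image.

    Illustrative stand-in for the thesis's rule set, which the abstract
    does not spell out; Zhang-Suen uses comparable 3x3 rules.
    """
    h, w = img.shape
    to_delete = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y, x] == 0:
                continue
            # Neighbors P2..P9, clockwise starting from the pixel above.
            p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                 img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
            b = sum(p)  # number of foreground neighbors
            # A(p): 0->1 transitions in the circular sequence P2..P9,P2.
            a = sum((p[i] == 0 and p[(i + 1) % 8] == 1) for i in range(8))
            if (2 <= b <= 6 and a == 1 and
                    p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0):
                to_delete.append((y, x))
    # Synchronous update: decisions use the frozen image, as in an SIMD array.
    for y, x in to_delete:
        img[y, x] = 0
    return img, len(to_delete)
```

Because every pixel's decision depends only on its frozen 3x3 neighborhood, the inner loop maps naturally onto the SIMD, near-neighbor hardware the abstract describes.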
- Title
- Two-dimensional feature tracking algorithm for motion analysis.
- Creator
- Krishnan, Srivatsan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis we describe a local-neighborhood-pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked, or about the relative motion between the camera and the 3-D objects. The features to be tracked are selected by the algorithm and correspond to the peaks of a '2-D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence to be analyzed. Any kind of motion, i.e., 6-DOF (translation and rotation), can be tolerated, keeping in mind the pixels-per-frame motion limitations. No subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm accepts a slow, continuous change of the brightness D.C. level in the pixels of the feature. Another important aspect of the algorithm is the use of an adaptive feature-matching threshold that accounts for changes in the relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test the accuracy of the tracking, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and compute the time-to-contact using real image sequences of unstructured, unknown environments. In both of these applications, information from multiple frames is used.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15030
- Subject Headings
- Algorithms, Image transmission, Motion perception (Vision), Image processing
- Format
- Document (PDF)
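The correlation-surface matching described in the abstract above can be sketched as a normalized cross-correlation search over a small window. This is a generic illustration only: the thesis's adaptive-threshold update rule is not given in the abstract, so `threshold` here is a plain optional cutoff, and the search geometry is assumed.

```python
import numpy as np

def match_feature(template, frame, center, radius, threshold=None):
    """Locate `template` in `frame` near `center` (row, col) by scanning a
    (2*radius+1)^2 window and picking the peak of the normalized
    cross-correlation surface."""
    th, tw = template.shape
    cy, cx = center
    best_score, best_pos = -1.0, None
    t = template - template.mean()
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            if y < 0 or x < 0:
                continue
            patch = frame[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue
            p = patch - patch.mean()
            denom = np.sqrt((t * t).sum() * (p * p).sum())
            score = (t * p).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    if threshold is not None and best_score < threshold:
        return None, best_score  # feature lost; tracker would drop or re-seed
    return best_pos, best_score
```

Mean-centering each patch is what tolerates the slow D.C.-level brightness drift mentioned in the abstract; a predictive tracker would supply `center` from the previous frames' trajectory.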
- Title
- SUSTAINING CHAOS USING DEEP REINFORCEMENT LEARNING.
- Creator
- Vashishtha, Sumit, Verma, Siddhartha, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
- Abstract/Description
- Numerous examples arise in fields ranging from mechanics to biology where the disappearance of chaos can be detrimental. Preventing this transient nature of chaos has proven to be quite challenging. The utility of Reinforcement Learning (RL), a specific class of machine learning techniques, in discovering effective control mechanisms in this regard is shown. The autonomous control algorithm is able to prevent the disappearance of chaos in the Lorenz system exhibiting meta-stable chaos, without requiring any a priori knowledge about the underlying dynamics. The autonomous decisions taken by the RL algorithm are analyzed to understand how the system's dynamics are impacted. Learning from this analysis, a simple control law capable of restoring chaotic behavior is formulated. The reverse-engineering approach adopted in this work underlines the immense potential of the techniques used here to discover effective control strategies in complex dynamical systems. The autonomous nature of the learning algorithm makes it applicable to a diverse variety of non-linear systems, and highlights the potential of RL-enabled control for regulating other transient-chaos-like catastrophic events.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013498
- Subject Headings
- Machine learning--Technique, Reinforcement learning, Algorithms, Chaotic behavior in systems, Nonlinear systems
- Format
- Document (PDF)
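The environment an RL agent would act on here is the Lorenz system. The dissertation's learned control law is not reproduced in the abstract, so the sketch below only sets up the dynamics with a placeholder additive control `u` on the x-equation; `policy` stands in for whatever action rule (learned or hand-derived) is being tested.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0, u=0.0):
    """One forward-Euler step of the Lorenz system with an additive control
    input u on the x-equation (the control channel is an assumption)."""
    x, y, z = state
    dx = sigma * (y - x) + u
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

def simulate(steps=2000, policy=lambda s: 0.0, state=(1.0, 1.0, 1.0)):
    """Roll out the controlled system; `policy` maps state -> action."""
    s = np.array(state, dtype=float)
    traj = [s]
    for _ in range(steps):
        s = lorenz_step(s, u=policy(s))
        traj.append(s)
    return np.array(traj)
```

An RL formulation would wrap `simulate` as episodes, rewarding the agent for keeping the trajectory on the chaotic attractor rather than letting it collapse to a fixed point.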
- Title
- PIREN©: A heuristic algorithm for standard cell placement.
- Creator
- Horvath, Elizabeth Iren., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The placement problem is an important part of the design process for VLSI chips. It is necessary to have a proper placement so that all connections between modules in a chip can be routed in a minimum area without violating any physical or electrical constraints. Current algorithms either do not give optimum solutions, are computationally slow, or are difficult to parallelize. PIREN© is a parallel implementation of a force-directed algorithm which seeks to overcome the large amount of computer time associated with solving the placement problem. Each active processor in the massively parallel SIMD machine, the MasPar MP-2.2, can perform in parallel the computation necessary to place cells in an optimum location relative to one another based upon the connectivity between cells. This is due to a salient feature of the serial algorithm which allows multiple permutations to be made simultaneously on all modules in order to minimize the objective function. The serial implementation of PIREN© compares favorably in both run time and layout quality to the simulated-annealing-based algorithm TimberWolf3.2©. The parallel implementation on the MP-2.2 has a speedup of 4.5 to 58.0 over the serial version of PIREN© running on the VAX 6320, while producing layouts for several MCNC benchmarks which are of the same quality as those produced by the serial implementation.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12301
- Subject Headings
- Integrated circuits--Very large scale integration, Algorithms
- Format
- Document (PDF)
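The core of a force-directed placement iteration, as hinted at in the abstract above, is that every cell is simultaneously pulled toward the cells it connects to. A minimal synchronous update (one cell per processor, all updated in parallel from the frozen previous positions) can be sketched as follows; this illustrates the general technique, not the PIREN© code itself:

```python
import numpy as np

def force_directed_iteration(pos, connections):
    """One synchronous force-directed update: every movable cell jumps to
    the centroid of the cells it connects to.

    pos: dict cell-name -> np.array([x, y]); connections: dict mapping each
    movable cell to the names it is wired to (fixed pads simply never move).
    """
    new_pos = dict(pos)  # decisions read the frozen layout, SIMD-style
    for cell, neighbors in connections.items():
        new_pos[cell] = np.mean([pos[n] for n in neighbors], axis=0)
    return new_pos
```

Repeating this update (with overlap removal, which is omitted here) drives connected cells together and shortens total wirelength.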
- Title
- Perceptual methods for video coding.
- Creator
- Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work toward an optimal video compression algorithm that is based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004074, http://purl.flvc.org/fau/fd/FA00004074
- Subject Headings
- Algorithms, Coding theory, Digital coding -- Data processing, Imaging systems -- Image quality, Perception, Video processing -- Data processing
- Format
- Document (PDF)
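One way an encoder can exploit backward temporal masking, as named in the abstract above, is to code more coarsely the frames the eye will not resolve well (those just before a scene cut). The sketch below is purely illustrative: the window length and QP offset are made-up values, not the dissertation's tuned parameters.

```python
def masked_qp(base_qp, frames_until_cut, masking_window=2, qp_boost=4):
    """Raise the quantization parameter (coarser coding, fewer bits) for
    frames inside an assumed backward-temporal-masking window before a
    scene cut; all other frames keep the base QP."""
    if 0 <= frames_until_cut < masking_window:
        return base_qp + qp_boost
    return base_qp
```

A real encoder integration would apply such an offset per coding unit, combined with a motion-masking term, inside the rate-control loop.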
- Title
- PATH PLANNING ALGORITHMS FOR UNMANNED AIRCRAFT SYSTEMS WITH A SPACE-TIME GRAPH.
- Creator
- Steinberg, Andrew, Cardei, Mihaela, Cardei, Ionut, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Unmanned Aircraft Systems (UAS) have grown in popularity due to their widespread potential applications, including efficient package delivery, monitoring, surveillance, search and rescue operations, agricultural uses, along with many others. As UAS become more integrated into our society and airspace, it is anticipated that the development and maintenance of a path planning collision-free system will become imperative, as the safety and efficiency of the airspace represents a priority. The dissertation defines this problem as the UAS Collision-free Path Planning Problem. The overall objective of the dissertation is to design an on-demand, efficient and scalable aerial highway path planning system for UAS. The dissertation explores two solutions to this problem. The first solution proposes a space-time algorithm that searches for shortest paths in a space-time graph. The solution maps the aerial traffic map to a space-time graph that is discretized on the inter-vehicle safety distance. This helps compute safe trajectories by design. The mechanism uses space-time edge pruning to maintain the dynamic availability of edges as vehicles move on a trajectory. Pruning edges is critical to protect active UAS from collisions and safety hazards. The dissertation compares the solution with another related work to evaluate improvements in delay, run time scalability, and admission success while observing up to 9000 flight requests in the network. The second solution to the path planning problem uses a batch planning algorithm. This is a new mechanism that processes a batch of flight requests with prioritization on the current slack time. This approach aims to improve the planning success ratio. The batch planning algorithm is compared with the space-time algorithm to ascertain improvements in admission ratio, delay ratio, and running time, in scenarios with up to 10000 flight requests.
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013696
- Subject Headings
- Unmanned aerial vehicles, Drone aircraft, Drone aircraft--Automatic control, Space and time, Algorithms
- Format
- Document (PDF)
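The space-time-graph search described in the abstract above can be sketched as a breadth-first search over a time-expanded grid, where a state is (x, y, t) and any state reserved by an already-admitted flight is pruned. This is a toy 2-D sketch: the dissertation's graph also encodes safety distances, altitude, and real geography.

```python
from collections import deque

def space_time_path(grid_size, start, goal, reserved, max_time=50):
    """Shortest path in a time-expanded grid. Moves are wait-in-place or
    4-neighbor steps, one per time tick; any (x, y, t) in `reserved` is
    pruned, mirroring the abstract's space-time edge pruning."""
    w, h = grid_size
    s0 = (start[0], start[1], 0)
    parent = {s0: None}
    queue = deque([s0])
    while queue:
        x, y, t = queue.popleft()
        if (x, y) == goal:  # BFS pops states in time order: earliest arrival
            path, s = [], (x, y, t)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        if t >= max_time:
            continue
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy, t + 1)
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in reserved and nxt not in parent):
                parent[nxt] = (x, y, t)
                queue.append(nxt)
    return None  # no admissible trajectory within max_time
```

Admitting a flight then means adding every state on its returned path to `reserved`, which is what makes later trajectories collision-free by design.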
- Title
- PALSAM INPUT DATA FILE GENERATOR.
- Creator
- ROBINSON, WILLIAM ROBERT, JR., Florida Atlantic University, Marcovitz, Alan B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The capabilities and limitations of Programmable Array Logic devices (PALs) are presented and compared to other logic devices. PALs are field programmable devices and a program called PALSAM exists to assist the designer in programming PALs. The attributes and limitations of PALSAM are discussed. The PALSAM Input Data File Generator program was written to eliminate many of the limitations of PALSAM. The need for an algorithmic method of reducing a general logic expression to a minimal sum-of-products form is demonstrated. Several algorithms are discussed. The Zissos, Duncan and Jones Algorithm, which claims to produce a minimal sum-of-products expression but is presented without proof by its authors, is disproved by example. A modification of this algorithm is presented without proof. When tested in the 276 possible cases involving up to three variables, this new algorithm always produced a minimal sum-of-products expression, while the original algorithm failed in six of these cases. Finally, the PALSAM Input Data File Generator program which uses the modified algorithm is presented and documented.
- Date Issued
- 1984
- PURL
- http://purl.flvc.org/fcla/dt/14199
- Subject Headings
- Programmable array logic, Microprocessors--Programming, Algorithms
- Format
- Document (PDF)
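For up to three variables, a minimal sum-of-products cover can be found by exhaustive search over implicants, which is the kind of ground truth the abstract's 276-case verification needs. The sketch below (cubes over {'0','1','-'}, variable 0 as the most significant bit of the minterm index) is a generic brute force, not the Zissos, Duncan and Jones algorithm or its modification:

```python
from itertools import combinations, product

def minimal_sop(n, onset):
    """Exhaustive minimal sum-of-products for an n-variable function given
    as its set of true minterms (practical only for small n). Returns a
    minimum-size cover with the fewest literals among those."""
    onset = set(onset)

    def minterms(cube):
        axes = [(0, 1) if c == '-' else (int(c),) for c in cube]
        return {sum(b << (n - 1 - i) for i, b in enumerate(bits))
                for bits in product(*axes)}

    # Implicants: cubes whose minterms all lie in the on-set.
    implicants = [c for c in product('01-', repeat=n)
                  if minterms(c) <= onset]
    for k in range(len(implicants) + 1):
        best = None
        for combo in combinations(implicants, k):
            covered = set().union(*(minterms(c) for c in combo))
            if covered == onset:
                lits = sum(c.count('0') + c.count('1') for c in combo)
                if best is None or lits < best[0]:
                    best = (lits, combo)
        if best:
            return list(best[1])
    return []
```

For f(A, B, C) true when B = 1 or when A = C = 1 (minterms {2, 3, 5, 6, 7}), the minimal cover is B + AC, i.e. the cubes ('-', '1', '-') and ('1', '-', '1').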
- Title
- Optimal coordination of robotic systems with redundancy.
- Creator
- Varma, K. R. Hareendra., Florida Atlantic University, Huang, Ming Z., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- The research work described in this dissertation is primarily aimed at developing efficient algorithms for the rate allocation problem in redundant serial chain manipulators. While the problem of redundancy resolution in the context of robot manipulators has been well researched, the search for optimality in computational efficiency has caught attention only recently. Further, the idea of modifying already developed performance criteria to improve computational efficiency has rarely been treated with the importance it deserves. The present work, in fact, provides many alternative formulations to the existing performance criteria. As a result of the present investigation, we developed a mathematical tool for the minimum-norm solution of underdetermined systems of linear equations, using the orthogonal null space. Closed-form equations are provided for cases with two or three degrees of redundancy. A detailed study of computational efficiency showed a substantial reduction in the arithmetic operations necessary for such a solution. The above concept was later generalized to utilize the self-motion characteristics of redundant manipulators to provide alternate solutions. The duality between the Jacobian and the null space, established in this work, enabled the development of a highly efficient formulation as an alternative to the commonly used pseudoinverse-based solution. In addition, through the example of a 7R anthropomorphic arm, the feasibility of obtaining an analytical formulation of the null-space coefficient matrix and the transformed end-effector velocity vector for any geometry has been demonstrated. By utilizing the duality between the Jacobian and its null space, different performance criteria commonly used in the redundancy resolution problem have been modified, increasing computational efficiency. Various simulations performed as part of the present work, utilizing the analytical null-space coefficient matrix and the transformed end-effector velocity vector for the 3R planar case and a 7R spatial anthropomorphic arm, corroborate the theories. Another practical application is demonstrated by the example of a Titan 7F arm mounted on a mobile base. The work is consolidated by reiterating the insight obtained into the physical aspects of the redundancy resolution problem and providing a direction for future work. Suggestions are given for extending the work to high-d.o.r. systems, with relevant mathematical foundations. Future work in the area of dynamic modelling is delineated, which also includes an example of a modified dynamic manipulability measure.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12292
- Subject Headings
- Algorithms, Redundancy (Engineering), Robotics, Robots--Motion
- Format
- Document (PDF)
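The baseline the abstract above improves upon is the textbook pseudoinverse formulation of rate allocation: the minimum-norm joint-rate solution of the underdetermined system J q̇ = v, plus the null-space projector used for self-motion. A minimal numerical sketch of that baseline (not the dissertation's closed-form null-space method):

```python
import numpy as np

def rate_allocation(J, v):
    """Minimum-norm joint rates qdot solving J @ qdot = v for a redundant
    arm (J has full row rank, more columns than rows), via the right
    pseudoinverse J+ = J^T (J J^T)^-1. Also returns the null-space
    projector N = I - J+ J, whose range is the arm's self-motion."""
    Jp = J.T @ np.linalg.inv(J @ J.T)   # right pseudoinverse
    qdot = Jp @ v                        # minimum-norm particular solution
    N = np.eye(J.shape[1]) - Jp @ J      # projects any vector into null(J)
    return qdot, N
```

Any joint motion of the form qdot + N @ w reproduces the same end-effector velocity v, which is exactly the freedom the dissertation's alternative criteria exploit at lower arithmetic cost.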
- Title
- NEIGHBORING NEAR MINIMUM-TIME CONTROLS WITH DISCONTINUITIES AND THE APPLICATION TO THE CONTROL OF MANIPULATORS (PATH-PLANNING, TRACKING, FEEDBACK).
- Creator
- Zhuang, Hanqi, Florida Atlantic University, Hamano, Fumio, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis presents several algorithms to treat the problem of closed-loop near minimum-time controls with discontinuities. First, a neighboring control algorithm is developed to solve the problem in which controls are bounded by constant constraints. Secondly, the scheme is extended to account for state-dependent control constraints. And finally, a path tracking algorithm for robotic manipulators is presented, which is also a neighboring control algorithm. These algorithms are suitable for real-time control because the on-line computations involved are relatively simple. Simulation results show that these algorithms work well despite the fact that the prescribed final points cannot be reached exactly.
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14326
- Subject Headings
- Manipulators (Mechanism), Control theory, Algorithms
- Format
- Document (PDF)
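The discontinuous controls the abstract above refers to are of the bang-bang type. The classic example, and a useful reference point for neighboring near minimum-time schemes, is the time-optimal switching law for a double integrator x'' = u with |u| ≤ umax; the thesis's algorithms handle richer, state-dependent constraints than this sketch does:

```python
def bang_bang(x, v, umax=1.0):
    """Time-optimal control for the double integrator, driving (x, v) to
    the origin. The control is discontinuous across the switching curve
    s = x + v|v|/(2*umax) = 0."""
    s = x + v * abs(v) / (2.0 * umax)
    if s > 0:
        return -umax
    if s < 0:
        return umax
    # On the switching curve: brake toward the origin.
    return -umax if v > 0 else (umax if v < 0 else 0.0)

def simulate_to_origin(x, v, dt=0.001, steps=30000, umax=1.0):
    """Forward-Euler rollout under the bang-bang law."""
    for _ in range(steps):
        u = bang_bang(x, v, umax)
        x, v = x + v * dt, v + u * dt
    return x, v
```

In discrete time the trajectory chatters slightly around the switching curve near the origin, which is one reason practical schemes settle for *near* minimum-time behavior.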
- Title
- Machine Learning Algorithms with Big Medicare Fraud Data.
- Creator
- Bauder, Richard Andrew, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Healthcare is an integral component in people's lives, especially for the rising elderly population, and must be affordable. The United States Medicare program is vital in serving the needs of the elderly. The growing number of people enrolled in the Medicare program, along with the enormous volume of money involved, increases the appeal for, and risk of, fraudulent activities. For many real-world applications, including Medicare fraud, the interesting observations tend to be less frequent than the normative observations. This difference between the normal observations and those observations of interest can create highly imbalanced datasets. The problem of class imbalance, including the classification of rare cases indicating extreme class imbalance, is an important and well-studied area in machine learning. Research on the effects of class imbalance with big data in the real-world Medicare fraud application domain, however, is limited. In particular, the impact of detecting fraud in Medicare claims is critical in lessening the financial and personal impacts of these transgressions. Fortunately, the healthcare domain is one such area where the successful detection of fraud can garner meaningful positive results. The application of machine learning techniques, plus methods to mitigate the adverse effects of class imbalance and rarity, can be used to detect fraud and lessen the impacts for all Medicare beneficiaries. This dissertation presents the application of machine learning approaches to detect Medicare provider claims fraud in the United States. We discuss novel techniques to process three big Medicare datasets and create a new, combined dataset, which includes mapping fraud labels associated with known excluded providers. We investigate the ability of machine learning techniques, unsupervised and supervised, to detect Medicare claims fraud and leverage data sampling methods to lessen the impact of class imbalance and increase fraud detection performance. Additionally, we extend the study of class imbalance to assess the impacts of rare cases in big data for Medicare fraud detection.
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013108
- Subject Headings
- Medicare fraud, Big data, Machine learning, Algorithms
- Format
- Document (PDF)
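The data sampling the abstract above leverages is typically random undersampling of the majority (non-fraud) class. A minimal sketch, with synthetic labels standing in for the Medicare fraud flags (label 1 = fraud is an assumption of this example, not a detail from the dissertation):

```python
import numpy as np

def random_undersample(X, y, ratio=1.0, seed=0):
    """Randomly discard majority-class (label 0) rows until at most
    `ratio` times the minority-class (label 1) count remain, then shuffle.
    This rebalances the training set so classifiers are not swamped by
    the normative class."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    n_keep = min(len(majority), int(ratio * len(minority)))
    keep = rng.choice(majority, size=n_keep, replace=False)
    idx = np.concatenate([minority, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]
```

With `ratio=1.0` the result is a balanced 50:50 training set; larger ratios trade balance for more retained majority data.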
- Title
- MACHINE LEARNING ALGORITHMS FOR THE DETECTION AND ANALYSIS OF WEB ATTACKS.
- Creator
- Zuech, Richard, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- The Internet has provided humanity with many great benefits, but it has also introduced new risks and dangers. E-commerce and other web portals have become large industries with big data. Criminals and other bad actors constantly seek to exploit these web properties through web attacks. Being able to properly detect these web attacks is a crucial component of the overall cybersecurity landscape. Machine learning is one tool that can assist in detecting web attacks. However, properly using machine learning to detect web attacks does not come without its challenges. Classification algorithms can have difficulty with severe levels of class imbalance. Class imbalance occurs when one class label disproportionately outnumbers another class label. For example, in cybersecurity, it is common for the negative (normal) label to severely outnumber the positive (attack) label. Another difficulty encountered in machine learning is that models can be complex, thus making it difficult for even subject matter experts to truly understand a model's detection process. Moreover, it is important for practitioners to determine which input features to include or exclude in their models for optimal detection performance. This dissertation studies machine learning algorithms in detecting web attacks with big data. Severe class imbalance is a common problem in cybersecurity, and mainstream machine learning research does not sufficiently consider this with web attacks. Our research first investigates the problems associated with severe class imbalance and rarity. Rarity is an extreme form of class imbalance in which the positive class suffers from an extremely low instance count, making it difficult for classifiers to discriminate. In reducing imbalance, we demonstrate that random undersampling can effectively mitigate the class imbalance and rarity problems associated with web attacks. Furthermore, our research introduces a novel feature popularity technique which produces easier-to-understand models by including only the fewer, most popular features. Feature popularity granted us new insights into the web attack detection process, even though we had already studied it intensely. Even so, we proceed cautiously in selecting the best input features, as we determined that the "most important" Destination Port feature might be contaminated by lopsided traffic distributions.
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013823
- Subject Headings
- Machine learning, Computer security, Algorithms, Cybersecurity
- Format
- Document (PDF)
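The feature popularity idea stated in the abstract above can be sketched as a vote across models: count how often each feature lands in a model's top-k importance list, then keep only the most popular few. The aggregation below and the feature names in the example are illustrative assumptions; the dissertation's exact scheme may differ.

```python
from collections import Counter

def feature_popularity(rankings, top_k=4, keep=3):
    """Given per-model feature-importance rankings (most important first),
    count top-k appearances per feature and return the `keep` most popular.
    Restricting models to these few features trades a little accuracy for
    much easier interpretation."""
    votes = Counter(f for ranking in rankings for f in ranking[:top_k])
    return [f for f, _ in votes.most_common(keep)]
```

Example with hypothetical flow features from three classifiers:

```python
rankings = [
    ["dst_port", "len", "flags", "ttl", "win"],
    ["len", "dst_port", "ttl", "iat", "flags"],
    ["dst_port", "ttl", "len", "win", "flags"],
]
# dst_port, len, and ttl each appear in all three top-4 lists.
```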
- Title
- INVESTIGATING MACHINE LEARNING ALGORITHMS WITH IMBALANCED BIG DATA.
- Creator
- Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Recent technological developments have engendered an expeditious production of big data and also enabled machine learning algorithms to produce high-performance models from such data. Nonetheless, class imbalance (in binary classifications) between the majority and minority classes in big data can skew the predictive performance of the classification algorithms toward the majority (negative) class, whereas the minority (positive) class usually holds greater value for the decision makers. Such bias may lead to adverse consequences, some of them even life-threatening, when the existence of false negatives is generally costlier than false positives. The size of the minority class can vary from fair to extraordinarily small, which can lead to different performance scores for machine learning algorithms. Class imbalance is a well-studied area for traditional data, i.e., not big data. However, there is limited research focusing on both rarity and severe class imbalance in big data.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013316
- Subject Headings
- Algorithms, Machine learning, Big data--Data processing, Big data
- Format
- Document (PDF)
- Title
- Information-theoretics based analysis of hard handoffs in mobile communications.
- Creator
- Bendett, Raymond Morris., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) Use of information-theoretics based statistical distance measures as a metric for hard handoff decisions; (2) A study to evaluate the log-likelihood criterion towards decision considerations to perform the hard handoff; (3) Development of a...
Show moreThe research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) Use of information-theoretics based statistical distance measures as a metric for hard handoff decisions; (2) A study to evaluate the log-likelihood criterion towards decision considerations to perform the hard handoff; (3) Development of a statistical model to evaluate optimum instants of measurements of the metric used for hard handoff decision. The aforesaid objectives refer to a practical scenario in which a mobile station (MS) traveling away from a serving base station (BS-I) may suffer communications impairment due to interference and shadowing affects, especially in an urban environment. As a result, it will seek to switch over to another base station (BS-II) that facilitates a stronger signal level. This is called handoff procedure. (The hard handoff refers to the specific case in which only one base station serves the mobile at the instant of handover). Classically, the handoff decision is done on the basis of the difference between received signal strengths (RSS) from BS-I and BS-II. The algorithms developed here, in contrast, stipulate the decision criterion set by the statistical divergence and/or log-likelihood ratio that exists between the received signals. The purpose of the present study is to evaluate the relative efficacy of the conventional and proposed algorithms in reference to: (i) Minimization of unnecessary handoffs ("ping-pongs"); (ii) Minimization of delay in handing over; (iii) Ease of implementation and (iv) Minimization of possible call dropouts due to ineffective handover envisaged. Simulated results with data commensurate with practical considerations are furnished and discussed. 
Background literature is presented in the introductory chapter and scope for future work is identified via open questions in the concluding chapter.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12639
- Subject Headings
- Mobile communication systems, Information theory, Algorithms
- Format
- Document (PDF)
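The classical RSS-difference rule and the divergence-based alternative described in the abstract above can be sketched as follows. This is an illustrative reading under simple assumptions (Gaussian shadowing with equal variances, a fixed hysteresis margin); the function names, the threshold values, and the symmetric-divergence form are our own, not the dissertation's.

```python
import math

def rss_handoff(rss_serving_dbm, rss_candidate_dbm, hysteresis_db=4.0):
    """Classical hard-handoff rule: switch only when the candidate BS's
    received signal strength exceeds the serving BS's by a hysteresis
    margin (the margin suppresses "ping-pong" handoffs)."""
    return rss_candidate_dbm - rss_serving_dbm > hysteresis_db

def kl_divergence_gauss(mu1, sigma1, mu2, sigma2):
    """KL divergence D(p||q) between two Gaussian RSS models."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2) - 0.5)

def divergence_handoff(mu_serving, mu_candidate, sigma, threshold):
    """Divergence-based rule (illustrative): hand off when the symmetric
    divergence between the two shadowed-RSS distributions exceeds a
    threshold AND the candidate mean is the stronger one."""
    j = (kl_divergence_gauss(mu_serving, sigma, mu_candidate, sigma)
         + kl_divergence_gauss(mu_candidate, sigma, mu_serving, sigma))
    return j > threshold and mu_candidate > mu_serving
```

With equal variances the symmetric divergence reduces to (mu1 - mu2)^2 / sigma^2, so the rule degenerates to a squared-RSS-difference test; the dissertation's point is that with unequal, estimated statistics the divergence carries more information than the raw RSS difference.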
- Title
- GENERATIVE ADVERSARIAL NETWORK DATA GENERATION FOR THE USE OF REAL TIME IMAGE DETECTION IN SIDE-SCAN SONAR IMAGERY.
- Creator
- McGinley, James Patrick, Dhanak, Manhar, Florida Atlantic University, Department of Ocean and Mechanical Engineering, College of Engineering and Computer Science
- Abstract/Description
-
Automatic target recognition of unexploded ordnances in side-scan sonar imagery has been a challenging task, due to the lack of publicly available side-scan sonar data. Real-time image detection and classification algorithms have been implemented to address this task; however, machine learning algorithms require a substantial amount of training data to properly detect specific targets. Transfer learning methods are used to reduce the need for large datasets by applying a pre-trained network to the side-scan sonar images. In the present study, a generative adversarial network is implemented to generate meaningful sonar imagery from a small dataset. The generated images are then added to the existing dataset to train an image detection and classification algorithm. The study aims to demonstrate that generated images can be used to aid in detecting objects of interest in side-scan sonar imagery.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013394
- Subject Headings
- Sidescan sonar, Algorithms, Machine learning
- Format
- Document (PDF)
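The augmentation workflow the abstract describes (generate synthetic samples, append them to the small real dataset, then train the detector) can be sketched minimally. The generator below is an untrained linear map standing in for a trained GAN generator, and the array sizes are invented for illustration; GAN training itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for side-scan sonar image patches (8x8 pixels, flattened).
real = rng.normal(0.5, 0.1, size=(40, 64))   # small real dataset

# Untrained linear "generator" from 16-dim latent noise; in the study a
# trained GAN generator plays this role.
G = rng.normal(0.0, 0.1, size=(16, 64))

def generate(n_samples):
    """Generator forward pass: map latent noise to synthetic patches."""
    z = rng.normal(size=(n_samples, 16))
    return z @ G

# Augment: append generated samples to the real training set, as the
# abstract describes, before training the detection network.
synthetic = generate(20)
augmented = np.vstack([real, synthetic])
```

The point of the sketch is the data flow, not the model: the detector sees a training set half again larger than the real data alone, which is the mechanism the study evaluates.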
- Title
- A general pressure based Navier-Stokes solver in arbitrary configurations.
- Creator
- Ke, Zhao Ping., Florida Atlantic University, Chow, Wen L., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
A pressure-based computer program for the general Navier-Stokes equations has been developed. A body-fitted coordinate system is employed to handle flows with complex geometry. A non-staggered grid is used, while pressure oscillation is eliminated by a special pressure interpolation scheme. The hybrid algorithm is adopted to discretize the equations, and the finite-difference equations are solved by TDMA, while the whole solution is obtained through an under-relaxed iterative process. The pressure field is evaluated using the compressible form of the SIMPLE algorithm. To test the accuracy and efficiency of the computer program, problems of incompressible and compressible flows are calculated. As examples of inviscid compressible flow problems, flows over bumps with 10% and 4% thickness are computed for incoming Mach numbers of M[infinity] = 0.5 (subsonic flow), M[infinity] = 0.675 (transonic flow) and M[infinity] = 1.65 (supersonic flow). One laminar subsonic flow over a bump with 5% thickness at M[infinity] = 0.5 is also calculated with consideration of the full energy equation. With the help of the k-epsilon model incorporating the wall function, computations of two turbulent incompressible flows are carried out: one is the flow past a flat plate and the other the flow over a flame holder. As an application to three-dimensional flow, a laminar flow in a driven cubic cavity is calculated. All the numerical results obtained here are compared with experimental data or other numerical results available in the literature.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12330
- Subject Headings
- Navier-Stokes equations--Numerical solutions--Data processing, Algorithms, Flows (Differential dynamical systems)
- Format
- Document (PDF)
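The abstract notes that the discretized finite-difference equations are solved by TDMA. For reference, here is a standard Thomas-algorithm sketch for a tridiagonal system (a textbook implementation, not the dissertation's code):

```python
def tdma(a, b, c, d):
    """Tri-Diagonal Matrix Algorithm (Thomas algorithm).
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns x solving the tridiagonal system A x = d."""
    n = len(d)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In a pressure-based solver the line-by-line TDMA sweep is applied along each grid line of the discretized domain inside the under-relaxed outer iteration.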
- Title
- Evolutionary algorithms for design and control of material handling and manufacturing systems.
- Creator
- Kanwar, Pankaj., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The crucial goal of enhancing industrial productivity has led researchers to look for robust and efficient solutions to problems in production systems. Evolving technologies have also led to an immediate demand for algorithms that can exploit these developments. During the last three decades there has been a growing interest in algorithms that rely on analogies to natural processes. The best-known algorithms in this class include evolutionary programming, genetic algorithms, evolution strategies and neural networks. The emergence of massively parallel systems has made these inherently parallel algorithms of high practical interest. The advantages offered by these algorithms over other classical techniques have resulted in their wide acceptance. These algorithms have been applied to a large class of interesting problems for which no efficient or reasonably fast algorithm exists. This thesis extends their usage to the domain of production research. Problems of high practical interest in this domain are solved using a subclass of these algorithms, i.e., those based on the principle of evolution. The problems include the flowpath design of AGV systems and vehicle routing in a transportation system. Furthermore, a Genetic Based Machine Learning (GBML) system has been developed for optimal scheduling and control of a job shop.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15025
- Subject Headings
- Industrial productivity--Data processing, Algorithms, Genetic algorithms, Motor vehicles--Automatic location systems, Materials handling--Computer simulation, Manufacturing processes--Computer simulation
- Format
- Document (PDF)
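As a concrete illustration of the evolutionary subclass the thesis draws on, here is a minimal genetic algorithm (tournament selection, one-point crossover, bit-flip mutation, elitism) applied to the toy OneMax problem. The problem and all parameters are illustrative, not those used for the AGV flowpath or routing problems in the thesis.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_mut=0.02, seed=1):
    """Evolve bit-strings to maximize `fitness`; returns the best found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: best of 3 random individuals.
            return max(rng.sample(pop, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut)       # bit-flip mutation
                     for b in child]
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)         # elitist bookkeeping
    return best

# OneMax: fitness is simply the number of 1-bits.
best = genetic_algorithm(sum)
```

For the thesis's combinatorial problems the chromosome would encode a flowpath layout or a route permutation instead of raw bits, with crossover and mutation operators adapted accordingly.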
- Title
- Evolution and application of a parallel algorithm for explicit transient finite element analysis on SIMD/MIMD computers.
- Creator
- Das, Partha S., Florida Atlantic University, Case, Robert O., Tsai, Chi-Tay, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The development of a parallel data structure and an associated elemental decomposition algorithm for explicit finite element analysis on a massively parallel SIMD computer, the DECmpp 12000 (MasPar MP-1) machine, is presented, and then extended to an implementation on an MIMD computer, the Cray-T3D. The new parallel data structure and elemental decomposition algorithm are discussed in detail and are used to parallelize a sequential Fortran code that applies isoparametric elements to the nonlinear dynamic analysis of shells of revolution. The parallel algorithm required the development of a new procedure, called an 'exchange', which consists of an exchange of nodal forces at each time step to replace the standard gather-assembly operations of the sequential code. In addition, the data were reconfigured so that all nodal variables associated with an element are stored in a processor along with the other element data. The architectural and Fortran programming language features of the MasPar MP-1 and Cray-T3D computers which are pertinent to finite element computations are also summarized, and sample code segments are provided to illustrate programming in a data parallel environment. The governing equations, the finite element discretization and a comparison between their implementation on Von Neumann and SIMD-MIMD parallel computers are discussed to demonstrate their applicability and the important differences in the new algorithm. Various large-scale transient problems are solved using the parallel data structure and elemental decomposition algorithm, and measured performances are presented and analyzed in detail. Results show that the Cray-T3D is a very promising parallel computer for finite element computation: 32 processors of this machine show an overall speedup of 27-28, i.e. an efficiency of 85% or more, and 128 processors show a speedup of 70-77, i.e. an efficiency of 55% or more.
The Cray-T3D results demonstrated that this machine is capable of outperforming the Cray-YMP by a factor of about 10 for finite element problems with 4K elements; therefore, the method of developing the parallel data structure and its associated elemental decomposition algorithm is recommended for implementation in other finite element codes on this machine. However, the results from the MasPar MP-1 show that the new algorithm for explicit finite element computations does not produce very efficient parallel code on that computer, and therefore the new data structure is not recommended for further use on the MasPar machine.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12500
- Subject Headings
- Finite element method, Algorithms, Parallel computers
- Format
- Document (PDF)
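The speedup and efficiency figures quoted in the abstract above follow from the standard definitions (speedup = serial time / parallel time, efficiency = speedup / processor count). A quick consistency check on the quoted numbers:

```python
def speedup(t_serial, t_parallel):
    """Speedup of a p-processor run over the sequential run."""
    return t_serial / t_parallel

def efficiency(s, p):
    """Parallel efficiency: achieved speedup divided by processor count."""
    return s / p

# Figures quoted in the abstract: speedup 27-28 on 32 processors
# (efficiency of 85% or more) and speedup 70-77 on 128 processors
# (efficiency of 55% or more).
eff_32 = efficiency(27.2, 32)     # 0.85 at the low end of 27-28
eff_128 = efficiency(70.4, 128)   # 0.55 near the low end of 70-77
```

The sub-linear efficiency at 128 processors is the usual signature of the exchange (communication) cost growing relative to per-processor element work as the partition shrinks.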
- Title
- Enhanced Fibonacci Cubes.
- Creator
- Qian, Haifeng., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
We propose the enhanced Fibonacci cube (EFC), which is defined based on the sequence Fn = 2F(n-2) + 2F(n-4). We study its topological properties, embeddings, applications, routings, VLSI/WSI implementations, and its extensions. Our results show that the EFC retains many properties of the hypercube. It contains the Fibonacci cube (FC) and the extended Fibonacci cube of the same order as subgraphs and maintains virtually all the desirable properties of the FC. The EFC is even better than the FC or the hypercube in some structural properties, embeddings, applications and VLSI designs. With the EFC, there are more cubes with various structures and sizes available for selection, and more backup cubes into which faulty hypercubes can be reconfigured, which alleviates the size limitation of the hypercube and results in a higher level of fault tolerance.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15196
- Subject Headings
- Integrated circuits--Very large scale integration, Hypercube networks (Computer networks), Algorithms, Fault-tolerant computing, Multiprocessors
- Format
- Document (PDF)
- Title
- Efficient localized broadcast algorithms in mobile ad hoc networks.
- Creator
- Lou, Wei., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The broadcast operation plays a fundamental role in mobile ad hoc networks because of the broadcasting nature of radio transmission, i.e., when a sender transmits a packet, all nodes within the sender's transmission range are affected by the transmission. The benefit of this property is that one packet can be received by all neighbors, while the negative effect is that it interferes with other transmissions. Flooding ensures that the entire network receives the packet but generates many redundant transmissions, which may trigger a serious broadcast storm problem that can collapse the entire network. The broadcast storm problem can be avoided by efficient broadcast algorithms that aim to reduce the number of nodes retransmitting the broadcast packet while still guaranteeing that all nodes receive it. This dissertation focuses on providing several efficient localized broadcast algorithms that reduce broadcast redundancy in mobile ad hoc networks; the efficiency of a broadcast algorithm is measured by the number of forward nodes relaying a broadcast packet. A classification of broadcast algorithms for mobile ad hoc networks is provided at the beginning. Two neighbor-designating broadcast algorithms, called total dominant pruning and partial dominant pruning, are proposed to reduce the number of forward nodes. Several extensions based on the neighbor-designating approach are also investigated. The cluster-based broadcast algorithm shows good performance in dense networks, and it also provides a constant upper-bound approximation ratio to the optimal solution for the number of forward nodes in the worst case. A generic broadcast framework with K-hop neighbor information offers a trade-off between the number of forward nodes and the size of the K-hop zone.
A reliable broadcast algorithm, called double-covered broadcast, is proposed to improve the delivery ratio of a broadcast packet when the transmission error rate of the network is high. The effectiveness of all these algorithms has been confirmed by simulations.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fau/fd/FADT12103
- Subject Headings
- Wireless LANS, Mobile communication systems, Wireless communication systems--Mathematics, Algorithms
- Format
- Document (PDF)
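The contrast between blind flooding and a localized pruning rule can be illustrated on a toy graph. The self-pruning rule below (a node retransmits only if it still has a neighbor that has not received the packet) is a simplified stand-in for the dissertation's dominant-pruning algorithms, which use 2-hop neighbor information; the topology is invented for illustration.

```python
from collections import deque

def broadcast(adj, source, prune=True):
    """Simulate one broadcast; return (nodes reached, forward nodes).
    prune=False: blind flooding, every receiving node retransmits once.
    prune=True: a node retransmits only if some neighbor is still
    uncovered (a simple localized pruning rule)."""
    received = {source} | set(adj[source])   # the source always transmits
    forwards = {source}
    queue = deque(adj[source])
    while queue:
        u = queue.popleft()
        uncovered = set(adj[u]) - received
        if prune and not uncovered:
            continue                          # nothing new to cover: silent
        forwards.add(u)
        received |= uncovered
        queue.extend(uncovered)
    return received, forwards

# Toy topology: a 5-node complete graph, where flooding is maximally
# redundant but a single transmission already covers every node.
k5 = {i: [j for j in range(5) if j != i] for i in range(5)}
```

On this graph flooding uses 5 forward nodes while the pruned broadcast uses only the source; on sparse graphs the gap narrows, which is exactly the density trade-off the dissertation's simulations quantify.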
- Title
- Design and modeling of hybrid software fault-tolerant systems.
- Creator
- Zhang, Man-xia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Fault-tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault-tolerant systems. Probability models based on fault trees are developed for the recovery block (RB), N-version programming (NVP) and hybrid schemes that combine RB and NVP. Two heuristic methods are developed to construct hybrid fault-tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault-tolerant systems.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14783
- Subject Headings
- Computer software--Reliability, Fault-tolerant computing, Algorithms
- Format
- Document (PDF)
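The two building blocks the thesis combines have familiar textbook closed forms under the simplifying assumptions of independent version failures and a perfect voter/acceptance test (the thesis's fault-tree models relax these assumptions):

```python
from math import comb

def nvp_reliability(r, n=3):
    """N-version programming with majority voting: the system succeeds
    when a majority of the n independent versions (each of reliability r)
    produce a correct result."""
    k = n // 2 + 1
    return sum(comb(n, i) * r**i * (1 - r)**(n - i)
               for i in range(k, n + 1))

def rb_reliability(r, alternates=2):
    """Recovery block with a perfect acceptance test: the system fails
    only if every alternate fails independently."""
    return 1 - (1 - r) ** alternates

# With versions of reliability 0.9: 3-version majority voting gives
# 0.9**3 + 3 * 0.9**2 * 0.1 = 0.972, and a 2-alternate recovery block
# gives 1 - 0.1**2 = 0.99.
```

Hybrid RB/NVP schemes compose these expressions, which is why the cost-constrained design problem the thesis solves heuristically amounts to choosing where to spend redundancy for the largest reliability gain.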