 Title
 An evaluation of Unsupervised Machine Learning Algorithms for Detecting Fraud and Abuse in the U.S. Medicare Insurance Program.
 Creator
 Da Rosa, Raquel C., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The population of people ages 65 and older has increased since the 1960s and current estimates indicate it will double by 2060. Medicare is a federal health insurance program for people 65 or older in the United States. Medicare claims fraud and abuse is an ongoing issue that wastes a large amount of money every year, resulting in higher health care costs and taxes for everyone. In this study, an empirical evaluation of several unsupervised machine learning approaches is performed, with results indicating reasonable fraud detection performance. We employ two unsupervised machine learning algorithms, Isolation Forest and Unsupervised Random Forest, which have not previously been used for the detection of fraud and abuse on Medicare data. Additionally, we implement three other machine learning methods previously applied on Medicare data: Local Outlier Factor, Autoencoder, and k-Nearest Neighbor. For our dataset, we combine the 2012 to 2015 Medicare provider utilization and payment data and add fraud labels from the List of Excluded Individuals/Entities (LEIE) database. Results show that Local Outlier Factor is the best model to use for Medicare fraud detection.
 Date Issued
 2018
 PURL
 http://purl.flvc.org/fau/fd/FA00013042
 Subject Headings
 Machine learning, Medicare fraud, Algorithms
 Format
 Document (PDF)
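The unsupervised detectors named in the abstract above are available in scikit-learn; a minimal sketch, on synthetic data (not the Medicare claims dataset), of how Isolation Forest and Local Outlier Factor flag anomalies:

```python
# Illustrative only: synthetic data stands in for claims records.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # "legitimate" records
fraud = rng.normal(loc=6.0, scale=1.0, size=(10, 4))     # injected anomalies
X = np.vstack([normal, fraud])

iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
iso_pred = iso.predict(X)                 # -1 = anomaly, 1 = normal

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.02)
lof_pred = lof.fit_predict(X)             # -1 = anomaly, 1 = normal

# How many of the 10 injected anomalies does each method flag?
iso_hits = int((iso_pred[-10:] == -1).sum())
lof_hits = int((lof_pred[-10:] == -1).sum())
```

Labels here are only used after the fact to score the detectors; the models themselves are fit without labels, as in the study.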
 Title
 Automatic extraction and tracking of eye features from facial image sequences.
 Creator
 Xie, Xangdong., Florida Atlantic University, Sudhakar, Raghavan, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme differs from conventional intrusive eye-movement measuring systems and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and nonintrusiveness. An overall scheme is presented in which a new algorithm is put forward for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner using both the gray-level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over existing corner detectors is that it improves the reliability of detection and localization by imposing criteria related to both the gray-level data and the corner structure. The extraction of eye features is performed using an improved method of deformable templates, which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized value of each term, so the weighting schedule of the modified method does not demand any expert knowledge from the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. Based on a geometrical model which describes the characteristics of the eye, measurement equations are formulated to relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, while taking into account the measurement noise. The small processing time allows this tracking algorithm to be used in real-time applications. This tracking algorithm is suitable for an automated, nonintrusive, and inexpensive system, as the algorithm is capable of measuring the time profiles of the eye movements. The issue of compensating for head movements during the tracking of eye movements is also discussed. An appropriate measurement model was established to describe the effects of head movements. Based on this model, a Kalman filter structure was formulated to carry out the compensation. The whole tracking scheme, which cascades two Kalman filters, is constructed to track the iris movement while compensating for the head movement. The presence of eye blinks is also taken into account, and their detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, nonintrusive, and inexpensive system which provides accurate time profiles of eye movements tracked from video image frames.
 Date Issued
 1994
 PURL
 http://purl.flvc.org/fcla/dt/12377
 Subject Headings
 Kalman filtering, Eye movements, Algorithms, Image processing
 Format
 Document (PDF)
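The discrete Kalman filter tracking described in the record above can be sketched for a single feature coordinate under an assumed constant-velocity state model (toy values, not the dissertation's eye-geometry model):

```python
# Illustrative predict/update loop of a discrete Kalman filter.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: (position, velocity)
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial estimate covariance

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 0.8
estimates = []
for _ in range(50):
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(scale=0.7)]])   # noisy measurement
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    estimates.append(float(x[0, 0]))

final_error = abs(estimates[-1] - true_pos)
```

The dissertation's scheme cascades two such filters (iris tracking plus head-movement compensation); this sketch shows only the basic recursion.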
 Title
 Derivation and identification of linearly parametrized robot manipulator dynamic models.
 Creator
 Xu, Hua., Florida Atlantic University, Roth, Zvi S., Zilouchian, Ali, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The dissertation focuses on robot manipulator dynamic modeling and the inertial and kinematic parameter identification problem. An automatic symbolic algorithm for deriving the dynamic parameters is presented. This algorithm provides the linearly independent dynamic parameter set. It is shown that all the dynamic parameters are identifiable when the trajectory is persistently exciting, and the parameter set satisfies the necessary condition for finding a persistently exciting trajectory. Since in practice the system data matrix is corrupted with noise, conventional estimation methods do not converge to the true values. An error bound is given for Kalman filters. The total least squares method is introduced to obtain unbiased estimates. Simulation studies are presented for five particular identification methods, performed under different noise levels. Observability problems for the inertial and kinematic parameters are investigated: under certain conditions, all linearly independent parameters derived are observable. The inertial and kinematic parameters can be categorized into three parts according to their influence on the system dynamics, and the dissertation gives an algorithm to classify these parameters.
 Date Issued
 1992
 PURL
 http://purl.flvc.org/fcla/dt/12291
 Subject Headings
 Algorithms, Manipulators (Mechanism), Robots--Control systems
 Format
 Document (PDF)
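The total least squares idea invoked in the abstract above can be illustrated with a toy one-parameter model (not the manipulator dynamics): when the data matrix itself is noisy, ordinary least squares is biased toward zero, while the SVD-based TLS estimate is not.

```python
# Toy comparison of OLS vs. total least squares under errors-in-variables.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
a_true = rng.normal(size=(n, 1))
theta_true = 2.0
A_noisy = a_true + rng.normal(scale=0.3, size=(n, 1))        # noisy data matrix
b_noisy = a_true * theta_true + rng.normal(scale=0.3, size=(n, 1))

# Ordinary least squares: biased when A is corrupted with noise
theta_ols = np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0].item()

# TLS: right singular vector of [A | b] for the smallest singular value
_, _, Vt = np.linalg.svd(np.hstack([A_noisy, b_noisy]))
v = Vt[-1]
theta_tls = float(-v[0] / v[1])
```

With equal noise variance in `A` and `b`, the TLS estimate is consistent, which is the property the dissertation relies on for unbiased parameter identification.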
 Title
 Design and modeling of hybrid software fault-tolerant systems.
 Creator
 Zhang, Manxia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

Fault-tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault-tolerant systems. Probability models based on fault trees are developed for the recovery block (RB) scheme, N-version programming (NVP), and hybrid schemes that combine RB and NVP. Two heuristic methods are developed to construct hybrid fault-tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault-tolerant systems.
 Date Issued
 1992
 PURL
 http://purl.flvc.org/fcla/dt/14783
 Subject Headings
 Computer software--Reliability, Fault-tolerant computing, Algorithms
 Format
 Document (PDF)
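The kind of probability model described in the record above can be illustrated for N-version programming with a 2-out-of-3 majority vote, under the strong simplifying assumption of independent version failures (the thesis itself uses fault trees and also covers RB and hybrid schemes):

```python
# Reliability of an n-version majority-voting system with independent versions.
from math import comb

def nvp_reliability(p, n=3, majority=2):
    """P(at least `majority` of `n` independent versions are correct),
    each version correct with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(majority, n + 1))

p = 0.9
r_single = p                  # a single version alone
r_nvp3 = nvp_reliability(p)   # 2-out-of-3 vote: 0.972 for p = 0.9
```

The gain over a single version (0.972 vs. 0.9 here) is what pays for the extra development cost that the thesis's cost-constrained design methods trade off.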
 Title
 Efficient localized broadcast algorithms in mobile ad hoc networks.
 Creator
 Lou, Wei., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The broadcast operation has a fundamental role in mobile ad hoc networks because of the broadcasting nature of radio transmission: when a sender transmits a packet, all nodes within the sender's transmission range are affected by this transmission. The benefit of this property is that one packet can be received by all neighbors, while the negative effect is that it interferes with other transmissions. Flooding ensures that the entire network receives the packet but generates many redundant transmissions, which may trigger a serious broadcast storm problem that can collapse the entire network. The broadcast storm problem can be avoided by providing efficient broadcast algorithms that aim to reduce the number of nodes that retransmit the broadcast packet while still guaranteeing that all nodes receive it. This dissertation focuses on providing several efficient localized broadcast algorithms to reduce broadcast redundancy in mobile ad hoc networks. In my dissertation, the efficiency of a broadcast algorithm is measured by the number of forward nodes relaying a broadcast packet. A classification of broadcast algorithms for mobile ad hoc networks is provided at the beginning. Two neighbor-designating broadcast algorithms, called total dominant pruning and partial dominant pruning, are proposed to reduce the number of forward nodes. Several extensions based on the neighbor-designating approach are also investigated. The cluster-based broadcast algorithm shows good performance in dense networks, and it also provides a constant upper-bound approximation ratio to the optimum solution for the number of forward nodes in the worst case. A generic broadcast framework with K-hop neighbor information offers a tradeoff between the number of forward nodes and the size of the K-hop zone. A reliable broadcast algorithm, called double-covered broadcast, is proposed to improve the delivery ratio of a broadcast packet when the transmission error rate of the network is high. The effectiveness of all these algorithms has been confirmed by simulations.
 Date Issued
 2004
 PURL
 http://purl.flvc.org/fau/fd/FADT12103
 Subject Headings
 Wireless LANs, Mobile communication systems, Wireless communication systems--Mathematics, Algorithms
 Format
 Document (PDF)
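A toy illustration of why pruning forward nodes helps, using a greedy cover on a hand-made 6-node graph. This is a simplification for intuition only, not the total/partial dominant pruning algorithms of the dissertation:

```python
# Blind flooding: every node that receives the packet retransmits it once.
# A smaller forward-node set can still reach every node.
graph = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3, 4},
    3: {1, 2, 4, 5}, 4: {2, 3, 5}, 5: {3, 4},
}

flood_forwarders = len(graph)        # flooding: all 6 nodes retransmit

covered = {0} | graph[0]             # the source's transmission reaches its neighbors
forwarders = [0]
while covered != set(graph):
    # greedily pick the covered node whose transmission reaches the most new nodes
    best = max(covered, key=lambda u: len(graph[u] - covered))
    forwarders.append(best)
    covered |= graph[best]

pruned_forwarders = len(forwarders)  # 3 forward nodes suffice here
```

Even on this tiny graph the forward-node count halves; the dissertation's localized algorithms achieve such reductions using only 2-hop neighborhood information.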
 Title
 Enhanced Fibonacci Cubes.
 Creator
 Qian, Haifeng., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

We propose the enhanced Fibonacci cube (EFC), defined based on the sequence Fn = 2F(n-2) + 2F(n-4). We study its topological properties, embeddings, applications, routings, VLSI/WSI implementations, and its extensions. Our results show that the EFC retains many properties of the hypercube. It contains the Fibonacci cube (FC) and the extended Fibonacci cube of the same order as subgraphs and maintains virtually all the desirable properties of the FC. The EFC is even better than the FC or the hypercube in some structural properties, embeddings, applications, and VLSI designs. With the EFC, there are more cubes with various structures and sizes to select from, and more backup cubes into which faulty hypercubes can be reconfigured, which alleviates the size limitation of the hypercube and results in a higher level of fault tolerance.
 Date Issued
 1995
 PURL
 http://purl.flvc.org/fcla/dt/15196
 Subject Headings
 Integrated circuits--Very large scale integration, Hypercube networks (Computer networks), Algorithms, Fault-tolerant computing, Multiprocessors
 Format
 Document (PDF)
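The ordinary Fibonacci cube that the EFC contains as a subgraph has as vertices the length-n binary strings with no two consecutive 1s, so the vertex counts follow the Fibonacci recurrence. A quick check of that fact (illustrative only; the EFC itself uses a different sequence):

```python
# Count length-n binary strings with no "11" substring.
from itertools import product

def fibonacci_cube_vertices(n):
    return [bits for bits in product("01", repeat=n)
            if "11" not in "".join(bits)]

counts = [len(fibonacci_cube_vertices(n)) for n in range(1, 8)]
# counts satisfy c[n] = c[n-1] + c[n-2], the Fibonacci recurrence
```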
 Title
 Evolution and application of a parallel algorithm for explicit transient finite element analysis on SIMD/MIMD computers.
 Creator
 Das, Partha S., Florida Atlantic University, Case, Robert O., Tsai, ChiTay, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
 Abstract/Description

The development of a parallel data structure and an associated elemental decomposition algorithm for explicit finite element analysis on a massively parallel SIMD computer, the DECmpp 12000 (MasPar MP-1) machine, is presented and then extended to implementation on the MIMD computer, the Cray T3D. The new parallel data structure and elemental decomposition algorithm are discussed in detail and are used to parallelize a sequential Fortran code that deals with the application of isoparametric elements for the nonlinear dynamic analysis of shells of revolution. The parallel algorithm required the development of a new procedure, called an 'exchange', which consists of an exchange of nodal forces at each time step to replace the standard gather-assembly operations in the sequential code. In addition, the data was reconfigured so that all nodal variables associated with an element are stored in a processor along with other element data. The architectural and Fortran programming language features of the MasPar MP-1 and Cray T3D computers which are pertinent to finite element computations are also summarized, and sample code segments are provided to illustrate programming in a data parallel environment. The governing equations, the finite element discretization, and a comparison between their implementation on von Neumann and SIMD/MIMD parallel computers are discussed to demonstrate their applicability and the important differences in the new algorithm. Various large-scale transient problems are solved using the parallel data structure and elemental decomposition algorithm, and measured performances are presented and analyzed in detail. Results show that the Cray T3D is a very promising parallel computer for finite element computation. The 32 processors of this machine show an overall speedup of 27-28, i.e., an efficiency of 85% or more, and 128 processors show a speedup of 70-77, i.e., an efficiency of 55% or more. The Cray T3D results demonstrate that this machine is capable of outperforming the Cray Y-MP by a factor of about 10 for finite element problems with 4K elements; therefore, the method of developing the parallel data structure and its associated elemental decomposition algorithm is recommended for implementation in other finite element codes on this machine. However, the results from the MasPar MP-1 show that this new algorithm for explicit finite element computations does not produce very efficient parallel code on that computer, and therefore the new data structure is not recommended for further use on the MasPar machine.
 Date Issued
 1997
 PURL
 http://purl.flvc.org/fcla/dt/12500
 Subject Headings
 Finite element method, Algorithms, Parallel computers
 Format
 Document (PDF)
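The parallel-efficiency figures quoted in the abstract above follow from the standard definition, efficiency = speedup / processor count; a quick check with the quoted numbers:

```python
# Parallel efficiency from measured speedup.
def efficiency(speedup, processors):
    return speedup / processors

e32 = efficiency(28, 32)     # upper end of the quoted 27-28 speedup on 32 PEs
e128 = efficiency(70, 128)   # lower end of the quoted 70-77 speedup on 128 PEs
```

This gives about 87.5% on 32 processors and about 55% on 128, consistent with the abstract's "85% or more" and "55% or more" claims.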
 Title
 Evolutionary algorithms for design and control of material handling and manufacturing systems.
 Creator
 Kanwar, Pankaj., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
 Abstract/Description

The crucial goal of enhancing industrial productivity has led researchers to look for robust and efficient solutions to problems in production systems. Evolving technologies have also led to an immediate demand for algorithms which can exploit these developments. During the last three decades there has been a growing interest in algorithms which rely on analogies to natural processes. The best-known algorithms in this class include evolutionary programming, genetic algorithms, evolution strategies, and neural networks. The emergence of massively parallel systems has made these inherently parallel algorithms of high practical interest. The advantages offered by these algorithms over other classical techniques have resulted in their wide acceptance. These algorithms have been applied to a large class of interesting problems for which no efficient or reasonably fast algorithm exists. This thesis extends their usage to the domain of production research. Problems of high practical interest in the domain of production research are solved using a subclass of these algorithms, namely those based on the principle of evolution. The problems include the flow-path design of AGV systems and vehicle routing in a transportation system. Furthermore, a Genetic-Based Machine Learning (GBML) system has been developed for optimal scheduling and control of a job shop.
 Date Issued
 1994
 PURL
 http://purl.flvc.org/fcla/dt/15025
 Subject Headings
 Industrial productivity--Data processing, Algorithms, Genetic algorithms, Motor vehicles--Automatic location systems, Materials handling--Computer simulation, Manufacturing processes--Computer simulation
 Format
 Document (PDF)
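A minimal genetic-algorithm loop of the kind the thesis applies, shown on a toy bit-counting objective rather than the AGV flow-path or job-shop problems it actually solves:

```python
# Selection, one-point crossover, and bitwise mutation on a toy objective.
import random

random.seed(3)

def fitness(bits):
    return sum(bits)                       # toy objective: maximize the number of 1s

def evolve(pop_size=20, length=16, generations=40, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # truncation selection keeps the elites
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[: pop_size // 2], 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]      # one-point crossover
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            survivors.append(child)
        pop = survivors
    return max(pop, key=fitness)

best = evolve()
```

Real production-research uses of this loop differ mainly in the chromosome encoding and the fitness function (e.g., makespan for job-shop scheduling).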
 Title
 A general pressure-based Navier-Stokes solver in arbitrary configurations.
 Creator
 Ke, Zhao Ping., Florida Atlantic University, Chow, Wen L., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
 Abstract/Description

A pressure-based computer program for the general Navier-Stokes equations has been developed. A body-fitted coordinate system is employed to handle flows with complex geometry. A nonstaggered grid is used, while the pressure oscillation is eliminated by a special pressure interpolation scheme. The hybrid algorithm is adopted to discretize the equations, and the finite-difference equations are solved by TDMA, while the whole solution is obtained through an under-relaxed iterative process. The pressure field is evaluated using the compressible form of the SIMPLE algorithm. To test the accuracy and efficiency of the computer program, problems of incompressible and compressible flow are calculated. As examples of inviscid compressible flow problems, flows over a bump with 10% and 4% thickness are computed with incoming Mach numbers of M[infinity] = 0.5 (subsonic flow), M[infinity] = 0.675 (transonic flow), and M[infinity] = 1.65 (supersonic flow). One laminar subsonic flow over a bump with 5% thickness at M[infinity] = 0.5 is also calculated with consideration of the full energy equation. With the help of the k-epsilon model incorporating the wall function, computations of two turbulent incompressible flows are carried out: one is the flow past a flat plate and the other the flow over a flame holder. As an application to three-dimensional flow, a laminar flow in a driven cubic cavity is calculated. All the numerical results obtained here are compared with experimental data or other numerical results available in the literature.
 Date Issued
 1993
 PURL
 http://purl.flvc.org/fcla/dt/12330
 Subject Headings
 Navier-Stokes equations--Numerical solutions--Data processing, Algorithms, Flows (Differential dynamical systems)
 Format
 Document (PDF)
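The TDMA mentioned in the abstract above is the Thomas algorithm for tridiagonal linear systems, the standard line solver in pressure-based CFD codes. A standalone sketch, independent of the solver's actual discretization:

```python
# Thomas algorithm (TDMA) for a tridiagonal system.
def tdma(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# System [[2,1,0],[1,3,1],[0,1,2]] @ x = [3,5,3] has solution x = [1,1,1]
x = tdma([0.0, 1.0, 1.0], [2.0, 3.0, 2.0], [1.0, 1.0, 0.0], [3.0, 5.0, 3.0])
```

TDMA solves such systems in O(n) operations, which is why line-by-line sweeps of it are attractive inside an under-relaxed iterative process like the one described.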
 Title
 Information-theoretics based analysis of hard handoffs in mobile communications.
 Creator
 Bendett, Raymond Morris., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) use of information-theoretics based statistical distance measures as a metric for hard handoff decisions; (2) a study to evaluate the log-likelihood criterion in decision considerations for performing the hard handoff; (3) development of a statistical model to evaluate optimum instants of measurement of the metric used for the hard handoff decision. The aforesaid objectives refer to a practical scenario in which a mobile station (MS) traveling away from a serving base station (BS-I) may suffer communications impairment due to interference and shadowing effects, especially in an urban environment. As a result, it will seek to switch over to another base station (BS-II) that provides a stronger signal level. This is called the handoff procedure. (Hard handoff refers to the specific case in which only one base station serves the mobile at the instant of handover.) Classically, the handoff decision is made on the basis of the difference between the received signal strengths (RSS) from BS-I and BS-II. The algorithms developed here, in contrast, stipulate a decision criterion set by the statistical divergence and/or log-likelihood ratio that exists between the received signals. The purpose of the present study is to evaluate the relative efficacy of the conventional and proposed algorithms in reference to: (i) minimization of unnecessary handoffs ("ping-pongs"); (ii) minimization of delay in handing over; (iii) ease of implementation; and (iv) minimization of possible call dropouts due to ineffective handover. Simulated results with data commensurate with practical considerations are furnished and discussed. Background literature is presented in the introductory chapter, and scope for future work is identified via open questions in the concluding chapter.
 Date Issued
 2000
 PURL
 http://purl.flvc.org/fcla/dt/12639
 Subject Headings
 Mobile communication systems, Information theory, Algorithms
 Format
 Document (PDF)
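The conventional RSS-difference decision that the dissertation uses as its baseline, usually combined with a hysteresis margin to suppress ping-pongs, can be sketched with made-up signal traces (the dBm values and margin below are illustrative only, not the dissertation's data):

```python
# Classical hard handoff: switch when the candidate RSS exceeds the serving
# RSS by a hysteresis margin.
def hard_handoff_decision(rss_serving, rss_candidate, hysteresis_db=4.0):
    return rss_candidate - rss_serving > hysteresis_db

# MS moving away from BS-I toward BS-II (toy dBm traces)
rss_bs1 = [-60, -65, -72, -80, -88]
rss_bs2 = [-90, -82, -75, -70, -64]
handoff_index = next(i for i, (s, c) in enumerate(zip(rss_bs1, rss_bs2))
                     if hard_handoff_decision(s, c))
```

The dissertation's contribution replaces the simple RSS difference with divergence and log-likelihood-ratio metrics computed between the two received signals; the decision structure around the metric stays the same.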
 Title
 Machine Learning Algorithms with Big Medicare Fraud Data.
 Creator
 Bauder, Richard Andrew, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

Healthcare is an integral component of people's lives, especially for the rising elderly population, and must be affordable. The United States Medicare program is vital in serving the needs of the elderly. The growing number of people enrolled in the Medicare program, along with the enormous volume of money involved, increases the appeal for, and risk of, fraudulent activities. For many real-world applications, including Medicare fraud, the interesting observations tend to be less frequent than the normative observations. This difference between the normal observations and the observations of interest can create highly imbalanced datasets. The problem of class imbalance, including the classification of rare cases indicating extreme class imbalance, is an important and well-studied area in machine learning. Research on the effects of class imbalance with big data in the real-world Medicare fraud application domain, however, is limited. In particular, the impact of detecting fraud in Medicare claims is critical in lessening the financial and personal impacts of these transgressions. Fortunately, the healthcare domain is one area where the successful detection of fraud can garner meaningful positive results. The application of machine learning techniques, plus methods to mitigate the adverse effects of class imbalance and rarity, can be used to detect fraud and lessen the impacts for all Medicare beneficiaries. This dissertation presents the application of machine learning approaches to detect Medicare provider claims fraud in the United States. We discuss novel techniques to process three big Medicare datasets and create a new, combined dataset, which includes mapping fraud labels associated with known excluded providers. We investigate the ability of machine learning techniques, unsupervised and supervised, to detect Medicare claims fraud and leverage data sampling methods to lessen the impact of class imbalance and increase fraud detection performance. Additionally, we extend the study of class imbalance to assess the impacts of rare cases in big data for Medicare fraud detection.
 Date Issued
 2018
 PURL
 http://purl.flvc.org/fau/fd/FA00013108
 Subject Headings
 Medicare fraud, Big data, Machine learning, Algorithms
 Format
 Document (PDF)
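The data-sampling idea mentioned in the abstract above can be illustrated with random undersampling of a synthetic 1%-positive label set (not the Medicare data; the 1:1 target ratio is one common choice, not necessarily the dissertation's):

```python
# Random undersampling of the majority class to balance a rare-positive dataset.
import random

random.seed(4)
labels = [0] * 990 + [1] * 10            # 1% positives, mimicking rare fraud
majority = [i for i, y in enumerate(labels) if y == 0]
minority = [i for i, y in enumerate(labels) if y == 1]

# Keep all minority examples; sample the majority down to the same size
sampled_majority = random.sample(majority, len(minority))
balanced = sorted(sampled_majority + minority)
pos_ratio = len(minority) / len(balanced)   # 0.5 after balancing
```

Undersampling discards majority data, which is cheap at big-data scale; oversampling the minority class is the complementary approach.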
 Title
 NEIGHBORING NEAR MINIMUM-TIME CONTROLS WITH DISCONTINUITIES AND THE APPLICATION TO THE CONTROL OF MANIPULATORS (PATH-PLANNING, TRACKING, FEEDBACK).
 Creator
 Zhuang, Hanqi, Florida Atlantic University, Hamano, Fumio, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

This thesis presents several algorithms to treat the problem of closed-loop near minimum-time control with discontinuities. First, a neighboring control algorithm is developed to solve the problem in which controls are bounded by constant constraints. Secondly, the scheme is extended to account for state-dependent control constraints. Finally, a path-tracking algorithm for robotic manipulators is presented, which is also a neighboring control algorithm. These algorithms are suitable for real-time control because the online computations involved are relatively simple. Simulation results show that these algorithms work well despite the fact that the prescribed final points cannot be reached exactly.
 Date Issued
 1986
 PURL
 http://purl.flvc.org/fcla/dt/14326
 Subject Headings
 Manipulators (Mechanism), Control theory, Algorithms
 Format
 Document (PDF)
 Title
 Optimal coordination of robotic systems with redundancy.
 Creator
 Varma, K. R. Hareendra., Florida Atlantic University, Huang, Ming Z., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
 Abstract/Description

The research work described in this dissertation is primarily aimed at developing efficient algorithms for the rate allocation problem in redundant serial-chain manipulators. While the problem of redundancy resolution in the context of robot manipulators has been well researched, the search for optimality in computational efficiency has caught attention only recently. Further, the idea of modifying already developed performance criteria to improve computational efficiency has rarely been treated with the importance it deserves. The present work in fact provides many alternative formulations to the existing performance criteria. As a result of the present investigation, we developed a mathematical tool for the minimum-norm solution of underdetermined systems of linear equations, using the orthogonal null space. Closed-form equations were provided for cases with two or three degrees of redundancy. A detailed study of computational efficiency showed a substantial reduction in the arithmetic operations necessary for such a solution. The above concept was later generalized to utilize the self-motion characteristics of redundant manipulators to provide alternate solutions. The duality concept between the Jacobian and the null space, established in this work, enabled the authors to develop a highly efficient formulation as an alternative to the commonly used pseudoinverse-based solution. In addition, through the example of a 7R anthropomorphic arm, the feasibility of obtaining an analytical formulation of the null-space coefficient matrix and the transformed end-effector velocity vector for any geometry has been demonstrated. By utilizing the duality between the Jacobian and its null space, different performance criteria commonly used in the redundancy resolution problem have been modified, increasing the computational efficiency.
Various simulations performed as part of the present work, utilizing the analytical null-space coefficient matrix and the transformed end-effector velocity vector for the 3R planar case and the 7R spatial anthropomorphic arm, corroborate the theories. Another practical application has been demonstrated by the example of a Titan 7F arm mounted on a mobile base. The work is consolidated by reiterating the insight obtained into the physical aspects of the redundancy resolution problem and providing a direction for future work. Suggestions are given for extending the work to high-d.o.r. systems, with relevant mathematical foundations. Future work in the area of dynamic modelling is delineated, including an example of a modified dynamic manipulability measure.
 Date Issued
 1992
 PURL
 http://purl.flvc.org/fcla/dt/12292
 Subject Headings
 Algorithms, Redundancy (Engineering), Robotics, Robots--Motion
 Format
 Document (PDF)
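The baseline the abstract compares against, the pseudoinverse-based minimum-norm solution of the underdetermined rate equation J q̇ = ẋ, can be sketched directly. The 2x3 Jacobian below is hypothetical; the dissertation's contribution is a more efficient null-space formulation than this:

```python
# Minimum-norm joint-rate allocation for a redundant manipulator: with
# more joints than task coordinates, J * qdot = xdot is underdetermined,
# and qdot = J^T (J J^T)^-1 xdot is the solution of least norm.
# Hypothetical 2-task-DOF, 3-joint (one degree of redundancy) example.

def minimum_norm_rates(J, xdot):
    """Pseudoinverse solution for a 2x3 Jacobian via an explicit 2x2 inverse."""
    # G = J J^T, a 2x2 Gram matrix
    g11 = sum(a * a for a in J[0])
    g12 = sum(a * b for a, b in zip(J[0], J[1]))
    g22 = sum(b * b for b in J[1])
    det = g11 * g22 - g12 * g12          # assumes J has full row rank
    # y = G^-1 xdot
    y1 = ( g22 * xdot[0] - g12 * xdot[1]) / det
    y2 = (-g12 * xdot[0] + g11 * xdot[1]) / det
    # qdot = J^T y
    return [J[0][i] * y1 + J[1][i] * y2 for i in range(3)]

J = [[1.0, 0.5, 0.2],
     [0.0, 1.0, 0.5]]
xdot = [1.0, 0.0]
qdot = minimum_norm_rates(J, xdot)
# the allocated rates must reproduce the commanded task velocity
residual = [sum(J[r][i] * qdot[i] for i in range(3)) - xdot[r] for r in range(2)]
print(max(abs(e) for e in residual) < 1e-9)   # prints True
```

Any vector in the null space of J can be added to `qdot` without disturbing the task velocity; exploiting that self-motion efficiently is what the duality-based formulations in the abstract address.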
 Title
 PALSAM INPUT DATA FILE GENERATOR.
 Creator
 ROBINSON, WILLIAM ROBERT, JR., Florida Atlantic University, Marcovitz, Alan B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The capabilities and limitations of Programmable Array Logic devices (PALs) are presented and compared to other logic devices. PALs are field-programmable devices, and a program called PALSAM exists to assist the designer in programming PALs. The attributes and limitations of PALSAM are discussed. The PALSAM Input Data File Generator program was written to eliminate many of the limitations of PALSAM. The need for an algorithmic method of reducing a general logic expression to a minimal sum-of-products form is demonstrated. Several algorithms are discussed. The Zissos, Duncan and Jones algorithm, which claims to produce a minimal sum-of-products expression but is presented without proof by its authors, is disproved by example. A modification of this algorithm is presented without proof. When tested in the 276 possible cases involving up to three variables, this new algorithm always produced a minimal sum-of-products expression, while the original algorithm failed in six of these cases. Finally, the PALSAM Input Data File Generator program, which uses the modified algorithm, is presented and documented.
 Date Issued
 1984
 PURL
 http://purl.flvc.org/fcla/dt/14199
 Subject Headings
 Programmable array logic, Microprocessors--Programming, Algorithms
 Format
 Document (PDF)
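For three variables, the kind of case-by-case minimality check the abstract describes is small enough to do by brute force. A sketch of such a checker (a generic exhaustive search, not the Zissos-Duncan-Jones heuristic or its modification):

```python
# Exhaustive minimal sum-of-products search for a 3-variable Boolean
# function: enumerate every product term (each variable absent, true, or
# complemented), keep those that are implicants of f, and try covers of
# increasing size.  Brute force guarantees minimality, so it can serve
# as the reference when testing a minimization heuristic case by case.
from itertools import combinations, product

def minimal_sop(onset, nvars=3):
    """onset: set of minterm indices where the function is 1."""
    terms = []
    for lits in product((None, 0, 1), repeat=nvars):
        # minterms matched by this product term (None = variable absent)
        covered = {m for m in range(2 ** nvars)
                   if all(v is None or (m >> i) & 1 == v
                          for i, v in enumerate(lits))}
        if covered <= onset:                 # product term implies f
            terms.append((lits, covered))
    for k in range(1, len(terms) + 1):       # smallest cover first
        for combo in combinations(terms, k):
            if set().union(*(c for _, c in combo)) == onset:
                return [lits for lits, _ in combo]
    return []                                # f is identically 0

# majority(a, b, c) has minterms {3, 5, 6, 7}; minimal SOP is ab + ac + bc
sop = minimal_sop({3, 5, 6, 7})
print(len(sop))   # prints 3
```

Running a heuristic over all functions and comparing its term count against this reference is exactly how a counterexample like the six failing cases mentioned above would be found.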
 Title
 Perceptual methods for video coding.
 Creator
 Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work towards an optimal video compression algorithm based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%.
 Date Issued
 2014
 PURL
 http://purl.flvc.org/fau/fd/FA00004074
 Subject Headings
 Algorithms, Coding theory, Digital coding--Data processing, Imaging systems--Image quality, Perception, Video processing--Data processing
 Format
 Document (PDF)
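The masking-driven bit allocation the abstract describes can be caricatured as a quantization-parameter (QP) offset rule: spend fewer bits where the visual system is less sensitive. The thresholds and offsets below are entirely hypothetical stand-ins, not the dissertation's model:

```python
# Toy perceptual bit-allocation heuristic: raise QP (coarser
# quantization, fewer bits) for high-motion content (motion masking)
# and for frames just before a scene cut (backward temporal masking).
# All numeric thresholds/offsets are illustrative assumptions.

def perceptual_qp(base_qp, motion_magnitude, frames_to_scene_cut):
    """Return an adjusted QP for one block, clamped to the H.265 range."""
    qp = base_qp
    if motion_magnitude > 8.0:        # fast motion hides coding noise
        qp += 4
    elif motion_magnitude > 4.0:
        qp += 2
    if frames_to_scene_cut is not None and frames_to_scene_cut <= 2:
        qp += 3                       # the coming cut masks this frame
    return min(qp, 51)                # H.265 QP range is 0..51

print(perceptual_qp(30, 10.0, 1))    # masked, high-motion block -> 37
```

In a real encoder such offsets would be validated exactly as the abstract says: subjective tests confirming the result remains visually lossless while bitrate drops.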
 Title
 PIREN(copyright): A heuristic algorithm for standard cell placement.
 Creator
 Horvath, Elizabeth Iren., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

The placement problem is an important part of the design process of VLSI chips. It is necessary to have a proper placement so that all connections between modules in a chip can be routed in a minimum area without violating any physical or electrical constraints. Current algorithms either do not give optimum solutions, are computationally slow, or are difficult to parallelize. PIREN(copyright) is a parallel implementation of a force-directed algorithm which seeks to overcome the large amount of computer time associated with solving the placement problem. Each active processor in the massively parallel SIMD machine, the MasPar MP2.2, can perform in parallel the computation necessary to place cells in an optimum location relative to one another based upon the connectivity between cells. This is due to a salient feature of the serial algorithm which allows multiple permutations to be made simultaneously on all modules in order to minimize the objective function. The serial implementation of PIREN(copyright) compares favorably in both run time and layout quality to the simulated-annealing-based algorithm TimberWolf3.2(copyright). The parallel implementation on the MP2.2 has a speedup of 4.5 to 58.0 over the serial version of PIREN(copyright) running on the VAX 6320, while producing layouts for several MCNC benchmarks which are of the same quality as those produced by the serial implementation.
 Date Issued
 1992
 PURL
 http://purl.flvc.org/fcla/dt/12301
 Subject Headings
 Integrated circuits--Very large scale integration, Algorithms
 Format
 Document (PDF)
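The force-directed idea that PIREN(copyright) parallelizes can be sketched serially: each movable cell is pulled toward the centroid of the cells it connects to, which reduces quadratic wirelength. The netlist below is hypothetical and this is not the PIREN code:

```python
# Toy force-directed placement step: move every movable cell to the
# centroid of its connected cells (the position minimizing the sum of
# squared wire lengths to them).  PIREN performs such relocations for
# all cells simultaneously, one per SIMD processor.

def force_directed_step(pos, nets):
    """Move every movable cell to the centroid of its connected cells."""
    new_pos = dict(pos)
    for cell, neighbors in nets.items():
        xs = [pos[n][0] for n in neighbors]
        ys = [pos[n][1] for n in neighbors]
        new_pos[cell] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return new_pos

def quadratic_wirelength(pos, nets):
    """Sum of squared distances over all (cell, neighbor) connections."""
    return sum((pos[c][0] - pos[n][0]) ** 2 + (pos[c][1] - pos[n][1]) ** 2
               for c, nbrs in nets.items() for n in nbrs)

# fixed pads a, c at opposite corners; movable cells b, d wired to both
pos = {"a": (0.0, 0.0), "b": (4.0, 0.0), "c": (4.0, 4.0), "d": (0.0, 4.0)}
nets = {"b": ["a", "c"], "d": ["a", "c"]}
before = quadratic_wirelength(pos, nets)
pos = force_directed_step(pos, nets)
after = quadratic_wirelength(pos, nets)
print(before, "->", after)   # wirelength halves in this example
```

Note that both movable cells land on the same point here; a real placer must also spread overlapping cells apart, which is part of why practical force-directed algorithms are more involved than this sketch.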
 Title
 Two-dimensional feature tracking algorithm for motion analysis.
 Creator
 Krishnan, Srivatsan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

In this thesis we describe a local-neighborhood, pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked, or the relative motion between the camera and the 3D objects. The features to be tracked are selected by the algorithm, and they correspond to the peaks of a '2D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence to be analyzed. Any kind of motion, i.e., 6-DOF (translation and rotation), can be tolerated, keeping in mind the pixels-per-frame motion limitations. No subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm accepts a slow, continuous change of the brightness D.C. level in the pixels of the feature. Another important aspect of the algorithm is the use of an adaptive feature-matching threshold that accounts for change in the relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test the accuracy of the tracking, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and compute the time-to-contact using real image sequences of unstructured, unknown environments. In both applications, information from multiple frames is used.
 Date Issued
 1994
 PURL
 http://purl.flvc.org/fcla/dt/15030
 Subject Headings
 Algorithms, Image transmission, Motion perception (Vision), Image processing
 Format
 Document (PDF)
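The core matching operation behind correlation-based trackers like the one described above is finding where a small template from frame t best matches inside a search window of frame t+1. A minimal sketch with a toy image (illustrative of the matching step only, not the thesis's adaptive-threshold tracker):

```python
# Template matching by sum of squared differences (SSD): search a small
# neighborhood around a feature's previous location for the offset that
# best matches the feature's template.  Toy grayscale image as nested
# lists; a correlation surface would be the negated/normalized SSD map.

def match_template(image, template, top, left, radius):
    """Search a (2*radius+1)^2 neighborhood of (top, left) for the
    displacement (dr, dc) minimizing the SSD to the template."""
    th, tw = len(template), len(template[0])
    best = None
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r0, c0 = top + dr, left + dc
            ssd = sum((image[r0 + i][c0 + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best[0]:
                best = (ssd, dr, dc)
    return best[1], best[2]          # displacement of the feature

# 8x8 frame t+1 with a bright 2x2 blob that moved one pixel down-right
frame = [[0] * 8 for _ in range(8)]
for r, c in [(4, 4), (4, 5), (5, 4), (5, 5)]:
    frame[r][c] = 9
template = [[9, 9], [9, 9]]          # blob as seen at (3, 3) in frame t
dr, dc = match_template(frame, template, 3, 3, 2)
print(dr, dc)   # the feature moved by (+1, +1)
```

Predictive tracking, as in the abstract, would center the next search window at the position extrapolated from previous displacements rather than at the old feature location.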
 Title
 A VLSI-implementable thinning algorithm.
 Creator
 Zhang, Wei, Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
 Abstract/Description

Thinning is a very important step in a character recognition system. This thesis evolves a thinning algorithm that can be implemented in hardware to improve the speed of the process. The software thinning algorithm features a simple set of rules that can be applied to both hexagonal and orthogonal character images. The hardware architecture features a SIMD structure, simple processing elements, and near-neighbor communications. The algorithm was simulated against the U.S. Postal Service Character Database. The architecture, evolved with consideration of both the software constraints and the physical layout limitations, was simulated using the VHDL hardware description language. Subsequent to VLSI design and simulations, the chip was fabricated. The project provides a feasibility study in utilizing a parallel processor architecture for the implementation of a parallel image thinning algorithm. It is hoped that such a hardware implementation will speed up the processing and lead eventually to a real-time system.
 Date Issued
 1992
 PURL
 http://purl.flvc.org/fcla/dt/14837
 Subject Headings
 Optical character recognition devices--Computer simulation, Algorithms, Integrated circuits--Very large scale integration
 Format
 Document (PDF)
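Rule-based thinning of the kind the abstract describes peels boundary pixels whose 3x3 neighborhood satisfies simple conditions until a one-pixel-wide skeleton remains. As an illustration, here is the classic Zhang-Suen two-subiteration scheme on an orthogonal grid (a well-known stand-in, not the rule set developed in the thesis):

```python
# Zhang-Suen thinning: two alternating subiterations delete a boundary
# pixel when it has 2..6 foreground neighbors, exactly one 0->1
# transition around it, and satisfies a directional condition; each
# pixel's test uses only its 3x3 neighborhood, which is what makes the
# rules amenable to SIMD near-neighbor hardware.

def thin(img):
    """img: list of lists of 0/1; returns a thinned copy (borders untouched)."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for r in range(1, h - 1):
                for c in range(1, w - 1):
                    if img[r][c] == 0:
                        continue
                    # neighbors P2..P9, clockwise from the pixel above
                    p = [img[r-1][c], img[r-1][c+1], img[r][c+1],
                         img[r+1][c+1], img[r+1][c], img[r+1][c-1],
                         img[r][c-1], img[r-1][c-1]]
                    b = sum(p)                         # foreground neighbors
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))         # 0->1 transitions
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((r, c))
            for r, c in to_clear:                      # delete after the pass
                img[r][c] = 0
            changed = changed or bool(to_clear)
    return img

# a solid 3-wide vertical bar thins toward a 1-pixel-wide stroke
bar = [[0] * 7 for _ in range(7)]
for r in range(1, 6):
    for c in range(2, 5):
        bar[r][c] = 1
skeleton = thin(bar)
print(sum(map(sum, skeleton)), "pixels remain of", 15)
```

Because each pixel's decision depends only on its eight neighbors, one processing element per pixel with near-neighbor links can evaluate a whole subiteration in a single parallel step, which is the speedup argument the abstract makes for the hardware implementation.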