Current Search: Algorithms
- Title
- Alopex for handwritten digit recognition: Algorithmic verifications.
- Creator
- Martin, Gregory A., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Alopex is a biologically influenced computation paradigm that uses a stochastic procedure to find the global optimum of linear and nonlinear functions. It maps to a hierarchical SIMD (Single-Instruction-Multiple-Data) architecture with simple neuronal processing elements (PEs); therefore, the large number of interconnects required in other types of neural networks is avoided, and chip-level and board-level "real estate" is utilized more efficiently. In this study, verifications were performed on the use of a simplified Alopex algorithm in handwritten digit recognition, with the intent that the verified algorithm be digitally implementable. The inputs to the simulated Alopex hardware are a set of 32 features extracted from the input characters. Although the goal of verifying the algorithm was not achieved, a firm direction for future studies has been established, and a flexible software model for these future studies is available.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14842
- Subject Headings
- Algorithms--Data processing, Stochastic processes
- Format
- Document (PDF)
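As a rough illustration of the correlation-driven stochastic update that the Alopex abstract describes, here is a minimal sketch. All names, parameter values, and the cost function are illustrative assumptions, not the thesis's simplified hardware algorithm:

```python
import math
import random

def alopex_minimize(f, x0, steps=4000, delta=0.01, temp=0.001):
    # Alopex-style stochastic minimization sketch: every parameter takes a
    # fixed +/-delta step; the step direction is biased by the correlation
    # between that parameter's previous move and the previous change in cost.
    x_prev = list(x0)
    x = [xi + random.choice((-delta, delta)) for xi in x0]
    e_prev, e = f(x_prev), f(x)
    best, best_e = list(x0), e_prev
    for _ in range(steps):
        de = e - e_prev
        x_next = []
        for xi, xpi in zip(x, x_prev):
            c = (xi - xpi) * de                    # move/cost correlation
            c = max(-50.0, min(50.0, c / temp))    # clamp for exp()
            p_up = 1.0 / (1.0 + math.exp(c))       # P(step = +delta)
            x_next.append(xi + (delta if random.random() < p_up else -delta))
        x_prev, x, e_prev = x, x_next, e
        e = f(x)
        if e < best_e:
            best, best_e = list(x), e
    return best, best_e
```

Because every parameter uses only its own move and the single global cost change, the update maps naturally onto the simple per-parameter processing elements the abstract mentions.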
- Title
- Derivation and identification of linearly parametrized robot manipulator dynamic models.
- Creator
- Xu, Hua., Florida Atlantic University, Roth, Zvi S., Zilouchian, Ali, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The dissertation focuses on robot manipulator dynamic modeling and on the identification of inertial and kinematic parameters. An automatic symbolic algorithm for deriving the dynamic parameters is presented. This algorithm provides the linearly independent dynamic parameter set. It is shown that all the dynamic parameters are identifiable when the trajectory is persistently exciting. The parameter set satisfies the necessary condition for finding a persistently exciting trajectory. Since in practice the system data matrix is corrupted with noise, conventional estimation methods do not converge to the true values. An error bound is given for Kalman filters. The total least squares method is introduced to obtain unbiased estimates. Simulation studies are presented for five particular identification methods, performed under different noise levels. Observability problems for the inertial and kinematic parameters are investigated. Under certain conditions, all linearly independent parameters derived are observable. The inertial and kinematic parameters can be categorized into three parts according to their influence on the system dynamics. The dissertation gives an algorithm to classify these parameters.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12291
- Subject Headings
- Algorithms, Manipulators (Mechanism), Robots--Control systems
- Format
- Document (PDF)
- Title
- Complexity metrics in parallel computing.
- Creator
- Larrondo-Petrie, Maria M., Florida Atlantic University, Fernandez, Eduardo B., Coulter, Neal S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Accompanying the potential increase in power offered by parallel computers is an increase in the complexity of program design, implementation, testing and maintenance. It is important to understand the logical complexity of parallel programs in order to support the development of concurrent software. Measures are needed to quantify the components of parallel software complexity and to establish a basis for comparison and analysis of parallel algorithms at various stages of development and implementation. A set of primitive complexity measures is proposed that collectively describe the total complexity of parallel programs. The total complexity is separated into four dimensions or components: requirements, sequential, parallel and communication. Each proposed primitive measure is classified under one of these four areas. Two additional possible dimensions, fault-tolerance and real-time, are discussed. The total complexity measure is expressed as a vector of dimensions; each component is defined as a vector of primitive metrics. The method of quantifying each primitive metric is explained in detail. Those primitive metrics that contribute to the parallel and communications complexity are exercised against ten published summation algorithms and programs, illustrating that architecture has a significant effect on the complexity of parallel programs--even if the same programming language is used. The memory organization and the processor interconnection scheme had no effect on the parallel component, but did affect the communication component. Programming style and language did not have a noticeable effect on either component. The proposed metrics are quantifiable, consistent, and useful in comparing parallel algorithms. Unlike existing parallel metrics, they are general and applicable to different languages, architectures, algorithms, paradigms, programming styles and stages of software development.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12296
- Subject Headings
- Parallel programming (Computer Science), Computer algorithms
- Format
- Document (PDF)
- Title
- Cryptanalysis of small private key RSA.
- Creator
- Guild, Jeffrey Kirk, Florida Atlantic University, Klingler, Lee
- Abstract/Description
- RSA cryptosystems with decryption exponent d less than N^0.292, for a given RSA modulus N, are vulnerable to an attack that utilizes modular polynomials and the LLL basis reduction algorithm. This result, presented by Dan Boneh and Glenn Durfee in 1999, is an improvement on the bound of N^0.25 established by Wiener in 1990. This thesis examines in detail the LLL basis reduction algorithm and the attack on RSA as presented by Boneh and Durfee.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15730
- Subject Headings
- Cryptography, Algorithms, Data encryption (Computer science)
- Format
- Document (PDF)
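For context on the bounds this abstract cites, the earlier Wiener attack (the N^0.25 result) is short enough to sketch: when d is small, the fraction k/d appears among the continued-fraction convergents of e/N. This is an illustrative sketch of Wiener's 1990 attack only; the Boneh-Durfee lattice attack that reaches N^0.292 is substantially more involved and is not shown:

```python
from math import isqrt

def convergents(a, b):
    # Continued-fraction convergents p/q of the rational a/b.
    p0, p1, q0, q1 = 0, 1, 1, 0
    while b:
        t = a // b
        a, b = b, a - t * b
        p0, p1 = p1, t * p1 + p0
        q0, q1 = q1, t * q1 + q0
        yield p1, q1

def wiener_attack(e, n):
    # Try each convergent k/d of e/n as a guess for k/d in e*d - k*phi = 1;
    # a correct guess yields phi, from which p and q are the roots of
    # x^2 - (n - phi + 1)x + n = 0.
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = n - phi + 1                  # candidate p + q
        disc = s * s - 4 * n
        if disc < 0:
            continue
        r = isqrt(disc)
        if r * r == disc and (s + r) % 2 == 0 \
                and ((s + r) // 2) * ((s - r) // 2) == n:
            return d
    return None
```

The toy modulus in the usage below (p = 1009, q = 1013, d = 5) is an assumption chosen so that d falls under Wiener's bound.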
- Title
- Evolution and application of a parallel algorithm for explicit transient finite element analysis on SIMD/MIMD computers.
- Creator
- Das, Partha S., Florida Atlantic University, Case, Robert O., Tsai, Chi-Tay, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- The development of a parallel data structure and an associated elemental decomposition algorithm for explicit finite element analysis on a massively parallel SIMD computer, the DECmpp 12000 (MasPar MP-1), is presented and then extended to a MIMD computer, the Cray T3D. The new parallel data structure and elemental decomposition algorithm are discussed in detail and used to parallelize a sequential Fortran code that applies isoparametric elements to the nonlinear dynamic analysis of shells of revolution. The parallel algorithm required the development of a new procedure, called an 'exchange', which consists of an exchange of nodal forces at each time step to replace the standard gather-assembly operations of the sequential code. In addition, the data were reconfigured so that all nodal variables associated with an element are stored in a processor along with other element data. The architectural and Fortran programming language features of the MasPar MP-1 and Cray T3D computers that are pertinent to finite element computations are also summarized, and sample code segments are provided to illustrate programming in a data parallel environment. The governing equations, the finite element discretization, and a comparison between their implementation on von Neumann and SIMD-MIMD parallel computers are discussed to demonstrate their applicability and the important differences in the new algorithm. Various large-scale transient problems are solved using the parallel data structure and elemental decomposition algorithm, and measured performances are presented and analyzed in detail. Results show that the Cray T3D is a very promising parallel computer for finite element computation: with 32 processors the machine shows an overall speedup of 27-28, i.e., an efficiency of 85% or more, and with 128 processors a speedup of 70-77, i.e., an efficiency of 55% or more.
The Cray T3D results demonstrated that this machine is capable of outperforming the Cray Y-MP by a factor of about 10 for finite element problems with 4K elements; the method of developing the parallel data structure and its associated elemental decomposition algorithm is therefore recommended for implementation in other finite element codes on this machine. However, the results from the MasPar MP-1 show that the new algorithm for explicit finite element computations does not produce very efficient parallel code on that computer, and the new data structure is therefore not recommended for further use on the MasPar machine.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12500
- Subject Headings
- Finite element method, Algorithms, Parallel computers
- Format
- Document (PDF)
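The "gather-assembly" step that the abstract says the 'exchange' procedure replaces can be sketched in a few lines. This is a generic illustration of sequential nodal-force assembly, not the thesis's Fortran code; the element/node layout is an assumption:

```python
def assemble_nodal_forces(elements, n_nodes):
    # Sequential gather-assembly sketch: each element scatters its local
    # force contributions into the global nodal force vector. On the SIMD/
    # MIMD machines the thesis targets, this serial accumulation is replaced
    # by a processor-local 'exchange' of nodal forces at each time step.
    F = [0.0] * n_nodes
    for node_ids, local_forces in elements:
        for node, force in zip(node_ids, local_forces):
            F[node] += force          # shared nodes accumulate contributions
    return F
```

Nodes shared by two elements (node 1 below) receive contributions from both, which is exactly the coupling the parallel exchange must reproduce.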
- Title
- Examples of deterministic and Monte Carlo algorithms for cryptographic applications.
- Creator
- McPherson, Joe Cullen, Florida Atlantic University, Hoffman, Frederick
- Abstract/Description
- In this thesis two different types of computer algorithms, deterministic and Monte Carlo, are illustrated. Implementations of the Berlekamp-Massey algorithm and the Parallelized Pollard Rho Search are described. What these two algorithms provide to the field of cryptography, and why they have proven important to it, are briefly discussed. It is also shown that, with a little extra knowledge, the Parallelized Pollard Rho Search may easily be modified to improve its performance.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12687
- Subject Headings
- Monte Carlo method, Computer algorithms, Cryptography
- Format
- Document (PDF)
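The Monte Carlo building block behind a parallelized Pollard rho search can be shown in its basic sequential form. This is the classic factoring variant with Floyd cycle detection, given only as an illustration; the thesis's parallelized search (which distributes many such pseudorandom walks) is not reproduced here:

```python
from math import gcd

def pollard_rho(n, c=1):
    # Pollard rho factoring sketch: iterate x -> x^2 + c (mod n) with a
    # tortoise (one step) and a hare (two steps); a nontrivial gcd of their
    # difference with n exposes a factor. Monte Carlo: it may fail for a
    # given c (returning None), in which case one retries with another c.
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None
```

On the textbook example n = 8051 = 83 x 97 the walk with c = 1 finds the factor 97 after three iterations.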
- Title
- Optimal planning of robot calibration experiments by genetic algorithms.
- Creator
- Huang, Weizhen., Florida Atlantic University, Wu, Jie
- Abstract/Description
- In this thesis work, techniques developed in the science of genetic computing are applied to the problem of planning a robot calibration experiment. Robot calibration is a process by which robot accuracy is enhanced through modification of the robot's control software. The selection of robot measurement configurations is an important element in successfully completing a robot calibration experiment. A classical genetic algorithm is first customized for a type of robot measurement configuration selection problem in which the robot workspace constraints are defined in terms of robot joint limits. The genetic parameters are tuned in a systematic way to greatly enhance the performance of the algorithm. A recruit-oriented genetic algorithm is then proposed, together with new genetic operators, and examples are given to illustrate the concepts of this new genetic algorithm. This new algorithm is aimed at solving another type of configuration selection problem, in which not all points in the robot workspace are measurable by an external measuring device. Extensive simulation studies are conducted for both the classical and the recruit-oriented genetic algorithms to examine their effectiveness.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15186
- Subject Headings
- Genetic algorithms, Robots--Calibration, Combinatorial optimization
- Format
- Document (PDF)
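The "classical genetic algorithm" the abstract starts from can be sketched for a binary configuration-selection chromosome. Everything below is a bare-bones illustration under assumed operators (truncation selection, one-point crossover, bit-flip mutation); the thesis's tuned parameters and recruit-oriented operators are not reproduced:

```python
import random

def ga_select_configs(fitness, n_genes, pop=20, gens=40, pm=0.1):
    # Minimal binary GA sketch: a chromosome marks which candidate
    # measurement configurations are selected; `fitness` scores a chromosome.
    popu = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness, reverse=True)
        parents = popu[: pop // 2]           # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < pm) for g in child]  # mutation
            children.append(child)
        popu = parents + children
    return max(popu, key=fitness)
```

With a toy fitness that simply counts selected configurations (OneMax), the GA drives chromosomes toward all-ones.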
- Title
- Optimal coordination of robotic systems with redundancy.
- Creator
- Varma, K. R. Hareendra., Florida Atlantic University, Huang, Ming Z., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- The research work described in this dissertation is primarily aimed at developing efficient algorithms for the rate allocation problem in redundant serial chain manipulators. While the problem of redundancy resolution in the context of robot manipulators has been well researched, the search for optimality in computational efficiency has caught attention only recently. Further, the idea of modifying already developed performance criteria to improve computational efficiency has rarely been treated with the importance it deserves. The present work, in fact, provides many alternative formulations of the existing performance criteria. As a result of the present investigation, we developed a mathematical tool for the minimum norm solution of underdetermined systems of linear equations, using the orthogonal null space. Closed form equations are provided for cases with two or three degrees of redundancy. A detailed study of computational efficiency showed a substantial reduction in the arithmetic operations necessary for such a solution. The above concept was later generalized to utilize the self-motion characteristics of redundant manipulators to provide alternate solutions. The duality between the Jacobian and the null space, established in this work, enabled the development of a highly efficient formulation as an alternative to the commonly used pseudoinverse-based solution. In addition, the example of a 7R anthropomorphic arm demonstrates the feasibility of obtaining an analytical formulation of the null space coefficient matrix and the transformed end effector velocity vector for any geometry. By utilizing the duality between the Jacobian and its null space, different performance criteria commonly used in the redundancy resolution problem have been modified, increasing the computational efficiency.
Various simulations performed as part of the present work, utilizing the analytical null space coefficient matrix and the transformed end effector velocity vector for the 3R planar case and the 7R spatial anthropomorphic arm, corroborate the theories. Another practical application is demonstrated by the example of a Titan 7F arm mounted on a mobile base. The work is consolidated by reiterating the insight obtained into the physical aspects of the redundancy resolution problem and providing a direction for future work. Suggestions are given for extending the work to high d.o.r. systems, with relevant mathematical foundations. Future work in the area of dynamic modelling is delineated, including an example of a modified dynamic manipulability measure.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12292
- Subject Headings
- Algorithms, Redundancy (Engineering), Robotics, Robots--Motion
- Format
- Document (PDF)
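The minimum-norm solution of an underdetermined linear system, which this abstract builds on, has a one-line closed form in the simplest case. The sketch below handles a single scalar task constraint a . x = b (one Jacobian row, n joints) as an illustration; the dissertation's closed forms for two and three degrees of redundancy are more general:

```python
def min_norm_rates(a, b):
    # Minimum-norm joint-rate solution of the underdetermined constraint
    # a . x = b: x = a * b / (a . a). This is the pseudoinverse solution
    # specialized to one task row -- any component of x orthogonal to `a`
    # would only add to the norm without helping satisfy the constraint.
    s = sum(ai * ai for ai in a)
    return [ai * b / s for ai in a]
```

For a = (1, 2, 2) and b = 9, the solution (1, 2, 2) satisfies the constraint with the smallest possible joint-rate norm.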
- Title
- PALSAM INPUT DATA FILE GENERATOR.
- Creator
- ROBINSON, WILLIAM ROBERT, JR., Florida Atlantic University, Marcovitz, Alan B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The capabilities and limitations of Programmable Array Logic devices (PALs) are presented and compared to other logic devices. PALs are field programmable devices and a program called PALSAM exists to assist the designer in programming PALs. The attributes and limitations of PALSAM are discussed. The PALSAM Input Data File Generator program was written to eliminate many of the limitations of PALSAM. The need for an algorithmic method of reducing a general logic expression to a minimal sum-of-products form is demonstrated. Several algorithms are discussed. The Zissos, Duncan and Jones Algorithm, which claims to produce a minimal sum-of-products expression but is presented without proof by its authors, is disproved by example. A modification of this algorithm is presented without proof. When tested in the 276 possible cases involving up to three variables, this new algorithm always produced a minimal sum-of-products expression, while the original algorithm failed in six of these cases. Finally, the PALSAM Input Data File Generator program which uses the modified algorithm is presented and documented.
- Date Issued
- 1984
- PURL
- http://purl.flvc.org/fcla/dt/14199
- Subject Headings
- Programmable array logic, Microprocessors--Programming, Algorithms
- Format
- Document (PDF)
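One elementary step in sum-of-products reduction of the kind this abstract discusses is the absorption law X + XY = X. The sketch below shows only that single step on product terms represented as literal sets; it is an illustration, not the Zissos, Duncan and Jones algorithm or the thesis's modification of it:

```python
def absorb(terms):
    # Absorption sketch (X + XY = X): drop any product term whose literal
    # set strictly contains another term's literal set. Terms are iterables
    # of literals, e.g. ['A', "B'"] for the product A.B'.
    sets = [frozenset(t) for t in terms]
    return [t for t in sets if not any(s < t for s in sets)]
```

For A + AB + BC, the term AB is absorbed by A, leaving A + BC.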
- Title
- A connected dominating-set-based routing in ad hoc wireless networks.
- Creator
- Gao, Ming., Florida Atlantic University, Wu, Jie
- Abstract/Description
- In ad hoc wireless networks, routing protocols are challenged with establishing and maintaining multihop routes in the face of mobility, bandwidth limitation and power constraints. Routing based on a connected dominating set is a promising approach, where the search space for a route is reduced to the nodes in the set. A set is dominating if all the nodes in the system are either in the dominating set or adjacent to some node in the dominating set. In this thesis, we propose a method of calculating a power-aware connected dominating set. Our simulation results show that the proposed approach outperforms several existing approaches in terms of the life span of the network. We also discuss mobility management in dominating-set-based networks. Three operations are considered: mobile host switch-on, mobile host switch-off and mobile host movement. We also discuss the use of dynamic source routing as an application of the connected dominating set.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12780
- Subject Headings
- Mobile computing, Computer networks, Computer algorithms
- Format
- Document (PDF)
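The defining property in this abstract (every node is in the set or adjacent to a member, and the set itself is connected) is easy to check directly. This sketch verifies the property only; the thesis's contribution, choosing a power-aware such set, is not shown:

```python
def is_connected_dominating(adj, dom):
    # adj maps each node to its neighbor set; dom is a non-empty candidate set.
    # Domination: every node is in dom or has a neighbor in dom.
    if not all(v in dom or adj[v] & dom for v in adj):
        return False
    # Connectivity: the subgraph induced by dom is reachable from one member.
    seen, stack = set(), [next(iter(dom))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] & dom)
    return seen == set(dom)
```

On the path 0-1-2-3, {1, 2} is a connected dominating set, while {0, 3} dominates every node but is not connected.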
- Title
- Information-theoretics based genetic algorithm: Application to Hopfield's associative memory model of neural networks.
- Creator
- Arredondo, Tomas Vidal., Florida Atlantic University, Neelakanta, Perambur S.
- Abstract/Description
- This thesis addresses the use of information-theoretic techniques in optimizing an artificial neural network (ANN) via a genetic selection algorithm. The studies emulate relevant experiments on a test ANN (based on Hopfield's associative memory model) in which the optimization is tried with different sets of control parameters. These parameters include a new entity based on the concept of entropy as conceived in the field of information theory: the mutual entropy (Shannon entropy), or information-distance (Kullback-Leibler-Jensen distance), measure between a pair of candidates is considered in the reproduction process of the genetic algorithm (GA) and adopted as a selection-constraint parameter. The research further includes a comparative analysis of the test results, which indicates the importance of proper parameter selection in realizing optimal network performance. It also demonstrates the ability of the concepts proposed here to support a new neural network approach to pattern recognition problems.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15397
- Subject Headings
- Neural networks (Computer science), Genetic algorithms
- Format
- Document (PDF)
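The information-distance measure this abstract folds into GA reproduction can be illustrated with the discrete Kullback-Leibler divergence between two candidate statistics. The function below is a generic sketch of that measure; how the thesis actually thresholds it as a selection constraint is not reproduced:

```python
import math

def kl_distance(p, q):
    # Discrete Kullback-Leibler divergence D(p || q) between two probability
    # distributions; zero iff p == q, growing as the candidates' statistics
    # diverge -- the kind of information distance usable as a GA
    # selection-constraint parameter.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Two identical candidates score zero, so a minimum-distance constraint would keep near-duplicates from mating.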
- Title
- Information-theoretics based analysis of hard handoffs in mobile communications.
- Creator
- Bendett, Raymond Morris., Florida Atlantic University, Neelakanta, Perambur S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) use of information-theoretics based statistical distance measures as a metric for hard handoff decisions; (2) a study evaluating the log-likelihood criterion in the decision to perform the hard handoff; (3) development of a statistical model to evaluate optimum instants of measurement of the metric used for the hard handoff decision. The aforesaid objectives refer to a practical scenario in which a mobile station (MS) traveling away from a serving base station (BS-I) may suffer communications impairment due to interference and shadowing effects, especially in an urban environment. As a result, it will seek to switch over to another base station (BS-II) that provides a stronger signal level. This is called the handoff procedure. (Hard handoff refers to the specific case in which only one base station serves the mobile at the instant of handover.) Classically, the handoff decision is made on the basis of the difference between the received signal strengths (RSS) from BS-I and BS-II. The algorithms developed here, in contrast, stipulate a decision criterion set by the statistical divergence and/or the log-likelihood ratio between the received signals. The purpose of the present study is to evaluate the relative efficacy of the conventional and proposed algorithms in reference to: (i) minimization of unnecessary handoffs ("ping-pongs"); (ii) minimization of delay in handing over; (iii) ease of implementation; and (iv) minimization of possible call dropouts due to ineffective handover. Simulated results with data commensurate with practical considerations are furnished and discussed.
Background literature is presented in the introductory chapter, and scope for future work is identified via open questions in the concluding chapter.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12639
- Subject Headings
- Mobile communication systems, Information theory, Algorithms
- Format
- Document (PDF)
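A log-likelihood-ratio handoff decision of the kind this abstract mentions can be sketched under a simple Gaussian RSS model. The model (known means for BS-I and BS-II, common noise sigma) is an illustrative assumption, not the dissertation's metric:

```python
def llr_handoff(samples, mu_bs1, mu_bs2, sigma):
    # Log-likelihood-ratio sketch: given RSS samples (dBm), compare the
    # Gaussian hypotheses "signal matches serving BS-I (mean mu_bs1)" versus
    # "signal matches candidate BS-II (mean mu_bs2)". The per-sample LLR is
    # log L2/L1 = ((x - mu_bs1)^2 - (x - mu_bs2)^2) / (2 sigma^2).
    llr = sum(((x - mu_bs1) ** 2 - (x - mu_bs2) ** 2) / (2 * sigma ** 2)
              for x in samples)
    return llr > 0          # True => hand off to BS-II
```

Accumulating the ratio over several samples, rather than comparing one RSS difference, is what damps the "ping-pong" handoffs the abstract lists as a criterion.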
- Title
- Fault-tolerant routing in two-dimensional and three-dimensional meshes.
- Creator
- Chen, Xiao., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Mesh-connected multicomputers are one of the simplest and least expensive structures from which to build a system using hundreds and even thousands of processors. The nodes communicate with each other by sending and receiving messages. As the system gets larger, it requires the routing algorithms to be not only efficient but also fault-tolerant. The fault model we use in 2-D meshes is a faulty block, while in 3-D meshes the fault model is a faulty cube. In order to route messages through feasible minimum paths, the extended safety level is used to determine the existence of a minimal path, and faulty block (cube) information is used to guide the routing. This dissertation presents an in-depth study of fault-tolerant minimal routing in 2-D tori and 3-D meshes, and of tree-based fault-tolerant multicasting in 2-D and 3-D meshes, using extended safety levels. Path-based fault-tolerant deadlock-free multicasting in 2-D and 3-D meshes is also studied. In fault-tolerant minimal routing in 2-D meshes, any adaptive minimal routing can be used until the message encounters a faulty block. The next step is guided by the faulty block information until the message gets away from the faulty block; after that, any minimal adaptive routing can be used again. Minimal routing in 2-D tori is similar to that in 2-D meshes if, at the beginning of the routing, a conversion is made from a 2-D torus to a 2-D mesh. Fault-tolerant minimal routing in 3-D meshes can be done in a similar way. In tree-based multicasting in 2-D and 3-D meshes, a time-step optimal and traffic-step suboptimal algorithm is proposed. Several heuristic strategies are presented to resolve a conflict, and these are compared by simulations. A path-based fault-tolerant deadlock-free multicast algorithm in 2-D meshes with an inter-block distance of at least three is presented to solve the deadlock problem in tree-based multicast algorithms.
The approach is then extended to 3-D meshes and to an inter-block distance of at least two in 2-D meshes. The path is Hamiltonian and is updated only locally, in the neighborhood of a faulty block, when a faulty block is encountered. Two virtual channels are used to prevent deadlock in 2-D and 3-D meshes with an inter-block (inter-cube) distance of at least three, and two more virtual channels are added if the inter-block distance is at least two.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/12597
- Subject Headings
- Fault-tolerant computing, Computer algorithms
- Format
- Document (PDF)
- Title
- Mobility pattern-based routing algorithm for mobile ad hoc wireless networks.
- Creator
- Vyas, Nirav., Florida Atlantic University, Mahgoub, Imad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis describes routing in mobile ad hoc wireless networks. Ad hoc networks lack wired backbone to maintain routes as mobile hosts move and power is on or off. Therefore, the hosts in ad hoc networks must cooperate with each other to determine routes in a distributed manner. Routing based on a Location is a frequently used approach, where the searching space for a route is reduced to smaller zone by defining request zone and expected zone. We propose a mobility pattern based algorithm...
Show moreThis thesis describes routing in mobile ad hoc wireless networks. Ad hoc networks lack wired backbone to maintain routes as mobile hosts move and power is on or off. Therefore, the hosts in ad hoc networks must cooperate with each other to determine routes in a distributed manner. Routing based on a Location is a frequently used approach, where the searching space for a route is reduced to smaller zone by defining request zone and expected zone. We propose a mobility pattern based algorithm to reduce the overhead, then evaluate the proposed algorithm through simulation. We have implemented two mobility patterns into Location Aided Routing, namely, leading movement and random walk type mobility patterns. We have developed simulation model for each mobility pattern, using SES/Workbench. The performance is measured in terms of overhead of the network. We also discuss various routing algorithms such as dynamic source routing, zone routing protocol, associativity based routing protocol and ad hoc on demand distance vector routing.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12662
- Subject Headings
- Mobile computing, Wireless communication systems, Computer algorithms
- Format
- Document (PDF)
- Title
- Comparison of different realizations and adaptive algorithms for channel equalization.
- Creator
- Kamath, Anuradha K., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents simulation results comparing the performance of different realizations and adaptive algorithms for channel equalization. An attempt is made to study and compare the performance of several filter structures used as equalizers in fast data transmission over the baseband channel. To this end, simulation experiments are performed using minimum-phase and non-minimum-phase channel models; adaptation algorithms such as the least mean squares (LMS) and recursive least squares (RLS) algorithms; filter structures such as lattice and transversal filters; and input signals such as binary phase-shift keyed (BPSK) and quadrature phase-shift keyed (QPSK) signals. Based on the simulation studies, conclusions are drawn regarding the performance of the various adaptation algorithms.
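As a rough illustration of LMS adaptation on a transversal (FIR) equalizer (a generic sketch, not the thesis's simulation code), the tap weights are nudged along the instantaneous error: w ← w + μ·e·x. The toy channel and step size below are illustrative choices.

```python
import random

def lms_equalize(received, desired, taps=5, mu=0.02):
    """Adapt a transversal equalizer with the LMS rule w <- w + mu * e * x,
    training against known symbols; returns final taps and squared errors."""
    w = [0.0] * taps
    errors = []
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1:n + 1][::-1]    # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, x))  # equalizer output
        e = desired[n] - y                        # error vs. training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        errors.append(e * e)
    return w, errors

# Toy minimum-phase channel h = [1.0, 0.4] driven by random BPSK symbols.
random.seed(0)
symbols = [random.choice([-1.0, 1.0]) for _ in range(2000)]
received = [symbols[n] + (0.4 * symbols[n - 1] if n else 0.0)
            for n in range(len(symbols))]
w, errors = lms_equalize(received, symbols)
# Mean squared error over the last 100 samples should be far below the first 100:
print(sum(errors[-100:]) / 100 < sum(errors[:100]) / 100)
```

An RLS version would replace the scalar step μ with a recursively updated inverse correlation matrix, trading extra computation for faster convergence.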
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14974
- Subject Headings
- Computer algorithms, Data transmission systems, Equalizers (Electronics)
- Format
- Document (PDF)
- Title
- MACHINE LEARNING ALGORITHMS FOR THE DETECTION AND ANALYSIS OF WEB ATTACKS.
- Creator
- Zuech, Richard, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
The Internet has provided humanity with many great benefits, but it has also introduced new risks and dangers. E-commerce and other web portals have become large industries with big data. Criminals and other bad actors constantly seek to exploit these web properties through web attacks. Being able to properly detect these web attacks is a crucial component of the overall cybersecurity landscape. Machine learning is one tool that can assist in detecting web attacks. However, properly using machine learning to detect web attacks does not come without its challenges. Classification algorithms can have difficulty with severe levels of class imbalance, which occurs when one class label disproportionately outnumbers another. For example, in cybersecurity it is common for the negative (normal) label to severely outnumber the positive (attack) label. Another difficulty encountered in machine learning is that models can be complex, making it difficult even for subject matter experts to truly understand a model’s detection process. Moreover, it is important for practitioners to determine which input features to include or exclude in their models for optimal detection performance. This dissertation studies machine learning algorithms for detecting web attacks with big data. Severe class imbalance is a common problem in cybersecurity, and mainstream machine learning research does not sufficiently consider it in the context of web attacks. Our research first investigates the problems associated with severe class imbalance and rarity. Rarity is an extreme form of class imbalance in which the positive class has an extremely low instance count, making it difficult for classifiers to discriminate. In reducing imbalance, we demonstrate that random undersampling can effectively mitigate the class imbalance and rarity problems associated with web attacks.
Furthermore, our research introduces a novel feature popularity technique which produces easier-to-understand models by including only a few of the most popular features. Feature popularity granted us new insights into the web attack detection process, even though we had already studied it intensely. Even so, we proceed cautiously in selecting the best input features, as we determined that the “most important” Destination Port feature might be contaminated by lopsided traffic distributions.
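Random undersampling itself is simple to sketch (a generic illustration, not the dissertation's implementation): majority-class instances are discarded at random until the class ratio reaches a chosen target.

```python
import random
from collections import Counter

def random_undersample(X, y, ratio=1.0, seed=42):
    """Randomly discard majority-class instances until the kept majority
    count equals ratio * minority count (ratio=1.0 gives a balanced set)."""
    rng = random.Random(seed)
    counts = Counter(y)
    minority = min(counts, key=counts.get)
    majority = max(counts, key=counts.get)
    keep = int(ratio * counts[minority])
    majority_idx = [i for i, label in enumerate(y) if label == majority]
    kept = set(rng.sample(majority_idx, keep))
    idx = [i for i, label in enumerate(y) if label == minority or i in kept]
    return [X[i] for i in idx], [y[i] for i in idx]

# 990 normal vs. 10 attack instances -> a balanced 10 vs. 10 sample.
X = [[i] for i in range(1000)]
y = [0] * 990 + [1] * 10
Xb, yb = random_undersample(X, y)
print(Counter(yb))  # counts are 10 and 10 after undersampling
```

Because whole majority instances are thrown away, the usual practice is to repeat the sampling several times and average results, which is one reason undersampling experiments are run over many random draws.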
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013823
- Subject Headings
- Machine learning, Computer security, Algorithms, Cybersecurity
- Format
- Document (PDF)
- Title
- COLLECTION AND ANALYSIS OF SLOW DENIAL OF SERVICE ATTACKS USING MACHINE LEARNING ALGORITHMS.
- Creator
- Kemp, Clifford, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Application-layer attacks, from complex rootkits to Denial of Service (DoS) attacks, are becoming an increasingly attractive option for hackers seeking to compromise computer networks. Web and application servers can be shut down by various application-layer DoS attacks, which exhaust CPU or memory resources. The HTTP protocol has become a popular target for launching application-layer DoS attacks. These exploits consume less bandwidth than traditional DoS attacks. Furthermore, this type of DoS attack is hard to detect because its network traffic resembles legitimate network requests. Being able to detect these DoS attacks effectively is a critical component of any robust cybersecurity system. Machine learning can help detect DoS attacks by identifying patterns in network traffic. With machine learning methods, predictive models can automatically detect network threats. This dissertation offers a novel framework for collecting several attack datasets on a live production network, where producing quality, representative data is a requirement. Our approach builds datasets from collected Netflow and Full Packet Capture (FPC) data. We evaluate a wide range of machine learning classifiers, which allows us to analyze slow DoS detection models more thoroughly. To identify attacks, we look at each dataset's unique traffic patterns and distinguishing properties. This research evaluates and investigates appropriate feature selection evaluators and search strategies. Features are assessed for their predictive value and degree of redundancy to build a subset of features; feature subsets with high class correlation but low intercorrelation are favored. Experimental results indicate that Netflow and FPC features are discriminating enough to detect DoS attacks accurately. We conduct a comparative examination of performance metrics to determine the capability of several machine learning classifiers.
Additionally, we improve upon our performance scores by investigating a variety of feature selection optimization strategies. Overall, this dissertation proposes a novel machine learning approach for detecting slow DoS attacks. Our machine learning results demonstrate that a single subset of features trained on Netflow data can effectively detect slow application-layer DoS attacks.
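The stated preference for feature subsets with high class correlation but low intercorrelation is the idea behind correlation-based feature selection (CFS). Below is a minimal sketch of the CFS merit score on illustrative toy data (not the dissertation's datasets or exact evaluator).

```python
import math

def pearson(a, b):
    """Plain Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def cfs_merit(features, labels, subset):
    """CFS-style merit: k * avg|feature-class corr| /
    sqrt(k + k*(k-1) * avg|feature-feature corr|)."""
    k = len(subset)
    rcf = sum(abs(pearson(features[i], labels)) for i in subset) / k
    if k == 1:
        return rcf
    pairs = [(i, j) for i in subset for j in subset if i < j]
    rff = sum(abs(pearson(features[i], features[j])) for i, j in pairs) / len(pairs)
    return k * rcf / math.sqrt(k + k * (k - 1) * rff)

# f0 predicts the label, f1 duplicates f0 (redundant), f2 is noise.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = {0: [1, 2, 1, 2, 8, 9, 8, 9],
            1: [1, 2, 1, 2, 8, 9, 8, 9],
            2: [5, 1, 4, 2, 5, 1, 4, 2]}
# Adding a noisy feature lowers the merit of the subset:
print(cfs_merit(features, labels, [0]) > cfs_merit(features, labels, [0, 2]))
```

A search strategy (greedy forward selection, best-first, and so on) then explores subsets, keeping the one with the highest merit.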
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013848
- Subject Headings
- Machine learning, Algorithms, Denial of service attacks
- Format
- Document (PDF)
- Title
- Routing in mobile ad-hoc wireless networks.
- Creator
- Li, Hailan., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis describes routing in mobile ad hoc wireless networks. Ad hoc networks lack a wired backbone to maintain routes as mobile hosts move and power on or off. Therefore, the hosts in ad hoc networks must cooperate with each other to determine routes in a distributed manner. Routing based on a connected dominating set is a frequently used approach, in which the search space for a route is reduced to the nodes of a small connected-dominating-set subnetwork. We propose a simple and efficient distributed algorithm for calculating a connected dominating set in a given undirected ad hoc network, then evaluate the proposed algorithm through simulation. We also discuss connected dominating set update/recalculation algorithms for when the topology of the ad hoc network changes, and explore the possible extension to a hierarchical connected dominating set. Shortest-path routing and dynamic source routing, both based on the connected-dominating-set subnetwork, are discussed.
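One well-known distributed rule of this kind is the marking process: a node marks itself as a gateway if it has two neighbors that are not directly connected, and on a connected graph the marked nodes form a connected dominating set. Whether this is precisely the thesis's algorithm cannot be confirmed from the abstract alone; the sketch below is a generic illustration.

```python
def marking_process(adj):
    """Mark a node if it has two neighbors that are not directly connected.
    adj maps each node to the set of its neighbors (undirected graph).
    Each node needs only its 2-hop neighborhood, so the rule is distributed."""
    marked = set()
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        if any(nbrs[j] not in adj[nbrs[i]]
               for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))):
            marked.add(v)
    return marked

# A chain 0-1-2-3 plus a pendant node 4 attached to 2:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3, 4}, 3: {2}, 4: {2}}
print(sorted(marking_process(adj)))  # [1, 2] -- only the interior nodes
```

Route searches can then be restricted to marked (gateway) nodes, shrinking the flooding space exactly as the abstract describes.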
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15695
- Subject Headings
- Mobile computing, Computer algorithms, Computer networks
- Format
- Document (PDF)
- Title
- EVALUATING ENVIRONMENTAL VARIABLES THAT INFLUENCE POND DISSOLVED OXYGEN TO INFORM PREDICTION MODEL DEVELOPMENT.
- Creator
- Weber, Ethan W., Wills, Paul S., Florida Atlantic University, Department of Marine Science and Oceanography, Charles E. Schmidt College of Science
- Abstract/Description
-
Pond aquaculture accounts for 65% of global finfish production. A major factor limiting pond aquaculture productivity is fluctuating dissolved oxygen (DO) levels, which are heavily influenced by atmospheric conditions and primary productivity. Being able to predict DO concentrations from measured environmental parameters would help improve the industry’s efficiency. The data collected included pond DO, water temperature, air temperature, atmospheric pressure, wind speed/direction, solar irradiance, rainfall, and pond Chl-a concentrations, as well as water-color images. Pearson’s correlations and stepwise regressions were used to determine the variables’ connection to DO and their potential usefulness for a prediction model. It was determined that sunlight levels play a crucial role in DO fluctuations and crashes because of their influence on pond heating, primary productivity, and pond stratification. It was also found that the image data correlated with certain weather variables and helped improve prediction strength.
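A Pearson correlation screen of candidate predictors against DO can be sketched as follows; the readings below are illustrative placeholders, not the study's measurements.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative hourly readings: DO tracks solar irradiance far more
# closely than wind speed in this made-up example.
do         = [5.1, 5.8, 7.2, 8.9, 9.4, 8.1, 6.5, 5.6]
irradiance = [0, 120, 450, 800, 900, 600, 200, 0]
wind       = [3.1, 2.8, 3.4, 2.9, 3.3, 3.0, 2.7, 3.2]

ranked = sorted([("irradiance", abs(pearson(irradiance, do))),
                 ("wind", abs(pearson(wind, do)))],
                key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # irradiance ranks first
```

Variables that survive such a screen become candidates for the stepwise regression, which adds or drops them one at a time based on how much each improves the fit.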
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014012
- Subject Headings
- Pond aquaculture, Water--Dissolved oxygen, Algorithms
- Format
- Document (PDF)
- Title
- AN ANALYTICAL FRAMEWORK TO ASSESS LIPOSUCTION OUTCOMES.
- Creator
- Patel, Kaivan, Pandya, Abhijit, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Liposuction is a common invasive procedure, performed for both cosmetic and non-cosmetic reasons, and its use in regenerative medicine has been increasing. Its invasive nature can lead to complications that limit patients’ recovery and daily lives. This thesis’s aim is to create an analytical framework to assess the liposuction procedure and its outcomes. The fundamental requirement for creating this framework is a complete understanding of the procedure, which includes preparation and planning, correctly performing the procedure, and ensuring patient safety on day 0 and at weeks 2, 4, and 12 after the procedure. The liposuction outcomes of 54 patients were followed through week 12. Data collection is the first part of the framework, which involves understanding the complex surgical outcomes. Algorithms previously studied for assessing morbidity and mortality were used in this framework to determine whether they can assess liposuction outcomes. The framework employs algorithms such as decision trees, XGBoost, random forests, support vector classifiers, and k-nearest neighbors, along with k-means clustering and k-fold cross-validation. XGBoost performed best at assessing liposuction outcomes without validation; however, after cross-validation, the random forest, support vector machine, and KNN classifiers outperformed XGBoost. This framework makes it possible to assess liposuction outcomes based on the performance of the algorithms. In the future, researchers can use this framework to assess liposuction as well as other surgical outcomes.
- Date Issued
- 2024
- PURL
- http://purl.flvc.org/fau/fd/FA00014429
- Subject Headings
- Liposuction, Outcome assessment (Medical care), Surgery, Algorithms
- Format
- Document (PDF)