Search results: Florida Atlantic University » Algorithms
- Title
- Algorithms in Elliptic Curve Cryptography.
- Creator
- Hutchinson, Aaron, Karabina, Koray, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
Elliptic curves have played a large role in modern cryptography. Most notably, the Elliptic Curve Digital Signature Algorithm (ECDSA) and the Elliptic Curve Diffie-Hellman (ECDH) key exchange algorithm are widely used in practice today for their efficiency and small key sizes. More recently, the Supersingular Isogeny-based Diffie-Hellman (SIDH) algorithm provides a method of exchanging keys which is conjectured to be secure in the post-quantum setting. For ECDSA and ECDH, efficient and secure algorithms for scalar multiplication of points are necessary for modern use of these protocols. Likewise, in SIDH it is necessary to be able to compute an isogeny from a given finite subgroup of an elliptic curve in a fast and secure fashion. We therefore find strong motivation to study and improve the algorithms used in elliptic curve cryptography, and to develop new algorithms to be deployed within these protocols. In this thesis we design and develop d-MUL, a multidimensional scalar multiplication algorithm which is uniform in its operations and generalizes the well-known 1-dimensional Montgomery ladder addition chain (sketched after this record) and the 2-dimensional addition chain due to Dan J. Bernstein. We analyze the construction and derive many optimizations, implement the algorithm in software, and prove many theoretical and practical results. In the final chapter of the thesis we analyze the operations carried out in the construction of an isogeny from a given subgroup, as performed in SIDH. We detail how to efficiently make use of parallel processing when constructing this isogeny.
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013113
- Subject Headings
- Curves, Elliptic, Cryptography, Algorithms
- Format
- Document (PDF)
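The uniform addition chain that d-MUL generalizes is the classical 1-dimensional Montgomery ladder. The sketch below is a generic ladder over an abstract group, not the d-MUL algorithm itself; the integer instantiation in the demo is only a correctness check, since a real deployment would pass in elliptic-curve point addition and doubling.

```python
# A minimal sketch of the 1-dimensional Montgomery ladder that d-MUL
# generalizes. Group operations are passed in as functions; the additive
# group of integers is used here purely to check the ladder's logic.
def montgomery_ladder(k, P, add, double, identity):
    """Compute k*P with one add and one double per bit of k,
    independent of the bit value (a uniform addition chain)."""
    R0, R1 = identity, P
    for bit in bin(k)[2:]:               # scan bits MSB -> LSB
        if bit == '1':
            R0, R1 = add(R0, R1), double(R1)
        else:
            R0, R1 = double(R0), add(R0, R1)
    return R0

if __name__ == "__main__":
    k = 0b1011011
    # Toy instantiation: integers under addition, so k*P == k when P == 1.
    assert montgomery_ladder(k, 1, lambda a, b: a + b, lambda a: 2 * a, 0) == k
    print("ladder reproduces", k)
```

The same add-and-double shape is executed for every bit, which is the uniformity property the abstract emphasizes.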
- Title
- ALGORITHMS IN LATTICE-BASED CRYPTANALYSIS.
- Creator
- Miller, Shaun, Bai, Shi, Florida Atlantic University, Department of Mathematical Sciences, Charles E. Schmidt College of Science
- Abstract/Description
-
An adversary armed with a quantum computer has algorithms [66, 33, 34] at their disposal which are capable of breaking our current methods of encryption. Even with the birth of post-quantum cryptography [52, 62, 61], some of the best cryptanalytic algorithms are still quantum [45, 8]. This thesis contains several experiments on the efficacy of the lattice reduction algorithms BKZ and LLL. In particular, the difficulty of solving Learning With Errors is assessed by reducing the problem to an instance of the Unique Shortest Vector Problem (a sketch of this embedding follows this record). The results are used to predict the behavior these algorithms may have on actual cryptographic schemes whose security is based on hard lattice problems. Lattice reduction algorithms require several floating-point operations, including multiplication. In this thesis, I consider the resource requirements of a quantum circuit designed to simulate floating-point multiplication with high precision.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013543
- Subject Headings
- Cryptanalysis, Cryptography, Algorithms, Lattices, Quantum computing
- Format
- Document (PDF)
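The reduction from Learning With Errors to a Unique Shortest Vector Problem instance mentioned above is commonly done with a primal (Kannan-style) embedding. The sketch below is a generic illustration of that construction, not the thesis's experimental setup: the dimensions, modulus, and embedding factor M are arbitrary example values, and a reduction algorithm such as LLL or BKZ would then be run on the rows of B.

```python
# Hedged sketch of the standard primal/Kannan embedding that turns an LWE
# instance (A, b = A s + e mod q) into a uSVP instance: the embedded lattice
# contains the unusually short vector (e, -s, M).
import numpy as np

def kannan_embedding(A, b, q, M=1):
    """Rows of the returned matrix B generate the embedding lattice."""
    m, n = A.shape
    top = np.hstack([q * np.eye(m, dtype=int),
                     np.zeros((m, n + 1), dtype=int)])
    mid = np.hstack([A.T, np.eye(n, dtype=int),
                     np.zeros((n, 1), dtype=int)])
    bot = np.hstack([b.reshape(1, -1), np.zeros((1, n), dtype=int),
                     np.array([[M]])])
    return np.vstack([top, mid, bot])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, n, m = 97, 4, 8                       # toy parameters
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3, size=m)          # small error vector
    b = (A @ s + e) % q
    B = kannan_embedding(A, b, q)
    print(B.shape)                           # (m + n + 1, m + n + 1)
```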
- Title
- An Algorithmic Approach to Tran Van Trung's Basic Recursive Construction of t-Designs.
- Creator
- Lopez, Oscar A., Magliveras, Spyros S., Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
It was not until the 20th century that combinatorial design theory was studied as a formal subject. This field has many applications, for example in statistical experimental design, coding theory, authentication codes, and cryptography. Major approaches to the problem of discovering new t-designs rely on (i) the construction of large sets of t-designs, (ii) using prescribed automorphism groups, and (iii) recursive construction methods. In 2017 and 2018, Tran Van Trung introduced new recursive techniques to construct t-(v, k, λ) designs. These methods are purely combinatorial in nature and require using "ingredient" t-designs or resolutions whose parameters satisfy a system of non-linear equations. Even after restricting the range of parameters in this new method, the task is computationally intractable. In this work, we enhance Tran Van Trung's "Basic Construction" with a robust and efficient hybrid computational apparatus which enables us to construct hundreds of thousands of new t-(v, k, λ) designs from previously known ingredient designs. Towards the end of the dissertation we also create a new family of 2-resolutions, which will be infinite if there are infinitely many Sophie Germain primes.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013233
- Subject Headings
- Combinatorial designs and configurations, Algorithms, t-designs
- Format
- Document (PDF)
- Title
- An evaluation of Unsupervised Machine Learning Algorithms for Detecting Fraud and Abuse in the U.S. Medicare Insurance Program.
- Creator
- Da Rosa, Raquel C., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The population of people ages 65 and older has increased since the 1960s and current estimates indicate it will double by 2060. Medicare is a federal health insurance program for people 65 or older in the United States. Medicare claims fraud and abuse is an ongoing issue that wastes a large amount of money every year, resulting in higher health care costs and taxes for everyone. In this study, an empirical evaluation of several unsupervised machine learning approaches is performed which indicates reasonable fraud detection results. We employ two unsupervised machine learning algorithms, Isolation Forest and Unsupervised Random Forest, which have not been previously used for the detection of fraud and abuse on Medicare data. Additionally, we implement three other machine learning methods previously applied on Medicare data: Local Outlier Factor, Autoencoder, and k-Nearest Neighbor. For our dataset, we combine the 2012 to 2015 Medicare provider utilization and payment data and add fraud labels from the List of Excluded Individuals/Entities (LEIE) database. Results show that Local Outlier Factor is the best model to use for Medicare fraud detection. (A short outlier-detection sketch follows this record.)
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013042
- Subject Headings
- Machine learning, Medicare fraud, Algorithms
- Format
- Document (PDF)
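Two of the detectors named above, Isolation Forest and Local Outlier Factor, have standard scikit-learn implementations. The sketch below is a hedged illustration of how such unsupervised detectors are applied to a numeric feature matrix; the synthetic data, contamination rate, and feature count are stand-ins, not the Medicare claims features used in the study.

```python
# Hedged sketch: anomaly flags from Isolation Forest and Local Outlier Factor
# on a synthetic stand-in for a provider feature matrix (-1 = anomaly).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))            # stand-in for provider features
X[:10] += 6.0                             # a few injected anomalies

iso = IsolationForest(contamination=0.01, random_state=0)
iso_flags = iso.fit_predict(X)

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
lof_flags = lof.fit_predict(X)

print("IsolationForest flagged:", int((iso_flags == -1).sum()))
print("LocalOutlierFactor flagged:", int((lof_flags == -1).sum()))
```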
- Title
- Automatic extraction and tracking of eye features from facial image sequences.
- Creator
- Xie, Xangdong., Florida Atlantic University, Sudhakar, Raghavan, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme is different from conventional intrusive eye movement measuring systems and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented in which a new algorithm is proposed for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner using both the gray level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over existing corner detectors is that it improves the reliability of detection and localization by imposing criteria related to both the gray level data and the corner structure. The extraction of eye features is performed by using an improved method of deformable templates which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized value of each energy term, so the weighting schedule of the modified method does not demand any expert knowledge from the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. Based on a geometrical model which describes the characteristics of the eye, the measurement equations are formulated to relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, while taking into account the measurement noise (a minimal Kalman filter sketch follows this record). The small processing time allows this tracking algorithm to be used in real-time applications, and the algorithm is capable of measuring the time profiles of the eye movements, making it suitable for an automated, non-intrusive, and inexpensive system. The issue of compensating for head movements during the tracking of eye movements is also discussed. An appropriate measurement model is established to describe the effects of head movements, and a Kalman filter structure is formulated to carry out the compensation. The whole tracking scheme, which cascades two Kalman filters, is constructed to track the iris movement while compensating for the head movement. The presence of eye blinks is also taken into account, and blink detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, non-intrusive and inexpensive system which provides accurate time profiles of eye movements from video image frames.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12377
- Subject Headings
- Kalman filtering, Eye--Movements, Algorithms, Image processing
- Format
- Document (PDF)
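The recursive estimation stage described above uses a discrete Kalman filter. The sketch below is a minimal constant-velocity Kalman filter for a 2-D point such as an iris center; the state model, noise covariances, and simulated measurements are illustrative assumptions, not the dissertation's measurement equations.

```python
# Minimal discrete Kalman filter sketch: constant-velocity model for a 2-D
# point with noisy position measurements. All matrices are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)             # process noise covariance
R = 2.0 * np.eye(2)              # measurement noise covariance

def kalman_step(x, P, z):
    x_pred = F @ x                               # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                     # update with measurement z
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
true_pos = np.cumsum(rng.normal(0.5, 0.05, size=(50, 2)), axis=0)
for z in true_pos + rng.normal(0, 1.5, size=true_pos.shape):
    x, P = kalman_step(x, P, z)
print("final estimate:", x[:2], "true:", true_pos[-1])
```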
- Title
- Bijections for partition identities.
- Creator
- Lai, Jin-Mei Jeng, Florida Atlantic University, Meyerowitz, Aaron, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
This paper surveys work of the last few years on the construction of bijections for partition identities. We use the more general setting of sieve-equivalent families. Suppose A1, ..., An are subsets of a finite set A and B1, ..., Bn are subsets of a finite set B. Define AS = ∩(i∈S) Ai and BS = ∩(i∈S) Bi for all S ⊆ N = {1, ..., n}. Given explicit bijections fS: AS -> BS for each S ⊆ N, the set A - ∪Ai has the same size as B - ∪Bi. Several authors have given algorithms for producing an explicit bijection between these two sets, and in certain important cases they give the same result. We discuss and compare the algorithms, use graph theory to illustrate them, and provide PASCAL programs for them. (A small inclusion-exclusion sketch follows this record.)
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fau/fd/FADT14826
- Subject Headings
- Algorithms, Partitions (Mathematics), Sieves (Mathematics)
- Format
- Document (PDF)
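The size statement underlying the sieve-equivalent setting, that |A - ∪Ai| = |B - ∪Bi| whenever every |A_S| = |B_S|, is the classical inclusion-exclusion identity. The sketch below only verifies that identity numerically on tiny example sets; it does not construct the explicit bijections that the surveyed algorithms produce.

```python
# Sieve (inclusion-exclusion) count of the complement of a union:
# |A \ (A_1 ∪ ... ∪ A_n)| = sum over S of (-1)^|S| * |A_S|.
from itertools import combinations

def complement_size(A, subsets):
    n = len(subsets)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            AS = set(A)
            for i in S:
                AS &= subsets[i]          # A_S = intersection over i in S
            total += (-1) ** k * len(AS)
    return total

A = set(range(10))
A_subs = [set(range(0, 6)), set(range(4, 9))]
direct = len(A - set().union(*A_subs))
assert complement_size(A, A_subs) == direct
print("complement size:", direct)
```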
- Title
- COLLECTION AND ANALYSIS OF SLOW DENIAL OF SERVICE ATTACKS USING MACHINE LEARNING ALGORITHMS.
- Creator
- Kemp, Clifford, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Application-layer attacks are becoming an increasingly attractive vector against computer networks for hackers. From complex rootkits to Denial of Service (DoS) attacks, hackers look to compromise computer networks. Web and application servers can be shut down by various application-layer DoS attacks, which exhaust CPU or memory resources. The HTTP protocol has become a popular target for launching application-layer DoS attacks. These exploits consume less bandwidth than traditional DoS attacks. Furthermore, this type of DoS attack is hard to detect because its network traffic resembles legitimate network requests. Being able to detect these DoS attacks effectively is a critical component of any robust cybersecurity system. Machine learning can help detect DoS attacks by identifying patterns in network traffic. With machine learning methods, predictive models can automatically detect network threats. This dissertation offers a novel framework for collecting several attack datasets on a live production network, where producing quality representative data is a requirement. Our approach builds datasets from collected Netflow and Full Packet Capture (FPC) data. We evaluate a wide range of machine learning classifiers, which allows us to analyze slow DoS detection models more thoroughly. To identify attacks, we look at each dataset's unique traffic patterns and distinguishing properties. This research evaluates and investigates appropriate feature selection evaluators and search strategies. Features are assessed for their predictive value and degree of redundancy to build a subset of features; feature subsets with high class correlation but low intercorrelation are favored (a small sketch of this criterion follows this record). Experimental results indicate Netflow and FPC features are discriminating enough to detect DoS attacks accurately. We conduct a comparative examination of performance metrics to determine the capability of several machine learning classifiers. Additionally, we improve upon our performance scores by investigating a variety of feature selection optimization strategies. Overall, this dissertation proposes a novel machine learning approach for detecting slow DoS attacks. Our machine learning results demonstrate that a single subset of features trained on Netflow data can effectively detect slow application-layer DoS attacks.
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013848
- Subject Headings
- Machine learning, Algorithms, Denial of service attacks
- Format
- Document (PDF)
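The "high class correlation, low intercorrelation" criterion described above matches the classic correlation-based feature selection (CFS) merit heuristic. The sketch below computes that merit score with plain Pearson correlations; it is an assumption that this is the evaluator intended, and the synthetic features are illustrative only.

```python
# Hedged sketch of the CFS merit score for a candidate feature subset:
# merit = k*mean(|r_cf|) / sqrt(k + k*(k-1)*mean(|r_ff|)).
import numpy as np

def cfs_merit(X, y, feature_idx):
    k = len(feature_idx)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in feature_idx])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for a, i in enumerate(feature_idx)
                    for j in feature_idx[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = np.column_stack([y + rng.normal(0, 1, 500),     # informative
                     y + rng.normal(0, 1, 500),     # informative but redundant
                     rng.normal(0, 1, 500)])        # noise
print(cfs_merit(X, y, [0]), cfs_merit(X, y, [0, 1]), cfs_merit(X, y, [0, 2]))
```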
- Title
- CONTRIBUTIONS TO QUANTUM-SAFE CRYPTOGRAPHY: HYBRID ENCRYPTION AND REDUCING THE T GATE COST OF AES.
- Creator
- Pham, Hai, Steinwandt, Rainer, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
Quantum cryptography offers a wonderful source for current and future research. The idea started in the early 1970s, and it continues to inspire work and development toward a popular goal, large-scale communication networks with strong security guarantees, based on quantum-mechanical properties. Quantum cryptography builds on the idea of exploiting physical properties to establish secure cryptographic operations. A particular quantum-based protocol has gathered interest in recent years for its use of mesoscopic coherent states. The AlphaEta protocol has been designed to exploit properties of coherent states of light to transmit data securely over an optical channel. AlphaEta aims to draw security from the uncertainty of any measurement of the transmitted coherent states due to intrinsic quantum noise. We propose a framework to combine this protocol with classical preprocessing, taking into account error-correction for the optical channel and establishing a strong provable security guarantee. Integrating a state-of-the-art solution for fast authenticated encryption is straightforward, but in this case the security analysis requires heuristic reasoning.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013339
- Subject Headings
- Cryptography, Quantum computing, Algorithms, Mesoscopic coherent states
- Format
- Document (PDF)
- Title
- Cryptanalysis of small private key RSA.
- Creator
- Guild, Jeffrey Kirk, Florida Atlantic University, Klingler, Lee
- Abstract/Description
-
RSA cryptosystems with decryption exponent d less than N^0.292, for a given RSA modulus N, show themselves to be vulnerable to an attack which utilizes modular polynomials and the LLL Basis Reduction Algorithm. This result, presented by Dan Boneh and Glenn Durfee in 1999, is an improvement on the bound of N^0.25 established by Wiener in 1990. This thesis examines in detail the LLL Basis Reduction Algorithm and the attack on RSA as presented by Boneh and Durfee. (A sketch of Wiener's continued-fraction attack follows this record.)
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15730
- Subject Headings
- Cryptography, Algorithms, Data encryption (Computer science)
- Format
- Document (PDF)
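The N^0.25 bound credited to Wiener comes from a continued-fraction attack, which the Boneh-Durfee lattice attack studied in this thesis improves on. The sketch below is a standard illustration of Wiener's attack, not the Boneh-Durfee method; the toy key is generated with an artificially small private exponent so the attack is guaranteed to succeed.

```python
# Hedged sketch of Wiener's attack: convergents of e/N are tested as
# candidates k/d for the relation e*d = 1 + k*phi(N).
def convergents(a, b):
    """Yield continued-fraction convergents (numerator, denominator) of a/b."""
    h1, h2 = 1, 0
    k1, k2 = 0, 1
    while b:
        q = a // b
        h1, h2 = q * h1 + h2, h1
        k1, k2 = q * k1 + k2, k1
        a, b = b, a % b
        yield h1, k1

def wiener_attack(e, N):
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        # sanity-check the candidate d with a round trip on a test message
        if pow(pow(2, e, N), d, N) == 2:
            return d
    return None

if __name__ == "__main__":
    p, q = 65537, 65539                 # small primes, purely for illustration
    N, phi = p * q, (p - 1) * (q - 1)
    d = 47                              # deliberately tiny private exponent
    e = pow(d, -1, phi)                 # modular inverse (Python 3.8+)
    assert wiener_attack(e, N) == d
    print("recovered d =", d)
```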
- Title
- DATA COLLECTION FRAMEWORK AND MACHINE LEARNING ALGORITHMS FOR THE ANALYSIS OF CYBER SECURITY ATTACKS.
- Creator
- Calvert, Chad, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The integrity of network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors of persistent and emerging threats. When collecting modern day network traffic for intrusion detection, substantial amounts of traffic can be collected, much of which consists of relatively few attack instances as compared to normal traffic. This skewed distribution between normal and attack data can lead to high levels of class imbalance. Machine learning techniques can be used to aid in attack detection, but large levels of imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013289
- Subject Headings
- Machine learning, Algorithms, Anomaly detection (Computer security), Intrusion detection systems (Computer security), Big data
- Format
- Document (PDF)
- Title
- Derivation and identification of linearly parametrized robot manipulator dynamic models.
- Creator
- Xu, Hua., Florida Atlantic University, Roth, Zvi S., Zilouchian, Ali, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The dissertation focuses on robot manipulator dynamic modeling, and on the inertial and kinematic parameter identification problem. An automatic dynamic parameter derivation symbolic algorithm is presented. This algorithm provides the linearly independent dynamic parameter set. It is shown that all the dynamic parameters are identifiable when the trajectory is persistently exciting. The parameter set satisfies the necessary condition for finding a persistently exciting trajectory. Since in practice the system data matrix is corrupted with noise, conventional estimation methods do not converge to the true values. An error bound is given for Kalman filters. The total least squares method is introduced to obtain unbiased estimates (a brief sketch follows this record). Simulation studies are presented for five particular identification methods. The simulations are performed under different noise levels. Observability problems for the inertial and kinematic parameters are investigated. Under certain conditions, all of the linearly independent parameters derived are observable. The inertial and kinematic parameters can be categorized into three parts according to their influence on the system dynamics. The dissertation gives an algorithm to classify these parameters.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12291
- Subject Headings
- Algorithms, Manipulators (Mechanism), Robots--Control systems
- Format
- Document (PDF)
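The total least squares estimator mentioned above can be written down from the singular value decomposition of the augmented data matrix [A | b]: the estimate comes from the right singular vector associated with the smallest singular value. The sketch below shows that textbook construction on a synthetic linear model with noise in both the regressors and the observations; it is not the dissertation's identification code.

```python
# Hedged sketch of total least squares via the SVD of [A | b].
import numpy as np

def total_least_squares(A, b):
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                       # right singular vector for smallest sigma
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])            # stand-in "dynamic parameters"
A_clean = rng.normal(size=(200, 3))
b_clean = A_clean @ theta_true
A = A_clean + rng.normal(0, 0.05, A_clean.shape)   # noise in the regressors
b = b_clean + rng.normal(0, 0.05, b_clean.shape)   # noise in the observations
print("TLS estimate:", total_least_squares(A, b))
print("LS  estimate:", np.linalg.lstsq(A, b, rcond=None)[0])
```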
- Title
- Design and modeling of hybrid software fault-tolerant systems.
- Creator
- Zhang, Man-xia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Fault tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault tolerant systems. Probability models based on fault trees are developed for the recovery block (RB), N-version programming (NVP) and hybrid schemes which are combinations of RB and NVP. Two heuristic methods are developed to construct hybrid fault tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault tolerant systems. (A simplified reliability sketch follows this record.)
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14783
- Subject Headings
- Computer software--Reliability, Fault-tolerant computing, Algorithms
- Format
- Document (PDF)
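Under an independence assumption, fault-tree models for NVP and RB schemes reduce to short closed-form reliability expressions. The sketch below illustrates two such simplified expressions (3-version majority voting, and a recovery block with one alternate); the thesis's actual models are richer, and the version reliabilities here are toy numbers.

```python
# Hedged, simplified reliability expressions assuming independent failures.
def nvp3_reliability(rv, r_voter=1.0):
    """3-version programming with majority voting: at least 2 of 3 correct."""
    at_least_two = rv**3 + 3 * rv**2 * (1 - rv)
    return r_voter * at_least_two

def rb2_reliability(r_primary, r_alternate, r_test=1.0):
    """Recovery block with one alternate: primary works, or the acceptance
    test catches its failure and the alternate works."""
    return r_test * (r_primary + (1 - r_primary) * r_alternate)

for r in (0.90, 0.95, 0.99):
    print(f"version reliability {r}: NVP3 {nvp3_reliability(r):.4f}, "
          f"RB2 {rb2_reliability(r, r):.4f}")
```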
- Title
- Deterministic and non-deterministic basis reduction techniques for NTRU lattices.
- Creator
- Socek, Daniel, Florida Atlantic University, Magliveras, Spyros S.
- Abstract/Description
-
Finding the shortest or a "short enough" vector in an integral lattice of substantial dimension is a difficult problem. The problem is not known to be NP-hard, but most people believe it is [7]. The security of the newly proposed NTRU cryptosystem depends solely on this fact. However, by definition NTRU lattices possess a certain symmetry. This suggests that there may be a way of taking advantage of this symmetry to enable a new cryptanalytical approach in combination with existing good lattice reduction algorithms. The aim of this work is to exploit the symmetry inherent in NTRU lattices to design a non-deterministic algorithm for improving basis reduction techniques for NTRU lattices. We show how the non-trivial cyclic automorphism of an NTRU lattice enables further reduction (a small sketch of this symmetry follows this record). Our approach combines recently published versions of the famous LLL algorithm for lattice basis reduction with our automorphism utilization techniques.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12933
- Subject Headings
- Cryptography, Lattice theory, Algorithms
- Format
- Document (PDF)
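The cyclic automorphism referred to above can be made concrete with the standard 2N x 2N NTRU lattice whose row basis is [[I, H], [0, qI]], with H the circulant matrix of the public key h: membership of a vector (a, b) reduces to b ≡ a·H (mod q), and because H is circulant, rotating both halves of a lattice vector yields another lattice vector. The sketch below checks this numerically on toy parameters; it illustrates the symmetry only, not the reduction algorithm developed in the thesis.

```python
# Hedged numerical check: the block-wise cyclic rotation maps NTRU-style
# lattice vectors to lattice vectors. Parameters are toy values.
import numpy as np

def circulant(h):
    return np.array([np.roll(h, i) for i in range(len(h))])

def in_ntru_lattice(v, H, q):
    N = H.shape[0]
    a, b = v[:N], v[N:]
    return np.all((b - a @ H) % q == 0)

def rotate_halves(v):
    N = len(v) // 2
    return np.concatenate([np.roll(v[:N], 1), np.roll(v[N:], 1)])

rng = np.random.default_rng(7)
N, q = 8, 64
h = rng.integers(0, q, size=N)
H = circulant(h)
a = rng.integers(-3, 4, size=N)
k = rng.integers(-3, 4, size=N)
v = np.concatenate([a, (a @ H) % q + q * k])   # a lattice vector by construction
assert in_ntru_lattice(v, H, q)
assert in_ntru_lattice(rotate_halves(v), H, q)
print("cyclic rotation preserves lattice membership")
```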
- Title
- Dosimetry comparison between treatment plans computed with Finite size pencil beam algorithm and Monte Carlo algorithm using InCise™ Multileaf collimator equipped CyberKnife® System.
- Creator
- Galpayage Dona, Kalpani Nisansala Udeni, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Physics
- Abstract/Description
-
Since the release of the CyberKnife Multileaf Collimator (CK-MLC), there has been constant concern about the realistic dose differences computed with its early-available Finite Size Pencil Beam (FSPB) algorithm relative to those computed using industry-accepted algorithms such as the Monte Carlo (MC) dose algorithm. In this study, dose disparities between the FSPB and MC dose calculation algorithms were quantified for selected CK-MLC treatment plans. The dosimetry for the planning target volume (PTV) and major organs at risk (OAR) was compared by calculating normalized percentage deviations (Ndev) between the two algorithms. It is found that the FSPB algorithm overestimates D95 of the PTV when compared with the MC algorithm by an average of 24.0% in detached lung cases and 15.0% in non-detached lung cases, which is attributed to the absence of heterogeneity correction in the FSPB algorithm. Average dose differences are 0.3% in intracranial and 0.9% in pancreas cases. Ndev for the D95 of the PTV ranges from 8.8% to 14.1% for CK-MLC lung treatment plans with small fields (SF ≤ 2x2 cm2). Ndev for OARs ranges from 0.5% to 7.0%. (A short Ndev computation sketch follows this record.)
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013123
- Subject Headings
- Radiosurgery, Radiation dosimetry, Monte Carlo method, Algorithms, Lung Neoplasms--radiotherapy
- Format
- Document (PDF)
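The abstract does not spell out how the normalized percentage deviation (Ndev) is defined, so the sketch below assumes the common convention of normalizing the FSPB-minus-MC difference by the MC value. The dose numbers are made up and serve only to show the comparison.

```python
# Hedged sketch of an FSPB-vs-MC comparison via normalized percentage
# deviation; the D95 values below are hypothetical, not the study's data.
def ndev(d_fspb, d_mc):
    """Normalized percentage deviation of an FSPB dose metric vs. MC."""
    return 100.0 * (d_fspb - d_mc) / d_mc

plans = {                      # hypothetical D95-of-PTV values in Gy
    "detached lung":     (54.0, 43.5),
    "non-detached lung": (52.0, 45.2),
    "intracranial":      (20.1, 20.0),
}
for name, (fspb, mc) in plans.items():
    print(f"{name:18s} Ndev = {ndev(fspb, mc):+.1f}%")
```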
- Title
- Efficient localized broadcast algorithms in mobile ad hoc networks.
- Creator
- Lou, Wei., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The broadcast operation plays a fundamental role in mobile ad hoc networks because of the broadcast nature of radio transmission, i.e., when a sender transmits a packet, all nodes within the sender's transmission range will be affected by this transmission. The benefit of this property is that one packet can be received by all neighbors, while the negative effect is that it interferes with other transmissions. Flooding ensures that the entire network receives the packet but generates many redundant transmissions, which may trigger a serious broadcast storm problem that can collapse the entire network. The broadcast storm problem can be avoided by providing efficient broadcast algorithms that aim to reduce the number of nodes that retransmit the broadcast packet while still guaranteeing that all nodes receive the packet. This dissertation focuses on providing several efficient localized broadcast algorithms to reduce the broadcast redundancy in mobile ad hoc networks. In my dissertation, the efficiency of a broadcast algorithm is measured by the number of forward nodes for relaying a broadcast packet. A classification of broadcast algorithms for mobile ad hoc networks is provided at the beginning. Two neighbor-designating broadcast algorithms, called total dominant pruning and partial dominant pruning, have been proposed to reduce the number of forward nodes (a simplified sketch of the neighbor-designating idea follows this record). Several extensions based on the neighbor-designating approach have also been investigated. The cluster-based broadcast algorithm shows good performance in dense networks, and it also provides a constant upper bound on the approximation ratio to the optimum number of forward nodes in the worst case. A generic broadcast framework with K-hop neighbor information offers a trade-off between the number of forward nodes and the size of the K-hop zone. A reliable broadcast algorithm, called double-covered broadcast, is proposed to improve the delivery ratio of a broadcast packet when the transmission error rate of the network is high. The effectiveness of all these algorithms has been confirmed by simulations.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fau/fd/FADT12103
- Subject Headings
- Wireless LANS, Mobile communication systems, Wireless communication systems--Mathematics, Algorithms
- Format
- Document (PDF)
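The core idea behind neighbor-designating schemes such as dominant pruning is that a sender selects a small subset of its 1-hop neighbors whose neighborhoods cover its 2-hop neighbors, and only those nodes re-forward the packet. The sketch below is the generic greedy set-cover step on a toy topology; it is not the total or partial dominant pruning rules developed in the dissertation.

```python
# Hedged sketch: greedily designate 1-hop forwarders to cover 2-hop neighbors.
def designate_forwarders(sender, adj):
    one_hop = adj[sender]
    two_hop = set().union(*(adj[v] for v in one_hop)) - one_hop - {sender}
    forwarders, uncovered = [], set(two_hop)
    while uncovered:
        # pick the neighbor covering the most still-uncovered 2-hop nodes
        best = max(one_hop, key=lambda v: len(adj[v] & uncovered))
        gain = adj[best] & uncovered
        if not gain:
            break                      # remaining nodes not reachable via 1-hop
        forwarders.append(best)
        uncovered -= gain
    return forwarders

adj = {                                # tiny undirected topology
    "s": {"a", "b", "c"},
    "a": {"s", "d", "e"},
    "b": {"s", "e"},
    "c": {"s", "f"},
    "d": {"a"}, "e": {"a", "b"}, "f": {"c"},
}
print(designate_forwarders("s", adj))  # ['a', 'c'] covers d, e, f
```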
- Title
- Efficient Machine Learning Algorithms for Identifying Risk Factors of Prostate and Breast Cancers among Males and Females.
- Creator
- Rikhtehgaran, Samaneh, Muhammad, Wazir, Florida Atlantic University, Department of Physics, Charles E. Schmidt College of Science
- Abstract/Description
-
One of the most common types of cancer among women is breast cancer. It represents one of the diseases leading to a high number of mortalities among women. On the other hand, prostate cancer is the second most frequent malignancy in men worldwide. The early detection of prostate cancer is fundamental to reduce mortality and increase the survival rate. A comparison between six types of machine learning models, namely Logistic Regression, Decision Tree, Random Forest, Gradient Boosting, k-Nearest Neighbors, and Naïve Bayes, has been performed. This research aims to identify the most efficient machine learning algorithms for identifying the most significant risk factors of prostate and breast cancers. For this reason, the National Health Interview Survey (NHIS) and Prostate, Lung, Colorectal, and Ovarian (PLCO) datasets are used. A comprehensive comparison of risk factors leading to these two crucial cancers can significantly impact early detection and progressive improvement in survival. (A short model-comparison sketch follows this record.)
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013755
- Subject Headings
- Machine learning, Algorithms, Cancer--Risk factors, Breast--Cancer, Prostate--Cancer
- Format
- Document (PDF)
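The six models named above all have standard scikit-learn implementations. The sketch below is a hedged illustration of such a comparison using default hyperparameters and cross-validated AUC on a synthetic stand-in dataset; the NHIS and PLCO survey data used in the thesis are not reproduced here.

```python
# Hedged sketch: cross-validated comparison of the six classifiers named above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           random_state=0)
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree":       DecisionTreeClassifier(random_state=0),
    "Random Forest":       RandomForestClassifier(random_state=0),
    "Gradient Boosting":   GradientBoostingClassifier(random_state=0),
    "k-Nearest Neighbors": KNeighborsClassifier(),
    "Naive Bayes":         GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```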
- Title
- Enhanced Fibonacci Cubes.
- Creator
- Qian, Haifeng., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
We propose the enhanced Fibonacci cube (EFC), which is defined based on the sequence Fn = 2(n-2) + 2F(n-4). We study its topological properties, embeddings, applications, routings, VLSI/WSI implementations, and its extensions. Our results show that EFC retains many properties of the hypercube. It contains the Fibonacci cube (FC) and extended Fibonacci cube of the same order as subgraphs and maintains virtually all the desirable properties of FC. EFC is even better in some structural properties, embeddings, applications and VLSI designs than FC or hypercube. With EFC, there are more cubes with various structures and sizes for selection, and more backup cubes into which faulty hypercubes can be reconfigured, which alleviates the size limitation of the hypercube and results in a higher level of fault tolerance.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15196
- Subject Headings
- Integrated circuits--Very large scale integration, Hypercube networks (Computer networks), Algorithms, Fault-tolerant computing, Multiprocessors
- Format
- Document (PDF)
- Title
- EVALUATING ENVIRONMENTAL VARIABLES THAT INFLUENCE POND DISSOLVED OXYGEN TO INFORM PREDICTION MODEL DEVELOPMENT.
- Creator
- Weber, Ethan W., Wills, Paul S., Florida Atlantic University, Department of Marine Science and Oceanography, Charles E. Schmidt College of Science
- Abstract/Description
-
Pond aquaculture accounts for 65% of global finfish production. A major factor limiting pond aquaculture productivity is fluctuating oxygen levels, which are heavily influenced by atmospheric conditions and primary productivity. Being able to predict DO concentrations from measured environmental parameters would help improve the industry's efficiency. The data collected included pond DO, water temperature, air temperature, atmospheric pressure, wind speed/direction, solar irradiance, rainfall, and pond Chl-a concentrations, as well as water color images. Pearson's correlations and stepwise regressions were used to determine each variable's connection to DO and its potential usefulness for a prediction model (a small screening sketch follows this record). It was determined that sunlight levels play a crucial role in DO fluctuations and crashes because of their influence on pond heating, primary productivity, and pond stratification. It was also found that the image data correlated with certain weather variables and helped improve prediction strength.
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014012
- Subject Headings
- Pond aquaculture, Water--Dissolved oxygen, Algorithms
- Format
- Document (PDF)
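The screening step described above, Pearson correlations of candidate variables against dissolved oxygen followed by stepwise regression, can be illustrated with pandas and statsmodels. The column names, the synthetic relationship, and the p-value threshold in the sketch below are assumptions for illustration, not the study's sensor data or model.

```python
# Hedged sketch: Pearson screening plus forward stepwise OLS selection.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "solar_irradiance": rng.uniform(0, 1000, n),
    "water_temp_c":     rng.uniform(22, 34, n),
    "wind_speed_ms":    rng.uniform(0, 8, n),
})
df["do_mg_l"] = (8 - 0.004 * df.solar_irradiance
                 - 0.1 * (df.water_temp_c - 28)
                 + 0.2 * df.wind_speed_ms + rng.normal(0, 0.5, n))

print(df.corr(method="pearson")["do_mg_l"].drop("do_mg_l"))

def forward_stepwise(df, target, threshold=0.05):
    remaining = [c for c in df.columns if c != target]
    chosen = []
    while remaining:
        pvals = {}
        for c in remaining:
            X = sm.add_constant(df[chosen + [c]])
            pvals[c] = sm.OLS(df[target], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= threshold:
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

print("selected:", forward_stepwise(df, "do_mg_l"))
```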
- Title
- Evolution and application of a parallel algorithm for explicit transient finite element analysis on SIMD/MIMD computers.
- Creator
- Das, Partha S., Florida Atlantic University, Case, Robert O., Tsai, Chi-Tay, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The development of a parallel data structure and an associated elemental decomposition algorithm for explicit finite element analysis on a massively parallel SIMD computer, the DECmpp 12000 (MasPar MP-1), is presented and then extended to an implementation on the MIMD computer, the Cray T3D. The new parallel data structure and elemental decomposition algorithm are discussed in detail and are used to parallelize a sequential Fortran code that deals with the application of isoparametric elements for the nonlinear dynamic analysis of shells of revolution. The parallel algorithm required the development of a new procedure, called an 'exchange', which consists of an exchange of nodal forces at each time step to replace the standard gather-assembly operations in the sequential code (a small illustrative sketch follows this record). In addition, the data was reconfigured so that all nodal variables associated with an element are stored in a processor along with other element data. The architectural and Fortran programming language features of the MasPar MP-1 and Cray T3D computers which are pertinent to finite element computations are also summarized, and sample code segments are provided to illustrate programming in a data parallel environment. The governing equations, the finite element discretization, and a comparison between their implementation on von Neumann and SIMD/MIMD parallel computers are discussed to demonstrate their applicability and the important differences in the new algorithm. Various large-scale transient problems are solved using the parallel data structure and elemental decomposition algorithm, and measured performances are presented and analyzed in detail. Results show that the Cray T3D is a very promising parallel computer for finite element computation. Thirty-two processors of this machine show an overall speedup of 27-28, i.e. an efficiency of 85% or more, and 128 processors show a speedup of 70-77, i.e. an efficiency of 55% or more. The Cray T3D results demonstrate that this machine is capable of outperforming the Cray Y-MP by a factor of about 10 for finite element problems with 4K elements; therefore, the method of developing the parallel data structure and its associated elemental decomposition algorithm is recommended for implementation of other finite element codes on this machine. However, the results from the MasPar MP-1 show that this new algorithm for explicit finite element computations does not produce very efficient parallel code on that computer, and therefore the new data structure is not recommended for further use on the MasPar machine.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12500
- Subject Headings
- Finite element method, Algorithms, Parallel computers
- Format
- Document (PDF)
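The 'exchange' procedure described above replaces a global gather-assembly with an exchange of nodal forces at shared interface nodes. The sketch below is a serial, two-partition toy illustration of that bookkeeping (element force contributions are arbitrary unit values); it is not the MasPar or Cray T3D implementation.

```python
# Hedged toy illustration of exchanging nodal forces at a shared interface
# node instead of performing a global gather-assembly.
import numpy as np

n_nodes = 7
shared = [3]                                    # interface node shared by both parts
elements = {                                    # element -> (node_i, node_j)
    "part0": [(0, 1), (1, 2), (2, 3)],
    "part1": [(3, 4), (4, 5), (5, 6)],
}

def local_assembly(elems, n_nodes):
    """Each element contributes an (arbitrary) unit force to its two nodes."""
    f = np.zeros(n_nodes)
    for a, b in elems:
        f[a] += 1.0
        f[b] += 1.0
    return f

f0 = local_assembly(elements["part0"], n_nodes)
f1 = local_assembly(elements["part1"], n_nodes)

# the "exchange": add the partner's contribution only at shared nodes
for s in shared:
    total = f0[s] + f1[s]
    f0[s] = f1[s] = total

# reference: a sequential global assembly over all elements
f_global = local_assembly(elements["part0"] + elements["part1"], n_nodes)
assert np.allclose(f0[:4], f_global[:4]) and np.allclose(f1[3:], f_global[3:])
print("exchange reproduces global assembly at owned and shared nodes")
```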
- Title
- Evolutionary algorithms for design and control of material handling and manufacturing systems.
- Creator
- Kanwar, Pankaj., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The crucial goal of enhancing industrial productivity has led researchers to look for robust and efficient solutions to problems in production systems. Evolving technologies have also led to an immediate demand for algorithms which can exploit these developments. During the last three decades there has been a growing interest in algorithms which rely on analogies to natural processes. The best known algorithms in this class include evolutionary programming, genetic algorithms, evolution strategies and neural networks. The emergence of massively parallel systems has made these inherently parallel algorithms of high practical interest. The advantages offered by these algorithms over other classical techniques have resulted in their wide acceptance. These algorithms have been applied to a large class of interesting problems for which no efficient or reasonably fast algorithm exists. This thesis extends their usage to the domain of production research. Problems of high practical interest in the domain of production research are solved using a subclass of these algorithms, i.e., those based on the principle of evolution. The problems include the flowpath design of AGV systems and vehicle routing in a transportation system. Furthermore, a Genetic Based Machine Learning (GBML) system has been developed for optimal scheduling and control of a job shop.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15025
- Subject Headings
- Industrial productivity--Data processing, Algorithms, Genetic algorithms, Motor vehicles--Automatic location systems, Materials handling--Computer simulation, Manufacturing processes--Computer simulation
- Format
- Document (PDF)