Current Search: Algorithms
- Title
- A new GMDH type algorithm for the development of neural networks for pattern recognition.
- Creator
- Gilbar, Thomas C., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Researchers from a wide range of fields have discovered the benefits of applying neural networks to pattern recognition problems. Although applications for neural networks have increased, development of tools to design these networks has been slower. There are few comprehensive network development methods. Those that do exist are slow, inefficient, and application specific, require predetermination of the final network structure, and/or result in large, complicated networks. Finding optimal neural networks that balance low network complexity with accuracy is a complicated process that traditional network development procedures are incapable of achieving. Although not originally designed for neural networks, the Group Method of Data Handling (GMDH) has characteristics that are ideal for neural network design. GMDH minimizes the number of required neurons by choosing and keeping only the best neurons and filtering out unneeded inputs. In addition, GMDH develops the neurons and organizes the network simultaneously, saving time and processing power. However, some of the qualities of the network must still be predetermined. This dissertation introduces a new algorithm that applies some of the best characteristics of GMDH to neural network design. The new algorithm is faster, more flexible, and more accurate than traditional network development methods. It is also more dynamic than current GMDH based methods, capable of creating a network that is optimal for an application and training data. Additionally, the new algorithm virtually guarantees that the number of neurons progressively decreases in each succeeding layer. To show its flexibility, speed, and ability to design optimal networks, the algorithm was used to successfully design networks for a wide variety of real applications. The networks developed using the new algorithm were compared to other development methods and network architectures. The new algorithm's networks were more accurate and yet less complicated than the other networks. Additionally, the algorithm designs neurons that are flexible enough to meet the needs of the specific applications, yet similar enough to be implemented using a standardized hardware cell. When combined with the simplified network layout that naturally occurs with the algorithm, this results in networks that can be implemented using Field Programmable Gate Array (FPGA) type devices.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/11994
- Subject Headings
- GMDH algorithms, Neural networks (Computer science), Pattern recognition systems
- Format
- Document (PDF)
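As an aside for readers new to GMDH: the record above builds on the classic GMDH layer-construction step, which a short sketch can make concrete. This is a generic textbook-style GMDH layer, not the dissertation's algorithm; the quadratic neuron form, the validation split, and the `keep` parameter are illustrative assumptions.

```python
import itertools
import numpy as np

def fit_quadratic_neuron(x1, x2, y):
    """Least-squares fit of y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def neuron_output(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH layer: fit a candidate neuron for every input pair, score it on
    held-out data, and keep only the `keep` best; survivors feed the next layer."""
    scored = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        coef = fit_quadratic_neuron(X_train[:, i], X_train[:, j], y_train)
        pred = neuron_output(coef, X_val[:, i], X_val[:, j])
        scored.append((np.mean((pred - y_val) ** 2), i, j, coef))
    scored.sort(key=lambda s: s[0])   # lowest validation error first
    return scored[:keep]
```

Keeping only the best-scoring neurons per layer is what drives the shrinking layer sizes the abstract describes.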
- Title
- A method for the optimization of product development resource allocation.
- Creator
- Worp, Nicholas Jacob., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- This thesis presents a model designed to optimize the allocation of corporate resources required for the success of a product in the marketplace. The product development resources used in the model are: market research, applied research, product design, cost reduction and advertising. The key goals of this thesis are to provide industry with a usable tool: (1) Implement strategic plans through effective budgeting; (2) Optimize both short and long term profits; (3) Evaluate the impact of resource inter-dependencies; (4) Enable accountability that leads to goal achievement and checks unnecessary growth; (5) Remove much of the negative political and emotional variability; (6) Easily adapt to internal and external changes; (7) Output a specific allocation for each resource as a percentage of sales; (8) Output an estimate of future profitability. Genetic Algorithms are particularly well suited for this application because an exact optimum is not required and the search space can be extremely large, complex, and non-linear.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15519
- Subject Headings
- Genetic algorithms, Resource allocation, Strategic planning, Business planning
- Format
- Document (PDF)
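The genetic algorithm machinery behind such a model can be sketched briefly. The profit function below is an invented placeholder (the thesis uses its own short- and long-term profit model with resource interdependencies), and the chromosome encoding as five budget fractions plus all GA parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
RESOURCES = ["market research", "applied research", "product design",
             "cost reduction", "advertising"]

def profit(alloc):
    # Placeholder profit surface with diminishing returns per resource;
    # the thesis would substitute its interdependent short/long-term model.
    weights = np.array([0.8, 0.6, 1.0, 0.7, 0.9])
    return float(np.sum(weights * np.sqrt(alloc)) - 0.5 * np.sum(alloc))

def evolve(pop_size=40, gens=200, budget=0.2):
    # Chromosome: five allocations (fractions of sales) summing to `budget`.
    pop = rng.dirichlet(np.ones(5), pop_size) * budget
    for _ in range(gens):
        fit = np.array([profit(ind) for ind in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]       # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(5) < 0.5, a, b)       # uniform crossover
            child = np.clip(child + rng.normal(0, 0.01, 5), 1e-9, None)  # mutation
            kids.append(child / child.sum() * budget)         # renormalize to budget
        pop = np.vstack([parents, kids])
    return dict(zip(RESOURCES, pop[np.argmax([profit(ind) for ind in pop])]))

print(evolve())  # best allocation found, as fractions of sales
```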
- Title
- Indexed resource auction multiple access (I-RAMA): A new medium access scheme for third generation wireless networks.
- Creator
- Barrantes-Sliesarieva, Elena Gabriela., Florida Atlantic University, Ilyas, Mohammad
- Abstract/Description
- Indexed Resource Auction Multiple Access (I-RAMA), a new medium access protocol for wireless cellular networks based on Resource Auction Multiple Access (RAMA), is presented. I-RAMA relies on variable-length resource auctions, whose length depends on the time it takes the Base Station to uniquely identify the Mobile Station. This identification is done by using dynamic Base Station information about the users present in the cell at any moment. I-RAMA effectively reduces the amount of time spent in the resource auctions without introducing contention or excessive complexity at the Base Station. The effects of introducing data users in the system are investigated using a simulation, and it is shown that I-RAMA guarantees Quality of Service for isochronous users while maintaining a bounded delay for data users at much higher loads than RAMA.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15204
- Subject Headings
- Wireless communication systems, Cellular radio, Mobile communication systems, Computer algorithms
- Format
- Document (PDF)
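The abstract's key idea, that the auction can stop as soon as the Base Station's knowledge of which users are in the cell makes the bidder unique, can be illustrated with a toy prefix computation. This is a reconstruction of the general idea, not the protocol's actual signaling; the 8-bit IDs and the `bits_to_identify` helper are hypothetical.

```python
def bits_to_identify(candidate: str, others: set[str]) -> int:
    """Number of leading ID bits the Base Station must hear before
    `candidate` is the only station in the cell matching the prefix."""
    live = set(others)
    for k in range(1, len(candidate) + 1):
        prefix = candidate[:k]
        live = {s for s in live if s.startswith(prefix)}
        if not live:            # no other station shares this prefix
            return k
    return len(candidate)

# Cell registry kept by the Base Station (hypothetical 8-bit IDs).
in_cell = {"10110010", "10010111", "11100001"}
print(bits_to_identify("10110010", in_cell - {"10110010"}))  # -> 3
```

A fixed full-ID auction in this cell would always cost 8 bits; the variable-length auction stops after 3, which is the saving I-RAMA exploits.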
- Title
- Software reliability engineering: An evolutionary neural network approach.
- Creator
- Hochman, Robert., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
- This thesis presents the results of an empirical investigation of the applicability of genetic algorithms to a real-world problem in software reliability--the fault-prone module identification problem. The solution developed is an effective hybrid of genetic algorithms and neural networks. This approach (ENNs) was found to be superior, in terms of time, effort, and confidence in the optimality of results, to the common practice of searching manually for the best-performing net. Comparisons were made to discriminant analysis. On fault-prone, not-fault-prone, and overall classification, the lower error proportions for ENNs were found to be statistically significant. The robustness of ENNs follows from their superior performance over many data configurations. Given these encouraging results, it is suggested that ENNs have potential value in other software reliability problem domains, where genetic algorithms have been largely ignored. For future research, several plans are outlined for enhancing ENNs with respect to accuracy and applicability.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15474
- Subject Headings
- Neural networks (Computer science), Software engineering, Genetic algorithms
- Format
- Document (PDF)
- Title
- Time-step optimal broadcasting in mesh networks with minimum total communication distance.
- Creator
- Cang, Songluan., Florida Atlantic University, Wu, Jie
- Abstract/Description
- We propose a new minimum total communication distance (TCD) algorithm and an optimal TCD algorithm for broadcast in a 2-dimensional mesh (2-D mesh). The former generates a minimum TCD from a given source node, and the latter guarantees a minimum TCD among all the possible source nodes. These algorithms are based on a divide-and-conquer approach where a 2-D mesh is partitioned into four submeshes of equal size. The source node sends the broadcast message to a special node called an eye in each submesh. The above procedure is then recursively applied in each submesh. These algorithms are extended to a 3-dimensional mesh (3-D mesh), and are generalized to a d-dimensional mesh or torus. In addition, the proposed approach can potentially be used to solve optimization problems in other collective communication operations.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15647
- Subject Headings
- Computer algorithms, Parallel processing (Electronic computers), Computer architecture
- Format
- Document (PDF)
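The divide-and-conquer recursion described above is easy to sketch. Here each submesh "eye" is approximated by the submesh center purely for illustration; the actual algorithms choose eyes so as to minimize total communication distance and avoid the occasional redundant hop this simplification allows.

```python
def broadcast(x0, y0, w, h, src):
    """Messages needed to flood the w x h submesh at origin (x0, y0) from `src`.
    Each quadrant's 'eye' is approximated by its center; the paper's algorithms
    instead pick eyes that minimize the total communication distance."""
    if w == 1 and h == 1:
        return []
    xs = [(x0, w)] if w == 1 else [(x0, w // 2), (x0 + w // 2, w - w // 2)]
    ys = [(y0, h)] if h == 1 else [(y0, h // 2), (y0 + h // 2, h - h // 2)]
    msgs = []
    for qx, qw in xs:
        for qy, qh in ys:
            eye = (qx + qw // 2, qy + qh // 2)
            if eye != src:
                msgs.append((src, eye))             # forward to the quadrant eye
            msgs += broadcast(qx, qy, qw, qh, eye)  # recurse inside the quadrant
    return msgs

print(len(broadcast(0, 0, 4, 4, (0, 0))))  # number of point-to-point sends
```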
- Title
- Statistical physics based heuristic clustering algorithms with an application to econophysics.
- Creator
- Baldwin, Lucia Liliana, Florida Atlantic University, Wille, Luc T.
- Abstract/Description
- Three new approaches to the clustering of data sets are presented. They are heuristic methods and represent forms of unsupervised (non-parametric) clustering. Applied to an unknown set of data, these methods automatically determine the number of clusters and their location using no a priori assumptions. All are based on analogies with different physical phenomena. The first technique, named the Percolation Clustering Algorithm, embodies a novel variation on the nearest-neighbor algorithm focusing on the connectivity between sample points. Exploiting the equivalence with a percolation process, this algorithm considers data points to be surrounded by expanding hyperspheres, which bond when they touch each other. Once a sequence of joined spheres spans an entire cluster, percolation occurs and the cluster size remains constant until it merges with a neighboring cluster. The second procedure, named Nucleation and Growth Clustering, exploits the analogy with the nucleation and growth that occur in island formation during epitaxial growth of solids. The original data points are nucleation centers, around which aggregation will occur. Additional "ad-data" introduced into the sample space interact with the data points and stick if located within a threshold distance. These "ad-data" are used as a tool to facilitate the detection of clusters. The third method, named the Discrete Deposition Clustering Algorithm, constrains deposition to occur on a grid, which has the advantage of computational efficiency as opposed to the continuous deposition used in the previous method. The original data form the vertices of a sparse graph, and the deposition sites are defined to be the midpoints of this graph's edges. Ad-data are introduced on the deposition sites and the system is allowed to evolve in a self-organizing regime. This allows the simulation of a phase transition, and by monitoring the specific heat capacity of the system one can mark out a "natural" criterion for validating the partition. All of these techniques are competitive with existing algorithms and offer possible advantages for certain types of data distributions. A practical application is presented using the Percolation Clustering Algorithm to determine the taxonomy of the Dow Jones Industrial Average portfolio. The statistical properties of the correlation coefficients between DJIA components are studied along with the eigenvalues of the correlation matrix between the DJIA components.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12032
- Subject Headings
- Cluster analysis, Statistical physics, Percolation (Statistical physics), Algorithms
- Format
- Document (PDF)
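A minimal sketch of the percolation picture: two points bond when their spheres of radius r touch (distance at most 2r), and clusters are the connected components of the resulting bond graph. The union-find bookkeeping and the example radii are illustrative choices, not the dissertation's implementation.

```python
import numpy as np
from itertools import combinations

def percolation_clusters(points, radius):
    """Clusters = connected components of the bond graph in which two points
    are linked when their spheres of the given radius touch (distance <= 2r).
    Union-find with path halving keeps the merging cheap."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if np.linalg.norm(points[i] - points[j]) <= 2 * radius:
            parent[find(i)] = find(j)

    labels = [find(i) for i in range(n)]
    return len(set(labels)), labels

# Sweeping the radius and watching where the cluster count plateaus
# mimics the percolation behavior described in the abstract.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
for r in (0.1, 0.3, 1.0):
    print(r, percolation_clusters(pts, r)[0])
```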
- Title
- Two-dimensional feature tracking algorithm for motion analysis.
- Creator
- Krishnan, Srivatsan., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis we describe a local-neighborhood-pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked, or the relative motion between the camera and the 3-D objects. The features to be tracked are selected by the algorithm, and they correspond to the peaks of a '2-D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence to be analyzed. Any kind of motion, i.e., 6 DOF (translation and rotation), can be tolerated, keeping in mind the pixels-per-frame motion limitations. No subpixel computations are necessary. Taking into account constraints of temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm accepts a slow, continuous change of brightness D.C. level in the pixels of the feature. Another important aspect of the algorithm is the use of an adaptive feature matching threshold that accounts for change in relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test the accuracy of the tracking, we show how the algorithm has been used to extract the Focus of Expansion (FOE) and compute the time-to-contact using real image sequences of unstructured, unknown environments. In both these applications, information from multiple frames is used.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15030
- Subject Headings
- Algorithms, Image transmission, Motion perception (Vision), Image processing
- Format
- Document (PDF)
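The core matching step, finding the peak of an intensity correlation surface in a search window, can be sketched as follows. The patch and search-window sizes are arbitrary assumptions, and the adaptive threshold and multi-frame prediction from the abstract are deliberately omitted.

```python
import numpy as np

def track_feature(prev, nxt, y, x, patch=7, search=10):
    """Locate in frame `nxt` the patch centered at interior pixel (y, x) of
    frame `prev` by maximizing normalized cross-correlation over a small
    search window. Returns the best position and its correlation score."""
    p = patch // 2
    tmpl = prev[y - p:y + p + 1, x - p:x + p + 1].astype(float)
    tmpl = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
    best, best_pos = -np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy - p < 0 or xx - p < 0:
                continue  # window would run off the top/left border
            win = nxt[yy - p:yy + p + 1, xx - p:xx + p + 1].astype(float)
            if win.shape != tmpl.shape:
                continue  # window ran off the bottom/right border
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float((tmpl * win).mean())
            if score > best:
                best, best_pos = score, (yy, xx)
    return best_pos, best
```

Normalizing both patches to zero mean and unit variance is what buys the tolerance to slow brightness (D.C. level) drift that the abstract mentions.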
- Title
- Intelligent systems using GMDH algorithms.
- Creator
- Gupta, Mukul., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Design of intelligent systems that can learn from the environment and adapt to changes in the environment has been pursued by many researchers in this age of information technology. The Group Method of Data Handling (GMDH) algorithm implemented here is a multilayered neural network. A neural network consists of neurons that use information acquired in training to deduce relationships in order to predict future responses. Simulating neural-network-based algorithms in software on a sequential, single-processor machine, using languages such as Pascal, C, or C++, takes several hours or even days. In this thesis, the GMDH algorithm was modified, implemented in a software tool written in Verilog HDL, and tested on a specific application (XOR) to make the simulation faster. The purpose of the development of this tool is also to keep it general enough that it can have a wide range of uses, but robust enough that it can give accurate results for all of those uses. Most neural network applications are software simulations of the algorithms only, but in this thesis a hardware design of the algorithm is also developed so that it can be easily implemented on hardware using Field Programmable Gate Array (FPGA) type devices. The design is small enough to require a minimum amount of memory, circuit space, and propagation delay.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2976442
- Subject Headings
- GMDH algorithms, Genetic algorithms, Pattern recognition systems, Expert systems (Computer science), Neural networks (Computer science), Fuzzy logic, Intelligent control systems
- Format
- Document (PDF)
- Title
- Evolutionary algorithms for design and control of material handling and manufacturing systems.
- Creator
- Kanwar, Pankaj., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- The crucial goal of enhancing industrial productivity has led researchers to look for robust and efficient solutions to problems in production systems. Evolving technologies have also led to an immediate demand for algorithms that can exploit these developments. During the last three decades there has been a growing interest in algorithms that rely on analogies to natural processes. The best known algorithms in this class include evolutionary programming, genetic algorithms, evolution strategies and neural networks. The emergence of massively parallel systems has made these inherently parallel algorithms of high practical interest. The advantages offered by these algorithms over other classical techniques have resulted in their wide acceptance. These algorithms have been applied to solve a large class of interesting problems for which no efficient or reasonably fast algorithm exists. This thesis extends their usage to the domain of production research. Problems of high practical interest in the domain of production research are solved using a subclass of these algorithms, i.e., those based on the principle of evolution. The problems include the flowpath design of AGV systems and vehicle routing in a transportation system. Furthermore, a Genetic Based Machine Learning (GBML) system has been developed for optimal scheduling and control of a job shop.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15025
- Subject Headings
- Industrial productivity--Data processing, Algorithms, Genetic algorithms, Motor vehicles--Automatic location systems, Materials handling--Computer simulation, Manufacturing processes--Computer simulation
- Format
- Document (PDF)
- Title
- An evaluation of machine learning algorithms for tweet sentiment analysis.
- Creator
- Prusa, Joseph D., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Sentiment analysis of tweets is an application of mining Twitter, and is growing in popularity as a means of determining public opinion. Machine learning algorithms are used to perform sentiment analysis; however, data quality issues such as high dimensionality, class imbalance or noise may negatively impact classifier performance. Machine learning techniques exist for targeting these problems, but have not been applied to this domain, or have not been studied in detail. In this thesis we discuss research that has been conducted on tweet sentiment classification, its accompanying data concerns, and methods of addressing these concerns. We test the impact of feature selection, data sampling and ensemble techniques in an effort to improve classifier performance. We also evaluate the combination of feature selection and ensemble techniques and examine the effects of high dimensionality when combining multiple types of features. Additionally, we provide strategies and insights for potential avenues of future work.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004460
- Subject Headings
- Social media, Natural language processing (Computer science), Machine learning, Algorithms, Fuzzy expert systems, Artificial intelligence
- Format
- Document (PDF)
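The kinds of techniques the thesis evaluates, high-dimensional text features, feature selection, and ensemble classifiers, compose naturally in a pipeline. This sketch uses scikit-learn with a toy corpus; the particular vectorizer, chi-squared selector, and random forest are stand-ins, not the thesis's experimental setup.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier

# Toy corpus; the thesis works with real tweet datasets.
tweets = ["love this phone", "worst service ever", "great battery life",
          "terrible screen", "happy with the camera", "awful and slow"]
labels = [1, 0, 1, 0, 1, 0]

clf = Pipeline([
    ("bow", CountVectorizer(ngram_range=(1, 2))),  # high-dimensional n-gram features
    ("select", SelectKBest(chi2, k=10)),           # feature selection step
    ("ensemble", RandomForestClassifier(n_estimators=50, random_state=0)),
])
clf.fit(tweets, labels)
print(clf.predict(["slow battery", "great camera"]))
```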
- Title
- DATA COLLECTION FRAMEWORK AND MACHINE LEARNING ALGORITHMS FOR THE ANALYSIS OF CYBER SECURITY ATTACKS.
- Creator
- Calvert, Chad, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The integrity of network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors of persistent and emerging threats. When collecting modern day network traffic for intrusion detection, substantial amounts of traffic can be collected, much of which consists of relatively few attack instances as compared to normal traffic. This skewed distribution between normal and attack data can lead to high levels of class imbalance. Machine learning techniques can be used to aid in attack detection, but large levels of imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013289
- Subject Headings
- Machine learning, Algorithms, Anomaly detection (Computer security), Intrusion detection systems (Computer security), Big data
- Format
- Document (PDF)
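A common baseline remedy for the class imbalance described above is random undersampling of the majority class; a minimal sketch follows, not tied to this dissertation's methodology.

```python
import numpy as np

def random_undersample(X, y, seed=0):
    """Balance a dataset by randomly discarding majority-class (e.g. normal
    traffic) instances until every class matches the minority (attack) count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_keep = counts.min()
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n_keep, replace=False)
                          for c in classes])
    rng.shuffle(idx)
    return X[idx], y[idx]
```

Undersampling trades discarded majority data for a balanced training distribution; oversampling the minority class is the complementary option when data are scarce.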
- Title
- Design of a Test Framework for the Evaluation of Transfer Learning Algorithms.
- Creator
- Weiss, Karl Robert, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A traditional machine learning environment is characterized by the training and testing data being drawn from the same domain, therefore, having similar distribution characteristics. In contrast, a transfer learning environment is characterized by the training data having different distribution characteristics from the testing data. Previous research on transfer learning has focused on the development and evaluation of transfer learning algorithms using real-world datasets. Testing with real-world datasets exposes an algorithm to a limited number of data distribution differences and does not exercise an algorithm's full capability and boundary limitations. In this research, we define, implement, and deploy a transfer learning test framework to test machine learning algorithms. The transfer learning test framework is designed to create a wide range of distribution differences that are typically encountered in a transfer learning environment. By testing with many different distribution differences, an algorithm's strong and weak points can be discovered and evaluated against other algorithms. This research additionally performs case studies that use the transfer learning test framework. The first case study focuses on measuring the impact of exposing algorithms to the Domain Class Imbalance distortion profile. The next case study uses the entire transfer learning test framework to evaluate both transfer learning and traditional machine learning algorithms. The final case study uses the transfer learning test framework in conjunction with real-world datasets to measure the impact of the base traditional learner on the performance of transfer learning algorithms. Two additional experiments are performed that are focused on using unique real-world datasets. The first experiment uses transfer learning techniques to predict fraudulent Medicare claims. The second experiment uses a heterogeneous transfer learning method to predict phishing webpages. These case studies will be of interest to researchers who develop and improve transfer learning algorithms. This research will also be of benefit to machine learning practitioners in the selection of high-performing transfer learning algorithms.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00005925
- Subject Headings
- Dissertations, Academic -- Florida Atlantic University, Machine learning, Algorithms, Machine learning--Development
- Format
- Document (PDF)
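The framework's central move, manufacturing controlled distribution differences between training (source) and testing (target) data, can be illustrated with synthetic data. The Gaussian features, the mean-shift knob, and the minority fraction below are hypothetical parameters, not the dissertation's distortion profiles.

```python
import numpy as np

def make_shifted_domains(n=1000, shift=1.5, minority_frac=0.1, seed=0):
    """Source/target pair exhibiting two dial-in distortions: a covariate
    (mean) shift and domain class imbalance."""
    rng = np.random.default_rng(seed)

    def domain(mu, pos_frac):
        n_pos = int(n * pos_frac)
        X = np.vstack([rng.normal(mu, 1.0, (n - n_pos, 2)),     # class 0
                       rng.normal(mu + 2.0, 1.0, (n_pos, 2))])  # class 1
        y = np.array([0] * (n - n_pos) + [1] * n_pos)
        return X, y

    source = domain(0.0, 0.5)              # balanced source domain
    target = domain(shift, minority_frac)  # shifted, imbalanced target domain
    return source, target
```

Sweeping `shift` and `minority_frac` over grids is the kind of controlled stress test that real-world datasets alone cannot provide.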
- Title
- Dosimetry comparison between treatment plans computed with Finite size pencil beam algorithm and Monte Carlo algorithm using InCise™ Multileaf collimator equipped CyberKnife® System.
- Creator
- Galpayage Dona, Kalpani Nisansala Udeni, Florida Atlantic University, Charles E. Schmidt College of Science, Department of Physics
- Abstract/Description
- Since the release of the CyberKnife Multileaf Collimator (CK-MLC), there has been constant concern about how doses computed with its early-available Finite Size Pencil Beam (FSPB) algorithm differ from those computed with industry-accepted algorithms such as the Monte Carlo (MC) dose algorithm. In this study, dose disparities between the FSPB and MC dose calculation algorithms for selected CK-MLC treatment plans were quantified. The dosimetry for the planning target volume (PTV) and major organs at risk (OAR) was compared by calculating normalized percentage deviations (Ndev) between the two algorithms. It is found that the FSPB algorithm overestimates the D95 of the PTV, compared with the MC algorithm, by an average of 24.0% in detached lung cases and 15.0% in non-detached lung cases, which is attributed to the absence of heterogeneity correction in the FSPB algorithm. Average dose differences are 0.3% in intracranial and 0.9% in pancreas cases. Ndev for the D95 of the PTV ranges from 8.8% to 14.1% for the CK-MLC lung treatment plans with small field (SF ≤ 2x2cm2). Ndev ranges from 0.5% to 7.0% for OARs.
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013123
- Subject Headings
- Radiosurgery, Radiation dosimetry, Monte Carlo method, Algorithms, Lung Neoplasms--radiotherapy
- Format
- Document (PDF)
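The comparison metric can be made concrete under one assumption: that the normalized percentage deviation (Ndev) takes the Monte Carlo dose as the reference value. The D95 numbers in the example are made up for illustration.

```python
def ndev(dose_fspb, dose_mc):
    """Normalized percentage deviation of an FSPB dose metric from the MC
    value, assuming MC is the reference (an assumed definition)."""
    return 100.0 * (dose_fspb - dose_mc) / dose_mc

# Hypothetical D95 values (Gy) for a detached-lung PTV:
print(f"{ndev(54.0, 43.5):+.1f}%")  # FSPB reading roughly 24% above MC
```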
- Title
- Efficient Machine Learning Algorithms for Identifying Risk Factors of Prostate and Breast Cancers among Males and Females.
- Creator
- Rikhtehgaran, Samaneh, Muhammad, Wazir, Florida Atlantic University, Department of Physics, Charles E. Schmidt College of Science
- Abstract/Description
- One of the most common types of cancer among women is breast cancer. It represents one of the diseases leading to a high number of mortalities among women. On the other hand, prostate cancer is the second most frequent malignancy in men worldwide. The early detection of prostate cancer is fundamental to reduce mortality and increase the survival rate. A comparison of six machine learning models (Logistic Regression, Decision Tree, Random Forest, Gradient Boosting, k Nearest Neighbors, and Naïve Bayes) has been performed. This research aims to identify the most efficient machine learning algorithms for identifying the most significant risk factors of prostate and breast cancers. For this reason, the National Health Interview Survey (NHIS) and Prostate, Lung, Colorectal, and Ovarian (PLCO) datasets are used. A comprehensive comparison of risk factors leading to these two crucial cancers can significantly impact early detection and progressive improvement in survival.
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013755
- Subject Headings
- Machine learning, Algorithms, Cancer--Risk factors, Breast--Cancer, Prostate--Cancer
- Format
- Document (PDF)
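A cross-validated comparison of the six named model families is straightforward to sketch with scikit-learn. The built-in breast cancer dataset is only a stand-in for the NHIS and PLCO data, and the default hyperparameters are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the NHIS/PLCO data
models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "k Nearest Neighbors": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name:20s} {scores.mean():.3f} +/- {scores.std():.3f}")
```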
- Title
- Design and implementation of efficient routing protocols in delay tolerant networks.
- Creator
- Liu, Cong., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Delay tolerant networks (DTNs) are occasionally-connected networks that may suffer from frequent partitions. DTNs provide service despite long end-to-end delays or infrequent connectivity. One fundamental problem in DTNs is routing messages from their source to their destination. DTNs differ from the Internet in that disconnections are the norm instead of the exception. Representative DTNs include sensor-based networks using scheduled intermittent connectivity, terrestrial wireless networks that cannot ordinarily maintain end-to-end connectivity, satellite networks with moderate delays and periodic connectivity, underwater acoustic networks with moderate delays and frequent interruptions due to environmental factors, and vehicular networks with cyclic but nondeterministic connectivity. The focus of this dissertation is on routing protocols that send messages in DTNs. When no connected path exists between the source and the destination of the message, other nodes may relay the message to the destination. This dissertation covers routing protocols in DTNs with both deterministic and non-deterministic mobility. In DTNs with deterministic and cyclic mobility, we proposed the first routing protocol that is both scalable and delivery-guaranteed. In DTNs with non-deterministic mobility, numerous heuristic protocols have been proposed to improve routing performance; however, none of them provides theoretical optimality for a particular performance measure. In this dissertation, two routing protocols for non-deterministic DTNs are proposed, which minimize delay and maximize delivery rate in different scenarios, respectively. First, in DTNs with non-deterministic and cyclic mobility, an optimal single-copy forwarding protocol that minimizes delay is proposed. Second, in DTNs with non-deterministic mobility, an optimal multi-copy forwarding protocol is proposed that maximizes delivery rate under the constraint that the number of copies per message is fixed. Simulation evaluations using both real and synthetic traces are conducted to compare the proposed protocols with existing ones.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/210522
- Subject Headings
- Computer network protocols, Computer networks, Reliability, Computer algorithms, Wireless communication systems, Technological innovations
- Format
- Document (PDF)
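The fixed-copy constraint in the multi-copy setting can be illustrated with the hand-off rule of a simple binary-spray scheme. Note this is a generic scheme in the same spirit, not the dissertation's delivery-rate-optimal forwarding rule.

```python
def on_contact(node_copies, peer_has_message):
    """Binary-spray hand-off: a node holding c > 1 copy 'tokens' gives half
    to an encountered node without the message; with c == 1 it waits to
    deliver directly to the destination. The total copy budget per message
    never grows, which is the fixed-copies constraint."""
    if peer_has_message or node_copies <= 1:
        return node_copies, 0
    give = node_copies // 2
    return node_copies - give, give

# A source created with an 8-copy budget meets an uninfected relay:
print(on_contact(8, peer_has_message=False))  # -> (4, 4)
```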
- Title
- A GPU- BASED SIMULATED ANNEALING ALGORITHM FOR INTENSITY-MODULATED RADIATION THERAPY.
- Creator
- Galanakou, Panagiota, Leventouri, Theodora, Florida Atlantic University, Department of Physics, Charles E. Schmidt College of Science
- Abstract/Description
- The Simulated Annealing Algorithm (SAA) has been proposed for optimization of Intensity-Modulated Radiation Therapy (IMRT). Despite the SAA's advantage of being a global optimizer, SAA optimization of IMRT is an extensive computational task due to the large scale of the optimization variables, and therefore it requires significant computational resources. In this research we introduce a parallel graphics processing unit (GPU)-based SAA, developed on the MATLAB platform and compliant with the Computational Environment for Radiotherapy Research (CERR), for IMRT treatment planning, in order to elucidate the performance improvement of the SAA in IMRT optimization. First, we identify the "bottlenecks" of our code, and then we parallelize those on the GPU accordingly. Performance tests were conducted on four different GPU cards in comparison to a serial version of the algorithm executed on a CPU. A gradual increase of the speedup factor as a function of the number of beamlets was found for all four GPUs. A maximum speedup factor of 33.48 was achieved for a prostate case, and 30.51 for a lung cancer case, when the K40m card and the maximum number of beams were utilized for each case. At the same time, the two optimized IMRT plans that were created (prostate and lung cancer plans) met the IMRT optimization goals.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013372
- Subject Headings
- Radiotherapy, Intensity-Modulated, Annealing algorithm, Simulated annealing (Mathematics), Graphics processing units
- Format
- Document (PDF)
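For readers unfamiliar with simulated annealing, the serial kernel that such a GPU port accelerates looks roughly like this; the geometric cooling schedule and parameters are generic assumptions, and the dose-based cost and beamlet-weight moves of the IMRT planner are abstracted behind the `cost` and `neighbor` callables.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.999, steps=100_000):
    """Generic serial SAA kernel. Worse moves are accepted with probability
    exp(-dE/T), which is what lets the optimizer escape local minima; in a
    GPU version, the expensive cost evaluation is what gets parallelized."""
    x, e = x0, cost(x0)
    best, best_e = x, e
    t = t0
    for _ in range(steps):
        cand = neighbor(x)
        ce = cost(cand)
        de = ce - e
        if de < 0 or random.random() < math.exp(-de / t):
            x, e = cand, ce
            if e < best_e:
                best, best_e = x, e
        t *= cooling  # geometric cooling (an assumed schedule)
    return best, best_e
```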
- Title
- A Collision-Free Drone Scheduling System.
- Creator
- Steinberg, Andrew, Cardei, Mihaela, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Today, drones have been receiving a lot of attention from commercial businesses. Businesses (mainly companies that have delivery services) are trying to expand their productivity in order to bring more satisfaction to their loyal customers. One way companies can expand their delivery services is through the use of delivery drones. Drones are very powerful devices whose uses have gone through many evolutionary changes over the years. For many years, researchers in academia have been examining how drones can plan their paths while avoiding collisions with other drones and certain obstacles in the civil airspace. However, researchers have not considered how motion path planning can affect the overall scheduling aspect of civilian drones. In this thesis, we propose an algorithm for collision-free scheduling of the motion paths of a set of drones such that they avoid certain obstacles while maintaining a safety distance from each other.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004994, http://purl.flvc.org/fau/fd/FA00004984
- Subject Headings
- Dissertations, Academic -- Florida Atlantic University, Drone aircraft, Algorithms, Scheduling, Drone aircraft--Safety measures
- Format
- Document (PDF)
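The pairwise safety-distance constraint at the heart of such a scheduler reduces to a simple feasibility check over synchronized waypoints. The array layout and the 5-meter threshold are assumptions for illustration; the thesis's algorithm additionally plans and schedules the paths, rather than merely validating them.

```python
import numpy as np

def is_collision_free(paths, safety=5.0):
    """`paths`: array of shape (n_drones, n_steps, 3) holding synchronized
    waypoints. The schedule is accepted only if every pair of drones keeps
    at least `safety` meters apart at every time step."""
    n_drones, n_steps, _ = paths.shape
    for t in range(n_steps):
        pos = paths[:, t, :]
        for i in range(n_drones):
            for j in range(i + 1, n_drones):
                if np.linalg.norm(pos[i] - pos[j]) < safety:
                    return False  # safety distance violated at step t
    return True
```

A scheduler built on this check can, for example, delay one drone's departure by a few time steps and re-test until the constraint holds.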
- Title
- Implementation and comparison of the Golay and first order Reed-Muller codes.
- Creator
- Shukina, Olga., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- In this project we perform data transmission across noisy channels and recover the message first by using the Golay code, and then by using the first-order Reed-Muller code. The main objective of this thesis is to determine which code among the above two is more efficient for text message transmission by applying the two codes to exactly the same data with the same channel bit error probabilities. We compare the error-correcting capability and the practical speed of the Golay code and the first-order Reed-Muller code to meet our goal.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3362579
- Subject Headings
- Error-correcting codes (Information theory), Coding theory, Computer algorithms, Digital modulation
- Format
- Document (PDF)
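The comparison rests on two standard facts: the binary Golay (23,12) code corrects up to 3 bit errors per block, while the first-order Reed-Muller code RM(1,5), a (32,6) code with minimum distance 16, corrects up to 7. A Monte Carlo estimate of bounded-distance decoding failure on a binary symmetric channel makes the trade-off visible; the channel error probability below is an arbitrary example value.

```python
import numpy as np

rng = np.random.default_rng(0)

# (block length, correctable errors): Golay (23,12,7) corrects 3 errors;
# RM(1,5) = (32,6,16) corrects 7, but at a much lower code rate (6/32 vs 12/23).
CODES = {"Golay (23,12)": (23, 3), "RM(1,5) (32,6)": (32, 7)}

def block_failure_rate(n, t, p, trials=100_000):
    """P(more than t bit errors in an n-bit block) on a BSC(p), estimated by
    Monte Carlo -- a bounded-distance proxy for decoding failure."""
    errors = rng.binomial(n, p, size=trials)  # error weight per block
    return np.mean(errors > t)

for name, (n, t) in CODES.items():
    print(name, block_failure_rate(n, t, p=0.05))
```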
- Title
- Perceptual methods for video coding.
- Creator
- Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work towards an optimal video compression algorithm that is based on the characteristics of our visual system. Modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004074
- Subject Headings
- Algorithms, Coding theory, Digital coding -- Data processing, Imaging systems -- Image quality, Perception, Video processing -- Data processing
- Format
- Document (PDF)
- Title
- A novel optimization algorithm and other techniques in medicinal chemistry.
- Creator
- Santos, Radleigh G., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- In this dissertation we will present a stochastic optimization algorithm and use it and other mathematical techniques to tackle problems arising in medicinal chemistry. In Chapter 1, we present some background about stochastic optimization and the Accelerated Random Search (ARS) algorithm. We then present a novel improvement of the ARS algorithm, DIrected Accelerated Random Search (DARS), motivated by some theoretical results, and demonstrate through numerical results that it improves upon ARS. In Chapter 2, we use DARS and other methods to address issues arising from the use of mixture-based combinatorial libraries in drug discovery. In particular, we look at models associated with the biological activity of these mixtures and use them to answer questions about sensitivity and robustness, and also present a novel method for determining the integrity of the synthesis. Finally, in Chapter 3 we present an in-depth analysis of some statistical and mathematical techniques in combinatorial chemistry, including a novel probabilistic approach to using structural similarity to predict the activity landscape.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3352830
- Subject Headings
- Drugs, Design, Mathematical models, Combinatorial optimization, Combinatorial chemistry, Genetic algorithms, Mathematical optimization, Stochastic processes
- Format
- Document (PDF)
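The Accelerated Random Search baseline that DARS improves on has a compact form: sample near the incumbent, shrink the sampling neighborhood on failure, reset it on success. This sketch is of generic ARS; the directional bias that defines DARS is not reproduced, and the contraction factor and test function are illustrative.

```python
import numpy as np

def ars(f, lo, hi, n_iter=5000, contract=2.0, r_min=1e-6, seed=0):
    """Accelerated Random Search: sample in a shrinking box neighborhood of
    the incumbent; reset the neighborhood to full size on every improvement
    (or when it collapses below r_min)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    span = hi - lo
    x = lo + rng.random(lo.size) * span   # random start in the box
    fx, r = f(x), 1.0
    for _ in range(n_iter):
        y = np.clip(x + (rng.random(lo.size) - 0.5) * 2 * r * span, lo, hi)
        fy = f(y)
        if fy < fx:
            x, fx, r = y, fy, 1.0   # success: recentre, reset neighborhood
        else:
            r /= contract           # failure: shrink neighborhood
            if r < r_min:
                r = 1.0             # restart the shrinking cycle
    return x, fx

# Example: a 2-D Rastrigin-style multimodal test function.
rastrigin = lambda v: float(np.sum(v**2 - 10 * np.cos(2 * np.pi * v) + 10))
print(ars(rastrigin, [-5.12, -5.12], [5.12, 5.12]))
```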