Current Search: Algorithms
- Title
- Feature extraction implementation for handwritten numeral recognition.
- Creator
- Banuru, Prashanth K., Florida Atlantic University, Shankar, Ravi
- Abstract/Description
- Feature extraction for handwritten character recognition has always been a challenging problem for investigators in the field. The problem gets worse due to the large variations present for each type of input character. Our algorithm computes directional features for alphanumeric input mapped onto a hexagonal lattice. The algorithm implements size and scale invariance, which is a requirement for achieving a reasonably good recognition rate. Functional performance has been verified for hexagonal-lattice-mapped input on data obtained from the US Postal Service handwritten character database. In this thesis, we implemented the algorithm in a Xilinx FPGA (XC4xxx series). (A brief illustrative sketch follows this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15103
- Subject Headings
- Algorithms, Pattern recognition systems--Computer simulation, Optical character recognition devices--Computer simulation
- Format
- Document (PDF)
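The directional-feature idea in the record above can be illustrated with a short software sketch. The thesis maps characters onto a hexagonal lattice and targets a Xilinx FPGA; neither is reproduced here. This minimal sketch works on an ordinary square grid, uses zoning for size invariance and normalization for scale invariance, and all names and parameters are assumptions for the example.

```python
import numpy as np

# Four canonical stroke directions on a square grid; the thesis works on a
# hexagonal lattice instead, which is not reproduced in this sketch.
DIRECTIONS = {
    "horizontal": (0, 1),
    "vertical": (1, 0),
    "diag_down": (1, 1),
    "diag_up": (-1, 1),
}

def directional_features(bitmap: np.ndarray, zones: int = 4) -> np.ndarray:
    """Count, per zone, how many foreground pixels have a same-direction
    neighbour. Returns a zones*zones*len(DIRECTIONS) feature vector."""
    h, w = bitmap.shape
    feats = np.zeros((zones, zones, len(DIRECTIONS)))
    for r in range(h):
        for c in range(w):
            if not bitmap[r, c]:
                continue
            zr, zc = r * zones // h, c * zones // w   # zone index (size invariance)
            for k, (dr, dc) in enumerate(DIRECTIONS.values()):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and bitmap[rr, cc]:
                    feats[zr, zc, k] += 1
    total = feats.sum() or 1.0                        # normalize (scale invariance)
    return (feats / total).ravel()

if __name__ == "__main__":
    digit = np.zeros((16, 16), dtype=int)
    digit[2:14, 7] = 1                                # a crude "1"
    print(directional_features(digit).shape)          # (64,)
```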
- Title
- Handprinted character recognition and Alopex algorithm analysis.
- Creator
- Du, Jian., Florida Atlantic University, Shankar, Ravi
- Abstract/Description
- A novel neural network, trained with the Alopex algorithm to recognize handprinted characters, was developed in this research. It was constructed as an encoded fully connected multi-layer perceptron (EFCMP), consisting of one input layer, one intermediate layer, and one encoded output layer. The Alopex algorithm, a stochastic algorithm used to solve optimization problems, supervises the training of the EFCMP and has been shown to accelerate the rate of convergence of the training procedure. Software simulation programs were developed for training, testing, and analyzing the performance of this EFCMP architecture. Several neural networks with different structures were developed and compared. Optimization of the Alopex algorithm was explored through simulations of the EFCMP training procedure with different parametric values for Alopex. (A hedged sketch of the Alopex update rule follows this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15012
- Subject Headings
- Algorithms, Neural networks (Computer science), Optical character recognition devices, Writing--Data processing, Image processing
- Format
- Document (PDF)
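The Alopex update rule named in the record above is simple enough to sketch. This is a minimal illustration, not the thesis' EFCMP setup: every weight takes a fixed-size random-sign step, the step sign is biased by the correlation between the previous weight change and the previous cost change, and the temperature is annealed to the recent mean correlation magnitude. All constants and the test cost function are assumptions for the example.

```python
import numpy as np

def alopex_minimize(cost, w, delta=0.01, iters=4000, anneal_every=20, seed=0):
    """Minimal Alopex sketch (illustrative constants, not the thesis' setup).
    Each weight moves by +/-delta per iteration; the sign is biased by the
    correlation between the previous weight change and the previous cost change,
    and the temperature T is annealed to the recent mean |correlation|."""
    rng = np.random.default_rng(seed)
    w = np.array(w, dtype=float)
    dw = rng.choice([-delta, delta], size=w.shape)      # first step is random
    c_prev, T, recent = cost(w), 1.0, []
    for t in range(iters):
        w += dw
        c = cost(w)
        corr = dw * (c - c_prev)                        # correlation term C_i
        recent.append(np.abs(corr).mean())
        if (t + 1) % anneal_every == 0:                 # annealing schedule
            T = max(float(np.mean(recent)), 1e-12)
            recent = []
        p_same = 1.0 / (1.0 + np.exp(np.clip(corr / T, -50.0, 50.0)))
        dw = np.where(rng.random(w.shape) < p_same, dw, -dw)
        c_prev = c
    return w

if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    sse = lambda v: float(np.sum((v - target) ** 2))
    print(alopex_minimize(sse, np.zeros(3)))            # drifts toward target
```

Because the update uses only the scalar cost change, not gradients, the same loop can drive a network whose error surface is not differentiable, which is the property the thesis exploits.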
- Title
- Label routing protocol: A new cross-layer protocol for multi-hop ad hoc wireless network.
- Creator
- Wang, Yu., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Compared to traditional wireless networks, the multi-hop ad hoc wireless network (simply called an ad hoc network) is self-configurable, dynamic, and distributed. During the past few years, many routing protocols have been proposed for this particular network environment. In wired and optical networks, multi-protocol label switching (MPLS) has clearly shown its advantages in routing and switching, such as flexibility, high efficiency, scalability, and low cost; however, MPLS is complex and does not consider the mobility issues of wireless networks, especially ad hoc networks. This thesis migrates the label concept into the ad hoc network and provides a framework for an efficient Label Routing Protocol (LRP) in such a network. The MAC layer is also optimized with LRP for shorter delay, power saving, and higher efficiency. The simulation results show that the delay is improved significantly with this cross-layer routing protocol.
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/13321
- Subject Headings
- Computer network protocols, Wireless communication systems, Mobile computing, Computer algorithms, MPLS standard, Operating systems (Computers)
- Format
- Document (PDF)
- Title
- A visual perception threshold matching algorithm for real-time video compression.
- Creator
- Noll, John M., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A barrier to the use of digital imaging is the vast storage requirement involved. One solution is compression. Since imagery is ultimately subject to human visual perception, it is worthwhile to design and implement an algorithm that performs compression as a function of perception. The underlying premise of the thesis is that if the algorithm closely matches visual perception thresholds, then its coded images contain only the components necessary to recreate the perception of the visual stimulus. Psychophysical test results are used to map the thresholds of visual perception and to develop an algorithm that codes only the image content exceeding those thresholds. The image coding algorithm is simulated in software to demonstrate compression of a single-frame image, and the simulation results are provided. The algorithm is also adapted to real-time video compression for implementation in hardware. (An illustrative sketch of threshold-based coding follows this record.)
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14857
- Subject Headings
- Image processing--Digital techniques, Computer algorithms, Visual perception, Data compression (Computer science)
- Format
- Document (PDF)
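The threshold-matching idea in the record above can be sketched as discarding transform components that fall below a visibility threshold. The thesis derives its thresholds from psychophysical tests and targets real-time hardware; the 8x8 block DCT and the threshold matrix below are purely illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def threshold_code(block: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Keep only the 8x8 DCT components whose magnitude exceeds the visual
    threshold for that frequency; everything below threshold is treated as
    imperceptible and dropped (the thesis derives thresholds psychophysically;
    the matrix used here is purely illustrative)."""
    coeffs = dctn(block, norm="ortho")
    return np.where(np.abs(coeffs) > thresholds, coeffs, 0.0)

def decode(kept: np.ndarray) -> np.ndarray:
    return idctn(kept, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, (8, 8)).astype(float) - 128.0   # level shift
    # Hypothetical thresholds that grow with spatial frequency.
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    thresholds = 4.0 + 2.0 * (u + v)
    kept = threshold_code(block, thresholds)
    print("coefficients kept:", int(np.count_nonzero(kept)), "of 64")
    print("max reconstruction error:", float(np.max(np.abs(decode(kept) - block))))
```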
- Title
- A simplistic approach to reactive multi-robot navigation in unknown environments.
- Creator
- MacKunis, William Thomas., Florida Atlantic University, Raviv, Daniel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Multi-agent control is a very promising area of robotics. In applications for which it is difficult or impossible for humans to intervene, the utilization of multi-agent, autonomous robot groups is indispensable. This thesis presents a novel approach to reactive multi-agent control that is practical and elegant in its simplicity. The basic idea of this approach is that a group of robots can cooperate to determine the shortest path through a previously unmapped environment by virtue of redundant sharing of simple data between multiple agents. The idea was implemented with two robots. In simulation, it was tested with over sixty agents. The results clearly show that the shortest path through various environments emerges as a result of redundant sharing of information between agents. In addition, this approach exhibits safeguarding techniques that reduce the risk to robot agents working in unknown and possibly hazardous environments. Further, the simplicity of this approach makes implementation very practical and easily expandable to reliably control a group comprised of many agents.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13013
- Subject Headings
- Robots--Control systems, Intelligent control systems, Genetic algorithms, Parallel processing (Electronic computers)
- Format
- Document (PDF)
- Title
- PATH PLANNING ALGORITHMS FOR UNMANNED AIRCRAFT SYSTEMS WITH A SPACE-TIME GRAPH.
- Creator
- Steinberg, Andrew, Cardei, Mihaela, Cardei, Ionut, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Unmanned Aircraft Systems (UAS) have grown in popularity due to their widespread potential applications, including efficient package delivery, monitoring, surveillance, search and rescue operations, agricultural uses, along with many others. As UAS become more integrated into our society and airspace, it is anticipated that the development and maintenance of a collision-free path planning system will become imperative, as the safety and efficiency of the airspace represents a priority. The dissertation defines this problem as the UAS Collision-free Path Planning Problem. The overall objective of the dissertation is to design an on-demand, efficient and scalable aerial highway path planning system for UAS. The dissertation explores two solutions to this problem. The first solution proposes a space-time algorithm that searches for shortest paths in a space-time graph. The solution maps the aerial traffic map to a space-time graph that is discretized on the inter-vehicle safety distance. This helps compute safe trajectories by design. The mechanism uses space-time edge pruning to maintain the dynamic availability of edges as vehicles move on a trajectory. Pruning edges is critical to protect active UAS from collisions and safety hazards. The dissertation compares the solution with another related work to evaluate improvements in delay, run time scalability, and admission success while observing up to 9000 flight requests in the network. The second solution to the path planning problem uses a batch planning algorithm. This is a new mechanism that processes a batch of flight requests with prioritization on the current slack time. This approach aims to improve the planning success ratio. The batch planning algorithm is compared with the space-time algorithm to ascertain improvements in admission ratio, delay ratio, and running time, in scenarios with up to 10000 flight requests. (A simplified sketch of the space-time search follows this record.)
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013696
- Subject Headings
- Unmanned aerial vehicles, Drone aircraft, Drone aircraft--Automatic control, Space and time, Algorithms
- Format
- Document (PDF)
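A minimal sketch of the space-time search described in the record above: a node is a (cell, time) pair, a flight may move to a neighbouring cell or wait at the next time step, and any (cell, time) pair already reserved by an admitted flight is pruned. The grid, horizon, and reservation table are assumptions for the example, not the dissertation's discretization.

```python
from collections import deque

def plan(grid_w, grid_h, start, goal, reserved, max_t=50):
    """Breadth-first search on a space-time graph: a node is (x, y, t); moves go
    to the 4 neighbours or wait in place at t+1, and any (x, y, t) reserved by a
    previously admitted flight is pruned so admitted trajectories stay safe."""
    src = (start[0], start[1], 0)
    parents = {src: None}
    queue = deque([src])
    while queue:
        x, y, t = queue.popleft()
        if (x, y) == goal:                      # reconstruct the trajectory
            path, node = [], (x, y, t)
            while node is not None:
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        if t == max_t:
            continue
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
            nxt = (x + dx, y + dy, t + 1)
            if (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h
                    and nxt not in reserved and nxt not in parents):
                parents[nxt] = (x, y, t)
                queue.append(nxt)
    return None  # request cannot be admitted within the horizon

if __name__ == "__main__":
    reserved = {(1, 0, 1), (1, 1, 2)}           # cells held by an earlier flight
    print(plan(4, 4, start=(0, 0), goal=(3, 3), reserved=reserved))
```

Admitting a flight would then add every (cell, time) pair of the returned path to the reservation table before the next request is planned.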
- Title
- Shamir's secret sharing scheme using floating point arithmetic.
- Creator
- Finamore, Timothy., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- Implementing Shamir's secret sharing scheme using floating point arithmetic would provide a faster and more efficient secret sharing scheme due to the speed with which GPUs perform floating point arithmetic. However, with the loss of a finite field, the properties of a perfect secret sharing scheme are not immediately attainable. The goal is to analyze the plausibility of Shamir's secret sharing scheme using floating point arithmetic achieving the properties of a perfect secret sharing scheme and to propose improvements to attain these properties. Experiments indicate that property 2 of a perfect secret sharing scheme, "Any k-1 or fewer participants obtain no information regarding the shared secret", is compromised when Shamir's secret sharing scheme is implemented with floating point arithmetic. These experimental results also provide information regarding possible solutions and adjustments, one of which is selecting randomly generated points from a smaller interval in one of the proposed schemes of this thesis. Further experimental results indicate improvement using the scheme outlined. Possible attacks are run to test the desirable properties of the different schemes and reinforce the improvements observed in prior experiments. (A floating-point sketch of the scheme follows this record.)
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342048
- Subject Headings
- Signal processing, Digital techniques, Mathematics, Data encryption (Computer science), Computer file sharing, Security measures, Computer algorithms, Numerical analysis, Data processing
- Format
- Document (PDF)
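The floating-point variant of Shamir's scheme discussed above can be sketched directly: shares are points on a random polynomial with real coefficients whose constant term is the secret, and the secret is recovered by Lagrange interpolation at zero. The coefficient scale and the x-interval below are illustrative assumptions; the thesis studies exactly how such choices affect the perfect-secrecy property and round-off.

```python
import numpy as np

def make_shares(secret, k, n, coeff_scale=1.0, x_interval=(1.0, 2.0), seed=0):
    """Shamir (k, n) sharing in floating point instead of a finite field:
    shares are points on a random degree-(k-1) polynomial whose constant term
    is the secret. Interval and scale are illustrative, not the thesis' choices."""
    rng = np.random.default_rng(seed)
    coeffs = np.concatenate(([secret], rng.uniform(-coeff_scale, coeff_scale, k - 1)))
    xs = rng.uniform(*x_interval, size=n)
    ys = np.polyval(coeffs[::-1], xs)          # a0 + a1*x + ... evaluated at each x
    return list(zip(xs, ys))

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret),
    up to floating-point round-off."""
    secret = 0.0
    for i, (xi, yi) in enumerate(shares):
        li = 1.0
        for j, (xj, _) in enumerate(shares):
            if i != j:
                li *= (0.0 - xj) / (xi - xj)
        secret += yi * li
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=42.0, k=3, n=5)
    print(reconstruct(shares[:3]))             # ~42.0, limited by round-off
```

Unlike the finite-field version, fewer than k of these shares do leak statistical information about the secret, which is the weakness the experiments in the thesis quantify.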
- Title
- MODELING GROUND ELEVATION OF LOUISIANA COASTAL WETLANDS AND ANALYZING RELATIVE SEA LEVEL RISE INUNDATION USING RSET-MH AND LIDAR MEASUREMENTS.
- Creator
- Liu, Jing, Zhang, Caiyun, Florida Atlantic University, Department of Geosciences, Charles E. Schmidt College of Science
- Abstract/Description
- The Louisiana coastal ecosystem is experiencing increasing threats from human flood control construction, sea-level rise (SLR), and subsidence. Louisiana lost about 4,833 km2 of coastal wetlands from 1932 to 2016, and concern exists whether remaining wetlands will persist while facing the highest rate of relative sea-level rise (RSLR) in the world. Restoration aimed at rehabilitating the ongoing and future disturbances is currently underway through the implementation of the Coastal Wetlands Planning, Protection and Restoration Act of 1990 (CWPPRA). To effectively monitor the progress of projects in CWPPRA, the Coastwide Reference Monitoring System (CRMS) was established in 2006. To date, more than a decade of valuable coastal, environmental, and ground elevation data have been collected and archived. This dataset offers a unique opportunity to evaluate wetland ground elevation dynamics by linking the Rod Surface Elevation Table (RSET) measurements with environmental variables like water salinity and biophysical variables like canopy coverage. This dissertation research examined the effects of the environmental and biophysical variables on wetland terrain elevation by developing innovative machine learning based models to quantify the contribution of each factor using the CRMS-collected dataset. Three modern machine learning algorithms, including Random Forest (RF), Support Vector Machine (SVM), and Artificial Neural Network (ANN), were assessed and cross-compared with the commonly used Multiple Linear Regression (MLR). The results showed that RF had the best performance in modeling ground elevation, with a Root Mean Square Error (RMSE) of 10.8 cm and a correlation coefficient (r) of 0.74. The top four factors contributing to ground elevation are the distance from the monitoring station to the closest water source, water salinity, water elevation, and dominant vegetation height. (An illustrative modeling sketch follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013568
- Subject Headings
- Coastal zone management--Louisiana, Sea level rise, Inundations, Wetland restoration--Louisiana, Machine learning, Computer simulation, Algorithms.
- Format
- Document (PDF)
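A hedged sketch of the best-performing model described above: Random Forest regression of ground elevation on the kind of predictors the abstract names, reporting RMSE, the correlation r, and feature importances. The data here are synthetic stand-ins, not the CRMS records, and the column meanings and coefficients are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the CRMS predictors named in the abstract; the real
# study uses station records, not data generated like this.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 5000, n),      # distance to closest water source (m)
    rng.uniform(0, 30, n),        # water salinity (ppt)
    rng.uniform(-50, 100, n),     # water elevation (cm)
    rng.uniform(0, 300, n),       # dominant vegetation height (cm)
])
y = 0.01 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
r = np.corrcoef(y_te, pred)[0, 1]
print(f"RMSE = {rmse:.1f} cm, r = {r:.2f}")
print("feature importances:", model.feature_importances_.round(2))
```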
- Title
- Spectral refinement to speech enhancement.
- Creator
- Charoenruengkit, Werayuth., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The goal of a speech enhancement algorithm is to remove noise and recover the original signal with as little distortion and residual noise as possible. Most successful real-time algorithms operate in the frequency domain, where the frequency amplitude of clean speech is estimated per short-time frame of the noisy signal. The state-of-the-art short-time spectral amplitude estimator algorithms estimate the clean spectral amplitude in terms of the power spectral density (PSD) function of the noisy signal. The PSD has to be computed from a large ensemble of signal realizations; however, in practice, it may only be estimated from a finite-length sample of a single realization of the signal. Estimation errors introduced by these limitations cause the solution to deviate from the optimal. Various spectral estimation techniques, many with added spectral smoothing, have been investigated for decades to reduce the estimation errors. These algorithms do not significantly address the quality of speech as perceived by a human. This dissertation presents analysis and techniques that offer spectral refinements toward speech enhancement. We present an analytical framework for the effect of spectral estimate variance on the performance of speech enhancement. We use the variance quality factor (VQF) as a quantitative measure of estimated spectra. We show that reducing the spectral estimator VQF significantly reduces the VQF of the enhanced speech. The Autoregressive Multitaper (ARMT) spectral estimate is proposed as a low-VQF spectral estimator for use in speech enhancement algorithms. An innovative method of incorporating a speech production model using multiband excitation is also presented as a technique to emphasize the harmonic components of the glottal speech input. The preconditioning of the noisy estimates by exploiting other avenues of information, such as pitch estimation and the speech production model, effectively increases the localized narrow-band signal-to-noise ratio (SNR) of the noisy signal, which is subsequently denoised by the amplitude gain. Combined with voicing structure enhancement, the ARMT spectral estimate delivers enhanced speech with sound clarity desirable to human listeners. The resulting improvements in enhanced speech are observed to be significant in both objective and subjective measurements.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186327
- Subject Headings
- Adaptive signal processing, Digital techniques, Spectral theory (Mathematics), Noise control, Fuzzy algorithms, Speech processing systems, Digital techniques
- Format
- Document (PDF)
- Title
- An Empirical Study of Performance Metrics for Classifier Evaluation in Machine Learning.
- Creator
- Bruhns, Stefan, Khoshgoftaar, Taghi M., Florida Atlantic University
- Abstract/Description
- A variety of classifiers for solving classification problems is available from the domain of machine learning. Commonly used classifiers include support vector machines, decision trees and neural networks. These classifiers can be configured by modifying internal parameters. The large number of available classifiers and the different configuration possibilities result in a large number of combinations of classifier and configuration settings, leaving the practitioner with the problem of evaluating the performance of different classifiers. This problem can be solved by using performance metrics. However, the large number of available metrics causes difficulty in deciding which metrics to use and when comparing classifiers on the basis of multiple metrics. This paper uses the statistical method of factor analysis to investigate the relationships between several performance metrics and introduces the concept of relative performance, which has the potential to ease the process of comparing several classifiers. The relative performance metric is also used to evaluate different support vector machine classifiers and to determine if the default settings in the Weka data mining tool are reasonable. (An illustrative metric and factor-analysis sketch follows this record.)
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012508
- Subject Headings
- Machine learning, Computer algorithms, Pattern recognition systems, Data structures (Computer science), Kernel functions, Pattern perception--Data processing
- Format
- Document (PDF)
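The metric-and-factor-analysis workflow described above can be sketched as follows. The thesis works in Weka and defines its own relative-performance measure, which is not reproduced here; this sketch uses scikit-learn, a synthetic dataset, three illustrative classifiers, and bootstrap resampling of the test set simply so the factor analysis has enough rows to work with.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = [SVC(probability=True, random_state=0),
          DecisionTreeClassifier(random_state=0),
          MLPClassifier(max_iter=1000, random_state=0)]

rng = np.random.default_rng(0)
rows = []
for model in models:
    model.fit(X_tr, y_tr)
    pred, score = model.predict(X_te), model.predict_proba(X_te)[:, 1]
    for _ in range(20):                               # bootstrap the test set so
        idx = rng.integers(0, len(y_te), len(y_te))   # factor analysis has rows
        rows.append([accuracy_score(y_te[idx], pred[idx]),
                     precision_score(y_te[idx], pred[idx]),
                     recall_score(y_te[idx], pred[idx]),
                     f1_score(y_te[idx], pred[idx]),
                     roc_auc_score(y_te[idx], score[idx])])

# Loadings show which metrics vary together across classifier/bootstrap runs.
fa = FactorAnalysis(n_components=2, random_state=0).fit(np.array(rows))
print(np.round(fa.components_, 2))
```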
- Title
- Modeling strategic resource allocation in probabilistic global supply chain system with genetic algorithm.
- Creator
- Damrongwongsiri, Montri., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Effective and efficient supply chain management is essential for domestic and global organizations to compete successfully in the international market. Superior inventory control policies and product distribution strategies, along with advanced information technology, enable an organization to coordinate the distribution and allocation of inventory to gain a competitive advantage in the world market. Our research establishes a strategic resource allocation model to capture and encapsulate the complexity of the modern global supply chain management problem. A mathematical model was constructed to depict the stochastic, multiple-period, two-echelon inventory problem with a many-to-many demand-supplier network. The model simultaneously incorporates the uncertainties of inventory control and transportation parameters as well as the varying price factors. A genetic algorithm (GA) was applied to derive optimal solutions through a two-stage optimization process. Practical examples and solutions from three sourcing strategies (single sourcing, multiple sourcing, and a dedicated system) were included to illustrate the GA-based solution procedure. Our model can be utilized as a collaborative supply chain strategic planning tool to efficiently determine the appropriate inventory allocation and as a dynamic decision-making process to effectively manage the distribution plan. (A minimal GA sketch follows this record.)
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12056
- Subject Headings
- Business logistics--Mathematical models, Physical distribution of goods--Management, Inventory control--Mathematical models, Genetic algorithms
- Format
- Document (PDF)
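A minimal genetic-algorithm sketch in the spirit of the record above: a chromosome is an allocation matrix from supply sites to demand regions, and fitness is shipping cost plus penalties for capacity and demand violations. The instance, operators, and penalty weights are assumptions for the example; the dissertation's stochastic multi-period, two-echelon model and its two-stage GA are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical instance: 3 supply sites, 4 demand regions.
cap = np.array([80, 60, 70])
dem = np.array([50, 40, 60, 45])
cost = rng.uniform(1.0, 5.0, size=(3, 4))          # unit shipping cost

def fitness(alloc):
    """Total shipping cost plus heavy penalties for exceeding capacity or
    leaving demand unmet (penalty weights are illustrative)."""
    ship = np.sum(cost * alloc)
    over = np.maximum(alloc.sum(axis=1) - cap, 0).sum()
    short = np.maximum(dem - alloc.sum(axis=0), 0).sum()
    return ship + 1000.0 * (over + short)

def genetic_allocate(pop_size=60, gens=300, mut_rate=0.2):
    pop = [rng.integers(0, 40, size=(3, 4)) for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        parents = scored[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((3, 4)) < 0.5         # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            if rng.random() < mut_rate:             # point mutation
                i, j = rng.integers(3), rng.integers(4)
                child[i, j] = max(0, child[i, j] + rng.integers(-10, 11))
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, fitness(best)

best, f = genetic_allocate()
print(best, "\ntotal cost:", round(float(f), 1))
```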
- Title
- Enhanced Fibonacci Cubes.
- Creator
- Qian, Haifeng., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- We propose the enhanced Fibonacci cube (EFC), which is defined based on the sequence Fn = 2F(n-2) + 2F(n-4). We study its topological properties, embeddings, applications, routings, VLSI/WSI implementations, and its extensions. Our results show that EFC retains many properties of the hypercube. It contains the Fibonacci cube (FC) and extended Fibonacci cube of the same order as subgraphs and maintains virtually all the desirable properties of FC. EFC is even better in some structural properties, embeddings, applications and VLSI designs than FC or hypercube. With EFC, there are more cubes with various structures and sizes for selection, and more backup cubes into which faulty hypercubes can be reconfigured, which alleviates the size limitation of the hypercube and results in a higher level of fault tolerance.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15196
- Subject Headings
- Integrated circuits--Very large scale integration, Hypercube networks (Computer networks), Algorithms, Fault-tolerant computing, Multiprocessors
- Format
- Document (PDF)
- Title
- Parallel architectures and algorithms for digital filter VLSI implementation.
- Creator
- Desai, Pratik Vishnubhai., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- In many scientific and signal processing applications, there are increasing demands for large-volume and high-speed computations, which call not only for high-speed, low-power computing hardware, but also for novel approaches in developing new algorithms and architectures. This thesis is concerned with the development of such architectures and algorithms suitable for the VLSI implementation of recursive and nonrecursive one-dimensional digital filters using multiple slower processing elements. As the background for the development, vectorization techniques such as state-space modeling, block processing, and look-ahead computation are introduced. Concurrent architectures such as systolic arrays and wavefront arrays, and appropriate parallel filter realizations such as lattice, all-pass, and wave filters, are reviewed. A fully hardware-efficient systolic array architecture, termed the Multiplexed Block-State Filter, is proposed for the high-speed implementation of lattice and direct realizations of digital filters. The thesis also proposes a new simplified algorithm, the Alternate Pole Pairing Algorithm, for realizing an odd-order recursive filter as the sum of two all-pass filters. Performance of the proposed schemes is verified through numerical examples and simulation results.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15155
- Subject Headings
- Integrated circuits--Very large scale integration, Parallel processing (Electronic computers), Computer network architectures, Algorithms (Data processing), Digital integrated circuits
- Format
- Document (PDF)
- Title
- A genetic algorithm for non-constrained process and economic process optimization.
- Creator
- Chirdchid, Sangthen., Florida Atlantic University, Masory, Oren, Mazouz, Abdel Kader, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- Improving the quality of a product and its manufacturing processes at low cost is an economic and technological challenge that quality engineers and researchers must contend with. In general, the quality of products and their cost are the main concerns for manufacturers, because improving quality is crucial for staying competitive and improving the organization's market position. However, some difficulty remains in determining where the standard of good quality lies. Customer satisfaction is a key to setting the quality target. One possible solution is to develop control limits, which indicate the region of nonconforming product, on the basis of minimizing the total cost or loss to the customer as well as to the manufacturer. Therefore, the goal of this dissertation is to develop an effective tool for improving product quality while maintaining minimum cost.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fau/fd/FADT12081
- Subject Headings
- Genetic algorithms, Quality of products--Cost effectiveness--Econometric models, Multivariate analysis, Taguchi methods (Quality control)
- Format
- Document (PDF)
- Title
- An intelligent approach to system identification.
- Creator
- Saravanan, Natarajan, Florida Atlantic University, Duyar, Ahmet, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- System identification methods are frequently used to obtain appropriate models for the purposes of control, fault detection, pattern recognition, prediction, adaptive filtering, and other applications. A number of techniques exist for the identification of linear systems. However, real-world and complex systems are often nonlinear, and there exists no generic methodology for the identification of nonlinear systems with unknown structure. A recent approach makes use of highly interconnected networks of simple processing elements, which can be programmed to approximate nonlinear functions, to identify nonlinear dynamic systems. This thesis takes a detailed look at the identification of nonlinear systems with neural networks. Important questions in the application of neural networks to nonlinear systems are identified, concerning the excitation properties of input signals, selection of an appropriate neural network structure, estimation of the neural network weights, and validation of the identified model. These questions are subsequently answered. This investigation leads to a systematic procedure for identification using neural networks, and this procedure is clearly illustrated by modeling a complex nonlinear system: the components of the space shuttle main engine. Additionally, the neural network weights are determined by using a general-purpose optimization technique known as evolutionary programming, which is based on the concept of simulated evolution. The evolutionary programming algorithm is modified to include self-adapting step sizes. The effectiveness of the evolutionary programming algorithm as a general-purpose optimization algorithm is illustrated on a test suite of problems including function optimization, neural network weight optimization, optimal control system synthesis, and reinforcement learning control. (A brief evolutionary programming sketch follows this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12371
- Subject Headings
- Neural networks (Computer science), System identification, Nonlinear theories, System analysis, Space shuttles--Electronic equipment, Algorithms--Computer programs
- Format
- Document (PDF)
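The self-adapting evolutionary programming mentioned above can be sketched briefly: each individual carries its own mutation step sizes, which are log-normally perturbed before being used, and survivors are selected from parents and offspring. Truncation selection stands in for EP's usual stochastic tournament here, and the constants and test function are assumptions for the example.

```python
import numpy as np

def ep_minimize(f, dim=5, pop_size=30, gens=200, seed=0):
    """Self-adaptive evolutionary programming sketch: every individual has its
    own per-dimension step sizes, which are themselves mutated (log-normally)
    before being used to perturb the solution. Constants are illustrative."""
    rng = np.random.default_rng(seed)
    tau, tau_p = 1 / np.sqrt(2 * np.sqrt(dim)), 1 / np.sqrt(2 * dim)
    xs = rng.uniform(-5, 5, (pop_size, dim))
    sigmas = np.full((pop_size, dim), 1.0)
    for _ in range(gens):
        # Mutate step sizes first, then use them to perturb the solutions.
        child_sig = sigmas * np.exp(tau_p * rng.standard_normal((pop_size, 1))
                                    + tau * rng.standard_normal((pop_size, dim)))
        child_x = xs + child_sig * rng.standard_normal((pop_size, dim))
        all_x = np.vstack([xs, child_x])
        all_sig = np.vstack([sigmas, child_sig])
        fit = np.array([f(x) for x in all_x])
        keep = np.argsort(fit)[:pop_size]        # simplified (mu + mu) selection
        xs, sigmas = all_x[keep], all_sig[keep]
    best = xs[np.argmin([f(x) for x in xs])]
    return best, f(best)

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    print(ep_minimize(sphere))                   # best point near the origin
```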
- Title
- Modeling and simulation on the yard trailers deployment in the maritime container terminal.
- Creator
- Zhao, Yueqiong, College of Engineering and Computer Science, Department of Civil, Environmental and Geomatics Engineering
- Abstract/Description
- In recent years, there has been an exponential increase in container volume shipped within intermodal transportation systems. Container terminals, as part of the global port system, represent important hubs within this intermodal transportation system. Thus, the need to improve operational efficiency is the most important issue for container terminals from an economic standpoint. Moreover, intermodal transportation systems, ports, and inland transport facilities should all be integrated into one coordinated plan. More specifically, a method to schedule different types of handling equipment in an integrated way within a container terminal is a popular topic for researchers. However, not many researchers have addressed this topic in relation to the simulation aspect, which tests feasible solutions under real container terminal environment parameters. In order to increase the efficiency of operations, the development of mathematical models and algorithms is critical to finding the best feasible solution. The objective of this study is to evaluate feasible solutions to find the proper number of Yard Trailers (YTs) with the minimal cost for container terminals. This study uses the dynamic YT operations method as a background for modeling. A mathematical model with various constraints related to the integrated operations among the different types of handling equipment is formulated. This model takes into consideration both the serving times of quay cranes and yard cranes, and cost reduction strategies that decrease the use of YTs, with the specific objective of minimizing total cost, including YT utilization and vessel berthing. In addition, a heuristic algorithm combining the Monte Carlo method and brute-force search is employed. The early-stage technique of the Monte Carlo method is proposed to generate large quantities of random numbers to replicate the simulation for real cases. The brute-force search is used to identify all potential cases specific to the conditions of this study. Some preliminary numerical test results suggest that this method is suitable for use in conjunction with simulation of container terminal operations. The expected outcome of this research is a solution that obtains the proper number of YTs for transporting containers at minimum cost, thus improving the operational efficiency of a container terminal. (A toy Monte Carlo and brute-force sketch follows this record.)
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3174315
- Subject Headings
- Marine terminals, Computer programs, Computer algorithms, Materials management, Warehouses, Management, Transportation engineering, Freight and freightage
- Format
- Document (PDF)
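A toy version of the evaluation loop described above: for every candidate yard-trailer fleet size, Monte Carlo replications of random trailer cycle times give an expected total cost, and a brute-force search keeps the cheapest size. The cost model, rates, and ranges are invented for the illustration and are not the thesis' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
CONTAINERS = 600                  # container moves for one vessel call
CYCLE_MEAN_MIN = 8.0              # mean trailer round trip (quay to yard), minutes
CRANE_LIMIT_HR = CONTAINERS / 90  # quay cranes alone need ~6.7 h (3 cranes x 30/hr)
TRAILER_COST_PER_HR = 55.0        # hypothetical cost per trailer-hour
BERTH_COST_PER_HR = 1200.0        # hypothetical vessel berthing cost per hour

def expected_cost(n_trailers, runs=300):
    """Monte Carlo estimate of total cost for one fleet size: random trailer
    cycle times make the vessel's time at berth random, and trailers are paid
    for the whole berth stay. The cost model is a stand-in, not the thesis'."""
    total = 0.0
    for _ in range(runs):
        cycles = rng.exponential(CYCLE_MEAN_MIN, CONTAINERS)
        berth_hr = max(CRANE_LIMIT_HR, cycles.sum() / n_trailers / 60.0)
        total += berth_hr * (BERTH_COST_PER_HR + n_trailers * TRAILER_COST_PER_HR)
    return total / runs

# Brute-force search over every candidate fleet size in a plausible range.
best = min(range(4, 31), key=expected_cost)
print("best number of yard trailers:", best)
print("expected total cost:", round(expected_cost(best)))
```

Too few trailers leave the quay cranes starved and the vessel at berth longer; too many add trailer cost without shortening the stay, which is the trade-off the brute-force sweep exposes.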
- Title
- Efficient Resource Discovery Technique in a Mobile Ad Hoc Networks.
- Creator
- Thanawala, Ravi, Wu, Jie, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis describes a resource discovery technique in mobile ad hoc networks. Resource discovery is a technique for searching for data among the mobile nodes in the network. The highly dynamic nature of infrastructure-less ad hoc networks poses new challenges in resource discovery; thus there is a need for an optimized resource discovery technique. The efficient resource discovery algorithm discovers the resources in a mobile ad hoc network in an optimized way. As there is no pre-established infrastructure in the network, every node takes its own decision in forwarding resources, and every node dynamically ranks these resources before disseminating them in the network. Ranking spreads the data of high priority at that instance of time and avoids spreading unwanted or low-priority data that would utilize the bandwidth unnecessarily. The efficient resource discovery algorithm also keeps a check that redundant information is not spread in the network with the available bandwidth, and that the bandwidth is utilized in an optimized manner. We then introduce brokers in the algorithm for better performance. We present a technique to maintain a constant number of brokers in the network. Our simulations show that, in a network with high density, the efficient resource discovery algorithm gives better performance than the flooding and rank-based broadcast algorithms.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012562
- Subject Headings
- Mobile communication systems--Mathematics, Computer algorithms, Wireless communication systems--Mathematics, Ad hoc networks (Computer networks)--Programming
- Format
- Document (PDF)
- Title
- An Ant Inspired Dynamic Traffic Assignment for VANETs: Early Notification of Traffic Congestion and Traffic Incidents.
- Creator
- Arellano, Wilmer, Mahgoub, Imad, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Vehicular Ad hoc NETworks (VANETs) are a subclass of Mobile Ad hoc NETworks and represent a relatively new and very active field of research. VANETs will enable, in the near future, applications that will dramatically improve roadway safety and traffic efficiency. There is a need to increase traffic efficiency as the gap between the traveled and the physical lane miles keeps increasing. The Dynamic Traffic Assignment problem tries to dynamically distribute vehicles efficiently on the road network in accordance with their origins and destinations. We present a novel dynamic, decentralized, and infrastructure-less algorithm to alleviate traffic congestion on road networks and to fill the void left by current algorithms, which are either static or centralized, or require infrastructure. The algorithm follows an online approach that seeks stochastic user equilibrium and assigns traffic as it evolves in real time, without prior knowledge of the traffic demand or the schedule of the cars that will enter the road network in the future. The Reverse Online Algorithm for Dynamic Traffic Assignment inspired by Ant Colony Optimization for VANETs follows a metaheuristic approach that uses reports from other vehicles to update each vehicle's perceived view of the road network and change route if necessary. To alleviate the broadcast storm, spontaneous clusters are created around traffic incidents, and a threshold system based on the level of congestion is used to limit the number of incidents to be reported. Simulation results for the algorithm show a great improvement in travel time over routing based on shortest distance. As the VANET transceivers have a limited range, which would limit messages to reach at most 1,000 meters, we present a modified version of this algorithm that uses a rebroadcasting scheme. This rebroadcasting scheme has been successfully tested on roadways with segments of up to 4,000 meters. This is accomplished for the case of traffic flowing in a single direction on the roads. It is anticipated that future simulations will show further improvement when traffic in the other direction is introduced and vehicles travelling in that direction are allowed to use a store-carry-and-forward mechanism.
- Date Issued
- 2016
- PURL
http://purl.flvc.org/fau/fd/FA00004566
- Subject Headings
- Vehicular ad hoc networks (Computer networks)--Technological innovations, Routing protocols (Computer network protocols), Artificial intelligence, Intelligent transportation systems, Intelligent control systems, Mobile computing, Computer algorithms, Combinatorial optimization
- Format
- Document (PDF)
- Title
- A VLSI implementable learning algorithm.
- Creator
- Ruiz, Laura V., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis, and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied in the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis are given over software simulations. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits, and integrated circuit design is not fundamental to accomplishing a task that a few years ago would have required a large team of specialized experts in many fields. This may appeal to researchers from a wide background of knowledge, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are basically software simulations of the algorithms run on a single CPU, executing sequential simulations of a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm will be able to test it, as it was intentionally designed, on a parallel hardware architecture. (A software sketch of the perceptron rule follows this record.)
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12453
- Subject Headings
- Integrated circuits--Very large scale integration--Design and construction, Neural networks (Computer science)--Design and construction, Computer algorithms, Machine learning
- Format
- Document (PDF)
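The on-chip learning rule named above is the classic perceptron rule, sketched here in software only; the dissertation's contribution is precisely the HDL/VLSI realization, which a Python sketch cannot capture. Sizes, learning rate, and the toy problem are assumptions for the example.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron learning rule (software illustration of the
    algorithm the chip implements on-chip; parameters are arbitrary)."""
    w = np.zeros(X.shape[1] + 1)                  # weights plus bias
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = 1 if w[1:] @ xi + w[0] > 0 else 0
            err = target - out                    # 0 when already correct
            w[1:] += lr * err * xi                # perceptron update
            w[0] += lr * err
    return w

if __name__ == "__main__":
    # Linearly separable toy problem: logical OR of two inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 1])
    w = train_perceptron(X, y)
    print([(1 if w[1:] @ xi + w[0] > 0 else 0) for xi in X])   # [0, 1, 1, 1]
```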
- Title
- Mechanisms for prolonging network lifetime in wireless sensor networks.
- Creator
- Yang, Yinying., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Sensors are used to monitor and control the physical environment. A Wireless Sensor Network (WSN) is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon or very close to it [18][5]. Sensor nodes measure various parameters of the environment and transmit the collected data to one or more sinks, using hop-by-hop communication. Once a sink receives sensed data, it processes and forwards it to the users. Sensors are usually battery powered, and it is hard to recharge them; it takes a limited time before they deplete their energy and become nonfunctional. Optimizing energy consumption to prolong network lifetime is an important issue in wireless sensor networks. In mobile sensor networks, sensors can self-propel via springs [14] or wheels [20], or they can be attached to transporters, such as robots [20] and vehicles [36]. In static sensor networks with uniform deployment (uniform density), sensors closest to the sink will die first, which will cause uneven energy consumption and limit network lifetime. In this dissertation, nonuniform density is studied and analyzed so that the energy consumption within the monitored area is balanced and the network lifetime is prolonged. Several mechanisms are proposed to relocate the sensors after the initial deployment to achieve the desired density while minimizing the total moving cost. Using mobile relays for data gathering is another energy-efficient approach. Mobile sensors can be used as ferries, which carry data to the sink for static sensors so that expensive multi-hop and long-distance communication is reduced. In this thesis, we propose a mobile-relay-based routing protocol that considers both energy efficiency and data delivery delay. It can be applied to both event-based reporting and periodic reporting applications. Another mechanism used to prolong network lifetime is sensor scheduling. One of the major components that consume energy is the radio. One method to conserve energy is to put sensors into sleep mode when they are not actively participating in sensing or data relaying. This dissertation studies sensor scheduling mechanisms for composite event detection. It chooses a set of active sensors to perform sensing and data relaying, while all other sensors go to sleep to save energy. After some time, another set of active sensors is chosen. Thus sensors work in rotation to prolong network lifetime.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870693
- Subject Headings
- Wireless communication systems, Technological innovations, Wireless communication systems, Design and construction, Ad hoc networks (Computer networks), Technological innovations, Sensor networks, Design and construction, Computer algorithms, Computer network protocols
- Format
- Document (PDF)