Current Search: Algorithms--Data processing
- Title
- Alopex for handwritten digit recognition: Algorithmic verifications.
- Creator
- Martin, Gregory A., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Alopex is a biologically influenced computation paradigm that uses a stochastic procedure to find the global optimum of linear and nonlinear functions. It maps to a hierarchical SIMD (Single-Instruction-Multiple-Data) architecture with simple neuronal processing elements (PEs); it therefore avoids the large number of interconnects required by other types of neural networks and makes more efficient use of chip-level and board-level "real estate". In this study, verifications were performed on the use of a simplified Alopex algorithm for handwritten digit recognition, with the intent that the verified algorithm be digitally implementable. The inputs to the simulated Alopex hardware are a set of 32 features extracted from the input characters. Although the goal of verifying the algorithm was not achieved, a firm direction for future studies has been established, and a flexible software model for those studies is available.
(A toy sketch of the Alopex update rule follows this record.)
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14842
- Subject Headings
- Algorithms--Data processing, Stochastic processes
- Format
- Document (PDF)
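The abstract names Alopex only at a high level. As a grounding aid, here is a minimal Python sketch of the classic correlation-based Alopex update in its textbook form, not the thesis's verified variant; the function name, constants, and the quadratic test objective are all illustrative assumptions.

```python
import math
import random

def alopex_minimize(cost, w, delta=0.01, temp=0.1, iters=2000, seed=0):
    """Toy Alopex: every weight moves by +/-delta each iteration; the
    step direction is biased by the correlation between the weight's
    previous step and the previous change in cost (temp sets randomness)."""
    rng = random.Random(seed)
    e_prev = cost(w)
    dw = [rng.choice((-delta, delta)) for _ in w]
    for _ in range(iters):
        w = [wi + di for wi, di in zip(w, dw)]
        e = cost(w)
        de = e - e_prev
        # P(next step is -delta) rises when the last step raised the cost.
        dw = [-delta if rng.random() < 1.0 / (1.0 + math.exp(-di * de / temp))
              else delta for di in dw]
        e_prev = e
    return w, e_prev

w, e = alopex_minimize(lambda v: sum(x * x for x in v), [1.0, -2.0, 0.5])
print(w, e)  # weights should drift toward zero
```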
- Title
- Investigating machine learning algorithms with imbalanced big data.
- Creator
- Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Recent technological developments have engendered an expeditious production of big data and also enabled machine learning algorithms to produce high-performance models from such data. Nonetheless, class imbalance (in binary classification) between the majority and minority classes in big data can skew the predictive performance of classification algorithms toward the majority (negative) class, whereas the minority (positive) class usually holds greater value for decision makers. Such bias may lead to adverse consequences, some of them even life-threatening, since false negatives are generally costlier than false positives. The size of the minority class can vary from fair to extraordinarily small, which can lead to different performance scores for machine learning algorithms. Class imbalance is a well-studied area for traditional data, i.e., not big data. However, there is limited research focusing on both rarity and severe class imbalance in big data.
(A minimal undersampling baseline is sketched after this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013316
- Subject Headings
- Algorithms, Machine learning, Big data--Data processing, Big data
- Format
- Document (PDF)
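The dissertation investigates treatments for severe imbalance; as one concrete baseline (my illustration, not a method claimed by the thesis), random undersampling of the majority class looks roughly like this:

```python
import random

def undersample_majority(X, y, ratio=1.0, seed=42):
    """Keep all minority (label 1) instances and a random subset of
    majority (label 0) instances so that negatives ~= ratio * positives."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    keep = rng.sample(neg, min(len(neg), int(ratio * len(pos))))
    idx = sorted(pos + keep)
    return [X[i] for i in idx], [y[i] for i in idx]

X = [[i] for i in range(1000)]
y = [1 if i < 10 else 0 for i in range(1000)]   # 1% positive class
Xb, yb = undersample_majority(X, y)
print(sum(yb), len(yb) - sum(yb))               # 10 positives, 10 negatives
```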
- Title
- Generalized Feature Embedding Learning for Clustering and Classification.
- Creator
- Golinko, Eric David, Zhu, Xingquan, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Data comes in many different shapes and sizes. In real-life applications it is common that the data we study has features of varied data types, including numerical, categorical, and text. To model such data with machine learning algorithms, the data typically must be in numeric form; data that is not originally numerical must therefore be transformed before it can be used as input to these algorithms. Beyond this transformation, the data we study often has many features relative to the number of samples, and it is often desirable to reduce the number of features used to train a model in order to eliminate noise and reduce training time. This problem of high dimensionality can be approached through feature selection, feature extraction, or feature embedding. Feature selection seeks to identify the most essential variables in a dataset, leading to a parsimonious model and high-performing results, while feature extraction and embedding are techniques that apply a mathematical transformation of the data into a represented space. As a byproduct of using a new representation, we are able to reduce the dimension greatly without sacrificing performance; oftentimes, embedded features even yield a gain in performance. Though extraction and embedding methods may be powerful for isolated machine learning problems, they do not always generalize well. We are therefore motivated to illustrate a methodology that can be applied to any data type with little pre-processing. The methods we develop can be applied in unsupervised, supervised, incremental, and deep learning contexts. Using 28 benchmark datasets as examples, which include different data types, we construct a framework that can be applied to general machine learning tasks. The techniques we develop contribute to the field of dimension reduction and feature embedding. Using this framework, we make additional contributions to eigendecomposition by creating an objective matrix with three vital components: first, a class-partitioned row and feature product representation of one-hot encoded data; second, a weighted adjacency matrix derived from class label relationships; finally, the inner product of these values, which conditions the one-hot encoded data generated from the original data prior to eigenvector decomposition. The use of class partitioning and adjacency enables subsequent projections of the data to be trained more effectively when compared side by side to baseline algorithm performance. Along with this improved performance, we can adjust the dimension of the subsequent data arbitrarily. We also show how these dense vectors may be used in applications to order the features of generic data for deep learning. In this dissertation, we examine a general approach to dimension reduction and feature embedding that utilizes a class-partitioned row and feature representation, a weighted approach to instance similarity, and an adjacency representation. This general approach has application to unsupervised, supervised, online, and deep learning. In our experiments on 28 benchmark datasets, we show significant performance gains in clustering, classification, and training time.
(A loose sketch of the underlying eigenvector-embedding idea follows this record.)
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013063
- Subject Headings
- Eigenvectors--Data processing, Algorithms, Cluster analysis
- Format
- Document (PDF)
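Here is a loose numpy sketch of the eigendecomposition idea the abstract builds on: one-hot encoding followed by an eigenvector projection. The class-partitioned and adjacency-weighted conditioning that is the dissertation's actual contribution is omitted; this is a plain spectral embedding with illustrative names.

```python
import numpy as np

def embed_onehot(rows, k=2):
    """One-hot encode mixed categorical rows, then project onto the top-k
    eigenvectors of the feature Gram matrix (a plain spectral embedding)."""
    vocab = sorted({(j, v) for row in rows for j, v in enumerate(row)})
    index = {fv: i for i, fv in enumerate(vocab)}
    X = np.zeros((len(rows), len(vocab)))
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            X[i, index[(j, v)]] = 1.0
    vals, vecs = np.linalg.eigh(X.T @ X)        # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return X @ top                              # k-dim embedding per row

rows = [("red", "small"), ("red", "large"), ("blue", "small")]
print(embed_onehot(rows, k=2))
```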
- Title
- Asynchronous distributed algorithms for multi-agent supporting systems.
- Creator
- Jin, Kai., Florida Atlantic University, Larrondo-Petrie, Maria M.
- Abstract/Description
-
Based on the multi-agent supporting system (MASS) structures used to investigate synchronous algorithms in my previous work, partially and totally asynchronous distributed algorithms are proposed in this thesis. The stability of discrete MASS under asynchronous distributed algorithms is analyzed. The partially asynchronous algorithms proposed for both 1- and 2-dimensional MASS are proven to converge if the vertical disturbances vary sufficiently more slowly than the convergence time of the system; the adjacent error becomes zero when the system converges. It is also proven that in 1-dimensional MASS using the proposed totally asynchronous algorithm, the maximum of the absolute value of the adjacent error is non-increasing over time. Finally, simulation results for all the above cases are presented to demonstrate the theoretical findings.
(A toy asynchronous-averaging simulation follows this record.)
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15277
- Subject Headings
- Electronic data processing--Distributed processing, Computer algorithms
- Format
- Document (PDF)
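As a toy in the spirit of the totally asynchronous 1-dimensional case, the simulation below is my own simplification: agents average their neighbors one at a time in random order, with no disturbance model, so it illustrates only the flavor of the convergence claim, not the thesis's update law.

```python
import random

def simulate_async_chain(heights, steps=5000, seed=1):
    """1-D chain with fixed ends: at each tick one random interior agent
    moves to the average of its neighbors. The maximum adjacent error
    (gap between neighbors) settles toward zero for equal endpoints."""
    rng = random.Random(seed)
    h = list(heights)
    for _ in range(steps):
        i = rng.randrange(1, len(h) - 1)
        h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h, max(abs(a - b) for a, b in zip(h, h[1:]))

h, err = simulate_async_chain([0, 5, -3, 8, 1, 0])
print([round(x, 3) for x in h], round(err, 3))
```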
- Title
- Perceptual methods for video coding.
- Creator
- Adzic, Velibor, Kalva, Hari, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work towards an optimal video compression algorithm based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%.
(An illustrative masking-driven quantization sketch follows this record.)
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004074
- Subject Headings
- Algorithms, Coding theory, Digital coding -- Data processing, Imaging systems -- Image quality, Perception, Video processing -- Data processing
- Format
- Document (PDF)
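The dissertation's masking models are psychophysically derived; purely as an illustration of the mechanism (spend fewer bits where masking is assumed to hide distortion), a per-block quantizer offset might look like the sketch below. All names and constants are hypothetical; only the 0-51 QP range is fixed by H.265.

```python
def qp_map(motion_map, base_qp=32, strength=6.0, cap=8):
    """Raise the HEVC quantization parameter (0..51) for blocks with
    strong motion (motion in [0, 1]), where motion masking is assumed
    to hide the extra quantization error."""
    return [[min(51, base_qp + min(cap, int(strength * m))) for m in row]
            for row in motion_map]

print(qp_map([[0.0, 0.2], [0.9, 1.0]]))  # static blocks keep base_qp
```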
- Title
- Radar cross section of an open-ended rectangular waveguide cavity: A massively parallel implementation applied to high-resolution radar cross section imaging.
- Creator
- Vann, Laura Dominick., Florida Atlantic University, Helmken, Henry, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis is concerned with adapting a sequential code that calculates the Radar Cross Section (RCS) of an open-ended rectangular waveguide cavity to a massively parallel computational platform. The primary motivation is to obtain wideband data over a large range of incident angles in order to generate a two-dimensional radar cross section image. Images generated from measured and computed data are compared to evaluate program performance. The computer used in this implementation is a MasPar MP-1, a single-instruction, multiple-data massively parallel computer consisting of 4,096 processors arranged in a two-dimensional mesh. The algorithm uses the mode matching method of analysis to match fields over the cavity aperture and obtain an expression for the scattered far field.
(A structural sketch of the frequency-angle sweep follows this record.)
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14984
- Subject Headings
- Radar cross sections, Algorithms--Data processing, Imaging systems
- Format
- Document (PDF)
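The mode-matching electromagnetics is beyond a short sketch, but the parallel structure the thesis exploits is easy to show: every (frequency, angle) point is an independent evaluation, and the resulting wideband, wide-angle grid can be turned into a crude 2-D image. The kernel below is a stand-in (two hypothetical point scatterers), not the waveguide-cavity computation.

```python
import numpy as np

def rcs_sweep(kernel, freqs, angles):
    """Fill a frequency-by-angle grid of complex scattering values; each
    point is independent, which is what makes an SIMD mapping natural."""
    return np.array([[kernel(f, a) for a in angles] for f in freqs])

def kernel(f, a):
    """Hypothetical stand-in: coherent sum over two point scatterers."""
    return sum(np.exp(-2j * np.pi * f * (x * np.cos(a) + y * np.sin(a)))
               for x, y in [(0.0, 0.0), (0.3, 0.1)])

grid = rcs_sweep(kernel, np.linspace(8e9, 12e9, 64) / 3e8,
                 np.linspace(-0.2, 0.2, 64))
image = np.abs(np.fft.ifft2(grid))   # rough down-range/cross-range image
print(image.shape)
```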
- Title
- Handprinted character recognition and Alopex algorithm analysis.
- Creator
- Du, Jian., Florida Atlantic University, Shankar, Ravi
- Abstract/Description
-
A novel neural network, trained with the Alopex algorithm to recognize handprinted characters, was developed in this research. It is an encoded fully connected multi-layer perceptron (EFCMP), consisting of one input layer, one intermediate layer, and one encoded output layer. Alopex, a stochastic algorithm for solving optimization problems, is used to supervise the training of the EFCMP and has been shown to accelerate the rate of convergence in the training procedure. Software simulation programs were developed for training, testing, and analyzing the performance of this EFCMP architecture. Several neural networks with different structures were developed and compared, and optimization of the Alopex algorithm was explored through simulations of the EFCMP training procedure with different parameter values for Alopex.
(A sketch of the encoded-output forward pass follows this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15012
- Subject Headings
- Algorithms, Neural networks (Computer science), Optical character recognition devices, Writing--Data processing, Image processing
- Format
- Document (PDF)
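A hedged sketch of the layer structure the abstract describes (input, intermediate, encoded output): here 5 sigmoid output units binary-code up to 32 classes, which is the point of the encoding, fewer output neurons than classes. In the thesis the weights would come from Alopex training; the shapes, random weights, and decoding below are my illustration.

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def efcmp_forward(x, w_hidden, w_out):
    """Forward pass: input -> intermediate -> encoded output layer, then
    decode the output bits to a class index."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w_out]
    return int("".join("1" if v > 0.5 else "0" for v in o), 2)

# Tiny random example: 32 features, 8 hidden units, 5-bit encoded output.
rng = random.Random(0)
w_hidden = [[rng.uniform(-1, 1) for _ in range(32)] for _ in range(8)]
w_out = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(5)]
print(efcmp_forward([rng.random() for _ in range(32)], w_hidden, w_out))
```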
- Title
- Shamir's secret sharing scheme using floating point arithmetic.
- Creator
- Finamore, Timothy., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
-
Implementing Shamir's secret sharing scheme using floating point arithmetic would provide a faster and more efficient secret sharing scheme due to the speed at which GPUs perform floating point arithmetic. However, with the loss of a finite field, the properties of a perfect secret sharing scheme are not immediately attainable. The goal is to analyze the plausibility of Shamir's secret sharing scheme using floating point arithmetic achieving the properties of a perfect secret sharing scheme, and to propose improvements to attain these properties. Experiments indicate that property 2 of a perfect secret sharing scheme, "Any k-1 or fewer participants obtain no information regarding the shared secret", is compromised when Shamir's secret sharing scheme is implemented with floating point arithmetic. These experimental results also suggest possible solutions and adjustments, one of which is selecting randomly generated points from a smaller interval in one of the proposed schemes of this thesis. Further experimental results indicate improvement using the scheme outlined. Possible attacks are run to test the desirable properties of the different schemes and reinforce the improvements observed in prior experiments.
(A minimal floating-point sharing sketch follows this record.)
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342048
- Subject Headings
- Signal processing, Digital techniques, Mathematics, Data encryption (Computer science), Computer file sharing, Security measures, Computer algorithms, Numerical analysis, Data processing
- Format
- Document (PDF)
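A minimal floating-point version of Shamir's scheme, close to what the abstract describes (coefficients drawn from a small interval, echoing the proposed adjustment); this sketch makes the precision and leakage questions the thesis studies easy to reproduce. Function names and parameters are illustrative.

```python
import random

def make_shares(secret, k, n, interval=1.0, seed=None):
    """Degree-(k-1) polynomial with random float coefficients; shares are
    (x, p(x)) points. Over floats there is no finite field, which is
    exactly where perfect secrecy starts to erode."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.uniform(-interval, interval) for _ in range(k - 1)]
    p = lambda x: sum(c * x ** i for i, c in enumerate(coeffs))
    return [(float(x), p(float(x))) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret
    (up to floating-point error)."""
    total = 0.0
    for i, (xi, yi) in enumerate(shares):
        li = 1.0
        for j, (xj, _) in enumerate(shares):
            if j != i:
                li *= xj / (xj - xi)
        total += yi * li
    return total

shares = make_shares(3.14159, k=3, n=5, seed=7)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice
```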
- Title
- Application level intrusion detection using a sequence learning algorithm.
- Creator
- Dong, Yuhong., Florida Atlantic University, Hsu, Sam, Rajput, Saeed, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
An unsupervised learning algorithm for application-level intrusion detection, named the Graph Sequence Learning Algorithm (GSLA), is proposed in this dissertation, and experiments demonstrate its effectiveness. As in most intrusion detection algorithms, the normal profile must be learned first. In GSLA, the normal profile is built using a session learning method combined with the one-way Analysis of Variance (ANOVA) method to determine the value of an anomaly threshold. In the proposed approach, a hash table stores the sparse data matrix, collected from a web transition log, in triple format rather than as an n-by-n matrix. Furthermore, in GSLA, the sequence learning matrix can be changed dynamically according to different volumes of data. The approach is therefore more efficient, easier to manipulate, and saves memory. To validate the effectiveness of the algorithm, extensive simulations have been conducted by applying GSLA to the homework submission system of our computer science and engineering department. The performance of GSLA is evaluated and compared with the traditional Markov Model (MM) and K-means algorithms. Specifically, three major experiments have been done: (1) A small data set is collected as sample data and applied to GSLA, MM, and K-means to illustrate the operation of the proposed algorithm and demonstrate the detection of abnormal behaviors. (2) The Random Walk-Through sampling method is used to generate a larger sample data set, and the resultant anomaly scores are classified into several clusters in order to visualize and demonstrate normal and abnormal behaviors with the K-means and GSLA algorithms. (3) Multiple professors' data sets are collected and used to build the normal profiles, and the ANOVA method is used to test the significant differences among professors' normal profiles. The GSLA algorithm can be packaged as a module and plugged into an IDS as an anomaly detection system.
(A sketch of the triple-format transition storage follows this record.)
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/12220
- Subject Headings
- Data mining, Parallel processing (Electronic computers), Computer algorithms, Computer security, Pattern recognition systems
- Format
- Document (PDF)
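GSLA itself is more involved, but the storage idea called out in the abstract, a hash table of (from, to) -> count triples instead of an n-by-n matrix, plus a toy anomaly score, can be sketched directly. The scoring here is an illustrative stand-in, not the GSLA/ANOVA thresholding.

```python
import math
from collections import defaultdict

def transition_counts(sessions):
    """Sparse page-transition 'matrix' as a hash table keyed by
    (from_page, to_page); memory grows with observed edges only."""
    counts = defaultdict(int)
    for pages in sessions:
        for a, b in zip(pages, pages[1:]):
            counts[(a, b)] += 1
    return counts

def session_score(pages, counts):
    """Toy anomaly score: sum of negative log-frequencies, so rare or
    unseen transitions push the score up."""
    total = sum(counts.values()) or 1
    return sum(-math.log((counts.get((a, b), 0) + 1) / (total + 1))
               for a, b in zip(pages, pages[1:]))

normal = [["login", "home", "submit"], ["login", "home", "grades"]]
c = transition_counts(normal)
print(session_score(["login", "home", "submit"], c))
print(session_score(["login", "admin", "dump"], c))   # scores higher
```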
- Title
- A general pressure based Navier-Stokes solver in arbitrary configurations.
- Creator
- Ke, Zhao Ping., Florida Atlantic University, Chow, Wen L., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
A pressure-based computer program for the general Navier-Stokes equations has been developed. A body-fitted coordinate system is employed to handle flows with complex geometry. A non-staggered grid is used, with pressure oscillation eliminated by a special pressure interpolation scheme. The hybrid algorithm is adopted to discretize the equations, the finite-difference equations are solved by TDMA, and the whole solution is obtained through an under-relaxed iterative process. The pressure field is evaluated using the compressible form of the SIMPLE algorithm. To test the accuracy and efficiency of the computer program, problems of incompressible and compressible flow were calculated. As examples of inviscid compressible flow, flows over bumps with 10% and 4% thickness are computed at incoming Mach numbers of M∞ = 0.5 (subsonic flow), M∞ = 0.675 (transonic flow), and M∞ = 1.65 (supersonic flow). A laminar subsonic flow over a bump with 5% thickness at M∞ = 0.5 is also calculated with the full energy equation considered. With the help of the k-epsilon model incorporating the wall function, computations of two turbulent incompressible flows are carried out: one is the flow past a flat plate, and the other the flow over a flame holder. As an application to three-dimensional flow, the laminar flow in a driven cubic cavity is calculated. All numerical results obtained here are compared with experimental data or other numerical results available in the literature.
(A sketch of the TDMA line solver named above follows this record.)
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12330
- Subject Headings
- Navier-Stokes equations--Numerical solutions--Data processing, Algorithms, Flows (Differential dynamical systems)
- Format
- Document (PDF)
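The abstract names TDMA (the Thomas algorithm) as the line solver. Since that piece is standard, here is a plain-Python version, no pivoting, which is acceptable for the diagonally dominant systems such discretizations produce; the 1-D Poisson test case is my own.

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (a[0] unused),
    b: main diagonal, c: super-diagonal (c[-1] unused), d: right side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -x[i-1] + 2x[i] - x[i+1] = 1: the classic 1-D Poisson stencil.
print(tdma([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1]))
# -> [2.0, 3.0, 3.0, 2.0]
```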
- Title
- A visual perception threshold matching algorithm for real-time video compression.
- Creator
- Noll, John M., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A barrier to the use of digital imaging is the vast storage requirement involved. One solution is compression. Since imagery is ultimately subject to human visual perception, it is worthwhile to design and implement an algorithm that performs compression as a function of perception. The underlying premise of the thesis is that if the algorithm closely matches visual perception thresholds, then its coded images contain only the components necessary to recreate the perception of the visual stimulus. Psychophysical test results are used to map the thresholds of visual perception and to develop an algorithm that codes only the image content exceeding those thresholds. The image coding algorithm is simulated in software to demonstrate compression of a single-frame image, and the simulation results are provided. The algorithm is also adapted to real-time video compression for implementation in hardware.
(A bare threshold-coding sketch follows this record.)
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14857
- Subject Headings
- Image processing--Digital techniques, Computer algorithms, Visual perception, Data compression (Computer science)
- Format
- Document (PDF)
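The thesis's thresholds come from psychophysical testing; as a bare illustration of the premise (code only what exceeds a perception threshold), assume a per-coefficient threshold table. The values below are made up.

```python
def threshold_code(coeffs, thresholds):
    """Zero every transform coefficient whose magnitude falls below its
    visual-perception threshold; what remains is what gets coded."""
    return [c if abs(c) >= t else 0.0 for c, t in zip(coeffs, thresholds)]

coeffs = [141.0, -9.2, 4.1, 0.7, -0.3]       # e.g., a row of DCT terms
thresholds = [1.0, 2.0, 4.0, 8.0, 16.0]      # hypothetical threshold map
print(threshold_code(coeffs, thresholds))    # -> [141.0, -9.2, 4.1, 0.0, 0.0]
```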
- Title
- An Empirical Study of Performance Metrics for Classifier Evaluation in Machine Learning.
- Creator
- Bruhns, Stefan, Khoshgoftaar, Taghi M., Florida Atlantic University
- Abstract/Description
-
A variety of classifiers for solving classification problems is available from the domain of machine learning. Commonly used classifiers include support vector machines, decision trees, and neural networks. These classifiers can be configured by modifying internal parameters. The large number of available classifiers and the different configuration possibilities result in a large number of combinations of classifier and configuration settings, leaving the practitioner with the problem of evaluating the performance of different classifiers. This problem can be solved by using performance metrics. However, the large number of available metrics causes difficulty in deciding which metrics to use and in comparing classifiers on the basis of multiple metrics. This paper uses the statistical method of factor analysis to investigate the relationships between several performance metrics and introduces the concept of relative performance, which has the potential to ease the process of comparing several classifiers. The relative performance metric is also used to evaluate different support vector machine classifiers and to determine whether the default settings in the Weka data mining tool are reasonable.
(A compact reference implementation of common metrics follows this record.)
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012508
- Subject Headings
- Machine learning, Computer algorithms, Pattern recognition systems, Data structures (Computer science), Kernel functions, Pattern perception--Data processing
- Format
- Document (PDF)
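Before any factor analysis, each metric is computed per classifier from a confusion matrix; here is a compact reference implementation of a few common ones (the paper evaluates a broader set, and the example counts are invented).

```python
def metrics(tp, fp, tn, fn):
    """A handful of standard classifier performance metrics."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0       # a.k.a. sensitivity
    spec = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "specificity": spec, "f1": f1,
            "g_mean": (rec * spec) ** 0.5}

print(metrics(tp=40, fp=10, tn=930, fn=20))
```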
- Title
- Parallel architectures and algorithms for digital filter VLSI implementation.
- Creator
- Desai, Pratik Vishnubhai., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
-
In many scientific and signal processing applications, there are increasing demands for large-volume and high-speed computations, which call not only for high-speed, low-power computing hardware, but also for novel approaches in developing new algorithms and architectures. This thesis is concerned with the development of such architectures and algorithms, suitable for the VLSI implementation of recursive and nonrecursive one-dimensional digital filters using multiple slower processing elements. As background for the development, vectorization techniques such as state-space modeling, block processing, and look-ahead computation are introduced. Concurrent architectures such as systolic arrays and wavefront arrays, and appropriate parallel filter realizations such as lattice, all-pass, and wave filters, are reviewed. A hardware-efficient systolic array architecture, termed the Multiplexed Block-State Filter, is proposed for the high-speed implementation of lattice and direct realizations of digital filters. The thesis also proposes a new simplified algorithm, the Alternate Pole Pairing Algorithm, for realizing an odd-order recursive filter as the sum of two all-pass filters. The performance of the proposed schemes is verified through numerical examples and simulation results.
(A generic block-state filtering sketch follows this record.)
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15155
- Subject Headings
- Integrated circuits--Very large scale integration, Parallel processing (Electronic computers), Computer network architectures, Algorithms (Data processing), Digital integrated circuits
- Format
- Document (PDF)
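The Multiplexed Block-State Filter itself is not reproduced in this listing, but the generic block-state idea it builds on, advancing the filter state L samples at a time so that slower parallel PEs can share the work, can be sketched with numpy. The L-step state map is textbook; the resonator coefficients are hypothetical.

```python
import numpy as np

def block_state_filter(A, B, C, D, x, L):
    """Process input x through the state-space filter (A, B, C, D) in
    blocks of L samples: outputs use the running recursion, while the
    state jumps a whole block at once via s <- A^L s + K x_block."""
    AL = np.linalg.matrix_power(A, L)
    K = np.hstack([np.linalg.matrix_power(A, L - 1 - j) @ B for j in range(L)])
    s = np.zeros((A.shape[0], 1))
    y = []
    for k in range(0, len(x) - L + 1, L):
        blk = np.array(x[k:k + L]).reshape(L, 1)
        si = s
        for j in range(L):                       # per-sample outputs
            y.append((C @ si + D * blk[j]).item())
            si = A @ si + B * blk[j]
        s = AL @ s + K @ blk                     # one-shot block advance
    return y

# Second-order resonator example (hypothetical coefficients, poles at 0.9).
A = np.array([[1.6, -0.81], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = 0.5
print(block_state_filter(A, B, C, D, [1.0] + [0.0] * 7, L=4))
```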
- Title
- Evolutionary algorithms for design and control of material handling and manufacturing systems.
- Creator
- Kanwar, Pankaj., Florida Atlantic University, Han, Chingping (Jim), College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The crucial goal of enhancing industrial productivity has led researchers to look for robust and efficient solutions to problems in production systems. Evolving technologies have also led to an immediate demand for algorithms that can exploit these developments. During the last three decades there has been growing interest in algorithms that rely on analogies to natural processes. The best-known algorithms in this class include evolutionary programming, genetic algorithms, evolution strategies, and neural networks. The emergence of massively parallel systems has made these inherently parallel algorithms of high practical interest, and the advantages they offer over classical techniques have resulted in their wide acceptance. These algorithms have been applied to a large class of interesting problems for which no efficient or reasonably fast algorithm exists. This thesis extends their use to the domain of production research: problems of high practical interest in this domain are solved using the subclass of these algorithms based on the principle of evolution. The problems include the flowpath design of AGV systems and vehicle routing in a transportation system. Furthermore, a Genetic-Based Machine Learning (GBML) system has been developed for optimal scheduling and control of a job shop.
(A bare-bones genetic-algorithm loop is sketched after this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15025
- Subject Headings
- Industrial productivity--Data processing, Algorithms, Genetic algorithms, Motor vehicles--Automatic location systems, Materials handling--Computer simulation, Manufacturing processes--Computer simulation
- Format
- Document (PDF)
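Routing and flowpath encodings are problem-specific, but the evolutionary loop such applications share is compact. Here is a bare-bones genetic algorithm over bitstrings (tournament selection, one-point crossover, bit-flip mutation); all constants are illustrative, and the bit-matching objective merely stands in for a routing cost.

```python
import random

def ga_minimize(fitness, n_bits, pop=30, gens=100, pmut=0.02, seed=0):
    """Evolve a population of bitstrings toward low fitness values."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        def pick():                       # binary tournament selection
            a, b = rng.sample(P, 2)
            return a if fitness(a) < fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            nxt.append([g ^ 1 if rng.random() < pmut else g for g in child])
        P = nxt
    return min(P, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = ga_minimize(lambda s: sum(a != b for a, b in zip(s, target)), 10)
print(best)   # should match (or nearly match) the target pattern
```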