Current Search: Department of Computer and Electrical Engineering and Computer Science
- Title
- Workspace evaluation and kinematic calibration of Stewart platform.
- Creator
- Wang, Jian., Florida Atlantic University, Masory, Oren, Roth, Zvi S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Parallel manipulators have special characteristics that contrast with those of traditional serial robots. The Stewart platform is a typical six-degree-of-freedom fully parallel robot manipulator. The goal of this research is to enhance the accuracy and the restricted workspace of the Stewart platform. The first part of the dissertation discusses the effect of three kinematic constraints (link length limitation, joint angle limitation and link interference) and of the kinematic parameters on the workspace of the platform. An algorithm considering the above constraints for the determination of the volume and the envelope of the Stewart platform workspace is developed. The workspace volume is used as a criterion to evaluate the effects of the platform dimensions and kinematic constraints on the workspace and the dexterity of the Stewart platform. The analysis and algorithm can be used as a design tool to select dimensions, actuators and joints in order to maximize the workspace. The remaining parts of the dissertation focus on accuracy enhancement. Manufacturing tolerances, installation errors and link offsets cause deviations with respect to the nominal parameters of the platform. As a result, if nominal parameters are used, the resulting platform pose will be inaccurate. An accurate kinematic model of the Stewart platform which accommodates all manufacturing and installation errors is developed. In order to evaluate the effects of the above factors on the accuracy, algorithms for the forward and inverse kinematics solutions of the accurate model are developed. The effects of different manufacturing tolerances and installation errors on the platform accuracy are investigated based on this model. Simulation results provide insight into the expected accuracy and indicate the major factors contributing to the inaccuracies. In order to enhance the accuracy, there is a need to calibrate the platform, that is, to determine the actual values of the kinematic parameters (parameter identification) and to incorporate these into the inverse kinematic solution (accuracy compensation). An error-model-based algorithm for parameter identification is developed. Procedures for the formulation of the identification Jacobian and for accuracy compensation are presented. The algorithms are tested using simulated measurements in which realistic measurement noise is included. As a result, pose errors of the platform are significantly reduced.
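The calibration problem above starts from the platform's nominal inverse kinematics, which map a commanded pose to the six leg lengths. A minimal sketch, with invented joint geometry and pose (nothing here is taken from the dissertation):

```python
# Inverse kinematics of an idealized Stewart platform: l_i = ||t + R p_i - b_i||.
# Geometry (base/platform joint positions) is hypothetical, for illustration only.
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plat_pts, t, R):
    """Length of each of the six legs for platform pose (t, R)."""
    return np.linalg.norm(t + plat_pts @ R.T - base_pts, axis=1)

# Hexagonal joint layouts (hypothetical dimensions, metres).
ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)
base_pts = np.c_[2.0 * np.cos(ang_b), 2.0 * np.sin(ang_b), np.zeros(6)]
plat_pts = np.c_[1.0 * np.cos(ang_p), 1.0 * np.sin(ang_p), np.zeros(6)]

pose_t = np.array([0.1, -0.05, 1.5])   # platform position
pose_R = rot_zyx(0.02, 0.03, 0.1)      # platform orientation
print(leg_lengths(base_pts, plat_pts, pose_t, pose_R))
```

A workspace test in the spirit of the first part of the dissertation follows directly: a pose belongs to the workspace only if every computed leg length stays within the actuator stroke limits (with the joint-angle and link-interference constraints also satisfied).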
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12316
- Subject Headings
- Robots--Control systems, Manipulators (Mechanism), Robotics--Calibration
- Format
- Document (PDF)
- Title
- A VLSI implementable learning algorithm.
- Creator
- Ruiz, Laura V., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied to the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis are given together with software simulations. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits and integrated circuit design is not fundamental to accomplishing a task that a few years ago would have required a large team of specialized experts in many fields. This may appeal to researchers from a wide range of backgrounds, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are basically software simulations of the algorithms run on a single CPU executing sequential simulations of a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm will be able to test it as intended, on a parallel hardware architecture.
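For readers unfamiliar with the perceptron learning rule the abstract relies on, here is a plain software sketch; the toy data and learning rate are invented, and the thesis implements the rule in hardware rather than in Python:

```python
# Single-layer perceptron learning a linearly separable classification.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                # 2-D inputs
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)     # separable labels

w, b, eta = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # misclassified example
            w += eta * yi * xi       # perceptron weight update
            b += eta * yi
            errors += 1
    if errors == 0:                  # converged: all examples correct
        break
print(epoch, w, b)
```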
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12453
- Subject Headings
- Integrated circuits--Very large scale integration--Design and construction, Neural networks (Computer science)--Design and construction, Computer algorithms, Machine learning
- Format
- Document (PDF)
- Title
- Web log analysis: Experimental studies.
- Creator
- Yang, Zhijian., Florida Atlantic University, Zhong, Shi, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- With the rapid growth of the World Wide Web, web performance is becoming increasingly important for modern businesses, especially for e-commerce. Web server logs contain potentially useful empirical data for improving web server performance. In this thesis, we discuss topics related to the analysis of a website's server logs for enhancing server performance, which will benefit business applications. Markov chain models are used, allowing us to dynamically model page sequences extracted from server logs. My experimental studies contain three major parts. First, I present a workload characterization study of the website used for my research. Second, Markov chain models are constructed for both page request and page-visiting sequence prediction. Finally, I carefully evaluate the constructed models using an independent test data set taken from server logs on a different day. The research results demonstrate the effectiveness of Markov chain models for characterizing page-visiting sequences.
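A first-order Markov chain of the kind described can be estimated directly from logged page sequences: count transitions, normalize, and predict the most probable successor. A sketch with invented sessions (real input would come from parsed server logs):

```python
# Estimate a first-order Markov chain over page visits and predict the next page.
from collections import defaultdict

sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "search", "products", "cart"],
    ["home", "products", "products", "checkout"],
]

counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        counts[cur][nxt] += 1        # transition frequency cur -> nxt

def next_page(page):
    """Most probable successor under the estimated chain."""
    succ = counts[page]
    total = sum(succ.values())
    probs = {p: c / total for p, c in succ.items()}
    return max(probs, key=probs.get), probs

print(next_page("products"))   # ('cart', {'cart': 0.5, ...})
```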
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13202
- Subject Headings
- Markov processes, Operations research, Business enterprises--Computer networks, Electronic commerce--Data processing
- Format
- Document (PDF)
- Title
- An artificial neural network architecture for interpolation, function approximation, time series modeling and control applications.
- Creator
- Luebbers, Paul Glenn., Florida Atlantic University, Pandya, Abhijit S., Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Two new artificial neural network architectures, the Power Net (PWRNET) and the Orthogonal Power Net (OPWRNET), have been developed. Based on the Taylor series expansion of the hyperbolic tangent function, these novel architectures can approximate multi-input multi-layer artificial networks while requiring only a single layer of hidden nodes. This allows a compact network representation with only one layer of hidden-layer weights. The resulting trained network can be expressed as a polynomial function of the input nodes. Applications which cannot be implemented with conventional artificial neural networks, due to their intractable nature, can be developed with these network architectures. The degree of nonlinearity of the network can be directly controlled by adjusting the number of hidden-layer nodes, thus avoiding the over-fitting problems that restrict generalization. The learning algorithm used for adapting the network is the familiar error back-propagation training algorithm. Other learning algorithms may be applied, and since only one hidden layer is to be trained, the training performance of the network is expected to be comparable to or better than that of conventional multi-layer feed-forward networks. The new architecture is explored by applying OPWRNET to classification, function approximation and interpolation problems. These applications show that the OPWRNET has performance comparable to multi-layer perceptrons. The OPWRNET was also applied to the prediction of noisy time series and the identification of nonlinear systems. The resulting trained networks, for system identification tasks, can be expressed directly as discrete nonlinear recursive polynomials. This characteristic was exploited in the development of two new neural-network-based nonlinear control algorithms, the Linearized Self-Tuning Controller (LSTC) and a variation of a Neural Adaptive Controller (NAC). These control algorithms are compared to a linear self-tuning controller and an artificial-neural-network-based Inverse Model Controller. The advantages of these new controllers are discussed.
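The architectural idea rests on the Taylor series of the hyperbolic tangent, which lets a sigmoidal node be rewritten as a polynomial in its input. A quick numerical check of that expansion (the thesis's exact network construction is not reproduced here):

```python
# Taylor series of tanh about 0, through the x**7 term:
# tanh(x) = x - x^3/3 + 2x^5/15 - 17x^7/315 + ...
import numpy as np

def tanh_taylor(x):
    return x - x**3 / 3 + 2 * x**5 / 15 - 17 * x**7 / 315

x = np.linspace(-1.0, 1.0, 5)
print(np.tanh(x))
print(tanh_taylor(x))   # close to tanh on [-1, 1]; diverges for larger |x|
```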
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12357
- Subject Headings
- Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- An empirical study of analogy-based software quality classification models.
- Creator
- Ross, Fletcher Douglas., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Time and cost are among the most important elements in a software project. By using time and resources efficiently we can reduce costs. Any program can potentially contain faults. If we can identify those program modules that have better quality and are less likely to be fault-prone, then we can reduce the effort and cost required in testing these modules. This thesis presents a series of studies evaluating the use of Case-Based Reasoning (CBR) as an effective method for classifying program modules based upon their quality. We believe that this is the first time that the Mahalanobis distance, a distance measure utilizing the covariance matrix of the independent variables which accounts for the multicollinearity of the data without the need for preprocessing, and data clustering, wherein the data are separated into groups based on a dependent variable, have been used as modeling techniques in conjunction with CBR.
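The Mahalanobis distance mentioned above is straightforward to use as the similarity measure in a nearest-case classifier. A toy sketch under invented data and labels, not the thesis's module metrics:

```python
# Case-based classification with the Mahalanobis distance
# d(x, c) = sqrt((x - c)^T S^-1 (x - c)), S = covariance of the predictors.
import numpy as np

def mahalanobis(x, c, S_inv):
    d = x - c
    return np.sqrt(d @ S_inv @ d)

rng = np.random.default_rng(1)
cases = rng.normal(size=(50, 3))                       # stored case library
labels = (cases[:, 0] + cases[:, 1] > 0).astype(int)   # fault-prone or not
S_inv = np.linalg.inv(np.cov(cases, rowvar=False))

def classify(x):
    dists = [mahalanobis(x, c, S_inv) for c in cases]
    return labels[int(np.argmin(dists))]               # label of nearest case

print(classify(np.array([0.5, 0.2, -0.1])))
```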
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12817
- Subject Headings
- Modular programming, Computer software--Quality control, Software measurement
- Format
- Document (PDF)
- Title
- A feedback-based multimedia synchronization technique for distributed systems.
- Creator
- Ehley, Lynnae Anne., Florida Atlantic University, Ilyas, Mohammad, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Multimedia applications incorporate the use of more than one type of media, i.e., voice, video, data, text and image. With the advances in high-speed communication, the ability to transmit multimedia is becoming widely available. One of the means of transport for multimedia in distributed networks is the Broadband Integrated Services Digital Network (B-ISDN). B-ISDN supports the transport of large volumes of data with a low error rate. It also handles the burstiness of multimedia traffic by providing dynamic bandwidth allocation. When multimedia is requested for transport in a distributed network, a different Quality of Service (QOS) may be required for each type of media. For example, video can withstand more errors than voice. In order to provide the most efficient form of transfer, media with different QOS are sent over different channels. By using different channels for transport, jitter can impose skews on the temporal relations between the media. Jitter is caused by errors and buffering delays. Since B-ISDN uses Asynchronous Transfer Mode (ATM) as its transfer mode, the jitter that is incurred can be assumed to be bounded if traffic management principles such as admission control and resource reservation are employed. Another network that can assume bounded buffering is the 16 Mbps token-ring LAN when the LAN Server (LS) Ultimedia(TM) software is applied over the OS/2 LAN Server(TM) (using OS/2(TM)). LS Ultimedia(TM) reserves critical resources such as disk, server processor, and network resources for multimedia use. In addition, it also enforces admission control. Since jitter is bounded on the networks chosen, buffers can be used to realign the temporal relations in the media. This dissertation presents a solution to this problem by proposing a Feedback-based Multimedia Synchronization Technique (FMST) to correct and compensate for the jitter that is incurred when media are received over high-speed communication channels and played back in real time. FMST has been implemented at the session layer for the playback of the streams. A personal computer was used to perform their synchronized playback from a 16 Mbps token-ring and from a simulated B-ISDN network.
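The feedback idea can be sketched in a few lines: measure the skew between two streams' presentation clocks during playback and feed it back as a playout-point correction for the lagging stream. This is only a sketch in the spirit of FMST, not the dissertation's actual algorithm, and the tolerance value is invented:

```python
# Feedback-style playout correction between two media streams.
def playout_correction(audio_clock_ms, video_clock_ms, tolerance_ms=80):
    """Adjustment (ms) to apply to the video playout point.

    Positive: video lags audio, advance it (drop/shrink buffered frames).
    Negative: video leads audio, hold frames in the buffer longer.
    """
    skew = audio_clock_ms - video_clock_ms
    return 0 if abs(skew) <= tolerance_ms else skew

# Video is 120 ms behind audio: feedback says advance video playout 120 ms.
print(playout_correction(5000, 4880))
```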
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12382
- Subject Headings
- Multimedia systems, Broadband communication systems, Data transmission systems, Integrated services digital networks, Electronic data processing--Distributed processing
- Format
- Document (PDF)
- Title
- A fault-tolerant memory architecture for storing one hour of D-1 video in real time on long polyimide tapes.
- Creator
- Monteiro, Pedro Cox de Sousa., Florida Atlantic University, Glenn, William E., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Research is under way to fabricate large-area thin-film transistor arrays produced on a thin polyimide substrate. The polyimide substrate is available in long, thirty-centimeter-wide rolls of tape, and lithography hardware is being developed to expose hundreds of meters of this tape with electrically addressable light modulators which can resolve 2 µm features. A fault-tolerant memory architecture is proposed that is capable of storing one hour of D-1 component digital video (almost 10^12 bits) in real time, on eight two-hundred-meter-long tapes. Appropriate error correcting codes and error concealment are proposed to compensate for drop-outs resulting from manufacturing defects, so as to yield video images with error rates low enough to survive several generations of copies.
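As a minimal stand-in for the "appropriate error correcting codes" the abstract mentions (the actual codes chosen for the tape memory are not specified here), a single-error-correcting Hamming(7,4) code shows how a drop-out can be repaired from a syndrome:

```python
# Hamming(7,4): encode 4 data bits into 7, correct any single bit flip.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(m):
    return (m @ G) % 2

def correct(r):
    s = (H @ r) % 2                    # syndrome identifies the error column
    if s.any():
        err = int(np.argmax((H.T == s).all(axis=1)))
        r = r.copy()
        r[err] ^= 1                    # flip the corrupted bit back
    return r

c = encode(np.array([1, 0, 1, 1]))
c[2] ^= 1                              # simulate a drop-out
print(correct(c))                      # codeword restored
```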
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14869
- Subject Headings
- Polyimides, Computer architecture, Memory hierarchy (Computer science), Fault-tolerant computing
- Format
- Document (PDF)
- Title
- The human face recognition problem: A solution based on third-order synthetic neural networks and isodensity analysis.
- Creator
- Uwechue, Okechukwu A., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Third-order synthetic neural networks are applied to the recognition of isodensity facial images extracted from digitized grayscale facial images. A key property of neural networks is their ability to recognize invariances and extract essential parameters from complex high-dimensional data. In pattern recognition an input image must be recognized regardless of its position, size, and angular orientation. In order to achieve this, the neural network needs to learn the relationships between the input pixels. Pattern recognition requires the nonlinear subdivision of the pattern space into subsets representing the objects to be identified. Single-layer neural networks can only perform linear discrimination. However, multilayer first-order networks and high-order neural networks can both achieve this. The most significant advantage of a higher-order net over a traditional multilayer perceptron is that invariances to 2-dimensional geometric transformations can be incorporated into the network and need not be learned through prolonged training with an extensive family of exemplars. It is shown that a third-order network can be used to achieve translation-, scale-, and rotation-invariant recognition with a significant reduction in training time over other neural net paradigms such as the multilayer perceptron. A model based on an enhanced version of the Widrow-Hoff training algorithm and a new momentum paradigm are introduced and applied to the complex problem of human face recognition under varying facial expressions. Arguments for the use of isodensity information in the recognition algorithm are put forth, and it is shown how the technique of coarse coding is applied to reduce the memory required for computer simulations. The combination of isodensity information and neural networks for image recognition is described and its merits over other image recognition methods are explained. It is shown that isodensity information coupled with the use of an "adaptive threshold strategy" (ATS) yields a system that is relatively impervious to image contrast noise. The new momentum paradigm produces much faster convergence rates than ordinary momentum and renders the network behaviour independent of its training parameters over a broad range of parameter values.
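The built-in invariance claim can be made concrete with a small example: a third-order feature that sums products of pixel triples at fixed relative offsets depends only on those offsets, so it is unchanged by cyclic translation of the pattern. This is a 1-D toy version of the idea; the dissertation works with 2-D images and also handles scale and rotation:

```python
# A translation-invariant third-order feature: sum of x[i]*x[i+d1]*x[i+d2].
import numpy as np

def third_order_feature(x, d1, d2):
    n = len(x)
    return sum(x[i] * x[(i + d1) % n] * x[(i + d2) % n] for i in range(n))

x = np.array([0, 1, 1, 0, 1, 0, 0, 0], dtype=float)
shifted = np.roll(x, 3)                                   # translated pattern
print(third_order_feature(x, 1, 3),
      third_order_feature(shifted, 1, 3))                 # equal values
```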
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12464
- Subject Headings
- Image processing, Face perception, Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- iVESTA: Interactive Data Visualization and Analysis for Drive Test Data Evaluation.
- Creator
- Lee, Yongsuk, Zhu, Xingquan, Pandya, Abhijit S., Hsu, Sam, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis, a practical solution for drive test data evaluation and a real application are studied. We propose a system framework to project high-dimensional Drive Test Data (DTD) to well-organized web pages, such that users can visually review phone performance with respect to different factors. The proposed application, iVESTA (interactive Visualization and Evaluation System for driven Test dAta), employs a web-based architecture which enables users to upload DTD and immediately visualize the test results and observe phone and network performance with respect to different factors such as dropped call rate, signal quality, vehicle speed, handover and network delays. iVESTA provides practical solutions for mobile phone manufacturers and network service providers to perform comprehensive studies of their products from real-world DTD.
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012532
- Subject Headings
- Information visualization--Data processing, Object-oriented programming (Computer science), Information technology--Management, Application software--Development
- Format
- Document (PDF)
- Title
- VoIP Network Security and Forensic Models using Patterns.
- Creator
- Pelaez, Juan C., Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Voice over Internet Protocol (VoIP) networks are becoming the most popular telephony systems in the world. However, studies of the security of VoIP networks are still in their infancy. VoIP devices and networks are commonly attacked, and it is therefore necessary to analyze the threats against the converged network and the techniques that exist today to stop or mitigate these attacks. We also need to understand what evidence can be obtained from the VoIP system after an attack has occurred. Many of these attacks occur in similar ways in different contexts or environments. Generic solutions to these issues can be expressed as patterns. A pattern can be used to guide the design or simulation of VoIP systems as an abstract solution to a problem in this environment. Patterns have shown their value in developing good quality software, and we expect that their application to VoIP will also prove valuable for building secure systems. This dissertation presents a variety of patterns (architectural, attack, forensic and security patterns). These patterns will help forensic analysts, as well as secure systems developers, because they provide a systematic approach to structuring the required information and help in understanding system weaknesses. The patterns will also allow us to specify, analyze and implement network security investigations for different architectures. The pattern system uses object-oriented modeling (Unified Modeling Language) as a way to formalize the information and dynamics of attacks and systems.
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012576
- Subject Headings
- Internet telephony--Security measures, Computer network protocols, Global system for mobile communications, Software engineering
- Format
- Document (PDF)
- Title
- Visualization of Impact Analysis on Configuration Management Data for Software Process Improvement.
- Creator
- Lo, Christopher Hoi-Yin, Huang, Shihong, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The software development process is an incremental and iterative activity. Source code is constantly altered to reflect changing requirements, to respond to testing results, and to address problem reports. Proper software measurement, which derives meaningful numeric values for some attributes of a software product or process, can help in identifying problem areas and development bottlenecks. Impact analysis is the evaluation of the risks associated with change requests or problem reports, including estimates of effects on resources, effort, and schedule. This thesis presents a methodology called VITA for applying software analysis techniques to configuration management repository data with the aim of identifying the impact on file changes due to change requests and problem reports. The repository data can be analyzed and visualized in a semi-automated manner according to user-selectable criteria. The approach is illustrated with a model problem concerning software process improvement of an embedded software system in the context of performing high-quality software maintenance.
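The kind of impact analysis described can be sketched as a grouping of file revisions by the change request or problem report they cite; co-change counts then hint at impact coupling between files. The log records below are invented and are not VITA's actual data model:

```python
# Group file revisions by change request / problem report and count co-changes.
from collections import defaultdict
from itertools import combinations

log = [
    ("CR-101", "gui/panel.c"),
    ("CR-101", "core/engine.c"),
    ("PR-207", "core/engine.c"),
    ("PR-207", "core/engine.h"),
    ("CR-101", "core/engine.h"),
]

files_by_request = defaultdict(set)
for req, path in log:
    files_by_request[req].add(path)

co_change = defaultdict(int)
for files in files_by_request.values():
    for a, b in combinations(sorted(files), 2):
        co_change[(a, b)] += 1             # impact coupling between file pairs

print(files_by_request["CR-101"])          # files touched by one request
print(max(co_change, key=co_change.get))   # most tightly coupled pair
```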
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012535
- Subject Headings
- Software measurement, Software engineering--Quality control, Data mining--Quality control
- Format
- Document (PDF)
- Title
- Visualization of search engine query result using region-based document model on XML documents.
- Creator
- Parikh, Sunish Umesh., Florida Atlantic University, Horton, Thomas, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Information access systems have traditionally focused on the retrieval of documents consisting of titles and abstracts. The underlying assumptions of such systems are not necessarily appropriate for full-text, structured documents. Context and structure should play an important role in information access from full-text document collections. When a system retrieves a document in response to a query, it is important to indicate not only how strong the match is (e.g., how many terms from the query are present in the document), but also how frequent each term is, how each term is distributed in the text and where the terms overlap within the document. This information is especially important in long texts, since it is less clear how the terms in the query contribute to the ranking of a long text than to that of a short abstract. This thesis researches the application of information visualization techniques to the problem of navigating and finding information in XML files, which are becoming available in increasing quantities on the World Wide Web (WWW). It provides a methodology for presenting detailed information about a specific topic while also presenting a complete overview of all the information available. A prototype has been developed for the visualization of search query results. Limitations of the prototype and future directions of work are also discussed.
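The per-region term-distribution idea can be sketched directly: split a document into fixed-size regions and count each query term's hits per region, so a long document can show where matches cluster. The document and query here are toy examples, and the thesis's region-based XML model is richer than this:

```python
# Term-distribution profile of a document across fixed-size regions.
import re

doc = ("xml defines elements and attributes. a search engine ranks "
       "documents. elements nest inside elements. queries match terms.")
query = ["elements", "search"]

words = re.findall(r"\w+", doc.lower())
n_regions = 4
size = -(-len(words) // n_regions)        # ceiling division
regions = [words[i * size:(i + 1) * size] for i in range(n_regions)]

for term in query:
    row = [sum(w == term for w in region) for region in regions]
    print(f"{term:>8}: {row}")            # elements: [1, 0, 2, 0]
```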
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12694
- Subject Headings
- XML (Document markup language), Web search engines
- Format
- Document (PDF)
- Title
- Visualization as a Qualitative Method for Analysis of Data from Location Tracking Technologies.
- Creator
- Mani, Mohan, VanHilst, Michael, Pandya, Abhijit S., Hsu, Sam, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- One of the biggest factors in the quest for better wireless communication is cellular call handoff, which in turn is a function of geographic location. In this thesis, our fundamental goal was to demonstrate the value added by spatial data visualization techniques for the analysis of geo-referenced data from two different location tracking technologies: GPS and cellular systems. Through our efforts, we unearthed some valuable and surprising insights from the data being analyzed that led to interesting observations about the data itself, as opposed to the entity, or entities, that the data is supposed to describe. In doing so, we underscored the value added by spatial data visualization techniques even in the incipient stages of analysis of geo-referenced data from cellular networks. We also demonstrated the value of visualization techniques as a verification tool to verify the results of analysis done through other methods, such as statistical analysis.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012536
- Subject Headings
- Mobile communication systems, Algorithms--Data analysis, Radio--Transmitters and transmissions, Code division multiple access
- Format
- Document (PDF)
- Title
- An intelligent neural network forecaster to predict the Standard & Poor 500's index.
- Creator
- Shah, Sulay Bipin., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this thesis we present an intelligent forecaster based on neural network technology to capture the future path of the market indicator. This thesis is about the development of a new methodology in financial forecasting. An effort is made to develop a neural network forecaster using financial indicators as the input variables. A complex recurrent neural network is used to capture the behavior of the nonlinear characteristics of the S&P 500. The main outcome of this research is a systematic way of constructing a forecaster for the nonlinear and non-stationary data series of the S&P 500 that leads to very good out-of-sample prediction. The results of the training and testing of the network are presented along with conclusions. The tool used for the validation of this research is "Brainmaker". This thesis also contains a brief survey of available tools for financial forecasting.
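The windowing that underlies this kind of forecaster (predict the next value from a window of lagged inputs, then score out-of-sample) can be shown with a linear least-squares fit standing in for the recurrent network, which is not reproduced here; the series is synthetic, not S&P 500 data:

```python
# Lagged-window one-step-ahead forecasting with an out-of-sample split.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
series = 0.01 * t + np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)

lags = 5
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]                          # target: the value after each window

split = 300                                # train on the past, test on the future
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ w
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```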
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15741
- Subject Headings
- Neural networks (Computer science), Stock price forecasting, Time-series analysis
- Format
- Document (PDF)
- Title
- The cochlea: A signal processing paradigm.
- Creator
- Barrett, Raymond L. Jr., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The cochlea provides frequency selectivity for acoustic input signal processing in mammals. The excellent performance of human hearing for speech processing leads to examination of the cochlea as a paradigm for signal processing. The components of the hearing process are examined and suitable models are selected for each component's function. The signal processing function is simulated by a computer program and the ensemble is examined for behavior and improvement. The models reveal that the motion of the basilar membrane provides a very selective low-pass transmission characteristic. Narrowband frequency resolution is obtained from the motion by computation of spatial differences in the magnitude of the motion as energy propagates along the membrane. Basilar membrane motion is simulated using the integrable model of M. R. Schroeder, but the paradigm is useful for any model that exhibits similarly high selectivity. Support is shown for the hypothesis that good frequency discrimination is possible without highly resonant structures. The nonlinear magnitude calculation is performed on signals developed without highly resonant structures, and the differences in those magnitudes are shown to be signals with good narrowband selectivity. Simultaneously, good transient behavior is preserved due to the avoidance of highly resonant structures. The cochlear paradigm is shown to provide a power spectrum with both good frequency selectivity and good transient response.
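The paradigm reduces to something sketchable: propagate a tone through a cascade of low-pass sections (a crude stand-in for basilar-membrane transmission, not Schroeder's integrable model) and difference the magnitudes at successive taps; the differences behave like narrowband channels, growing largest where the cutoffs fall past the tone's frequency:

```python
# Cascade of one-pole low-pass sections with spatial magnitude differencing.
import numpy as np

fs = 16000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz probe tone

def one_pole_lowpass(x, fc, fs):
    a = np.exp(-2 * np.pi * fc / fs)
    y, acc = np.empty_like(x), 0.0
    for n, v in enumerate(x):
        acc = (1 - a) * v + a * acc
        y[n] = acc
    return y

mags, sig = [], x
for fc in [4000, 2800, 2000, 1400, 1000, 700]:   # progressively lower cutoffs
    sig = one_pole_lowpass(sig, fc, fs)
    mags.append(np.sqrt(np.mean(sig[512:] ** 2)))  # steady-state RMS at each tap

diffs = -np.diff(mags)       # spatial magnitude differences along the cascade
print(np.round(mags, 3))
print(np.round(diffs, 3))    # largest where the cutoffs cross the tone frequency
```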
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/12251
- Subject Headings
- Engineering, Electronics and Electrical, Computer Science
- Format
- Document (PDF)
- Title
- A connectionist approach to adaptive reasoning: An expert system to predict skid numbers.
- Creator
- Reddy, Mohan S., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This project illustrates the neural network approach to constructing a fuzzy logic decision system. This technique employs an artificial neural network (ANN) to recognize the relationships that exist between the various inputs and outputs. An ANN is constructed based on the variables present in the application. The network is trained and tested. Various training methods are explored, some of which include auxiliary input and output columns. After successful testing, the ANN is exposed to new data, and the results are grouped into fuzzy membership sets based on membership evaluation rules. This data grouping forms the basis of a new ANN. The network is then trained and tested with the fuzzy membership data. New data is presented to the trained network and the results form the fuzzy implications. This approach is used to compute skid resistance values from G-analyst accelerometer readings on open-grid bridge decks.
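The grouping of network outputs into fuzzy membership sets can be sketched with triangular membership functions over the output range; the set boundaries below are illustrative, not the project's actual membership evaluation rules:

```python
# Triangular fuzzy membership evaluation for a scalar network output.
def tri(x, a, b, c):
    """Triangular membership: rises on a->b, falls on b->c, zero outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

sets = {"low": (0, 0, 40), "medium": (20, 50, 80), "high": (60, 100, 100)}

def memberships(skid_number):
    return {name: round(tri(skid_number, *abc), 2) for name, abc in sets.items()}

print(memberships(35))   # {'low': 0.12, 'medium': 0.5, 'high': 0.0}
```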
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15239
- Subject Headings
- Artificial intelligence, Fuzzy logic, Neural networks (Computer science), Pavements--Skid resistance
- Format
- Document (PDF)
- Title
- A communication protocol for wireless sensor networks.
- Creator
- Callaway, Edgar Herbert, Jr., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Many wireless network applications, such as wireless computing on local area networks, employ data throughput as a primary performance metric. The data throughput on such networks has therefore been increasing in recent years. However, there are other potential wireless network applications, such as industrial monitoring and control, consumer home automation, and military remote sensing, that have relaxed throughput requirements, often measured in bits/day. Such networks have power consumption and cost as primary performance metrics, rather than data throughput, and have been called wireless sensor networks. This work describes a physical layer, a data link layer, and a network layer design suitable for use in wireless sensor networks. To minimize node duty cycle, and therefore average power consumption, while minimizing the symbol rate, the proposed physical layer employs a form of orthogonal multilevel signaling in a direct sequence spread spectrum format. Results of Signal Processing Worksystem (SPW, Cadence, Inc.) simulations are presented showing a 4-dB sensitivity advantage of the proposed modulation method compared to binary signaling, in agreement with theory. Since the proposed band of operation is the 2.4 GHz unlicensed band, interference from other services is possible; to address this, SPW simulations of the proposed modulation method in the presence of Bluetooth interference are presented. The processing gain inherent in the proposed spread spectrum scheme is shown to require the interferer to be significantly stronger than the desired signal before materially affecting the received bit error rate. The proposed data link layer employs a novel distributed mediation device (MD) technique to enable networked nodes to synchronize to each other, even when the node duty cycle is arbitrarily low (e.g., <0.1%). This technique enables low-cost devices, which may employ only low-stability time bases, to remain asynchronous to one another, becoming synchronized only when communication is necessary between them. Finally, a wireless sensor network design is presented. A cluster-type architecture is chosen; the clusters are organized in a hierarchical tree to simplify the routing algorithm. Results of simulations of several network performance metrics, including the effects of the distributed MD dynamic synchronization scheme, are presented, including the average message latency, node duty cycle, and data throughput. The architecture is shown to represent a practical alternative for the design of wireless sensor networks.
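Orthogonal multilevel signaling can be demonstrated with Walsh sequences: each k-bit symbol selects one of 2^k mutually orthogonal rows, and the receiver correlates the noisy reception against all of them and picks the largest. The parameters are toy values; the dissertation's DSSS format and chip rate are not reproduced:

```python
# Orthogonal multilevel signaling over an AWGN channel using Walsh sequences.
import numpy as np

def walsh(n):
    """Hadamard/Walsh matrix of order 2**n (rows mutually orthogonal)."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

k = 3                                    # bits per symbol
W = walsh(k)                             # 8 orthogonal 8-chip sequences
rng = np.random.default_rng(3)

symbol = 5                               # transmit symbol index (bits 101)
tx = W[symbol].astype(float)
rx = tx + rng.normal(scale=1.0, size=tx.size)   # additive channel noise

scores = W @ rx                          # correlate with every sequence
print(int(np.argmax(scores)) == symbol)  # True with high probability
```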
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/11991
- Subject Headings
- Wireless communication systems, Computer network protocols, Radio detectors
- Format
- Document (PDF)
- Title
- A critical comparison of three user interface architectures in object-oriented design.
- Creator
- Walls, David Paul., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Frameworks for the development of object-oriented, user-interactive applications have been examined. Three alternative approaches have been explored: the Model-View-Controller (MVC) approach, the MVC++ approach, and the Presentation-Abstraction-Control (PAC) approach. For the purpose of assessing the approaches, a simple engineering application was selected for object-oriented analysis using the three techniques. The utility of each technique was compared on the basis of complexity, extensibility and reusability. While the approaches aim to provide reusable user interface components and extensibility through the incorporation of an additional class, only MVC++ and PAC truly achieve this goal, although at the expense of introducing additional messaging complexity. It was also noted that, in general, decoupling of the GUI classes, while providing increased extensibility and reusability, increases the inter-object messaging requirement.
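To make the decoupling being compared concrete, here is a bare-bones MVC split; the class names are generic and not taken from the thesis's engineering application:

```python
# Minimal Model-View-Controller: model notifies views, controller maps input.
class Model:
    def __init__(self):
        self.value, self.observers = 0, []
    def set(self, v):
        self.value = v
        for obs in self.observers:      # notify dependent views of the change
            obs.update(self)

class View:
    def update(self, model):
        print(f"display: {model.value}")

class Controller:
    def __init__(self, model):
        self.model = model
    def on_input(self, v):              # user input is routed to the model
        self.model.set(v)

m, v = Model(), View()
m.observers.append(v)
Controller(m).on_input(42)              # prints "display: 42"
```

The extra notification traffic visible even in this sketch is the inter-object messaging cost the thesis attributes to decoupled GUI classes.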
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15747
- Subject Headings
- User interfaces (Computer systems), Object-oriented methods (Computer science)
- Format
- Document (PDF)
- Title
- A comparative study of attribute selection techniques for CBR-based software quality classification models.
- Creator
- Nguyen, Laurent Quoc Viet., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- To achieve high reliability in software-based systems, software metrics-based quality classification models have been explored in the literature. However, the collection of software metrics may be a hard and long process, and some metrics may not be helpful, or may even be harmful, to the classification models, deteriorating the models' accuracy. Hence, methodologies have been developed to select the most significant metrics in order to build accurate and efficient classification models. Case-Based Reasoning (CBR) is the classification technique used in this thesis. Since it does not provide any metric selection mechanisms, several metric selection techniques were studied. In the context of CBR, this thesis presents a comparative evaluation of metric selection methodologies for raw and discretized data. Three attribute selection techniques have been studied: the Kolmogorov-Smirnov two-sample test, the Kruskal-Wallis test, and Information Gain. These techniques resulted in classification models that are useful for software quality improvement.
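One of the three techniques, Information Gain, is easy to state in code: the entropy of the class label minus its expected entropy after conditioning on a discretized metric; metrics with higher gain are kept. The tiny dataset is invented:

```python
# Information gain of a discretized software metric against a binary class.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain(attr, labels):
    gain = entropy(labels)
    for v in np.unique(attr):
        mask = attr == v
        gain -= mask.mean() * entropy(labels[mask])   # weighted child entropy
    return gain

metric = np.array(["lo", "lo", "hi", "hi", "hi", "lo"])   # discretized metric
fault  = np.array([0, 0, 1, 1, 0, 0])                     # fault-prone label
print(round(info_gain(metric, fault), 3))
```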
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12944
- Subject Headings
- Case-based reasoning, Software engineering, Computer software--Quality control
- Format
- Document (PDF)
- Title
- GENERALIZED PADE APPROXIMATION TECHNIQUES AND MULTIDIMENSIONAL SYSTEMS.
- Creator
- MESSITER, MARK A., Florida Atlantic University, Shamash, Yacov A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Two algorithms for greatest common factor (GCF) extraction from two multivariable polynomials, based on generalized Padé approximation, are presented. The reduced transfer matrices for two-dimensional (2-D) systems are derived from two 2-D state-space models. Tests for product and sum separabilities of multivariable functions are also given.
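For contrast with the Padé-based algorithms (which are not reproduced here), the classical route to a greatest common factor of two single-variable polynomials is the Euclidean algorithm on their coefficient sequences:

```python
# Euclidean GCF of two polynomials given as coefficient arrays, highest power first.
import numpy as np

def poly_gcf(a, b, tol=1e-9):
    while len(b) > 1 or abs(b[0]) > tol:
        _, r = np.polydiv(a, b)
        r = np.trim_zeros(r, "f")          # strip leading (near-)zero coefficients
        if r.size == 0 or np.max(np.abs(r)) < tol:
            break                          # remainder vanished: b divides a
        a, b = b, r
    return b / b[0]                        # normalize the factor to be monic

# (x+1)(x+2) and (x+1)(x+3) share the factor (x+1).
print(poly_gcf(np.array([1., 3., 2.]), np.array([1., 4., 3.])))  # -> [1. 1.]
```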
- Date Issued
- 1983
- PURL
- http://purl.flvc.org/fcla/dt/14175
- Subject Headings
- Multivariate analysis, Padé approximant, Polynomials
- Format
- Document (PDF)