Search results: FAU Current ETDs » College of Engineering and Computer Science
- Title
- YACAD: Yet Another Congestion Avoidance Design for ATM-based networks.
- Creator
- Hsu, Sam, Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This dissertation proposes YACAD (Yet Another Congestion Avoidance Design for ATM-based Networks), a congestion prevention model that includes admission control, traffic shaping, and link-by-link flow control for ATM-based networks. Network traffic in this model is composed of real-time traffic and data traffic. As real-time traffic is delay-sensitive and connection-oriented, its call acceptance is based upon the effective bandwidth at all nodes. Effective bandwidth is defined as a vector of bandwidth and maximum node delay. As data traffic can be either connection-oriented or connectionless, it is subject to link-by-link flow control based on a criterion known as effective buffer, which is defined as a scalar of buffer size. Data traffic is not delay-sensitive but is loss-sensitive. Traffic shaping is imposed on real-time traffic to ensure a smooth inflow of real-time cells. YACAD also allocates a large buffer (fat bucket) to data traffic to accommodate sudden long bursts of data cells. The absence of data cell loss is a major feature of YACAD. Two simulation studies of the model's performance are conducted. Analyses of the simulation results show that the proposed congestion avoidance model can achieve congestion-free networking and bounded network delays for real-time traffic at high levels of channel utilization. The maximum buffer requirements for loss-free cell delivery of data traffic, and the cell loss probabilities for real-time traffic, are also obtained. In addition, performance comparisons show that YACAD outperforms several other leaky-bucket-based congestion control methods in terms of cell loss probability for real-time traffic. The simulation source program has also been verified against existing queueing theories and by the paired-t confidence interval method, with satisfactory results at the 99% confidence level.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12336
- Subject Headings
- Integrated services digital networks, Broadband communications systems, Packet switching (Data transmission), Computer networks--Management
- Format
- Document (PDF)
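The fat-bucket idea in the abstract above (a leaky-bucket shaper backed by a large overflow buffer so data cells are delayed rather than dropped) can be sketched as follows. This is a minimal illustration, not the dissertation's model; the class name, bucket depth, and drain rate are invented for the example.

```python
from collections import deque

class LeakyBucket:
    """Leaky-bucket traffic shaper: cells drain at a fixed rate; arrivals
    that find the bucket full wait in an overflow queue (the 'fat bucket'
    the abstract gives to data traffic) instead of being dropped."""

    def __init__(self, depth, drain_rate):
        self.depth = depth              # cells the bucket can hold
        self.drain_rate = drain_rate    # cells released per tick
        self.level = 0
        self.overflow = deque()         # loss-free: excess cells wait here

    def arrive(self, n_cells):
        """Admit as many cells as fit; queue the rest (no loss)."""
        admitted = min(n_cells, self.depth - self.level)
        self.level += admitted
        for _ in range(n_cells - admitted):
            self.overflow.append(1)

    def tick(self):
        """Release up to drain_rate cells, then refill from the overflow."""
        released = min(self.level, self.drain_rate)
        self.level -= released
        while self.overflow and self.level < self.depth:
            self.overflow.popleft()
            self.level += 1
        return released
```

Because the overflow queue is unbounded, every arriving cell is eventually released, mirroring the abstract's loss-free delivery for data traffic.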
- Title
- XYZ: A scalable, partially centralized lookup service for large-scale peer-to-peer systems.
- Creator
- Zhang, Jianying., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Peer-to-Peer (P2P) systems are characterized by direct access between peer computers, rather than through a centralized server. File sharing is the dominant P2P application on the Internet, allowing users to easily contribute, search, and obtain content. The objective of this thesis was to design XYZ, a partially centralized, scalable, and self-organizing lookup service for wide-area P2P systems. The XYZ system is based on a distributed hash table (DHT). A unique ID and a color are assigned to each node and each file. The author uses a clustering method to create the system backbone by connecting the cluster heads together, and a color-clustering method to create color overlays. A lookup for a file of a given color is forwarded only within the color overlay of the same color, so the search space is minimized. Simulations and analysis are also provided in this thesis.
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13263
- Subject Headings
- Wireless communication systems, Peer-to-peer architecture (Computer networks), Computational grids (Computer systems)
- Format
- Document (PDF)
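The color-overlay idea in the abstract above can be sketched in a few lines: a file's name hashes to a color, each color has its own index, and a lookup consults only the index of the matching color. The three-color palette, class names, and centralized per-color index are illustrative assumptions, not the thesis's actual design.

```python
import hashlib

COLORS = ["red", "green", "blue"]

def item_color(key: str) -> str:
    """Hash a name to an ID and map the ID to a color (one simple choice)."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return COLORS[digest % len(COLORS)]

class ColorOverlay:
    """Toy partially centralized lookup: one index per color, so a query
    touches only the overlay whose color matches the file's color."""

    def __init__(self):
        self.index = {c: {} for c in COLORS}   # color -> {filename: node}

    def publish(self, filename, node):
        self.index[item_color(filename)][filename] = node

    def lookup(self, filename):
        # the search is confined to a single color overlay
        return self.index[item_color(filename)].get(filename)
```

The point of the color partition is that each lookup searches roughly 1/len(COLORS) of the system instead of the whole index.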
- Title
- XYZ Video Compression: An algorithm for real-time compression of motion video based upon the three-dimensional discrete cosine transform.
- Creator
- Westwater, Raymond John., Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
XYZ Video Compression denotes a video compression algorithm that operates in three dimensions, without the overhead of motion estimation. The smaller overhead of this algorithm as compared to MPEG and other "standards-based" compression algorithms using motion estimation suggests the suitability of this algorithm to real-time applications. The demonstrated results of compressing standard motion video benchmarks suggest that XYZ Video Compression is not only a faster algorithm but also develops superior compression ratios. The algorithm is based upon the three-dimensional Discrete Cosine Transform (DCT). Pixels are organized as 8 x 8 x 8 cubes by taking 8 x 8 squares out of 8 consecutive frames. A fast three-dimensional transform is applied to each cube, generating 512 DCT coefficients. The energy-packing property of the DCT concentrates the energy in the cube into few coefficients. The DCT coefficients are quantized to maximize the energy concentration at the expense of introducing a user-determined level of error. A method of adaptive quantization that generates optimal quantizers based upon statistics gathered for the 8 consecutive frames is described. The sensitivity of the human eye to the various DCT coefficients is used to modify the quantizers to create a "visually equivalent" cube with still greater energy concentration. Experiments are described that justify the choice of Human Visual System factors to be folded into the quantization step. The quantized coefficients are then encoded into a data stream using a method of entropy coding based upon the statistics of the quantized coefficients. The bitstream generated by entropy coding represents the compressed data of the 8 motion video frames, and typically achieves 50:1 compression at 5% error.
The decoding process is the reverse of the encoding process: the bitstream is decoded to generate blocks of quantized DCT coefficients, the DCT coefficients are dequantized, and the Inverse Discrete Cosine Transform is performed on the cube to recover pixel data suitable for display. The elegance of this technique lies in its simplicity, which lends itself to inexpensive implementation of both encoder and decoder. Finally, real-time implementation of the XYZ Compressor/Decompressor is discussed. Experiments are run to determine the effectiveness of the implementation.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12450
- Subject Headings
- Digital video, Data compression (Telecommunication), Image processing--Digital techniques, Coding theory
- Format
- Document (PDF)
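The core transform described above (a separable three-dimensional DCT applied to an 8 x 8 x 8 pixel cube, yielding 512 coefficients) can be sketched with an orthonormal DCT-II matrix applied along each axis. This is a plain illustration of the transform itself, not the paper's fast algorithm or its quantizer.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, row k = k-th cosine basis vector."""
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def dct3(cube):
    """Separable 3-D DCT: apply the 1-D transform along each of the
    three axes of an n x n x n cube (n = 8 gives 512 coefficients)."""
    D = dct_matrix(cube.shape[0])
    return np.einsum('ai,bj,ck,ijk->abc', D, D, D, cube)
```

A flat (all-equal) cube packs all its energy into the single DC coefficient, which is exactly the energy-packing behavior the quantizer exploits.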
- Title
- An XML-based data exchange model.
- Creator
- Sreenivasan, Sridhar., Florida Atlantic University, Hsu, Sam, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Data are the backbone of any organization. In some organizations there are applications that operate on a distributed platform. These applications often must communicate with applications on a different platform that structures data in a different format. The manner in which data are transferred between such applications can become complex, because many platform-dependent applications need to process the transferred data. Hence a mechanism for exchanging data in a simple, effective manner is a basic necessity for such applications. In this thesis, such a mechanism for platform-independent data exchange is discussed. The proposed model is XML-based, and data between the applications are exchanged in the form of XML documents. XML is text-based and can be processed by any application on any platform. The model has an interface that processes the XML documents transferred between the client applications and the underlying database systems. The model is implemented in a system-administration application: a Web application that transfers data in XML format, which the interface processes and passes to the database. Data from the database are retrieved, converted to XML documents by the interface, and transferred to the client (Web) applications.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12926
- Subject Headings
- XML (Document markup language)
- Format
- Document (PDF)
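The round trip the abstract describes (database rows serialized to XML by an interface, then parsed back on the other side) can be sketched with the standard library. The element and attribute names here are invented for the example, not the thesis's actual schema.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table_name, rows):
    """Serialize a list of dicts (database rows) to an XML document string."""
    root = ET.Element("table", name=table_name)
    for row in rows:
        rec = ET.SubElement(root, "row")
        for col, val in row.items():
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(doc):
    """Parse the XML document back into a list of dicts (values as text)."""
    root = ET.fromstring(doc)
    return [{field.tag: field.text for field in rec} for rec in root]
```

Because the interchange format is plain text, either side of the exchange can be reimplemented on any platform without changing the other.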
- Title
- Workspace evaluation and kinematic calibration of Stewart platform.
- Creator
- Wang, Jian., Florida Atlantic University, Masory, Oren, Roth, Zvi S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Parallel manipulators have special characteristics in contrast to traditional serial robots. The Stewart platform is a typical six-degree-of-freedom fully parallel robot manipulator. The goal of this research is to enhance the accuracy and the restricted workspace of the Stewart platform. The first part of the dissertation discusses the effect on the platform's workspace of three kinematic constraints (link length limitation, joint angle limitation, and link interference) and of the kinematic parameters. An algorithm considering the above constraints is developed for determining the volume and the envelope of the Stewart platform workspace. The workspace volume is used as a criterion to evaluate the effects of the platform dimensions and kinematic constraints on the workspace and the dexterity of the Stewart platform. The analysis and algorithm can be used as a design tool to select dimensions, actuators, and joints in order to maximize the workspace. The remaining parts of the dissertation focus on accuracy enhancement. Manufacturing tolerances, installation errors, and link offsets cause deviations with respect to the nominal parameters of the platform. As a result, if nominal parameters are used, the resulting platform pose will be inaccurate. An accurate kinematic model of the Stewart platform that accommodates all manufacturing and installation errors is developed. In order to evaluate the effects of the above factors on accuracy, algorithms for the forward and inverse kinematics solutions of the accurate model are developed. The effects of different manufacturing tolerances and installation errors on platform accuracy are investigated based on this model. Simulation results provide insight into the expected accuracy and indicate the major factors contributing to the inaccuracies.
In order to enhance the accuracy, there is a need to calibrate the platform: to determine the actual values of the kinematic parameters (parameter identification) and to incorporate these into the inverse kinematic solution (accuracy compensation). An error-model-based algorithm for the parameter identification is developed. Procedures for the formulation of the identification Jacobian and for accuracy compensation are presented. The algorithms are tested using simulated measurements in which realistic measurement noise is included. As a result, pose errors of the platform are significantly reduced.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12316
- Subject Headings
- Robots--Control systems, Manipulators (Mechanism), Robotics--Calibration
- Format
- Document (PDF)
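The inverse kinematics underlying both the workspace search and the accuracy compensation described above reduces to a few lines: each leg length is the distance from a base joint to the corresponding platform joint under the commanded pose. The toy geometry below (identical hexagonal footprints on base and platform) is invented for the example; a real platform staggers the joint angles.

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, position, rotation):
    """Stewart platform inverse kinematics: transform the platform joint
    points by the pose (rotation matrix + translation), then take the
    distance from each base joint to its platform joint."""
    world = plat_pts @ rotation.T + position
    return np.linalg.norm(world - base_pts, axis=1)

# Toy geometry: six joints evenly spaced on a unit circle, same layout
# on base and platform (an illustrative simplification).
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
joints = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(6)])
```

Workspace evaluation then amounts to sweeping candidate poses and checking each returned leg length against the link-length, joint-angle, and interference constraints.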
- Title
- Weight function approach for stress analysis of the surface crack in a finite plate subjected to nonuniform stress fields.
- Creator
- Jani, Jayant Shivkumar., Florida Atlantic University, Arockiasamy, Madasamy, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The effects of various nonuniform stress fields on the stress intensity factors for the semi-elliptic surface crack (a three-dimensional problem) in a finite plate are determined using the weight function approach. The formulation satisfies the linear elastic fracture mechanics criteria and the principle of conservation of energy. Based on the known stress intensity solutions for a reference load/stress system, the expression for the crack opening displacement function for the surface crack is derived. Using the crack opening displacement function and the reference stress intensity factor, the three-dimensional weight functions, and subsequently the stress intensity solutions for the surface crack subjected to nonuniform stress fields, are derived. The formulation is then applied to determine the effects of linear, quadratic, cubic, and pure bending stress fields on the stress intensity factor for the surface crack in a finite plate. In the initial stage of the study, a two-dimensional problem of an edge crack emanating from the weld toe in a T-joint is considered. The effect of parameters such as plate thickness, weld-toe radius, and weld-flank angle on the stress intensity factor for an edge crack is studied. Finite element analyses of the welded T-joints are performed to study the effects of plate thickness, weld-toe radius, and weld-flank angle on the local stress distribution. Ratios of plate thickness to weld-toe radius ranging from 13.09 to 153.93, and weld-flank angles of 30, 45, and 60 degrees, are considered in the analyses. Based on the results from the FEM analyses, a parametric equation for the local stress concentration factor and a polynomial expression for the local stress distribution across the plate thickness are derived using the method of least squares and polynomial curve fitting.
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/12254
- Subject Headings
- Strains and stresses, Plates (Engineering), Fracture mechanics
- Format
- Document (PDF)
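The final step above, fitting a polynomial to the through-thickness stress distribution by least squares, can be sketched with a standard polynomial fit. The cubic coefficients below are invented test data, not the dissertation's results.

```python
import numpy as np

def fit_stress_profile(depth, stress, degree=3):
    """Least-squares polynomial fit of stress vs. normalized depth through
    the plate thickness, as used for the local stress distribution."""
    return np.polyfit(depth, stress, degree)

x = np.linspace(0, 1, 20)                       # normalized depth a/t
sigma = 100 - 80 * x + 30 * x**2 - 5 * x**3     # synthetic stress samples
coeffs = fit_stress_profile(x, sigma)           # highest degree first
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with FEM output the same call returns the best least-squares cubic.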
- Title
- Web log analysis: Experimental studies.
- Creator
- Yang, Zhijian., Florida Atlantic University, Zhong, Shi, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
With the rapid growth of the World Wide Web, web performance becomes increasingly important for modern businesses, especially for e-commerce. Web server logs contain potentially useful empirical data for improving web server performance. In this thesis, we discuss topics related to the analysis of a website's server logs for enhancing server performance, which will benefit business applications. Markov chain models are used, allowing us to dynamically model page sequences extracted from server logs. My experimental studies contain three major parts. First, I present a workload characterization study of the website used for my research. Second, Markov chain models are constructed for both page-request and page-visiting-sequence prediction. Finally, I carefully evaluate the constructed models using an independent test data set drawn from server logs of a different day. The research results demonstrate the effectiveness of Markov chain models for characterizing page-visiting sequences.
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13202
- Subject Headings
- Markov processes, Operations research, Business enterprises--Computer networks, Electronic commerce--Data processing
- Format
- Document (PDF)
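The page-prediction model described above, a first-order Markov chain with transition probabilities estimated from logged page sequences, can be sketched as follows. The page names and sessions are invented; the thesis's actual model construction may differ.

```python
from collections import defaultdict, Counter

def fit_markov(sessions):
    """Estimate first-order transition probabilities from page sequences:
    P(next | current) = count(current -> next) / count(current -> *)."""
    counts = defaultdict(Counter)
    for seq in sessions:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {page: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for page, nxts in counts.items()}

def predict_next(model, page):
    """Predict the most likely next page, or None for an unseen page."""
    if page not in model:
        return None
    return max(model[page], key=model[page].get)
```

Evaluating on a held-out day's logs, as the thesis does, just means calling `predict_next` on test sequences the model never saw.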
- Title
- A wavelet-based detector for underwater communication.
- Creator
- Petljanski, Branko., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The need for reliable underwater communication at Florida Atlantic University is critical for transmitting data to and from Autonomous Underwater Vehicles (AUVs) and remote sensors. Since a received signal is corrupted by ambient ocean noise, the nature of such noise is investigated, and a connection is established between ambient ocean noise and fractal noise. Since the matched filter is designed under the assumption that the noise is white, the performance degradation of the matched filter due to non-white noise is investigated. We show empirical results that the wavelet transform provides an approximate Karhunen-Loeve expansion for 1/f-type noise. Since whitening can improve only broadband signals, a new method is presented for designing synchronization signals in wavelet subspaces with an increased energy-to-peak-amplitude ratio. A wavelet detector combining whitening of fractal noise with detection in a wavelet subspace is then presented. Results show that the wavelet detector improves detectability; however, the gain is below expectation due to differences between fractal noise and ambient ocean noise.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12778
- Subject Headings
- Wavelets (Mathematics), Underwater acoustics
- Format
- Document (PDF)
- Title
- Wavelet transform-based digital signal processing.
- Creator
- Basbug, Filiz., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This study deals with applying the wavelet transform to two main areas of signal processing: adaptive signal processing and signal detection. It starts with background information on the theory of wavelets, with an emphasis on the multiresolution representation of signals by the wavelet transform, in Chapter 1. Chapter 2 begins with an overview of adaptive filtering in general and extends it to transform-domain adaptive filtering. Later in the chapter, a novel adaptive filtering architecture using the wavelet transform is introduced. The performance of this new structure is evaluated by using the LMS algorithm with variations in step size. The wavelet-transform-based adaptive filter is shown to reduce the eigenvalue ratio, or condition number, of the input signal. Consequently, the new structure is shown to have faster convergence, implying an improved ability to track rapidly changing signals. Chapter 3 deals with signal detection with the help of the wavelet transform. One scheme studies signal detection by projecting the input signal onto different scales. The relationship between this approach and matched filtering is established. The effect of different factors on signal detection with the wavelet transform is then examined; the method is found to be robust in the presence of white noise. The wavelets are also analyzed as eigenfunctions of a certain random process, and it is shown how this leads to optimal receiver design. It is further demonstrated that the design of an optimum receiver leads to the wavelet-transform-based adaptive filter structure described in Chapter 2.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12354
- Subject Headings
- Wavelets (Mathematics), Signal processing--Digital techniques
- Format
- Document (PDF)
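The LMS update at the heart of the Chapter 2 adaptive filter can be sketched in a few lines, here in the plain time domain without the wavelet-transform front end the thesis adds. The two-tap system being identified is synthetic, chosen only to show convergence.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Least-mean-squares adaptive filter: at each step, predict d[i] from
    the last n_taps inputs, then nudge the weights along the error gradient."""
    w = np.zeros(n_taps)
    for i in range(n_taps, len(x)):
        u = x[i - n_taps:i][::-1]        # most recent sample first
        e = d[i] - w @ u                 # prediction error
        w += mu * e * u                  # stochastic gradient step
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)            # white input: well-conditioned
h = np.array([0.5, -0.3])                # unknown system to identify
d = np.zeros_like(x)
d[2:] = h[0] * x[1:-1] + h[1] * x[:-2]   # noiseless desired signal
w = lms_identify(x, d, n_taps=2, mu=0.05)
```

With white input the autocorrelation matrix is already well conditioned; the thesis's wavelet front end aims to achieve the same effect (a small eigenvalue ratio) for colored inputs, which speeds up this same iteration.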
- Title
- Wave attenuation by rigid and flexible-membrane submerged breakwaters.
- Creator
- Harris, Lee Errol., Florida Atlantic University, Reddy, Dronnadula V., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
This research investigates the use of rigid and flexible-membrane submerged breakwaters for wave energy attenuation. A comprehensive review of breakwater design criteria and previous research on submerged breakwaters is included. Physical model laboratory studies conducted by the author and other researchers are investigated as a means of obtaining formulations for wave transmission coefficients. The mechanisms by which waves are attenuated and break are analyzed using video photography of the wave tank tests. The primary objective of this doctoral research was to determine and compare the wave attenuation of non-conventional rigid and flexible-membrane submerged breakwaters. Physical model tests were performed using the wave tank facilities at the Florida Institute of Technology in Melbourne, Florida. The six breakwater cross-sections used were: (1) rectangular, (2) triangular, (3) P.E.P.-Reef™, (4) a single sand-filled container, (5) three stacked sand-filled containers, and (6) a single water-filled container. The first three units were rigid (or monolithic), and the last three were flexible-membrane units. All six units tested had the same height, length (longshore), and base width (cross-shore), with different cross-sections and shapes, and were composed of different materials. A new classification scheme was developed for breakwaters and artificial reefs, based on water depth, structure height, and wave height. The wave-structure interaction resulting in waves breaking on the submerged breakwaters was documented, and the observations were analyzed. Wave transmission coefficients were computed for the six breakwater models tested, and comparisons between the models were made. Conclusions regarding the primary factors affecting the effectiveness of rigid and flexible-membrane submerged breakwaters were developed, as were recommendations for further research.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12468
- Subject Headings
- Breakwaters, Water waves
- Format
- Document (PDF)
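The key performance measure above, the wave transmission coefficient, is the ratio of transmitted to incident wave height; the relative submergence of the crest is one of the depth/height parameters a classification scheme like the dissertation's can be built on. The heights below are invented test values, and these two functions are generic definitions, not the dissertation's formulations.

```python
def transmission_coefficient(h_incident, h_transmitted):
    """Kt = Ht / Hi. Kt = 1 means no attenuation; Kt = 0, total attenuation."""
    return h_transmitted / h_incident

def relative_submergence(water_depth, crest_height):
    """Water depth over the structure crest as a fraction of total depth."""
    return (water_depth - crest_height) / water_depth
```

Comparing breakwater models then reduces to comparing measured Kt values at matched incident wave conditions.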
- Title
- Water Cone Improvement Project.
- Creator
- Foley, Michael, Zitani, Matthew, Scheigner, Kyle, Ortega, Abel, Fisken, Gordon, Su, Tsung-Chow, College of Engineering and Computer Science
- Abstract/Description
-
The object of this research is to improve a solar desalination device known as the Water Cone, which creates potable water using solar energy. The Water Cone is a polymeric cone that sits over a dish of saline water. The water is evaporated by the sun and condenses back onto the surface of the cone, creating fresh water. In an attempt to improve the cone's water production, two different hydrophobic coatings are applied to the insides of two cones; these allow water droplets to flow at a much faster rate, collecting water more quickly. The two water cones are coated separately and exposed to sunlight for five days. Water collection from the coated portion of each cone is compared to that from the uncoated portion. Results after a first trial show that coating A impedes water collection, whereas coating B appears to increase it.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00005188
- Subject Headings
- College students --Research --United States.
- Format
- Document (PDF)
- Title
- VoIP Network Security and Forensic Models using Patterns.
- Creator
- Pelaez, Juan C., Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Voice over Internet Protocol (VoIP) networks are becoming the most popular telephony systems in the world. However, studies of the security of VoIP networks are still in their infancy. VoIP devices and networks are commonly attacked, and it is therefore necessary to analyze the threats against the converged network and the techniques that exist today to stop or mitigate these attacks. We also need to understand what evidence can be obtained from the VoIP system after an attack has occurred. Many of these attacks occur in similar ways in different contexts or environments. Generic solutions to these issues can be expressed as patterns. A pattern can be used to guide the design or simulation of VoIP systems as an abstract solution to a problem in this environment. Patterns have shown their value in developing good-quality software, and we expect that their application to VoIP will also prove valuable for building secure systems. This dissertation presents a variety of patterns (architectural, attack, forensic, and security patterns). These patterns will help forensic analysts as well as secure-systems developers, because they provide a systematic approach to structuring the required information and help in understanding system weaknesses. The patterns also allow us to specify, analyze, and implement network security investigations for different architectures. The pattern system uses object-oriented modeling (the Unified Modeling Language) as a way to formalize the information and dynamics of attacks and systems.
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012576
- Subject Headings
- Internet telephony--Security measures, Computer network protocols, Global system for mobile communications, Software engineering
- Format
- Document (PDF)
- Title
- Voice activity detection over multiresolution subspaces.
- Creator
- Schultz, Robert Carl., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Society's increased demand for communications requires searching for techniques that preserve bandwidth. It has been observed that much of the time spent during telephone communications is actually idle time, with no voice activity present. Detecting these idle periods and suppressing transmission during them can help reduce bandwidth requirements during high-traffic periods. While techniques exist to perform this detection, certain types of noise can make reliable detection difficult. The use of wavelets with multiresolution subspaces can aid detection by providing noise whitening and signal matching. This thesis explores their use and proposes a detection technique.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15740
- Subject Headings
- Speech processing systems, Signal processing--Digital techniques, Wavelets (Mathematics)
- Format
- Document (PDF)
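The subband-energy idea summarized in the abstract above can be illustrated with a toy example. This is only a sketch under assumed simplifications (a Haar filter cascade and a fixed energy threshold), not the thesis's actual detector:

```python
import numpy as np

def haar_subbands(x, levels=3):
    """Cascade of Haar filter pairs: at each level the signal splits into a
    half-rate approximation (lowpass) and detail (highpass) subband -- a
    minimal stand-in for the multiresolution subspaces the abstract refers to."""
    approx = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # detail coefficients
        approx = (even + odd) / np.sqrt(2.0)         # approximation coefficients
    return approx, details

def voice_active(frame, threshold=0.1):
    """Toy decision rule: voiced speech concentrates energy in the lowpass
    (approximation) subspace, so compare its per-sample energy to a fixed
    threshold. Real detectors estimate the noise floor adaptively."""
    approx, _ = haar_subbands(frame)
    return float(np.sum(approx ** 2)) / len(frame) > threshold

# A low-frequency tone should trip the detector; faint white noise should not.
t = np.arange(256) / 256.0
tone = np.sin(2 * np.pi * 11 * t)
noise = 0.01 * np.random.default_rng(0).standard_normal(256)
```

Because the Haar transform is orthonormal, subband energies sum to the frame energy; restricting the decision to the approximation band is what gives the crude noise rejection.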
- Title
- A VLSI implementation of a hexagonal topology CCD image sensor.
- Creator
- Madabushi, Vasudhevan., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In this thesis we report a VLSI design implementation of an application-specific, full-frame-architecture CCD image sensor for a handwritten Optical Character Recognition system. The design is targeted to the MOSIS 2 μm, 2-poly/2-metal n-buried-channel CCD/CMOS technology. The front-side-illuminated CCD image sensor uses a transparent polysilicon gate structure and comprises 84 (H) x 100 (V) pixels arranged in a hexagonal lattice structure. The sensor has unit pixel dimensions of 18λ (H) x 16λ (V). A second layer of metal is used to shield certain areas from incident light, and the effective pixel photosite area is 8λ x 8λ. The imaging pixels use a 3-phase structure (with an innovative addressing scheme for the hexagonal lattice) for image sensing and horizontal charge shift. Columns of charge are shifted into the vertical 2-phase CCD shift registers, which shift the charge out serially at high speed. The chip has been laid out on the 'tinychip' (2250 μm x 2220 μm) pad frame, and fabrication through MOSIS is planned next.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15123
- Subject Headings
- Integrated circuits--Very large scale integration, Optical character recognition devices, Pattern recognition systems, Imaging systems
- Format
- Document (PDF)
- Title
- A VLSI implementable thinning algorithm.
- Creator
- Zhang, Wei, Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Thinning is a very important step in a character recognition system. This thesis evolves a thinning algorithm that can be implemented in hardware to improve the speed of the process. The software thinning algorithm features a simple set of rules that can be applied to both hexagonal and orthogonal character images. The hardware architecture features an SIMD structure, simple processing elements, and near-neighbor communication. The algorithm was simulated against the U.S. Postal Service Character Database. The architecture, evolved with consideration of both the software constraints and the physical layout limitations, was simulated using the VHDL hardware description language. Subsequent to VLSI design and simulation, the chip was fabricated. The project provides a feasibility study of utilizing a parallel processor architecture for the implementation of a parallel image thinning algorithm. It is hoped that such a hardware implementation will speed up processing and lead eventually to a real-time system.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14837
- Subject Headings
- Optical character recognition devices--Computer simulation, Algorithms, Integrated circuits--Very large scale integration
- Format
- Document (PDF)
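For readers unfamiliar with image thinning, the well-known Zhang-Suen algorithm for orthogonal binary images gives the flavor of the rule-based, neighbor-driven approach the abstract describes. It is a standard published algorithm, shown for illustration only; it is not the thesis's own hexagonal/orthogonal rule set:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning on a binary orthogonal image (1 = ink): repeatedly
    peel boundary pixels in two alternating sub-passes until nothing changes,
    leaving a one-pixel-wide skeleton."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # 8-neighbours, clockwise from north (P2..P9)
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                       # ink neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:                                   # apply in parallel
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

# Thin a 3-pixel-thick bar: the result is a thinner, non-empty skeleton.
bar = np.zeros((12, 12), dtype=np.uint8)
bar[5:8, 1:11] = 1
thinned = zhang_suen_thin(bar)
```

The collect-then-delete structure of each sub-pass is exactly what makes the method attractive for the SIMD hardware the abstract mentions: every pixel's decision depends only on its eight neighbors from the previous state.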
- Title
- A VLSI implementable learning algorithm.
- Creator
- Ruiz, Laura V., Florida Atlantic University, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis, and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied to the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis from software simulations are given. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits, and integrated circuit design is not required to accomplish a task that a few years ago would have demanded a large team of specialized experts in many fields. This may appeal to researchers from a wide range of backgrounds, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are basically software simulations of the algorithms run on a single CPU executing sequential simulations of a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm will be able to test it as intended, on a parallel hardware architecture.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/12453
- Subject Headings
- Integrated circuits--Very large scale integration--Design and construction, Neural networks (Computer science)--Design and construction, Computer algorithms, Machine learning
- Format
- Document (PDF)
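The perceptron learning rule mentioned in the abstract is easy to demonstrate in software. The dissertation's contribution is the on-chip hardware realization; this sketch only shows the underlying update rule on a toy problem:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Single-layer perceptron learning: nudge the weights by
    (target - prediction) * input after each misclassification, until the
    training set is separated or the epoch budget runs out."""
    w = np.zeros(X.shape[1] + 1)                    # last entry is the bias
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias input of 1
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != target:
                w += lr * (target - pred) * xi      # perceptron update
                errors += 1
        if errors == 0:                             # converged
            break
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# Linearly separable toy problem: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees the loop terminates with zero training errors.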
- Title
- A VLSI implementable handwritten digit recognition system using artificial neural networks.
- Creator
- Agba, Lawrence C., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A VLSI-implementable feature extraction scheme and two VLSI-implementable algorithms for feature classification that should lead to a practical handwritten digit recognition system are proposed. The feature extraction algorithm exploits the concept of holon dynamics. Holons can be regarded as a group of cooperative processors with a self-organizing property. Two types of artificial neural network-based classifiers have been evolved to classify these features. The United States Post Office handwritten digit database was used to train and test these networks. The first classifier system used limited-interconnect multi-layer perceptron (LIMP) modules in a hierarchical configuration. Each classifier in this system was independently trained and designated to recognize a particular digit. A maximum of sixty-one digits were used for training, and 464 digits, which included the training set, were used to test the classifiers. A cumulative performance of 93.75% (correctly recognized digits) was recorded. The second classifier system consists of a cluster of small multi-layer perceptron (CLUMP) networks. Each cell in this system was independently trained to trace the boundary between two or more digits in the recognition plane. A combination of these cells distinguishes a digit from the rest. This system was trained with 1796 digits and tested on a different set of 1918 digits. A performance of 95.55% was recorded on the training set, while the test data yielded 79.35%. These results, which are expected to improve further, are superior to those obtained by other researchers on the same database. This technique of digit recognition is general enough for application in the development of a universal alphanumeric recognition system. A hybrid VLSI system consisting of both analog and digital circuitry, and utilizing both Bi-CMOS and switched-capacitor technologies, has been designed. The design is intended for implementation with the current MOSIS 2 μm, double-poly, double-metal, p-well CMOS technology. The integrated circuit is such that both classifier systems can be realized using the same chip.
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/12260
- Subject Headings
- Optical character recognition devices--Computer simulation, Pattern recognition systems--Computer simulation
- Format
- Document (PDF)
- Title
- Visualization of search engine query result using region-based document model on XML documents.
- Creator
- Parikh, Sunish Umesh., Florida Atlantic University, Horton, Thomas, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Information access systems have traditionally focused on retrieval of documents consisting of titles and abstracts. The underlying assumptions of such systems are not necessarily appropriate for full-text, structured documents. Context and structure should play an important role in information access from full-text document collections. When a system retrieves a document in response to a query, it is important to indicate not only how strong the match is (e.g., how many terms from the query are present in the document), but also how frequent each term is, how each term is distributed in the text, and where the terms overlap within the document. This information is especially important in long texts, since it is less clear how the terms in the query contribute to the ranking of a long text than of a short abstract. This thesis investigates the application of information visualization techniques to the problem of navigating and finding information in XML files, which are becoming available in increasing quantities on the World Wide Web (WWW). It provides a methodology for presenting detailed information about a specific topic while also presenting a complete overview of all the information available. A prototype has been developed for visualization of search query results. Limitations of the prototype and directions for future work are also discussed.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12694
- Subject Headings
- XML (Document markup language), Web search engines
- Format
- Document (PDF)
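The per-term, per-region information the abstract argues for (term frequency, distribution, and overlap within a long text) can be computed with a simple segment grid, in the spirit of TileBars-style visualizations. The function below is a hypothetical illustration, not the thesis's prototype:

```python
import re

def term_distribution(text, terms, segments=8):
    """Split a document into equal-length word segments and count each query
    term's hits per segment, so a UI can show where in a long text the terms
    cluster and where they overlap."""
    words = re.findall(r"\w+", text.lower())
    seg_len = max(1, (len(words) + segments - 1) // segments)  # ceil division
    grid = {t: [0] * segments for t in terms}
    for i, w in enumerate(words):
        seg = min(i // seg_len, segments - 1)
        for t in terms:
            if w == t.lower():
                grid[t][seg] += 1
    return grid

# Terms that occur in different halves of the document land in different cells.
doc = "apple " * 8 + "banana " * 8
grid = term_distribution(doc, ["apple", "banana"], segments=4)
```

Each row of the grid can then be rendered as a shaded strip, giving the reader an overview of term placement without opening the full document.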
- Title
- Visualization of Impact Analysis on Configuration Management Data for Software Process Improvement.
- Creator
- Lo, Christopher Hoi-Yin, Huang, Shihong, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The software development process is an incremental and iterative activity. Source code is constantly altered to reflect changing requirements, to respond to testing results, and to address problem reports. Proper software measurement that derives meaningful numeric values for some attributes of a software product or process can help in identifying problem areas and development bottlenecks. Impact analysis is the evaluation of the risks associated with change requests or problem reports, including estimates of effects on resources, effort, and schedule. This thesis presents a methodology called VITA for applying software analysis techniques to configuration management repository data with the aim of identifying the impact on file changes due to change requests and problem reports. The repository data can be analyzed and visualized in a semi-automated manner according to user-selectable criteria. The approach is illustrated with a model problem concerning software process improvement of an embedded software system in the context of performing high-quality software maintenance.
- Date Issued
- 2007
- PURL
- http://purl.flvc.org/fau/fd/FA00012535
- Subject Headings
- Software measurement, Software engineering--Quality control, Data mining--Quality control
- Format
- Document (PDF)
- Title
- Visualization of buried objects in three-dimensional acoustic data acquired by a buried object scanning sonar.
- Creator
- Tellier, Arnaud Marc., Florida Atlantic University, Schock, Steven G., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
-
The common approach for finding objects buried under the seabed is to use a single-channel chirp reflection profiler. Reflection profiles lack information on target location, geometry, and size. This thesis investigates methods for visualizing buried objects in noisy 3D acoustic data acquired by a small-aperture scanning sonar. Various surface and volume rendering methods are tested with synthetic datasets containing fluid-loaded spheres and with experimental data acquired with a 4-by-8 planar hydrophone array towed over buried objects of various aspect and size. The Maximum Intensity Projection is the best of the tested methods for real-time visualization of the data where a global overview of the targets is needed. A surface rendering technique such as Marching Cubes is useful for offline measurement of the geometry and size of buried objects selected by the operator.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15682
- Subject Headings
- Three-dimensional display systems, Sonar, Sound-waves--Scattering, Computer graphics
- Format
- Document (PDF)
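The Maximum Intensity Projection mentioned above is conceptually simple: each pixel of the output image keeps only the strongest sample along the corresponding ray through the volume, so bright, compact targets survive while weaker background clutter is suppressed. A minimal sketch, with a synthetic volume standing in for real sonar data:

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3-D data volume to a 2-D image by taking the maximum
    along the chosen viewing axis (one value per ray)."""
    return np.max(volume, axis=axis)

# Synthetic 8x8x8 volume: low-level noise plus one strong point "scatterer".
rng = np.random.default_rng(1)
vol = 0.1 * rng.random((8, 8, 8))
vol[3, 4, 5] = 1.0                       # buried target at depth index 3
mip = max_intensity_projection(vol, axis=0)
```

Because the projection is a single reduction per ray, it is cheap enough for the real-time overview role the abstract assigns it, at the cost of discarding depth information.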