Current Search: Sudhakar, Raghavan
- Title
- COMPUTER ANALYSIS AND SYNTHESIS OF AN ACOUSTIC PIANO NOTE.
- Creator
- GUGEL, KARL STEWART., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- An IBM Personal Computer AT system was used in conjunction with a professional data acquisition unit to digitize and store several cassette-deck-recorded piano notes. These digitized notes were then visually analyzed, both on an AT monitor and with a high-resolution plotter. Fourier and Walsh transformations were then performed on the digitized data to yield further visual information. Upon completion of this visual study, several types of data reduction and waveform synthesis methods were formulated. The experimental methods tested included a wide range of signal processing techniques such as Fourier Transformation, Walsh Transformation, Polynomial Curve Fitting, Linear Interpolation, Amplitude Normalization, and Frequency Normalization. The actual test performed on the experimental synthesis method consisted of recreating the piano note and then subjectively comparing the audio performance of the synthetic note against that of the original note.
- Date Issued
- 1987
- PURL
- http://purl.flvc.org/fcla/dt/14362
- Subject Headings
- Musical notation--Data processing
- Format
- Document (PDF)
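
A minimal numpy sketch of the analysis/resynthesis loop this abstract describes, using a synthetic decaying three-partial note in place of the cassette recordings; the sample rate, decay constant, and number of retained spectral bins are illustrative assumptions, not values from the thesis.

```python
import numpy as np

fs = 8000                                   # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
# Stand-in "piano note": decaying fundamental plus two overtones.
note = np.exp(-t) * (np.sin(2 * np.pi * 220 * t)
                     + 0.5 * np.sin(2 * np.pi * 440 * t)
                     + 0.2 * np.sin(2 * np.pi * 660 * t))

# Analysis: FFT, then keep only the strongest partials (data reduction).
spectrum = np.fft.rfft(note)
freqs = np.fft.rfftfreq(note.size, 1 / fs)
top = np.argsort(np.abs(spectrum))[-3:]     # indices of the 3 largest bins

# Synthesis: rebuild the note from the retained partials only.
reduced = np.zeros_like(spectrum)
reduced[top] = spectrum[top]
synthetic = np.fft.irfft(reduced, n=note.size)

print("kept partials (Hz):", np.sort(freqs[top]))
err = np.sqrt(np.mean((note - synthetic) ** 2))
print(f"RMS error of 3-partial resynthesis: {err:.4f}")
```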
- Title
- Parallel architectures and algorithms for digital filter VLSI implementation.
- Creator
- Desai, Pratik Vishnubhai., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- In many scientific and signal processing applications, there are increasing demands for large-volume and high-speed computations, which call not only for high-speed, low-power computing hardware, but also for novel approaches in developing new algorithms and architectures. This thesis is concerned with the development of such architectures and algorithms suitable for the VLSI implementation of recursive and nonrecursive one-dimensional digital filters using multiple slower processing elements. As background for the development, vectorization techniques such as state-space modeling, block processing, and look-ahead computation are introduced. Concurrent architectures such as systolic arrays and wavefront arrays, and appropriate parallel filter realizations such as lattice, all-pass, and wave filters, are reviewed. A fully hardware-efficient systolic array architecture termed the Multiplexed Block-State Filter is proposed for the high-speed implementation of lattice and direct realizations of digital filters. The thesis also proposes a new simplified algorithm, the Alternate Pole Pairing Algorithm, for realizing an odd-order recursive filter as the sum of two all-pass filters. Performance of the proposed schemes is verified through numerical examples and simulation results.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15155
- Subject Headings
- Integrated circuits--Very large scale integration, Parallel processing (Electronic computers), Computer network architectures, Algorithms (Data processing), Digital integrated circuits
- Format
- Document (PDF)
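
A small numpy sketch of the block-processing idea behind the block-state formulation mentioned above: a scalar IIR filter is converted to state space and advanced one block of L samples per iteration, so the L outputs inside a block can be computed concurrently by slower processing elements. The filter coefficients, block length, and matrix names are assumptions for illustration, not the thesis's Multiplexed Block-State Filter itself.

```python
import numpy as np
from scipy.signal import tf2ss, lfilter

b, a = [0.2, 0.3], [1.0, -0.5]            # example first-order IIR filter
A, B, C, D = tf2ss(b, a)
d = D.item()

L = 4                                      # block length (parallelism factor)
# Block recursion: x_next = A^L x + F u_blk,  y_blk = G x + H u_blk.
AL = np.linalg.matrix_power(A, L)
F = np.hstack([np.linalg.matrix_power(A, L - 1 - j) @ B for j in range(L)])
G = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(L)])
H = np.zeros((L, L))
for i in range(L):
    H[i, i] = d
    for j in range(i):
        H[i, j] = (C @ np.linalg.matrix_power(A, i - 1 - j) @ B).item()

u = np.random.default_rng(0).standard_normal(16)
x = np.zeros((A.shape[0], 1))
y = np.empty_like(u)
for k in range(0, u.size, L):              # one iteration per block of L samples
    u_blk = u[k:k + L].reshape(-1, 1)
    y[k:k + L] = (G @ x + H @ u_blk).ravel()
    x = AL @ x + F @ u_blk

print(np.allclose(y, lfilter(b, a, u)))    # matches sample-by-sample filtering
```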
- Title
- Performance evaluation of blind equalization techniques in the digital cellular environment.
- Creator
- Boccuzzi, Joseph., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- This thesis presents simulation results evaluating the performance of blind equalization techniques in the digital cellular environment. A new method employing a simple zero-memory nonlinear detector for complex signals is presented for various forms of Fractionally Spaced Equalizers (FSE). Initial simulations are conducted with Binary Phase Shift Keying (BPSK) to study the characteristics of FSEs. The simulations are then extended to the complex case via π/4 Differential Quadrature Phase Shift Keying (π/4-DQPSK) modulation. The primary focus of this thesis is the performance of this complex case when operating in Additive White Gaussian Noise (AWGN) and Rayleigh multipath fading channels.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14859
- Subject Headings
- Equalizers (Electronics), Computer algorithms, Data transmission systems, Programming electronic computers
- Format
- Document (PDF)
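
A sketch of blind fractionally spaced equalization in the spirit of the abstract, assuming T/2 spacing and BPSK; the constant modulus algorithm (CMA) stands in for the thesis's zero-memory nonlinear detector, which is not reproduced here. Channel, tap count, and step size are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
# BPSK through a mildly dispersive channel, 2 samples/symbol (T/2-spaced).
symbols = rng.choice([-1.0, 1.0], size=4000)
tx = np.zeros(2 * symbols.size)
tx[::2] = symbols
rx = np.convolve(tx, [1.0, 0.0, 0.4], mode="full")[:tx.size]
rx += 0.05 * rng.standard_normal(rx.size)

taps = np.zeros(11); taps[5] = 1.0         # center-spike initialization
init = taps.copy()
mu = 1e-3
for k in range(taps.size, rx.size, 2):     # one blind update per symbol interval
    window = rx[k - taps.size:k][::-1]
    y = taps @ window
    taps -= mu * y * (y * y - 1.0) * window    # CMA gradient, target modulus 1

def cma_cost(w):
    return np.mean((np.convolve(rx, w, mode="same")[::2] ** 2 - 1.0) ** 2)

print(f"CMA cost before: {cma_cost(init):.3f}  after: {cma_cost(taps):.3f}")
```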
- Title
- ROBOT CALIBRATION USING STEREO VISION.
- Creator
- CHEN, SHOUPU., Florida Atlantic University, Roth, Zvi S., Sudhakar, Raghavan
- Abstract/Description
- This thesis presents a study of the use of the stereo vision technique in robot calibration. Three cameras are used to extract the position of a target point attached to each of the robot manipulator links, for the purpose of identifying the actual kinematic parameters of every link of the robot manipulator under test. The robot kinematic model used in this study is the S-Model, which is an extension of the well-known Denavit-Hartenberg model. The calibration was performed on the wrist of the IBM 7565 robot. The experimental set-up, the results, and the necessary software are all presented in this thesis.
- Date Issued
- 1987
- PURL
- http://purl.flvc.org/fcla/dt/14416
- Subject Headings
- Robotics--Calibration--Measurement
- Format
- Document (PDF)
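
A sketch of the stereo measurement step underlying the calibration above: linear (DLT) triangulation of a target point from camera views. Two views are shown for brevity where the thesis uses three cameras, and the projection matrices here are hypothetical, not the IBM 7565 set-up.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3-D point from two camera views."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])       # each view contributes 2 equations
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]
    return X[:3] / X[3]                    # de-homogenize

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Hypothetical projection matrices: identical intrinsics, 0.2 m baseline.
K = np.array([[800., 0, 320], [0, 800., 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 1.5])
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~X_true
```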
- Title
- IMPULSE NOISE AND ITS EFFECT ON RECEIVING STRUCTURES: A SURVEY AND SIMULATION.
- Creator
- CADD, JIMMY WILLIAM., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- The effects of impulse noise on receiving systems are studied, and impulse noise models commonly used in the analysis of such receiving systems are introduced. Various techniques for identifying the optimum receiving structure are presented, and the concept of a nonlinear receiver for enhancing receiver performance in impulse noise environments is evolved. The effect of finite predetection bandwidth on the performance of such nonlinear receiver structures is studied in a qualitative fashion through computer simulation. The performance of a linear receiver (matched filter) is compared to that of nonlinear receiver structures employing nonlinearities such as the blanker and soft limiter; noncoherent ASK modulation was used for the computer simulation experiment. The performance of the blanker and soft limiter is then compared for different predetection bandwidths. An attempt was made to optimize a particular receiver structure in terms of the predetection bandwidth, for a given model of corrupting noise parameters (Gaussian and impulsive).
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14318
- Subject Headings
- Signal theory (Telecommunication), Noise control
- Format
- Document (PDF)
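
A small simulation of the two receiver nonlinearities compared in the abstract, the blanker and the soft limiter, applied to a signal corrupted by Gaussian-plus-impulsive noise. The thresholds, impulse rate, and signal model are assumed values, and the predetection filtering studied in the thesis is omitted.

```python
import numpy as np

def blanker(x, threshold):
    """Zero out samples whose magnitude exceeds the threshold."""
    return np.where(np.abs(x) > threshold, 0.0, x)

def soft_limiter(x, threshold):
    """Clip samples to +/- threshold, passing small samples linearly."""
    return np.clip(x, -threshold, threshold)

rng = np.random.default_rng(1)
n = 10_000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))          # stand-in carrier
noise = 0.1 * rng.standard_normal(n)                      # Gaussian floor
hits = rng.random(n) < 0.01                               # 1% impulse events
noise[hits] += 20.0 * rng.standard_normal(hits.sum())     # impulsive bursts
received = signal + noise

for name, out in [("none", received),
                  ("blanker", blanker(received, 3.0)),
                  ("soft limiter", soft_limiter(received, 3.0))]:
    snr = 10 * np.log10(np.mean(signal**2) / np.mean((out - signal)**2))
    print(f"{name:>12}: output SNR = {snr:5.1f} dB")
```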
- Title
- Estimation of motion parameters of a planar body using binocular camera configuration.
- Creator
- Haliyur, Padma., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- This thesis is concerned with the estimation of the motion parameters of planar object surfaces viewed with a binocular camera configuration. Possible applications of this method include autonomous guidance of a moving platform (AGVS) via imaging, and segmentation of moving objects using information concerning their motion and structure. The brightness constraint equation is obtained by assuming that the brightness of a moving patch is almost invariant. This equation is solved for the single-camera case as well as the binocular case, either with a known surface normal or by determining the normal iteratively using the estimates of the motion parameters. For this value of the surface normal, rotational and translational motion components are determined over the entire image using a least-squares algorithm. The algorithm is tested on simulated as well as real images for both single-camera and binocular-camera situations. (Abstract shortened with permission of author.)
- Date Issued
- 1991
- PURL
- http://purl.flvc.org/fcla/dt/14692
- Subject Headings
- Motion, Image processing
- Format
- Document (PDF)
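
A minimal sketch of the brightness-constraint least-squares step: assuming brightness constancy, every pixel contributes one equation Ix·u + Iy·v + It = 0, and a global translation (u, v) is recovered by least squares over a synthetic frame pair. The planar-surface normal and the binocular geometry treated in the thesis are not modeled here.

```python
import numpy as np

y, x = np.mgrid[0:64, 0:64].astype(float)
frame0 = np.sin(x / 4.0) + np.cos(y / 5.0)           # smooth synthetic scene
frame1 = np.roll(frame0, shift=(1, 2), axis=(0, 1))  # move down 1, right 2 px

# Brightness constraint: Ix*u + Iy*v + It = 0 at every pixel.
Iy, Ix = np.gradient(frame0)
It = frame1 - frame0
sl = (slice(4, -4), slice(4, -4))        # drop wrap-around borders from roll
A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
b = -It[sl].ravel()
flow, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated (u, v):", np.round(flow, 2))        # approximately (2, 1)
```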
- Title
- Homomorphic estimation and detection of convolved signals.
- Creator
- Cox, Steven William., Florida Atlantic University, Sudhakar, Raghavan
- Abstract/Description
- A new approach to estimating convolved signals, referred to as homomorphic estimation, is presented. This method is the fusion of two well-developed signal processing techniques. The first is the class of homomorphic systems, which are characterized by a generalized principle of superposition and allow any linear filtering method to be applied when signals are non-additively combined. The second is the well-known Wiener estimation filter, which has the ability to estimate a desired signal in the presence of additive noise. The theory and realization of the homomorphic system for convolution, based on the Fourier transform, are developed. Homomorphic estimation system performance is analyzed using digital computer simulation. Homomorphic detection is also presented and is shown to be a useful and easily implemented method.
- Date Issued
- 1988
- PURL
- http://purl.flvc.org/fcla/dt/14459
- Subject Headings
- Signal detection
- Format
- Document (PDF)
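
A sketch of the homomorphic mapping the abstract builds on: taking the log-spectrum turns convolution into addition, so a short channel can be approximately separated from a broadband source by liftering the low-quefrency part of the cepstrum. The channel, lifter cutoff, and signal length are assumed, and the Wiener estimation stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
source = rng.standard_normal(n)                  # broadband excitation
channel = np.array([1.0, 0.6, 0.25])             # short convolutional blur
observed = np.convolve(source, channel, mode="full")[:n]

# Homomorphic mapping: convolution -> addition via the log-spectrum.
log_spec = np.log(np.abs(np.fft.fft(observed)) + 1e-12)
cepstrum = np.fft.ifft(log_spec).real

# The short channel lives at low quefrencies; lifter those bins out.
lifter = np.zeros(n)
lifter[:8] = 1.0
lifter[-7:] = 1.0                                # keep the symmetric mirror
channel_log = np.fft.fft(cepstrum * lifter).real
est = np.exp(channel_log)
est /= est[0]                                    # compare shapes, not scale
true = np.abs(np.fft.fft(channel, n))
true /= true[0]
for k in (0, 64, 128):
    print(f"bin {k:3d}: estimated {est[k]:.2f}  true {true[k]:.2f}")
```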
- Title
- DSP implementation of Turbo codes using the soft output Viterbi algorithm.
- Creator
- Dewsnap, Robert C., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- There are various algorithms used for the iterative decoding of two-dimensional systematic convolutional codes in applications such as spread-spectrum communications and CDMA detection. The main objective of these decoding schemes is to approach the Shannon limit in signal-to-noise ratio while keeping the system complexity and processing delay to a minimum. One such scheme proposed recently is termed Turbo (de)coding. Through the use of log-likelihood algebra, it is shown that a decoder can be developed which accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a priori information, and extrinsic information to subsequent stages of iteration. The output is then used as the a priori input information for the next iteration. Realization of the Turbo decoder is performed on the TMS320C30 digital signal processing chip by Texas Instruments, using a low-complexity soft-input soft-output decoding algorithm. Hardware issues such as memory and processing time, and how they are impacted by the chosen decoding scheme, are addressed. Test results of the BER performance are presented for various block sizes and numbers of iterations.
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15616
- Subject Headings
- Engineering, Electronics and Electrical
- Format
- Document (PDF)
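
A toy illustration of the log-likelihood algebra described above, using a rate-1/3 repetition code so the extrinsic information has a closed form (the sum of the other copies' channel LLRs); the thesis itself decodes convolutional turbo codes with SOVA, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, size=1000)
x = np.repeat(1 - 2 * bits, 3).reshape(-1, 3)    # rate-1/3 repetition, BPSK
sigma = 1.0
y = x + sigma * rng.standard_normal(x.shape)     # AWGN channel

Lc = 2.0 / sigma**2                              # channel reliability
llr_channel = Lc * y                             # per-copy channel LLRs

# Soft-in/soft-out view of one bit: for a repetition code, the extrinsic
# information about copy 0 is simply the sum of the other copies' LLRs.
l_apriori = np.zeros(len(bits))                  # no prior on the first pass
l_extrinsic = llr_channel[:, 1] + llr_channel[:, 2]
l_posterior = llr_channel[:, 0] + l_apriori + l_extrinsic

hard_single = (llr_channel[:, 0] < 0).astype(int)
hard_soft = (l_posterior < 0).astype(int)
print("BER, one copy:      ", np.mean(hard_single != bits))
print("BER, soft combining:", np.mean(hard_soft != bits))
```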
- Title
- Development of handprinting character recognition system using two stage shape and stroke classification.
- Creator
- Tse, Hing Wing., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis deals with the recognition of digitized handprinted characters. Digitized character images are thresholded, binarized, and converted into 32 x 32 matrices. The binarized character matrices are preprocessed to remove noise and thinned down to one pixel per linewidth. Four dominant features, namely (1) the number of loops, (2) the number of end-pixels, (3) the number of 3-branch-pixels, and (4) the number of 4-branch-pixels, are used as criteria to pre-classify characters into 14 groups. Characters belonging to the larger groups are encoded into chain code and compiled into a data base. Recognition of characters belonging to the larger groups is achieved by data base look-up, and/or decision tree tests if ambiguities occur in the data base entries. Recognition of characters belonging to the smaller groups is done by decision tree tests.
- Date Issued
- 1988
- PURL
- http://purl.flvc.org/fcla/dt/14486
- Subject Headings
- Optical character recognition devices, Pattern recognition systems
- Format
- Document (PDF)
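
A sketch of the pre-classification features named in the abstract, counting end-pixels and branch-pixels of a thinned character. Stroke degree is approximated here by the 4-connected neighbor count, a simplification of the 8-connected analysis a real thinning pipeline would use; the loop count is omitted.

```python
import numpy as np

def stroke_features(skeleton):
    """Count end- and branch-pixels in a one-pixel-wide binary skeleton.

    Degree 1 = end-pixel, 3 = 3-branch-pixel, 4 = 4-branch-pixel,
    using 4-connected neighbor counts as a simplified stand-in.
    """
    p = np.pad(skeleton.astype(int), 1)
    degree = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    degree = np.where(skeleton, degree, 0)
    return {d: int(np.sum(degree == d)) for d in (1, 3, 4)}

# A tiny "T"-shaped skeleton: 3 end-pixels and one 3-branch-pixel.
t_shape = np.zeros((7, 7), dtype=bool)
t_shape[3, 1:6] = True      # horizontal stroke
t_shape[3:6, 3] = True      # vertical stroke hanging from its middle
print(stroke_features(t_shape))   # {1: 3, 3: 1, 4: 0}
```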
- Title
- Digital signal processing for a high-resolution three-dimensional sonar imaging system for autonomous underwater vehicles.
- Creator
- Cao, Ping., Florida Atlantic University, Cuschieri, Joseph M., Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- In this dissertation, the digital signal processing techniques required for a 3-D sonar imaging system are examined. The achievable performance of the generated images is investigated using a combination of theoretical analysis, computer simulation, and field experiments. The system consists of a forward-looking sonar with separate projector and receiver. The projector is a line source with an 80-degree by 1.2-degree beam pattern, which is electronically scanned within a 150-degree sector. The receiver is a multi-element line array, where each transducer element has a directivity pattern that covers the full sector of view, that is, 150 degrees by 80 degrees. The purpose of this sonar system is to produce three-dimensional (3-D) images which display the underwater topography within the sector of view up to a range of 200 meters. The principle of operation of the proposed 3-D imaging system differs from that of other commonly used systems in that it is not based on the intensity of backscatter. The geometries of the targets are obtained from the delay and direction information that can be extracted from the signal backscatter. The acquired data are further processed using an approach based on sequential Fourier transforms to build the 3-D images. With careful selection of the system parameters, the generated images have sufficient quality to be used for AUV tasks such as obstacle avoidance, navigation, and object classification. An approach based on a sophisticated two-dimensional (2-D) autoregressive (AR) model is explored to further improve the resolution and generate images of higher quality. The real-time processing requirements for image generation are evaluated, with the use of dedicated Digital Signal Processing (DSP) chips. A pipeline processing model is analyzed and developed on a selected system.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12317
- Subject Headings
- Sonar, Signal processing--Digital techniques, Three-dimensional display systems, Submersibles
- Format
- Document (PDF)
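
A narrowband delay-and-sum sketch of how direction information is extracted from a receiving line array, in the spirit of the system described above; the operating frequency, element count, and SNR are assumed, and the scanned projector, delay extraction, and 3-D image formation are not modeled.

```python
import numpy as np

c = 1500.0                      # sound speed in water (m/s)
f = 100e3                       # narrowband frequency (Hz), assumed
n_elem = 32
d = c / f / 2                   # half-wavelength element spacing
positions = d * np.arange(n_elem)

theta_true = np.deg2rad(25.0)   # target bearing
rng = np.random.default_rng(5)
# Narrowband snapshot model: plane-wave steering vector, random phases, noise.
snapshots = (np.exp(2j * np.pi * f * positions[:, None] * np.sin(theta_true) / c)
             * np.exp(2j * np.pi * rng.random((1, 64))))
snapshots += 0.1 * (rng.standard_normal((n_elem, 64))
                    + 1j * rng.standard_normal((n_elem, 64)))

# Delay-and-sum: steer across the sector, pick the bearing with max power.
angles = np.deg2rad(np.linspace(-75, 75, 601))      # 150-degree sector
response = np.exp(2j * np.pi * f * positions[:, None] * np.sin(angles) / c)
power = np.mean(np.abs(response.conj().T @ snapshots) ** 2, axis=1)
print("estimated bearing (deg):", np.rad2deg(angles[np.argmax(power)]))
```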
- Title
- Implementation of low-complexity Viterbi decoder.
- Creator
- Mukhtar, Adeel., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The design of a mobile communication receiver requires addressing the stringent issues of low signal-to-noise ratio (SNR) operation and low battery power consumption. Typically, forward error correction using convolutional coding with Viterbi decoding (VD) is employed to improve the error performance. However, even with moderate code lengths, the computation and storage requirements of a conventional VD are substantial, consuming an appreciable fraction of DSP computations and hence battery power. The new error selective Viterbi decoding (ESVD) scheme developed recently (1) reduces the computational load substantially by taking advantage of the noise-free intervals to limit the trellis search. This thesis is concerned with the development of an efficient hardware architecture to implement a hard-decision version of the ESVD scheme for the IS-54 coder. The implementations are optimized to reduce computational complexity. The performance of the implemented ESVD scheme is verified for different channel conditions.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15429
- Subject Headings
- Decoders (Electronics), Coding theory
- Format
- Document (PDF)
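
A textbook hard-decision Viterbi decoder for the (7,5), constraint-length-3 convolutional code, shown as the baseline that trellis-pruning schemes like ESVD accelerate; the ESVD search-limiting logic and the IS-54 code parameters are not reproduced here.

```python
import numpy as np

G = [0b111, 0b101]            # generators of the (7,5), K=3, rate-1/2 code
n_states = 4

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state            # [newest, prev, oldest] bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = 10**9
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            for b in (0, 1):
                reg = (b << 2) | state
                nxt = reg >> 1
                expect = [bin(reg & g).count("1") & 1 for g in G]
                dist = sum(r != e for r, e in zip(received[i:i + 2], expect))
                if metric[state] + dist < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + dist
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

bits = list(np.random.default_rng(6).integers(0, 2, 20))
coded = encode(bits)
coded[5] ^= 1; coded[16] ^= 1             # inject two channel errors
print("decoded == sent:", viterbi(coded) == bits)
```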
- Title
- Low-level and high-level correlation for image registration.
- Creator
- Mandalia, Anil Dhirajlal., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The fundamental goal of a machine vision system in the inspection of an assembled printed circuit board is to locate the integrated circuit (IC) components. These components are then checked for their position and orientation with respect to a given position and orientation of the model, to detect deviations. To this end, a method based on a modified two-level correlation scheme is presented in this thesis. In the first level, Low-Level correlation, a modified two-stage template matching method is proposed. It makes use of random search techniques, better known as the Monte Carlo method, to speed up the matching process on binarized versions of the images. Due to the random search techniques, there is uncertainty about the locations where matches are found. In the second level, High-Level correlation, an evidence scheme based on the Dempster-Shafer formalism is presented to resolve this uncertainty. Experimental results on a printed circuit board containing mounted integrated circuit components are also presented to demonstrate the validity of the techniques.
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/14635
- Subject Headings
- Image processing--Digital techniques, Computer vision, Integrated circuits
- Format
- Document (PDF)
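
A sketch of the low-level stage's idea: template matching on a binarized image where candidate locations are drawn at random (Monte Carlo) rather than scanned exhaustively, which is why match locations carry the uncertainty the Dempster-Shafer stage must resolve. The board, template, and sample budget are made-up values.

```python
import numpy as np

rng = np.random.default_rng(7)
board = rng.random((200, 200)) < 0.05          # sparse binary "board" image
template = np.zeros((12, 12), dtype=bool)
template[2:10, 2:10] = True                    # stand-in IC footprint
board[60:72, 110:122] |= template              # paste a component at (60, 110)

def match_score(img, tmpl, r, c):
    """Fraction of agreeing pixels between template and image window."""
    window = img[r:r + tmpl.shape[0], c:c + tmpl.shape[1]]
    return np.mean(window == tmpl)

# Monte Carlo search: score random locations instead of every pixel.
best, best_rc = -1.0, None
for _ in range(3000):                          # far fewer than ~35k positions
    r = rng.integers(0, board.shape[0] - template.shape[0])
    c = rng.integers(0, board.shape[1] - template.shape[1])
    s = match_score(board, template, r, c)
    if s > best:
        best, best_rc = s, (r, c)
print("best random-search match:", best_rc, f"score={best:.2f}")
```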
- Title
- Comparison of different realizations and adaptive algorithms for channel equalization.
- Creator
- Kamath, Anuradha K., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This thesis presents simulation results comparing the performance of different realizations and adaptive algorithms for channel equalization. An attempt is made to study and compare the performance of several filter structures used as equalizers in fast data transmission over the baseband channel. To this end, simulation experiments are performed using minimum-phase and non-minimum-phase channel models, with adaptation algorithms such as the least mean square (LMS) and recursive least squares (RLS) algorithms, filter structures such as the lattice and transversal filters, and input signals such as binary phase shift keyed (BPSK) and quadrature phase shift keyed (QPSK) signals. Based on the simulation studies, conclusions are drawn regarding the performance of the various adaptation algorithms.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14974
- Subject Headings
- Computer algorithms, Data transmission systems, Equalizers (Electronics)
- Format
- Document (PDF)
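
A minimal trained LMS transversal equalizer of the kind compared in the abstract, run over an assumed three-tap ISI channel with BPSK input; the tap count, step size, and decision delay are illustrative choices, and the lattice and RLS variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000
symbols = rng.choice([-1.0, 1.0], size=n)                 # BPSK training data
channel = [0.3, 1.0, 0.3]                                 # ISI channel, assumed
received = np.convolve(symbols, channel, mode="full")[:n]
received += 0.05 * rng.standard_normal(n)

n_taps, mu, delay = 11, 0.01, 6                           # equalizer settings
w = np.zeros(n_taps)
mse = []
for k in range(n_taps, n):
    x = received[k - n_taps:k][::-1]                      # regressor vector
    e = symbols[k - delay] - w @ x                        # training error
    w += mu * e * x                                       # LMS update
    mse.append(e * e)

print(f"MSE over first 100 updates: {np.mean(mse[:100]):.3f}, "
      f"last 100: {np.mean(mse[-100:]):.3f}")
```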
- Title
- A novel DSP scheme for image compression and HDTV transmission.
- Creator
- Dong, Xu., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The main objective of the research is to develop computationally efficient hybrid coding schemes for the low-bit-rate implementation of image frames and image sequences. Basic fractal block coding can compress a relatively low resolution image efficiently without blocky artifacts, but it does not converge well at the high-frequency edges. This research proposes a hybrid multi-resolution scheme which combines the advantages of the fractal and DCT coding schemes. Fractal coding is applied to obtain a lower-resolution, quarter-size output image, and the DCT is then used to encode the error residual between the original full-bandwidth image signal and the fractal-decoded image signal. At the decoder side, the full-resolution, full-size reproduced image is generated by adding the decoded error image to the decoded fractal image. The lower-resolution, quarter-size output image is also given automatically by the iterated function scheme without extra effort. Other advantages of the scheme are that the high-resolution layer is generated from the error image, which covers both the bandwidth loss and the coding error of the lower-resolution layer, and that it does not need a sophisticated classification procedure. A series of computer simulation experiments is conducted and their results are presented to illustrate the merit of the scheme. The hybrid fractal coding method is then extended to process motion sequences as well. A new scheme is proposed for motion vector detection and motion compensation, judiciously combining the techniques of fractal compression and block matching. The advantage of this scheme is that it improves the performance of the motion compensation while keeping the overall computational complexity low for each frame. The simulation results on realistic video conference image sequences support the superiority of the proposed method in terms of reproduced picture quality and compression ratio.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12407
- Subject Headings
- Hybrid integrated circuits, Image compression, Fractals, Image processing--Digital techniques, High definition television
- Format
- Document (PDF)
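
A sketch of the two-layer structure described above, with plain down/upsampling standing in for the fractal layer (actual fractal block coding is not reproduced): the residual between the original and the base layer is DCT-coded with only low-frequency coefficients retained, and the decoder adds the decoded residual back.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(9)
image = rng.random((64, 64))
image = np.cumsum(np.cumsum(image, 0), 1)       # smooth, image-like field
image /= image.max()

# Base layer: quarter-size image (stand-in for the fractal-coded layer).
base = image[::2, ::2]
base_up = np.kron(base, np.ones((2, 2)))        # crude upsampling to full size

# Enhancement layer: DCT-code the residual, keep only low-frequency terms.
residual = image - base_up
coeffs = dctn(residual, norm="ortho")
mask = np.zeros_like(coeffs)
mask[:16, :16] = 1.0                            # retain 1/16 of coefficients
decoded = base_up + idctn(coeffs * mask, norm="ortho")

for name, rec in [("base layer only", base_up), ("base + residual", decoded)]:
    psnr = 10 * np.log10(1.0 / np.mean((image - rec) ** 2))
    print(f"{name:>16}: PSNR = {psnr:5.1f} dB")
```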
- Title
- Selective texture characterization using Gabor filters.
- Creator
- Boutros, George., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The objective of this dissertation is to develop effective algorithms for texture characterization, segmentation, and labeling that operate selectively to label image textures, using the Gabor representation of signals. These representations are an analog of the spatial frequency tuning characteristics of the visual cortex cells. Of all spatial/spectral signal representations, the Gabor function provides optimal joint resolution between the two domains. A discussion of spatial/spectral representations focuses on the Gabor function and the biological analog that exists between it and the simple cells of the striate cortex. A simulation generates examples of the use of the Gabor filter as a line detector with synthetic data. Simulations are then presented using Gabor filters for real texture characterization. The Gabor filter spatial and spectral attributes are selectively chosen, based on the information from a scale-space image, in order to maximize the resolution of the characterization process. A variation of probabilistic relaxation that exploits the Gabor filter spatial and spectral attributes is devised and used to force a consensus of the filter responses for texture characterization. We then perform segmentation of the image using the concept of isolating low-energy states within an image. This iterative smoothing algorithm, operating as a Gabor filter post-processing stage, depends on a line-process discontinuity threshold. The discontinuity threshold is selected from the modes of the histogram of the relaxed Gabor filter responses, using probabilistic relaxation to detect the significant modes. We test our algorithm on simple synthetic and real textures, then use a more complex natural texture image to test the entire algorithm. Limitations on textural resolution are noted, as well as on the resolution of the image segmentation process.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12342
- Subject Headings
- Image processing--Digital techniques, Computer vision
- Format
- Document (PDF)
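
A small sketch of Gabor-filter texture characterization: a complex Gabor kernel tuned to each of two orientations separates a two-texture stripe image by local energy. The kernel size, bandwidth, and frequency are assumed values, and the relaxation and segmentation stages of the dissertation are not shown.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Complex Gabor: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * rot)

# Two-texture test image: horizontal stripes left, vertical stripes right.
y, x = np.mgrid[0:64, 0:128].astype(float)
image = np.where(x < 64, np.sin(2 * np.pi * y / 4), np.sin(2 * np.pi * x / 4))

for theta, name in [(0.0, "0 deg"), (np.pi / 2, "90 deg")]:
    kernel = gabor_kernel(freq=0.25, theta=theta)
    energy = np.abs(fftconvolve(image, kernel, mode="same")) ** 2
    left, right = energy[:, :64].mean(), energy[:, 64:].mean()
    print(f"Gabor {name}: mean energy left={left:7.1f} right={right:7.1f}")
```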
- Title
- DSP implementation of turbo decoder using the Modified-Log-MAP algorithm.
- Creator
- Khan, Zeeshan Haneef., Florida Atlantic University, Zhuang, Hanqi, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The design of any communication receiver needs to address the issue of operating at the lowest possible signal-to-noise ratio. Among the various algorithms that facilitate this objective are those used for iterative decoding of two-dimensional systematic convolutional codes in applications such as spread spectrum communications and Code Division Multiple Access (CDMA) detection. A main theme of any decoding scheme is to approach the Shannon limit in signal-to-noise ratio. These decoding algorithms have various complexity levels and processing delay issues; hence, the optimality depends on how they are used in the system. The technique used in various decoding algorithms is termed iterative decoding. Iterative decoding was first developed as a practical means for decoding turbo codes. With log-likelihood algebra, it is shown that a decoder can be developed that accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a posteriori information, and extrinsic information to subsequent stages of iteration. Different algorithms such as the Soft Output Viterbi Algorithm (SOVA), Maximum A Posteriori (MAP), and Log-MAP are compared and their complexities are analyzed in this thesis. A turbo decoder is implemented on the TMS320C30 Digital Signal Processing (DSP) chip by Texas Instruments using a Modified-Log-MAP algorithm. For the Modified-Log-MAP algorithm, the optimal choice of the lookup table (LUT) is analyzed by experimenting with different LUT approximations. A low-complexity decoder is proposed for a (7,5) code and implemented on the DSP chip. Performance of the decoder is verified in an additive white Gaussian noise (AWGN) environment. Hardware issues such as memory requirements and processing time are addressed for the chosen decoding scheme. Test results of the bit error rate (BER) performance are presented for a fixed number of frames and iterations.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12948
- Subject Headings
- Error-correcting codes (Information theory), Signal processing--Digital techniques, Coding theory, Digital communications
- Format
- Document (PDF)
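
A sketch of the core approximation a Modified-Log-MAP decoder makes: the correction term of the max* (Jacobian logarithm) operation is read from a small lookup table instead of computed exactly. The table spacing and range here are assumed values, the kind of design choice the abstract says the thesis analyzes experimentally.

```python
import numpy as np

# Log-MAP needs max*(a, b) = log(exp(a) + exp(b))
#                          = max(a, b) + log(1 + exp(-|a - b|)).
# Modified-Log-MAP replaces the correction term with a LUT over |a - b|.
TABLE_STEP = 0.25
TABLE = np.log1p(np.exp(-np.arange(0, 8, TABLE_STEP)))   # 32-entry LUT

def max_star_exact(a, b):
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_lut(a, b):
    idx = min(int(abs(a - b) / TABLE_STEP), TABLE.size - 1)
    return max(a, b) + TABLE[idx]

for a, b in [(0.3, -1.2), (2.0, 1.9), (-4.0, 3.0)]:
    print(f"max*({a:+.1f},{b:+.1f}): exact={max_star_exact(a, b):+.4f} "
          f"LUT={max_star_lut(a, b):+.4f}")
```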
- Title
- Automatic extraction and tracking of eye features from facial image sequences.
- Creator
- Xie, Xangdong., Florida Atlantic University, Sudhakar, Raghavan, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme is different from conventional intrusive eye-movement measuring systems and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented in which a new algorithm is put forward for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner, using both the gray-level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over existing corner detectors is that it improves the reliability of detection and localization by imposing criteria related to both the gray-level data and the corner structure. The extraction of eye features is performed using an improved method of deformable templates, which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized values of the terms, so the weighting schedule of the modified method does not demand any expert knowledge from the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. Based on a geometrical model which describes the characteristics of the eye, measurement equations are formulated that relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, taking into account the measurement noise. The small processing time allows this tracking algorithm to be used in real-time applications. This tracking algorithm is suitable for an automated, non-intrusive, and inexpensive system, as it is capable of measuring the time profiles of the eye movements. The issue of compensating for head movements during the tracking of eye movements is also discussed. An appropriate measurement model is established to describe the effects of head movements, and based on this model a Kalman filter structure is formulated to carry out the compensation. The whole tracking scheme, which cascades two Kalman filters, is constructed to track the iris movement while compensating for the head movement. The presence of eye blinks is also taken into account, and their detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, non-intrusive, and inexpensive system which provides accurate time profiles of eye movement tracking from video image frames.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12377
- Subject Headings
- Kalman filtering, Eye--Movements, Algorithms, Image processing
- Format
- Document (PDF)
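
A minimal discrete Kalman filter of the kind the abstract describes for recursive eye-feature estimation, tracking one coordinate under a constant-velocity model with noisy position measurements; the state model, noise covariances, and trajectory are all assumed, and the cascaded head-compensation filter is not shown.

```python
import numpy as np

dt = 1.0                                   # one frame interval
F = np.array([[1, dt], [0, 1]])            # constant-velocity state model
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = 1e-4 * np.eye(2)                       # process noise covariance
R = np.array([[0.5]])                      # measurement noise covariance

rng = np.random.default_rng(10)
true_pos = 0.2 * np.arange(100)            # iris x-coordinate drifting right
z = true_pos + np.sqrt(R[0, 0]) * rng.standard_normal(100)

x = np.zeros((2, 1))                       # state: [position, velocity]
P = np.eye(2)
estimates = []
for zk in z:
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K * (zk - (H @ x).item())      # update with the new measurement
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0, 0])

err_raw = np.mean((z[50:] - true_pos[50:]) ** 2)
err_kf = np.mean((np.array(estimates[50:]) - true_pos[50:]) ** 2)
print(f"measurement MSE: {err_raw:.3f}  Kalman MSE: {err_kf:.3f}")
```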
- Title
- An artificial neural network architecture for interpolation, function approximation, time series modeling and control applications.
- Creator
- Luebbers, Paul Glenn., Florida Atlantic University, Pandya, Abhijit S., Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Two new artificial neural network architectures, called the Power Net (PWRNET) and the Orthogonal Power Net (OPWRNET), have been developed. Based on the Taylor series expansion of the hyperbolic tangent function, these novel architectures can approximate multi-input multi-layer artificial networks while requiring only a single layer of hidden nodes. This allows a compact network representation with only one layer of hidden-layer weights. The resulting trained network can be expressed as a polynomial function of the input nodes. Applications which cannot be implemented with conventional artificial neural networks, due to their intractable nature, can be developed with these network architectures. The degree of nonlinearity of the network can be directly controlled by adjusting the number of hidden-layer nodes, thus avoiding the over-fitting problems that restrict generalization. The learning algorithm used for adapting the network is the familiar error back-propagation training algorithm. Other learning algorithms may be applied, and since only one hidden layer is to be trained, the training performance of the network is expected to be comparable to or better than that of conventional multi-layer feed-forward networks. The new architecture is explored by applying OPWRNET to classification, function approximation, and interpolation problems. These applications show that the OPWRNET has performance comparable to multi-layer perceptrons. The OPWRNET was also applied to the prediction of noisy time series and the identification of nonlinear systems. The resulting trained networks, for system identification tasks, can be expressed directly as discrete nonlinear recursive polynomials. This characteristic was exploited in the development of two new neural-network-based nonlinear control algorithms, the Linearized Self-Tuning Controller (LSTC) and a variation of a Neural Adaptive Controller (NAC). These control algorithms are compared to a linear self-tuning controller and an artificial-neural-network-based Inverse Model Controller. The advantages of these new controllers are discussed.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12357
- Subject Headings
- Neural networks (Computer science)
- Format
- Document (PDF)
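
A tiny illustration of the principle behind the polynomial expressibility claimed above: truncating the Taylor series of tanh turns a hidden unit's response into an explicit polynomial in the inputs. A single hidden unit with assumed weights is shown here, not the PWRNET/OPWRNET architecture itself.

```python
import numpy as np

# tanh(x) ~ x - x^3/3 + 2x^5/15 near 0, so a tanh unit applied to a linear
# combination of inputs expands into a polynomial in those inputs.
def tanh_taylor(x):
    return x - x**3 / 3 + 2 * x**5 / 15

rng = np.random.default_rng(11)
w = np.array([0.4, -0.7])          # hidden-layer weights, assumed
v = 1.3                            # output weight, assumed

inputs = 0.5 * rng.standard_normal((5, 2))
pre = inputs @ w
exact = v * np.tanh(pre)
poly = v * tanh_taylor(pre)        # same unit as an explicit polynomial
for e, p in zip(exact, poly):
    print(f"tanh net: {e:+.5f}   polynomial net: {p:+.5f}")
```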
- Title
- Learning in connectionist networks using the Alopex algorithm.
- Creator
- Venugopal, Kootala Pattath., Florida Atlantic University, Pandya, Abhijit S., Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The Alopex algorithm is presented as a universal learning algorithm for connectionist models. It is shown that the Alopex procedure can be used efficiently as a supervised learning algorithm for such models. The algorithm is demonstrated successfully on a variety of network architectures, including multilayer perceptrons, time-delay models, asymmetric fully recurrent networks, and memory neuron networks. The learning performance as well as the generalization capability of the Alopex algorithm are compared with those of the backpropagation procedure on a number of benchmark problems, and it is shown that Alopex has specific advantages over backpropagation. Two new architectures (gain-layer schemes) are proposed for the on-line, direct adaptive control of dynamical systems using neural networks. The proposed schemes are shown to provide better dynamic response and tracking characteristics than the other existing direct control schemes. A velocity reference scheme is introduced to improve the dynamic response of on-line learning controllers. The proposed learning algorithm and architectures are studied on three practical problems: (i) classification of handwritten digits using Fourier descriptors; (ii) recognition of underwater targets from sonar returns, considering the temporal dependencies of consecutive returns; and (iii) on-line learning control of autonomous underwater vehicles, starting with random initial conditions. Detailed studies are conducted on the learning control applications. The effect of the network learning rate on the tracking performance and dynamic response of the system is investigated. Also, the ability of the neural network controllers to adapt to slow and sudden parameter disturbances and measurement noise is studied in detail.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12325
- Subject Headings
- Computer algorithms, Computer networks, Neural networks (Computer science), Machine learning
- Format
- Document (PDF)
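
A minimal Alopex-style loop on a toy quadratic error surface: each weight takes a fixed-size step whose direction is biased by the correlation between the previous weight change and the change in error, so no gradients are needed. The step size, temperature, and iteration count are assumed, and the temperature annealing of the full algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(12)

def loss(w):
    """Toy quadratic error surface standing in for a network's error."""
    return np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2)

w = np.zeros(3)
delta, T = 0.05, 0.002                       # step size and temperature
prev_dw = delta * rng.choice([-1.0, 1.0], size=3)
prev_E = loss(w)
w = w + prev_dw
for _ in range(3000):
    E = loss(w)
    C = prev_dw * (E - prev_E)               # weight-change/error correlation
    p = 1.0 / (1.0 + np.exp(C / T))          # probability of stepping +delta
    dw = np.where(rng.random(3) < p, delta, -delta)
    prev_dw, prev_E = dw, E
    w = w + dw

print("final weights:", np.round(w, 2), " loss:", round(loss(w), 4))
# The weights drift toward [1, -2, 0.5] and then hover near the minimum.
```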
- Title
- Synchronization in digital wireless radio receivers.
- Creator
- Nezami, Mohamed Khalid., Florida Atlantic University, Sudhakar, Raghavan, Helmken, Henry, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Time Division Multiple Access (TDMA) architecture is an established technology for digital cellular, personal, and satellite communications, as it supports variable data rate transmission and simplified receiver design. Due to transmission bandwidth restrictions, increasing user demands, and the necessity to operate at lower signal-to-noise ratio (SNR), TDMA systems employ high-order modulation schemes, such as M-ary Quadrature Amplitude Modulation (M-QAM), and burst transmission. Use of such techniques in low-SNR fading channels causes degradation in the form of carrier frequency error, phase rotation error, and symbol timing jitter. To compensate for the severe degradation due to additive white Gaussian noise (AWGN) and channel impairments, precise and robust synchronization algorithms are required. This dissertation deals with synchronization techniques for TDMA receivers using short-burst-mode transmission, with emphasis on preamble-less feedforward synchronization schemes. The objective is to develop new algorithms for symbol timing, carrier frequency offset acquisition, and carrier phase tracking using preamble-less synchronization techniques. To this end, currently existing synchronization algorithms are surveyed and analyzed. The performance evaluation of the developed algorithms is conducted through Monte Carlo simulations and theoretical analyses. The statistical properties of the proposed algorithms in AWGN and fading channels are evaluated in terms of the mean and variance of the estimated synchronization errors and their Cramer-Rao lower bounds. Based on an investigation of currently employed feedforward symbol timing algorithms, two new symbol timing recovery schemes are proposed for 16-QAM land mobile signals operating in fading channels. Both schemes achieve better performance in fading channels than their existing counterparts, without increasing the complexity of the receiver implementation. Further, based on an analysis of currently employed carrier offset and carrier phase recovery algorithms, two new algorithms are proposed for carrier acquisition and carrier tracking in mobile satellite systems utilizing short TDMA bursts with large frequency offsets. The proposed algorithms overcome some of the conventional problems associated with currently employed carrier recovery schemes in terms of capture range, speed of convergence, and stability.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11947
- Subject Headings
- Radio--Receivers and reception, Digital communications, Time division multiple access
- Format
- Document (PDF)
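
A sketch of one classical preamble-less feedforward scheme in the family the dissertation studies: the Oerder-Meyr square-law symbol timing estimator, which reads the timing offset from the phase of the symbol-rate spectral line of the squared signal. The pulse shape, oversampling factor, and noise level are assumed, and the proposed 16-QAM fading-channel schemes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(13)
sps = 8                                     # samples per symbol
n_sym = 1024
delay = 3                                   # unknown timing offset in samples

symbols = rng.choice([-1.0, 1.0], size=n_sym)
impulses = np.zeros(n_sym * sps)
impulses[::sps] = symbols
pulse = np.convolve(np.ones(sps), np.ones(sps)) / sps   # triangular pulse
x = np.convolve(impulses, pulse)[:n_sym * sps]
x = np.roll(x, delay)                       # receiver sees a delayed waveform
x += 0.05 * rng.standard_normal(x.size)

# Oerder-Meyr feedforward estimator: squaring creates a spectral line at
# the symbol rate whose phase is proportional to the timing offset.
k = np.arange(x.size)
tone = np.sum(x**2 * np.exp(-2j * np.pi * k / sps))
tau_hat = (-np.angle(tone) / (2 * np.pi)) % 1.0         # in symbol fractions

# The pulse peak lands (sps - 1 + delay) samples into each symbol period.
true_tau = ((sps - 1 + delay) % sps) / sps
print(f"true offset: {true_tau:.3f}  estimated: {tau_hat:.3f}")
```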