- Title
- Automated nursing knowledge classification using indexing.
- Creator
- Chinchanikar, Sucharita Vijay., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Promoting healthcare and wellbeing requires the dedication of a multi-tiered health service delivery system comprising specialists, medical doctors, and nurses. A holistic view of patient care involves emotional, mental, and physical healthcare needs, in which caring is understood as the essence of nursing. Properly and efficiently capturing and managing nursing knowledge is essential to advocating health promotion and illness prevention. This thesis proposes a document-indexing framework for automating the classification of nursing knowledge based on a nursing theory and practice model. The documents defining the numerous categories in the nursing care model are structured with the help of expert nurse practitioners and professionals. These documents are indexed and used as a benchmark for automatically mapping each expression in a patient's assessment form to the corresponding category in the nursing theory model. As an illustration of the proposed methodology, a prototype application is developed using the Latent Semantic Indexing (LSI) technique and tested in a nursing practice environment to validate the accuracy of the proposed algorithm. The simulation results are also compared with an application using the Lucene indexing technique, which internally uses a modified vector space model. The comparison showed that the LSI strategy gives 87.5% accuracy versus 80% for Lucene indexing; both indexing methods maintain 100% consistency in their results.
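The LSI mapping described above can be sketched with a toy term-document matrix: category documents define the columns, a truncated SVD builds the latent semantic space, and an assessment expression is folded into that space and matched by cosine similarity. The vocabulary, category names, and rank below are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = category documents.
terms = ["pain", "comfort", "mobility", "walking", "anxiety", "fear"]
categories = ["physical_care", "mobility_care", "emotional_care"]
A = np.array([
    [2, 0, 0],   # pain
    [1, 0, 1],   # comfort
    [0, 2, 0],   # mobility
    [0, 1, 0],   # walking
    [0, 0, 2],   # anxiety
    [0, 0, 1],   # fear
], dtype=float)

# Rank-k truncated SVD gives the latent semantic space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T   # Vk rows: documents in latent space

def classify(query_terms):
    """Fold a query into the latent space and return the closest category."""
    q = np.array([1.0 if t in query_terms else 0.0 for t in terms])
    q_hat = q @ Uk / sk                      # query in latent coordinates
    sims = Vk @ q_hat / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(q_hat))
    return categories[int(np.argmax(sims))]

print(classify({"mobility", "walking"}))   # mobility_care
```

In practice the category documents and the rank k would come from the expert-structured documents the abstract describes.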
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186677
- Subject Headings
- Nursing, Computer-assisted instruction, Data transmission systems, Outcome assessment (Medical care), Nursing assessment, Digital techniques
- Format
- Document (PDF)
- Title
- Bioinformatics-inspired binary image correlation: application to bio-/medical-images, microarrays, finger-prints and signature classifications.
- Creator
- Pappusetty, Deepti, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The efforts addressed in this thesis refer to assaying the extent of local features in 2D images for the purpose of recognition and classification, based on comparing a test image against a template in binary format. The bioinformatics-inspired approach pursued is presented as the following deliverables: 1. By applying the Smith-Waterman (SW) local alignment and Needleman-Wunsch (NW) global alignment approaches of bioinformatics, a test 2D image in binary format is compared against a reference image so as to recognize the differential features that reside locally in the images being compared. 2. The SW- and NW-based binary comparison involves converting the one-dimensional sequence alignment procedure (traditionally used for molecular sequence comparison in bioinformatics) to operate on 2D image matrices. 3. The relevant computational algorithms are implemented as MATLAB code. 4. The test images considered are real-world bio-/medical images, synthetic images, microarrays, biometric fingerprints (thumb impressions), and handwritten signatures. Based on the results, conclusions are enumerated and inferences are made with directions for future studies.
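The Needleman-Wunsch global alignment named in deliverable 1 can be sketched minimally, applied row-wise to two tiny binary "images"; the thesis's actual 2-D extension and scoring parameters may differ, and the images and scores below are invented for illustration.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two sequences."""
    n, m = len(a), len(b)
    # DP table: F[i][j] = best score aligning a[:i] with b[:j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

# Row-wise comparison of two tiny binary "images" (2-D bit matrices):
img1 = ["0110", "1111"]
img2 = ["0110", "1011"]
total = sum(nw_score(r1, r2) for r1, r2 in zip(img1, img2))
print(total)   # 6: the second rows differ in one position
```

A higher total score indicates fewer differential features between the test and reference images.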
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3333052
- Subject Headings
- Bioinformatics, Statistical methods, Diagnostic imaging, Digital techniques, Image processing, Digital techniques, Pattern perception, Data processing, DNA microarrays
- Format
- Document (PDF)
- Title
- Cache optimization for real-time embedded systems.
- Creator
- Asaduzzaman, Abu Sadath Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Cache memory is used in most single-core and multi-core processors to improve performance by bridging the speed gap between the main memory and the CPU. Even though cache increases performance, it poses serious challenges for embedded systems running real-time applications: cache introduces execution time unpredictability due to its adaptive and dynamic nature, and it consumes a vast amount of power. Energy efficiency and execution time predictability are crucial for the success of real-time embedded systems. Various cache optimization schemes have been proposed to address the performance, power consumption, and predictability issues. However, currently available solutions are not adequate for real-time embedded systems, as they do not address performance, power consumption, and execution time predictability at the same time; moreover, existing solutions are not suitable for dealing with multi-core architecture issues. In this dissertation, we develop a cache optimization methodology for real-time embedded systems that can be used to analyze and improve execution time predictability and the performance/power ratio at the same time. This methodology is effective for both single-core and multi-core systems. First, we develop a cache modeling and optimization technique for single-core systems to improve performance. Then, we develop a cache modeling and optimization technique for multi-core systems to improve the performance/power ratio. We develop a cache locking scheme to improve execution time predictability for real-time systems, and introduce a Miss Table (MT) based cache locking scheme with a victim cache (VC) to improve predictability and the performance/power ratio. The MT holds information about memory blocks that may cause more misses if not locked, improving cache locking performance; the VC temporarily stores victim blocks from the level-1 cache to improve cache hits.
In addition, the MT is used to improve cache replacement performance, and the VC is used to improve cache hits by supporting stream buffering. We also develop strategies to generate realistic workloads by characterizing applications in order to simulate the cache optimization and cache locking schemes. Popular MPEG4, H.264/AVC, FFT, MI, and DFT applications are used to run the simulation programs. Simulation results show that the newly introduced Miss Table based cache locking scheme with victim caches significantly improves predictability and the performance/power ratio. In this work, a 33% reduction in mean delay per task and a 41% reduction in total power consumption are achieved by using the MT and VCs while locking 25% of the level-2 cache in a 4-core system. It is also observed that execution time predictability can be improved by avoiding more than 50% of cache misses while locking one-fourth of the cache size.
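The core idea of cache locking, that pinned blocks always hit and so make timing predictable, can be illustrated with a toy fully associative LRU cache in which locked blocks are never evicted. The Miss Table and victim cache of the dissertation are not modeled here, and the capacity, block addresses, and trace are invented.

```python
from collections import OrderedDict

class LockableCache:
    """Toy fully associative LRU cache where selected blocks are locked."""
    def __init__(self, capacity, locked=()):
        self.capacity = capacity
        self.locked = set(locked)      # preloaded blocks, never evicted
        self.lru = OrderedDict()       # unlocked blocks in LRU order
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.locked:
            self.hits += 1             # locked blocks always hit
            return
        if block in self.lru:
            self.hits += 1
            self.lru.move_to_end(block)
            return
        self.misses += 1
        free = self.capacity - len(self.locked)
        if len(self.lru) >= free:
            self.lru.popitem(last=False)   # evict least recently used
        self.lru[block] = True

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 1]
# Locking the hot block 1 guarantees it always hits -> predictable timing.
c = LockableCache(capacity=3, locked={1})
for b in trace:
    c.access(b)
print(c.hits, c.misses)   # 6 4
```

Choosing which blocks to lock (what the Miss Table informs in the dissertation) is the part that determines how much predictability costs in performance.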
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/359919
- Subject Headings
- Real-time embedded systems and components, Embedded computer systems, Programming, Computer architecture, Integrated circuits, Design and construction, Signal processing, Digital techniques, Object-oriented methods (Computer science)
- Format
- Document (PDF)
- Title
- Content identification using video tomography.
- Creator
- Leon, Gustavo A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Video identification, or copy detection, is a challenging problem that is becoming increasingly important with the popularity of online video services. The problem addressed in this thesis is the identification of a given video clip in a given set of videos: for a query video, the system returns all instances of that video in the data set. The identification system uses video signatures based on video tomography. A robust, low-complexity video signature is designed and implemented. The nature of the signature makes it invariant to the most common video transformations. Signatures are generated for video shots rather than individual frames, resulting in a compact signature of 64 bytes per shot, and are matched using a simple Euclidean distance metric. The results show that videos can be identified with 100% recall and over 93% precision; the experiments included several transformations of the videos.
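Matching 64-byte signatures by Euclidean distance, as described above, amounts to a nearest-neighbor search with a rejection threshold. In this sketch random vectors stand in for real tomography signatures, and the threshold is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Database of 64-byte signatures, one per video shot (random stand-ins;
# real signatures would come from tomography images of each shot).
db = rng.integers(0, 256, size=(100, 64)).astype(float)

def identify(query, db, threshold=50.0):
    """Return the index of the closest signature, or -1 if none is close."""
    d = np.linalg.norm(db - query, axis=1)   # Euclidean distance per shot
    best = int(np.argmin(d))
    return best if d[best] <= threshold else -1

# A lightly distorted copy of shot 42 should still match shot 42.
query = db[42] + rng.normal(0, 2, size=64)
print(identify(query, db))   # 42
```

The threshold trades recall against precision: a larger value tolerates stronger transformations but admits more false matches.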
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/2783207
- Subject Headings
- Biometric identification, High performance computing, Image processing, Digital techniques, Multimedia systems, Security measures
- Format
- Document (PDF)
- Title
- Event detection in surveillance video.
- Creator
- Castellanos Jimenez, Ricardo Augusto., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Digital video is used widely in a variety of applications such as entertainment, surveillance, and security. The large amount of video in surveillance and security applications requires systems capable of processing video to automatically detect and recognize events, alleviating the load on human operators and enabling preventive action when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to automatically detect events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction to detect events through motion analysis, using an event probability zone definition. Advantages, limitations, capabilities, and possible alternative solutions are also discussed. The result is a system capable of detecting objects moving in a direction opposing a predefined condition, or running in the scene, with precision greater than 50% and recall greater than 80%.
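The background-subtraction component can be sketched with a running-average background model: pixels that differ from the slowly adapting background beyond a threshold are flagged as foreground motion. The frame sizes, learning rate, and threshold below are invented; the thesis's system adds optical flow and event-zone logic on top of this.

```python
import numpy as np

def detect_motion(frames, alpha=0.1, thresh=30):
    """Running-average background subtraction; returns per-frame motion masks."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        diff = np.abs(f.astype(float) - bg)
        masks.append(diff > thresh)              # foreground pixels
        bg = (1 - alpha) * bg + alpha * f        # slowly adapt the background
    return masks

# Synthetic 8x8 grayscale sequence: a bright 2x2 object moves right.
frames = []
for x in range(3):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[3:5, x:x + 2] = 200
    frames.append(f)

masks = detect_motion(frames)
print([int(m.sum()) for m in masks])   # [4, 8]
```

The second mask is larger because the slowly adapting background still remembers the object's first position as well as its current one.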
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870694
- Subject Headings
- Computer systems, Security measures, Image processing, Digital techniques, Imaging systems, Mathematical models, Pattern recognition systems, Computer vision, Digital video
- Format
- Document (PDF)
- Title
- Exploiting audiovisual attention for visual coding.
- Creator
- Torres, Freddy., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Perceptual video coding has been a promising area in recent years. Increases in compression ratio have been reported by applying foveated video coding techniques in which the region of interest (ROI) is selected using a computational attention model. However, most approaches to perceptual video coding use only visual features, ignoring the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect our visual perception. In this work, we validate some of those physiological tests using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting several experiments, we observed that the reaction time to detect video artifacts was generally higher when the video was presented with audio, that emotional information in the audio guides human attention to particular ROIs, and that sound frequency changes spatial frequency perception in still images.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3361251
- Subject Headings
- Digital video, Image processing, Digital techniques, Visual perception, Coding theory, Human-computer interaction, Intersensory effects
- Format
- Document (PDF)
- Title
- Image improvement using dynamic optical low-pass filter.
- Creator
- Petljanski, Branko., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Professional imaging systems, particularly motion picture cameras, usually employ larger photosites and lower pixel counts than many amateur cameras. This yields the desirable characteristics of improved dynamic range, signal-to-noise ratio, and sensitivity. However, high-performance optics often have frequency response characteristics that exceed the Nyquist limit of the sensor, which, if not properly addressed, results in aliasing artifacts in the captured image. Most contemporary still and video cameras employ optically birefringent materials as optical low-pass filters (OLPFs) to minimize aliasing artifacts. Most OLPFs are designed as optical elements whose frequency response does not change even if the frequency responses of the other elements of the capturing system are altered. An extended evaluation of currently used birefringent-based OLPFs is provided. In this work, the author proposed and demonstrated the use of a parallel optical window positioned between the lens and the sensor as an OLPF. Controlled X- and Y-axis rotations of the optical window during the image exposure manipulate the system's point-spread function (PSF); changing the PSF in turn affects some of the frequency components of the image formed on the sensor. The system frequency response is evaluated when various window functions are used to shape the lens's PSF, such as rectangular, triangular, Tukey, Gaussian, and Blackman-Harris. Beyond the ability to change the PSF, this work demonstrated that the PSF can be manipulated dynamically, which allows it to be modified to counteract any alteration of the other optical elements of the capturing system.
The dissertation presents several instances in which it is desirable to change the characteristics of an OLPF in a controlled way; in these instances, an OLPF whose characteristics can be altered dynamically improves image quality.
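The link between the PSF's window shape and the system frequency response can be sketched in one dimension: the magnitude of the PSF's Fourier transform is the system's MTF, and a smoother window suppresses high spatial frequencies more strongly. The PSF lengths and window choices below (rectangular vs. triangular) are illustrative assumptions.

```python
import numpy as np

N = 256
# Two hypothetical 1-D point-spread functions traced during the exposure.
rect = np.zeros(N); rect[:16] = 1.0
tri = np.zeros(N); tri[:16] = 1 - np.abs(np.linspace(-1, 1, 16))
for psf in (rect, tri):
    psf /= psf.sum()          # unit area -> DC gain of 1

def mtf(psf):
    """Magnitude of the system frequency response for a given PSF."""
    return np.abs(np.fft.rfft(psf))

# The smoother (triangular) PSF has lower sidelobes at high spatial
# frequencies than the rectangular one: it is a stronger low-pass filter.
hi_band = slice(N // 8, N // 2)
print(mtf(rect)[hi_band].max() > mtf(tri)[hi_band].max())
```

Dynamically reshaping the PSF, as the dissertation proposes, corresponds to swapping the window function here and thus retuning the MTF during capture.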
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1927613
- Subject Headings
- Image processing, Digital techniques, Signal processing, Digital techniques, Frequency response (Dynamics), Polymers and polymerization, Optical wave guides
- Format
- Document (PDF)
- Title
- Image retrieval using visual attention.
- Creator
- Mayron, Liam M., College of Engineering and Computer Science, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The retrieval of digital images is hindered by the semantic gap: the disparity between a user's high-level interpretation of an image and the information that can be extracted from the image's physical properties. Content-based image retrieval systems are particularly vulnerable to the semantic gap due to their reliance on low-level visual features for describing image content. The semantic gap can be narrowed by including high-level, user-generated information. High-level descriptions of images are better able to capture the semantic meaning of image content, but it is not always practical to collect this information. Thus, both content-based and human-generated information are considered in this work. A content-based method of retrieving images using a computational model of visual attention was proposed, implemented, and evaluated. This work is based on a study of contemporary research in the field of vision science, particularly computational models of bottom-up visual attention. The use of computational models of visual attention to detect salient-by-design regions of interest in images is investigated; the method is then refined to detect objects of interest in broad image databases that are not necessarily salient by design. An interface for image retrieval, organization, and annotation that is compatible with the attention-based retrieval method has also been implemented. It can simultaneously execute querying by image content, keyword, and collaborative filtering. The user is central to the design and evaluation of the system; a game was developed to evaluate the entire system, which includes the user, the user interface, and the retrieval methods.
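Bottom-up attention models of the kind studied here typically build saliency maps from center-surround contrast. The following is a generic center-surround sketch, not the specific model used in the thesis; the filter sizes and test image are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency(img, center=3, surround=15):
    """Center-surround saliency: fine-scale mean minus coarse-scale mean."""
    c = uniform_filter(img.astype(float), center)
    s = uniform_filter(img.astype(float), surround)
    return np.abs(c - s)

# Synthetic image: uniform background with one high-contrast patch.
img = np.full((64, 64), 100.0)
img[20:28, 40:48] = 255.0

sal = saliency(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(20 <= y < 28 and 40 <= x < 48)   # peak saliency lands on the patch
```

In a retrieval setting, the peaks of such a map would select the regions of interest whose features are indexed and matched.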
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fcla/flaent/EN00154040/68_1/98p0137i.pdf, http://purl.flvc.org/FAU/58006
- Subject Headings
- Image processing, Digital techniques, Database systems, Cluster analysis, Multimedia systems
- Format
- Document (PDF)
- Title
- Knowledge based evaluation of nursing care practice model.
- Creator
- Tripathi, Shubhang., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Providing a complete and responsive solution to healthcare services requires a multi-tiered health delivery system. One aspect of the healthcare hierarchy is the need for nursing care of the patient; nursing care and observation provide the basis for nurses to communicate with other parts of the healthcare system, and the ability to capture and manage nursing practice is essential to the quality of human care. This thesis proposes a knowledge-based decision-making and analysis system for nurses to capture and manage nursing practice. Moreover, it allows them to monitor nursing care quality, as well as to test an aspect of an electronic healthcare record for recording and reporting nursing practice. The framework used for this system is based on nursing theory and is coupled with quantitative analysis of qualitative data, allowing the raw natural-language nursing data to be quantified. The results are summarized in a graph that shows the relative importance of the attributes with respect to each other at different instances of the nurse-patient encounter. The research was conducted by the Department of Computer and Electrical Engineering and Computer Science for the College of Nursing.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2683141
- Subject Headings
- Nursing assessment, Digital techniques, Nursing, Research, Methodology, Nursing, Technological innovations, Nursing, Practice, Nursing informatics
- Format
- Document (PDF)
- Title
- Model-based classification of speech audio.
- Creator
- Thoman, Chris., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This work explores model-based classification of speech audio signals using low-level feature vectors. The process of extracting low-level features from audio signals is described, along with a discussion of established techniques for training and testing mixture model-based classifiers and for using these models in conjunction with feature selection algorithms to select optimal feature subsets. The results of a number of classification experiments using a publicly available speech database, the Berlin Database of Emotional Speech, are presented. These include experiments in optimizing feature extraction parameters and comparing feature selection results from over 700 candidate feature vectors for the tasks of classifying speaker gender, identity, and emotion. In the experiments, final classification accuracies of 99.5%, 98.0%, and 79% were achieved for the gender, identity, and emotion tasks, respectively.
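Model-based classification of feature vectors can be sketched with a single Gaussian per class, a one-component special case of the mixture models discussed above: fit mean and variance per class, then pick the class with the highest log-likelihood. The two-dimensional features and class means below are synthetic stand-ins, not values from the Berlin database.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for low-level feature vectors (e.g. pitch-like and
# spectral-like values), one cluster per class.
train = {
    "female": rng.normal([210.0, 1.2], 0.5, size=(200, 2)),
    "male":   rng.normal([120.0, 0.8], 0.5, size=(200, 2)),
}

# "Training": maximum-likelihood mean and variance per class.
params = {c: (x.mean(0), x.var(0)) for c, x in train.items()}

def log_likelihood(x, mu, var):
    """Diagonal-covariance Gaussian log-likelihood of feature vector x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def classify(x):
    """Assign x to the class whose model gives it the highest likelihood."""
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

print(classify(np.array([205.0, 1.1])), classify(np.array([125.0, 0.9])))
```

A full mixture model replaces each class's single Gaussian with a weighted sum of components, which is what allows the classifiers above to model multimodal feature distributions.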
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/210518
- Subject Headings
- Signal processing, Digital techniques, Speech processing systems, Sound, Recording and reproducing, Digital techniques, Pattern recognition systems
- Format
- Document (PDF)
- Title
- Object detection in low resolution video sequences.
- Creator
- Pava, Diego F., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
With growing security concerns and decreasing costs of surveillance and computing equipment, research on automated systems for object detection has been increasing, but the majority of studies focus on sequences in which high-resolution objects are present. The main objective of this work is the detection and extraction of information about low-resolution objects (e.g., objects so far from the camera that they occupy only tens of pixels), in order to provide a base for higher-level operations such as classification and behavioral analysis. The proposed system is composed of four stages (preprocessing, background modeling, information extraction, and post-processing) and uses context-based region-of-importance selection, histogram equalization, background subtraction, and morphological filtering techniques. The result is a system capable of detecting and tracking low-resolution objects against a controlled background scene, which can serve as a base for systems of higher complexity.
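Of the stages listed above, the histogram-equalization preprocessing step can be sketched directly: a standard CDF-based remapping that stretches low-contrast footage, where faint distant objects live, over the full intensity range. The image size and intensity band below are arbitrary.

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast image squeezed into [100, 120] spreads to the full range.
rng = np.random.default_rng(2)
img = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
out = equalize(img)
print(int(out.min()), int(out.max()))   # 0 255
```

Background subtraction and morphological filtering then operate on this contrast-enhanced frame.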
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186685
- Subject Headings
- Computer systems, Security measures, Remote sensing, Image processing, Digital techniques, Imaging systems, Mathematical models
- Format
- Document (PDF)
- Title
- Sensitivity analysis of blind separation of speech mixtures.
- Creator
- Bulek, Savaskan., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Blind source separation (BSS) refers to a class of methods by which multiple sensor signals are combined with the aim of estimating the original source signals. Independent component analysis (ICA) is one such method that effectively resolves static linear combinations of independent non-Gaussian distributions. We propose a method that can track variations in the mixing system by seeking a compromise between adaptive and block methods using mini-batches; the resulting permutation indeterminacy is resolved based on the correlation continuity principle. Methods employing higher-order cumulants in the separation criterion are susceptible to outliers in the finite-sample case, so we propose a robust method based on low-order non-integer moments that exploits the Laplacian model of speech signals. We also study separation methods for (over-)determined linear convolutive mixtures in the frequency domain based on joint diagonalization of matrices employing time-varying second-order statistics, and we investigate the factors affecting the sensitivity of the solution in the finite-sample case, such as the set size, the overlap amount, and the cross-spectrum estimation method.
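The static-mixture case that ICA resolves can be sketched for two Laplacian (speech-like) sources: whiten the mixtures, then search the single remaining rotation angle for maximal non-Gaussianity. The grid search below is a simplification of practical ICA algorithms, and the mixing matrix is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two independent non-Gaussian sources (Laplacian, a common speech model).
s = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # static mixing matrix
x = A @ s                                        # sensor signals

# Whiten: decorrelate and normalize the mixtures.
cov = np.cov(x)
d, E = np.linalg.eigh(cov)
z = (E / np.sqrt(d)) @ E.T @ x

def kurt(y):
    """Excess kurtosis per row, a simple non-Gaussianity measure."""
    return np.mean(y**4, axis=1) - 3

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# After whitening, 2-channel ICA reduces to one rotation angle: pick the
# angle that maximizes total non-Gaussianity of the rotated signals.
best = max(np.linspace(0, np.pi / 2, 180),
           key=lambda t: np.abs(kurt(rot(t) @ z)).sum())
y = rot(best) @ z                # recovered sources (up to permutation/scale)

corr = np.abs(np.corrcoef(np.vstack([s, y]))[:2, 2:])
print(np.all(corr.max(axis=1) > 0.9))   # each source recovered?
```

The permutation and scale ambiguity visible here (each recovered row matches *some* source) is exactly the indeterminacy that the abstract's correlation continuity principle resolves across mini-batches.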
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2953201
- Subject Headings
- Blind source separation, Mathematical models, Signal processing, Digital techniques, Neural networks (Computer science), Automatic speech recognition, Speech processing systems
- Format
- Document (PDF)
- Title
- Signature system for video identification.
- Creator
- Medellin, Sebastian Possos., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Video signature techniques based on tomography images address the problem of video identification. This method relies on temporal segmentation and sampling strategies to build and determine the unique elements that form the signature. This thesis presents an extension of these methods: first, a new feature extraction method, derived from the previously proposed sampling pattern, is implemented and tested, resulting in a highly distinctive set of signature elements; second, a robust temporal video segmentation system replaces the original method, determining shot changes more accurately. Under a very exhaustive set of tests, the system achieved 99.58% recall, 100% precision, and 99.35% prediction precision.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2683534
- Subject Headings
- Biometric identification, Image processing, Digital techniques, Pattern recognition systems, Data encryption (Computer science)
- Format
- Document (PDF)
- Title
- Spectral refinement to speech enhancement.
- Creator
- Charoenruengkit, Werayuth., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The goal of a speech enhancement algorithm is to remove noise and recover the original signal with as little distortion and residual noise as possible. Most successful real-time algorithms thereof have done in the frequency domain where the frequency amplitude of clean speech is estimated per short-time frame of the noisy signal. The state of-the-art short-time spectral amplitude estimator algorithms estimate the clean spectral amplitude in terms of the power spectral density (PSD) function...
Show moreThe goal of a speech enhancement algorithm is to remove noise and recover the original signal with as little distortion and residual noise as possible. Most successful real-time algorithms thereof have done in the frequency domain where the frequency amplitude of clean speech is estimated per short-time frame of the noisy signal. The state of-the-art short-time spectral amplitude estimator algorithms estimate the clean spectral amplitude in terms of the power spectral density (PSD) function of the noisy signal. The PSD has to be computed from a large ensemble of signal realizations. However, in practice, it may only be estimated from a finite-length sample of a single realization of the signal. Estimation errors introduced by these limitations deviate the solution from the optimal. Various spectral estimation techniques, many with added spectral smoothing, have been investigated for decades to reduce the estimation errors. These algorithms do not address significantly issue on quality of speech as perceived by a human. This dissertation presents analysis and techniques that offer spectral refinements toward speech enhancement. We present an analytical framework of the effect of spectral estimate variance on the performance of speech enhancement. We use the variance quality factor (VQF) as a quantitative measure of estimated spectra. We show that reducing the spectral estimator VQF reduces significantly the VQF of the enhanced speech. The Autoregressive Multitaper (ARMT) spectral estimate is proposed as a low VQF spectral estimator for use in speech enhancement algorithms. 
An innovative method of incorporating a speech production model using multiband excitation is also presented as a technique to emphasize the harmonic components of the glottal speech input. Preconditioning the noisy estimates by exploiting other avenues of information, such as pitch estimation and the speech production model, effectively increases the localized narrow-band signal-to-noise ratio (SNR) of the noisy signal, which is subsequently denoised by the amplitude gain. Combined with voicing-structure enhancement, the ARMT spectral estimate delivers enhanced speech with the sound clarity desirable to human listeners. The resulting improvements in enhanced speech are significant under both objective and subjective measures.
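As a rough illustration of the variance reduction that motivates the abstract's VQF analysis, the sketch below averages periodograms over orthogonal DPSS (Slepian) tapers. This is plain multitaper estimation, not the dissertation's ARMT estimator (which additionally incorporates autoregressive modeling); the test signal, sampling rate, and taper parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, n_tapers=5, nw=3.0):
    """Average periodograms taken over orthogonal DPSS (Slepian)
    tapers; averaging K near-independent estimates reduces the
    spectral-estimate variance roughly by a factor of K."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=n_tapers)               # (n_tapers, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)                       # (n // 2 + 1,)

# Illustrative noisy frame: a 200 Hz tone in white noise at 8 kHz.
fs, n = 8000, 512
t = np.arange(n) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * rng.standard_normal(n)

psd = multitaper_psd(x)
peak_hz = np.argmax(psd) * fs / n   # the tonal peak survives the smoothing
```

In a speech enhancement front end, an estimate of this kind would replace the raw single-frame periodogram before the spectral amplitude gain is applied, trading a small amount of frequency resolution for a much steadier noise-power estimate.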
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186327
- Subject Headings
- Adaptive signal processing, Digital techniques, Spectral theory (Mathematics), Noise control, Fuzzy algorithms, Speech processing systems
- Format
- Document (PDF)
- Title
- Stochastic optimization of energy for multi-user wireless networks over fading channels.
- Creator
- Wang, Di, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Wireless devices in wireless networks are typically powered by small batteries that cannot be conveniently replaced or recharged. To prolong the operating lifetime of such networks, energy efficiency is a critical issue, and energy-efficient resource allocation designs have been extensively developed. We investigate energy-efficient schemes that prolong network operating lifetime in wireless sensor networks and wireless relay networks. Chapter 2 develops energy-efficient resource allocation that minimizes a general cost function of average user powers for small- to medium-scale wireless sensor networks, where simple time-division multiple access (TDMA) is adopted as the multiple-access scheme. A class of α-fair cost functions is derived to balance the tradeoff between efficiency and fairness in energy-efficient designs. Based on such cost functions, optimal channel-adaptive resource allocation schemes are developed for both single-hop and multi-hop TDMA sensor networks. Chapter 3 develops optimal power control methods to balance the tradeoff between energy efficiency and fairness in wireless cooperative networks. It is important to maximize power efficiency by minimizing the power consumed for a given quality of service, such as data rate; it is equally important to distribute power consumption evenly, or fairly, across all nodes to maximize network lifetime. The proposed optimal power control policy is derived in quasi-closed form by solving a convex optimization problem with a properly chosen cost function. To further optimize wireless relay network performance, an orthogonal frequency-division multiplexing (OFDM) based multi-user wireless relay network is considered in Chapter 4. In this setting, each subcarrier is dynamically assigned to a source-destination link, and several relays assist communication between source-destination pairs over their assigned subcarriers.
A class of α-fair cost functions is again used to balance the tradeoff between energy efficiency and fairness, jointly with optimal subcarrier and power allocation schemes at the relays. The relevant algorithms are derived in quasi-closed form. Finally, Chapter 5 summarizes the proposed energy-efficient schemes and discusses future work.
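To make the efficiency-versus-fairness tradeoff concrete, the sketch below minimizes an α-fair energy cost for a hypothetical two-user TDMA link: α = 1 minimizes total energy, while larger α penalizes the most power-hungry user and so shifts time toward the weaker channel. The gains, rate targets, and unit noise power are illustrative assumptions, not the thesis model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical two-user TDMA example of an alpha-fair energy cost.
g = np.array([1.0, 0.25])   # channel gains; user 2 has the weaker channel
r = np.array([1.0, 1.0])    # required rates in bits/s/Hz

def avg_power(tau):
    """Average transmit powers when user 1 gets time fraction tau:
    delivering rate r over fraction tau needs SNR 2**(r/tau) - 1."""
    taus = np.array([tau, 1.0 - tau])
    return taus * (2.0 ** (r / taus) - 1.0) / g

def alpha_fair_cost(tau, alpha):
    # alpha = 1 is total energy; alpha -> infinity approaches min-max.
    return float(np.sum(avg_power(tau) ** alpha))

res_eff = minimize_scalar(alpha_fair_cost, bounds=(0.01, 0.99),
                          args=(1.0,), method="bounded")
res_fair = minimize_scalar(alpha_fair_cost, bounds=(0.01, 0.99),
                           args=(4.0,), method="bounded")
# The fairness-leaning cost (alpha = 4) gives the weaker user more time,
# so user 1's optimal time fraction shrinks relative to alpha = 1.
```

The thesis solves richer versions of this problem (multi-hop, fading channels, joint subcarrier assignment) in quasi-closed form; a scalar numerical search suffices here only because the toy problem has a single time-sharing variable.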
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3322519
- Subject Headings
- Stochastic processes, Data processing, Wireless communication systems, Mathematical models, Computer network protocols, Signal processing, Digital techniques, Code division multiple access, Wavelength division multiplexing, Orthogonalization methods
- Format
- Document (PDF)