- Title
- Fault tolerance and reliability patterns.
- Creator
- Buckley, Ingrid A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The need to achieve dependability in critical infrastructures has become indispensable for government and commercial enterprises. This need has become more pressing with the proliferation of malicious attacks on critical systems, such as healthcare, aerospace, and airline applications. Additionally, due to the widespread use of web services in critical systems, the need to ensure their reliability is paramount. We believe that patterns can be used to achieve dependability. We conducted a survey of fault tolerance, reliability, and web service products and patterns to better understand them. One objective of our survey is to evaluate the state of these patterns and to investigate which standards are being used in products and their tool support. Our survey found that these patterns are insufficient and that many web services products do not use them. In light of this, we wrote several fault tolerance and web services reliability patterns and present an analysis of them.
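None of the surveyed patterns is reproduced in the record above, but the general shape of a classic fault tolerance pattern such as Retry can be sketched in Python; the `flaky_service` function, attempt count, and delay below are hypothetical illustrations, not material from the thesis:

```python
import time

def with_retry(operation, attempts=3, delay=0.0):
    """Retry pattern: re-invoke a failing operation a bounded number
    of times before propagating the last error (illustrative sketch)."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as err:
            last_error = err
            time.sleep(delay)
    raise last_error

# Example: a flaky service that succeeds on the third call.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

result = with_retry(flaky_service)
print(result)  # -> ok
```

The point of the pattern is that transient faults are absorbed locally instead of propagating to the caller; a bounded attempt count keeps a permanent fault from looping forever.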
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/166447
- Subject Headings
- Fault-tolerant computing, Computer software, Reliability, Reliability (Engineering), Computer programs
- Format
- Document (PDF)
- Title
- Low complexity H.264 video encoder design using machine learning techniques.
- Creator
- Carrillo, Paula., Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
H.264/AVC encoder complexity is mainly due to the variable block sizes in Intra and Inter frames. This makes H.264/AVC very difficult to implement, especially for real-time applications and mobile devices. The current technological challenge is to conserve the compression capacity and quality that H.264 offers while reducing the encoding time and, therefore, the processing complexity. This thesis applies machine learning techniques to video encoding mode decisions and investigates ways to improve the process of generating more general low-complexity H.264/AVC video encoders. The proposed H.264 encoding method decreases the complexity of the mode decision inside Inter frames. Results show at least a 150% average reduction in complexity and at most a 0.6 average increase in PSNR for different kinds of videos and formats.
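PSNR, the quality metric the abstract reports, is computed from the mean squared error between original and reconstructed pixels; a minimal sketch with made-up pixel values:

```python
import math

def psnr(orig, recon, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences; higher means less distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(max_val ** 2 / mse)

# Four 8-bit pixels and a slightly distorted reconstruction.
print(round(psnr([50, 100, 150, 200], [52, 98, 149, 201]), 2))  # -> 44.15
```

A 0.6 change in PSNR, as quoted in the abstract, therefore corresponds to a small shift in this log-scale MSE measure.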
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/166448
- Subject Headings
- Code division multiple access, Digital media, Technological innovations, Image transmission, Technological innovations, Coding theory, Data structures (Computer science)
- Format
- Document (PDF)
- Title
- Video transcoding using machine learning.
- Creator
- Holder, Christopher., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The field of video transcoding has been evolving throughout the past ten years. The need to transcode video files has greatly increased because new standards are incompatible with old ones. This thesis takes the method of using machine learning for video transcoding mode decisions and discusses ways to improve the process of generating the algorithm for implementation in different video transcoders. The transcoding methods used decrease the complexity of the mode decision inside the video encoder. Methods that automate and improve the results are also discussed and implemented in two different transcoders: H.263 to VP6, and MPEG-2 to H.264. Both of these transcoders have shown a complexity reduction of almost 50%. Video transcoding is important because the number of video standards has been increasing while devices usually can only decode one specific codec.
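The thesis's learned mode-decision models are not reproduced in this record, but the idea of replacing an exhaustive mode search with a learned rule can be illustrated by a depth-1 decision-tree-style test on the mean absolute difference (MAD) between blocks; the thresholds below are hypothetical, not values from the thesis:

```python
def mode_decision(curr_block, ref_block, t_skip=2.0, t_inter=12.0):
    """Illustrative stand-in for a machine-learned mode decision:
    a single learned-threshold rule on the mean absolute difference
    between the current and reference block. t_skip and t_inter are
    hypothetical values a training step might produce."""
    mad = sum(abs(a - b) for a, b in zip(curr_block, ref_block)) / len(curr_block)
    if mad < t_skip:
        return "SKIP"   # block barely changed: copy from reference
    if mad < t_inter:
        return "INTER"  # moderate change: motion-compensated prediction
    return "INTRA"      # large change: code the block independently

print(mode_decision([10, 10, 10, 10], [10, 11, 10, 10]))  # -> SKIP
print(mode_decision([10, 10, 10, 10], [14, 6, 15, 5]))    # -> INTER
print(mode_decision([10, 10, 10, 10], [60, 60, 60, 60]))  # -> INTRA
```

The complexity saving comes from answering the mode question with one cheap feature instead of trying every coding mode and comparing rate-distortion costs.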
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/166451
- Subject Headings
- Coding theory, Image transmission, Technological innovations, File conversion (Computer science), Data structures (Computer science), MPEG (Video coding standard), Digital media, Video compression
- Format
- Document (PDF)
- Title
- Web accessibility for the hearing impaired.
- Creator
- Pasmore, Simone., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
With the exponential increase in Internet usage and the embedding of multimedia content on the Web, some Internet resources remain inaccessible to people with disabilities. In particular, people who are deaf or hard of hearing (HOH) encounter inaccessible Web sites due to a lack of closed captioning (CC) for multimedia content on the Web, the absence of sign language equivalents for Web content, and an insufficient evaluation framework for determining whether a Web page is accessible to the hearing-impaired community. Several barriers to accessing content needed to be rectified in order for the hearing-impaired community to receive the full benefits of the information repository on the Internet. The research contributions of this thesis address some of the Web accessibility problems faced by the hearing-impaired community. The objectives are to create automated closed captioning for multimedia content on the Web, to embed sign language equivalents for content available on the Web, to create a framework to evaluate Web accessibility for the hearing-impaired community, and to create a social network for the Deaf community. To demonstrate the feasibility of fulfilling these objectives, several prototypes were implemented. These prototypes have been used in real-life scenarios in order to obtain an objective evaluation of the proposed framework. Further, the implemented prototypes have had an impact on both the academic community and industry.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fcla/dt/177011
- Subject Headings
- Computers and people with disabilities, Interactive multimedia, Hearing impaired, Services for, Communication devices for people with disabilities, User interfaces (Computer systems), Web sites, Design
- Format
- Document (PDF)
- Title
- Fuzzycuda: interactive matte extraction on a GPU.
- Creator
- Gibson, Joel, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Natural matte extraction is a difficult and generally unsolved problem. Generating a matte from a nonuniform background traditionally requires a tediously hand-drawn matte. This thesis studies recent methods that require the user to place only modest scribbles identifying the foreground and the background. This research demonstrates a new GPU-based implementation of the recently introduced FuzzyMatte algorithm. Interactive matte extraction was achieved on a CUDA-enabled G80 graphics processor. Experimental results demonstrate improved performance over the previous CPU-based version. An in-depth analysis of experimental data from the GPU and CPU implementations is provided. The design challenges of porting a variant of Dijkstra's shortest-distance algorithm to a parallel processor are considered.
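For reference, the serial form of Dijkstra's shortest-distance algorithm, whose parallelization the thesis discusses, looks like this; a standard textbook sketch, not the thesis code:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-distance algorithm with a binary heap:
    repeatedly settle the closest unsettled vertex and relax its
    outgoing edges. graph maps a vertex to (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; vertex already settled closer
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```

The sequential dependency visible here (each settled vertex depends on the previous one) is exactly what makes a GPU port nontrivial, as the abstract notes.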
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/186288
- Subject Headings
- Computer graphics, Scientific applications, Information visualization, High performance computing, Real-time data processing
- Format
- Document (PDF)
- Title
- Enabling access for mobile devices to the web services resource framework.
- Creator
- Mangs, Jan Christian., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The increasing availability of Web services and grid computing has made it easier to access and reuse different types of services. Web services provide network-accessible interfaces to application functionality in a platform-independent manner. Developments in grid computing have led to the efficient distribution of computing resources and power through the use of stateful Web services. At the same time, mobile devices have become a ubiquitous, inexpensive, and powerful computing platform. Concepts such as cloud computing have pushed the trend toward using grid concepts in the Internet domain and are ideally suited for Internet-supported mobile devices. Currently, there are few complete implementations that leverage mobile devices as members of a grid or virtual organization. This thesis presents a framework that enables mobile devices to access stateful Web services on a Globus-based grid. To illustrate the presented framework, a user-friendly mobile application has been created that utilizes the framework libraries to demonstrate the various functionalities that are accessible from any mobile device that supports Java ME.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/186290
- Subject Headings
- User interfaces (Computer systems), Data structures (Computer science), Mobile computing, Security measures, Mobile communication systems, Computational grids (Computer systems)
- Format
- Document (PDF)
- Title
- Collaborative filtering using machine learning and statistical techniques.
- Creator
- Su, Xiaoyuan., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Collaborative filtering (CF), a very successful recommender-system approach, is one of the applications of data mining to incomplete data. The main objective of CF is to make accurate recommendations from highly sparse user rating data. My contributions to this research topic include proposing the frameworks of imputation-boosted collaborative filtering (IBCF) and imputed neighborhood-based collaborative filtering (INCF). We also proposed a model-based CF technique, TAN-ELR CF, and two hybrid CF algorithms, sequential mixture CF and joint mixture CF. Empirical results show that our proposed CF algorithms have very good predictive performance. In investigating the application of imputation techniques to mining incomplete data, we proposed imputation-helped classifiers and VCI predictors (voting on classifications from imputed learning sets), both of which resulted in significant improvement in classification performance on incomplete data over conventional machine-learned classifiers, including kNN, neural networks, one rule, decision table, SVM, logistic regression, decision tree (C4.5), random forest, decision list (PART), and the well-known Bagging predictors. The main imputation techniques involved in these algorithms include EM (expectation maximization) and BMI (Bayesian multiple imputation).
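The imputation-boosted idea (fill in missing ratings first, then run a neighborhood method on the densified matrix) can be sketched as follows; the mean imputation and single-nearest-neighbour prediction are illustrative simplifications, not the IBCF algorithm itself:

```python
def impute_and_predict(ratings, target_user, target_item):
    """Sketch of imputation-boosted CF: replace missing ratings
    (None) with each item's observed mean, then predict the target
    rating from the nearest neighbour on the imputed rows."""
    n_items = len(next(iter(ratings.values())))
    # Per-item means over observed ratings only.
    means = []
    for j in range(n_items):
        obs = [r[j] for r in ratings.values() if r[j] is not None]
        means.append(sum(obs) / len(obs))
    dense = {u: [r[j] if r[j] is not None else means[j]
                 for j in range(n_items)]
             for u, r in ratings.items()}
    # Nearest neighbour by squared Euclidean distance on imputed rows.
    others = [u for u in dense if u != target_user]
    nearest = min(others, key=lambda u: sum(
        (a - b) ** 2 for a, b in zip(dense[u], dense[target_user])))
    return dense[nearest][target_item]

# u1 has not rated item 2; u2 has similar tastes, u3 does not.
ratings = {"u1": [5, 3, None], "u2": [5, 3, 4], "u3": [1, 1, 1]}
pred = impute_and_predict(ratings, "u1", 2)
print(pred)  # -> 4 (borrowed from the similar user u2)
```

The boost comes from the imputation step: without it, sparse rows share too few co-rated items for the neighbourhood computation to be reliable.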
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/186301
- Subject Headings
- Filters (Mathematics), Machine learning, Data mining, Technological innovations, Database management, Combinatorial group theory
- Format
- Document (PDF)
- Title
- Spectral refinement to speech enhancement.
- Creator
- Charoenruengkit, Werayuth., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The goal of a speech enhancement algorithm is to remove noise and recover the original signal with as little distortion and residual noise as possible. Most successful real-time algorithms operate in the frequency domain, where the frequency amplitude of clean speech is estimated for each short-time frame of the noisy signal. The state-of-the-art short-time spectral amplitude estimator algorithms estimate the clean spectral amplitude in terms of the power spectral density (PSD) function of the noisy signal. The PSD has to be computed from a large ensemble of signal realizations. In practice, however, it may only be estimated from a finite-length sample of a single realization of the signal. Estimation errors introduced by these limitations deviate the solution from the optimal. Various spectral estimation techniques, many with added spectral smoothing, have been investigated for decades to reduce the estimation errors. These algorithms do not significantly address the quality of speech as perceived by a human. This dissertation presents analysis and techniques that offer spectral refinements toward speech enhancement. We present an analytical framework of the effect of spectral estimate variance on the performance of speech enhancement. We use the variance quality factor (VQF) as a quantitative measure of estimated spectra. We show that reducing the spectral estimator VQF significantly reduces the VQF of the enhanced speech. The Autoregressive Multitaper (ARMT) spectral estimate is proposed as a low-VQF spectral estimator for use in speech enhancement algorithms. An innovative method of incorporating a speech production model using multiband excitation is also presented as a technique to emphasize the harmonic components of the glottal speech input. Preconditioning the noisy estimates by exploiting other avenues of information, such as pitch estimation and the speech production model, effectively increases the localized narrow-band signal-to-noise ratio (SNR) of the noisy signal, which is subsequently denoised by the amplitude gain. Combined with voicing structure enhancement, the ARMT spectral estimate delivers enhanced speech with sound clarity desirable to human listeners. The resulting improvements in enhanced speech are observed to be significant in both objective and subjective measurements.
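A single-frame sketch of the frequency-domain approach described above, using a Wiener-style gain built from an assumed flat noise PSD; this illustrates the generic short-time spectral amplitude method, not the ARMT estimator or the dissertation's algorithms:

```python
import numpy as np

def wiener_gain_enhance(noisy, noise_psd_est):
    """One-frame spectral amplitude enhancement: form a gain
    G = SNR/(1+SNR) per frequency bin from an estimate of the
    noise power spectrum, apply it, and invert the transform."""
    spec = np.fft.rfft(noisy)
    power = np.abs(spec) ** 2
    snr = np.maximum(power / noise_psd_est - 1.0, 0.0)  # estimated a priori SNR
    gain = snr / (1.0 + snr)
    return np.fft.irfft(gain * spec, n=len(noisy))

rng = np.random.default_rng(0)
t = np.arange(256)
clean = np.sin(2 * np.pi * t / 16)          # a single harmonic "voiced" tone
noisy = clean + 0.3 * rng.standard_normal(256)
noise_psd = np.full(129, 0.3 ** 2 * 256)    # flat white-noise PSD, assumed known
enhanced = wiener_gain_enhance(noisy, noise_psd)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_enh = float(np.mean((enhanced - clean) ** 2))
print(mse_enh < mse_noisy)  # -> True: the gain suppressed off-harmonic noise
```

The dissertation's point is visible even in this toy: the gain depends on a PSD estimate, so variance in that estimate propagates directly into the enhanced output.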
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186327
- Subject Headings
- Adaptive signal processing, Digital techniques, Spectral theory (Mathematics), Noise control, Fuzzy algorithms, Speech processing systems, Digital techniques
- Format
- Document (PDF)
- Title
- Gene selection for sample sets with biased distribution.
- Creator
- Kamal, Abu Hena Mustafa., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Microarray expression data, which contain the expression levels of a large number of simultaneously observed genes, have been used in many scientific research projects and clinical studies. Due to their high dimensionality, selecting a small number of genes has been shown to be beneficial for many tasks, such as building prediction models from microarray expression data or discovering gene regulatory networks. Traditional gene selection methods, however, fail to take the class distribution into account during the selection process. In biomedical science, it is very common to have microarray expression data that are severely biased, with one class of examples (e.g., diseased samples) significantly smaller than the other classes (e.g., normal samples). These sample sets with biased distributions require special attention from researchers for the identification of genes responsible for a particular disease. In this thesis, we propose three filtering techniques, Higher Weight ReliefF, ReliefF with Differential Minority Repeat, and ReliefF with Balanced Minority Repeat, to identify genes responsible for fatal diseases from biased microarray expression data. Our solutions are evaluated on five well-known microarray datasets: Colon, Central Nervous System, DLBCL Tumor, Lymphoma, and ECML Pancreas. Experimental comparisons with the traditional ReliefF filtering method demonstrate the effectiveness of the proposed methods in selecting informative genes from microarray expression data with biased sample distributions.
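The ReliefF family that the thesis builds on weights each feature by contrasting its value on the nearest same-class sample (a "hit") and the nearest other-class sample (a "miss"); a simplified Relief-style sketch with one neighbour per class, not the thesis's biased-distribution variants:

```python
def relief_weights(samples, labels):
    """Simplified Relief-style feature weighting: a feature gains
    weight when it differs on the nearest miss (it separates the
    classes) and loses weight when it differs on the nearest hit
    (it is noisy within a class)."""
    n, d = len(samples), len(samples[0])
    w = [0.0] * d
    for i, x in enumerate(samples):
        def nearest(same):
            cands = [j for j in range(n)
                     if j != i and (labels[j] == labels[i]) == same]
            return min(cands, key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(samples[j], x)))
        hit, miss = samples[nearest(True)], samples[nearest(False)]
        for f in range(d):
            w[f] += abs(x[f] - miss[f]) - abs(x[f] - hit[f])
    return [v / n for v in w]

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.0, 0.5], [0.1, 0.9], [1.0, 0.4], [0.9, 0.8]]
y = [0, 0, 1, 1]
w = relief_weights(X, y)
print(w[0] > w[1])  # -> True: the informative gene outranks the noisy one
```

The thesis's variants modify how minority-class samples contribute to these weights, which this sketch does not attempt to reproduce.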
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186330
- Subject Headings
- Gene expression, Research, Methodology, Medical informatics, Apoptosis, Molecular aspects, DNA microarrays, Research
- Format
- Document (PDF)
- Title
- Scheduling for composite event detection in wireless sensor networks.
- Creator
- Ambrose, Arny Isonja, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Wireless sensor networks are used in areas that are inaccessible or inhospitable, or for continuous monitoring. The main use of such networks is event detection: monitoring a particular environment for an event such as fire or flooding. Composite event detection breaks the detection of an event down into the specific conditions that need to be present for the event to occur. Using this method, each sensor node does not need to carry every sensing component necessary to detect the event. Since energy efficiency is important, the sensor nodes need to be scheduled so that they consume as little energy as possible to extend the network lifetime. In this thesis, a solution to the sensor Scheduling for Composite Event Detection (SCED) problem is presented as a way to improve the network lifetime when using composite event detection.
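The scheduling idea can be sketched as a greedy covering procedure: each round activates a minimal set of sensors whose components jointly cover the composite event, while the rest sleep; a round's sensors are then retired so later rounds draw on fresh batteries. This is a hypothetical simplification, not the SCED algorithm from the thesis:

```python
def scheduling_rounds(sensors, required):
    """Greedy sketch of composite-event scheduling: sensors maps a
    node to the set of sensing components it carries; required is
    the component set the composite event needs. Returns successive
    active sets; sensors not in the current round's set sleep."""
    rounds = []
    remaining = dict(sensors)
    while True:
        active, covered = [], set()
        for s, comps in sorted(remaining.items()):
            if not comps <= covered:   # s adds at least one new component
                active.append(s)
                covered |= comps
            if covered >= required:
                break
        if covered < required:
            break  # leftover sensors can no longer detect the event
        for s in active:
            del remaining[s]           # this round's sensors are used up
        rounds.append(active)
    return rounds

sensors = {"s1": {"temp"}, "s2": {"smoke"}, "s3": {"temp", "smoke"},
           "s4": {"smoke"}, "s5": {"temp"}}
rounds = scheduling_rounds(sensors, {"temp", "smoke"})
print(rounds)  # three disjoint rounds, each covering temp and smoke
```

Each extra round is extra network lifetime: a fire event needing both temperature and smoke readings stays detectable while two thirds of the nodes sleep at any time.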
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fcla/dt/186333
- Subject Headings
- Sensor networks, Wireless communication systems, Embedded computer systems, Computer systems, Reliability
- Format
- Document (PDF)
- Title
- Automated nursing knowledge classification using indexing.
- Creator
- Chinchanikar, Sucharita Vijay., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Promoting healthcare and wellbeing requires the dedication of a multi-tiered health service delivery system comprising specialists, medical doctors, and nurses. A holistic view of patient care involves emotional, mental, and physical healthcare needs, in which caring is understood as the essence of nursing. Properly and efficiently capturing and managing nursing knowledge is essential to advocating health promotion and illness prevention. This thesis proposes a document-indexing framework for automating the classification of nursing knowledge based on a nursing theory and practice model. The documents defining the numerous categories in the nursing care model are structured with the help of expert nurse practitioners and professionals. These documents are indexed and used as a benchmark for the process of automatically mapping each expression in a patient's assessment form to the corresponding category in the nursing theory model. As an illustration of the proposed methodology, a prototype application is developed using the Latent Semantic Indexing (LSI) technique. The prototype application is tested in a nursing practice environment to validate the accuracy of the proposed algorithm. The simulation results are also compared with an application using the Lucene indexing technique, which internally uses a modified vector space model for indexing. The comparison showed that the LSI strategy gives 87.5% accurate results, compared to 80% accuracy for the Lucene indexing technique. Both indexing methods maintain 100% consistency in their results.
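The LSI step can be sketched as a truncated SVD of the term-document matrix, with queries folded into the reduced concept space and documents ranked by cosine similarity; the tiny matrix, term names, and query below are invented for illustration:

```python
import numpy as np

def lsi_similarity(term_doc, query_vec, k=2):
    """Latent Semantic Indexing sketch: keep the top-k singular
    triplets of the term-document matrix, fold the query into the
    k-dimensional concept space, and score each document by
    cosine similarity there."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # documents in concept space
    query_k = query_vec @ U[:, :k]         # folded-in query
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return [cos(query_k, d) for d in docs_k]

# Rows: hypothetical terms (pain, mobility, diet); columns: nursing notes.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
query = np.array([1.0, 0.0, 0.0])  # an assessment expression mentioning "pain"
sims = lsi_similarity(A, query)
print(sims[0] > sims[1])  # -> True: note 0 matches the pain query, note 1 doesn't
```

Mapping an assessment expression to a care-model category then amounts to taking the argmax over category-document similarities.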
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186677
- Subject Headings
- Nursing, Computer-assisted instruction, Data transmission systems, Outcome assessment (Medical care), Nursing assessment, Digital techniques
- Format
- Document (PDF)
- Title
- Traffic congestion detection using VANET.
- Creator
- Padron, Francisco M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
We propose a distributed, collaborative traffic congestion detection and dissemination system using VANET that makes efficient use of the communication channel, maintains location privacy, and provides drivers with real-time information on traffic congestion over long distances. The system uses the vehicles themselves, equipped with simple, inexpensive devices, as gatherers and distributors of information, without the need for costly road infrastructure such as sensors, cameras, or external communication equipment. Additionally, we present a flexible simulation and visualization framework we designed and developed to validate our system by showing its effectiveness in multiple scenarios and to aid in the research and development of this and future VANET applications.
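A minimal sketch of the collaborative detection rule: flag a road segment as congested when enough nearby vehicles report average speeds well below free flow. All names and thresholds are hypothetical, not the system's actual parameters:

```python
def congestion_detected(speed_reports, free_flow_speed,
                        threshold=0.5, min_reports=3):
    """Hypothetical collaborative congestion test: a segment is
    congested when at least min_reports vehicles report speeds
    averaging below threshold * free_flow_speed."""
    if len(speed_reports) < min_reports:
        return False  # too little evidence from neighbouring vehicles
    avg = sum(speed_reports) / len(speed_reports)
    return avg < threshold * free_flow_speed

# Speeds in km/h on a segment whose free-flow speed is 60 km/h.
print(congestion_detected([15, 20, 10, 18], free_flow_speed=60))  # -> True
print(congestion_detected([55, 58, 60], free_flow_speed=60))      # -> False
```

Requiring several independent reports before flagging is one way such a system can keep a single anomalous (or malicious) vehicle from triggering a false congestion broadcast.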
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186684
- Subject Headings
- Vehicular ad-hoc networks (Computer networks), Traffic congestion, Mathematical models, Mobile communication systems, Evaluation, Traffic congestion, Prevention
- Format
- Document (PDF)
- Title
- Object detection in low resolution video sequences.
- Creator
- Pava, Diego F., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
With increasing security concerns and decreasing costs of surveillance and computing equipment, research on automated systems for object detection has been growing, but the majority of studies focus on sequences in which high-resolution objects are present. The main objective of this work is the detection and extraction of information about low-resolution objects (e.g., objects so far from the camera that they occupy only tens of pixels) in order to provide a base for higher-level information operations such as classification and behavioral analysis. The proposed system is composed of four stages (preprocessing, background modeling, information extraction, and post-processing) and uses context-based region-of-importance selection, histogram equalization, background subtraction, and morphological filtering techniques. The result is a system capable of detecting and tracking low-resolution objects against a controlled background scene, which can serve as a base for systems of higher complexity.
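The background-modeling and subtraction stages can be sketched with a running-average model; 1-D pixel rows stand in for image frames to keep the sketch dependency-free, and the learning rate and threshold are hypothetical:

```python
def detect_foreground(frames, alpha=0.1, threshold=20):
    """Background subtraction sketch: maintain a running-average
    background, flag pixels that deviate from it by more than
    threshold, and update the model only where no object is seen
    (so a parked object does not melt into the background at once)."""
    background = [float(p) for p in frames[0]]
    masks = []
    for frame in frames[1:]:
        mask = [abs(p - b) > threshold for p, b in zip(frame, background)]
        background = [b if m else (1 - alpha) * b + alpha * p
                      for p, b, m in zip(frame, background, mask)]
        masks.append(mask)
    return masks

static = [30] * 8
with_obj = [30, 30, 200, 210, 30, 30, 30, 30]  # a small bright object appears
masks = detect_foreground([static, static, with_obj])
print(masks[1])  # object flagged only at indices 2 and 3
```

In the full pipeline the abstract describes, this mask would then be cleaned by morphological filtering to suppress the single-pixel noise that dominates at low resolutions.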
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186685
- Subject Headings
- Computer systems, Security measures, Remote sensing, Image processing, Digital techniques, Imaging systems, Mathematical models
- Format
- Document (PDF)
- Title
- Implementing security in an IP Multimedia Subsystem (IMS) next generation network - a case study.
- Creator
- Ortiz-Villajos, Jose M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The IP Multimedia Subsystem (IMS) has gone from just a step in the evolution of the GSM cellular architecture control core to being the de facto framework for Next Generation Network (NGN) implementations and deployments by operators worldwide: not only cellular mobile communications operators, but also fixed-line, cable television, and alternative operators. With this transition from standards documents to the real world, engineers in these new multimedia communications companies face the task of making these new networks secure against threats and real attacks that were not a part of the previous generation of networks. We present the IMS and other competing frameworks, analyze the security issues, present the topic of security patterns, introduce several new patterns, including the basis for a Generic Network pattern, and apply these concepts to designing a security architecture for a fictitious 3G operator using IMS for the control core.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/186763
- Subject Headings
- Electronic digital computers, Programming, Computer networks, Security measures, TCP/IP (Computer network protocol), Security measures, Internet Protocol Multimedia Subsystem (IMS), Security measures, Multimedia communications, Security measures
- Format
- Document (PDF)
- Title
- Mechanisms for prolonging network lifetime in wireless sensor networks.
- Creator
- Yang, Yinying., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Sensors are used to monitor and control the physical environment. A Wireless Sensor Network (WSN) is composed of a large number of sensor nodes that are densely deployed either inside the phenomenon or very close to it [18][5]. Sensor nodes measure various parameters of the environment and transmit the collected data to one or more sinks using hop-by-hop communication. Once a sink receives sensed data, it processes and forwards it to the users. Sensors are usually battery powered and hard to recharge, so they can operate for only a limited time before they deplete their energy and become nonfunctional. Optimizing energy consumption to prolong network lifetime is therefore an important issue in wireless sensor networks. In mobile sensor networks, sensors can self-propel via springs [14] or wheels [20], or they can be attached to transporters such as robots [20] and vehicles [36]. In static sensor networks with uniform deployment (uniform density), the sensors closest to the sink die first, which causes uneven energy consumption and limits network lifetime. This dissertation studies and analyzes nonuniform density so that energy consumption within the monitored area is balanced and network lifetime is prolonged. Several mechanisms are proposed to relocate the sensors after the initial deployment to achieve the desired density while minimizing the total moving cost. Using mobile relays for data gathering is another energy-efficient approach: mobile sensors can serve as ferries that carry data to the sink on behalf of static sensors, reducing expensive multi-hop and long-distance communication. This thesis proposes a mobile relay based routing protocol that considers both energy efficiency and data delivery delay, and it can be applied to both event-based and periodic reporting applications. Another mechanism used to prolong network lifetime is sensor scheduling. One of the major components that consume energy is the radio, and one method to conserve energy is to put sensors into sleep mode when they are not actively participating in sensing or data relaying. This dissertation studies sensor scheduling mechanisms for composite event detection: a set of active sensors is chosen to perform sensing and data relaying while all other sensors sleep to save energy; after some time, another set of active sensors is chosen, so sensors work in alternation and network lifetime is prolonged.
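The duty-cycle scheduling idea in this abstract can be sketched as a rotation of the active role among sensor sets. The energy costs and the simple round-robin selection policy below are assumptions for illustration only, not the dissertation's actual algorithm.

```python
# Illustrative sketch of round-based duty-cycle scheduling: in each round a
# subset of sensors is active (sensing and relaying) while the rest sleep,
# and the active role rotates so energy drains evenly across the network.
# ACTIVE_COST, SLEEP_COST, and the round-robin policy are assumed values.
ACTIVE_COST = 5.0   # energy units drained per round while active
SLEEP_COST = 0.1    # energy units drained per round while asleep

def schedule_rounds(energies, active_per_round, rounds):
    """Rotate the active role among sensors; return remaining energies."""
    energies = list(energies)
    n = len(energies)
    start = 0
    for _ in range(rounds):
        active = {(start + i) % n for i in range(active_per_round)}
        for i in range(n):
            energies[i] -= ACTIVE_COST if i in active else SLEEP_COST
        start = (start + active_per_round) % n  # next set takes over
    return energies

# Six sensors, two active at a time, nine rounds: every sensor is active
# exactly three times, so remaining energy is identical across the network.
remaining = schedule_rounds([100.0] * 6, active_per_round=2, rounds=9)
```

With a fixed active set the first two sensors would be exhausted long before the others; rotation spreads the drain, which is the lifetime-prolonging effect the scheduling mechanisms aim for.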
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870693
- Subject Headings
- Wireless communication systems, Technological innovations, Wireless communication systems, Design and construction, Ad hoc networks (Computer networks), Technological innovations, Sensor networks, Design and construction, Computer algorithms, Computer network protocols
- Format
- Document (PDF)
- Title
- Event detection in surveillance video.
- Creator
- Castellanos Jimenez, Ricardo Augusto., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Digital video is used widely in a variety of applications such as entertainment, surveillance and security. The large amount of video in surveillance and security applications requires systems capable of processing it automatically to detect and recognize events, both to alleviate the load on human operators and to enable preventive action when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction concepts that detects events through motion analysis, using an event probability zone definition. Advantages, limitations, capabilities and possible alternative solutions are also discussed. The result is a system capable of detecting objects moving in a direction opposing a predefined condition, or running in the scene, with precision greater than 50% and recall greater than 80%.
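The "moving in an opposing direction" test described above can be sketched as a check on an object's frame-to-frame displacement. The direction vector and speed threshold below are illustrative assumptions; the optical-flow and background-subtraction pipeline that produces the centroids is not reproduced here.

```python
# Minimal sketch of the wrong-direction check applied after motion
# estimation: given an object's centroid in two consecutive frames,
# compare its displacement with the zone's allowed travel direction.
# ALLOWED_DIR and MIN_SPEED are assumed parameters for illustration.
ALLOWED_DIR = (1.0, 0.0)   # zone's permitted direction of travel (unit vector)
MIN_SPEED = 2.0            # pixels per frame below which motion is ignored

def moving_against(prev_centroid, curr_centroid):
    """True if the object moves opposite to ALLOWED_DIR above MIN_SPEED."""
    dx = curr_centroid[0] - prev_centroid[0]
    dy = curr_centroid[1] - prev_centroid[1]
    if (dx * dx + dy * dy) ** 0.5 < MIN_SPEED:
        return False
    # A negative dot product means the heading opposes the allowed direction.
    return dx * ALLOWED_DIR[0] + dy * ALLOWED_DIR[1] < 0

alarm = moving_against((50, 10), (40, 10))  # leftward motion opposes ALLOWED_DIR
```

A speed threshold well above MIN_SPEED on the same displacement test would similarly flag the "running in the scene" event mentioned in the abstract.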
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870694
- Subject Headings
- Computer systems, Security measures, Image processing, Digital techniques, Imaging systems, Mathematical models, Pattern recognition systems, Computer vision, Digital video
- Format
- Document (PDF)
- Title
- Patterns for web services standards.
- Creator
- Ajaj, Ola, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Web services intend to provide an application integration technology that can be successfully used over the Internet in a secure, interoperable and trusted manner. Policies are high-level guidelines defining the way an institution conducts its activities. The WS-Policy standard describes how to apply policies of security definition, enforcement of access control, authentication and logging. WS-Trust defines a security token service and a trust engine which are used by web services to authenticate other web services. Using the functions defined in WS-Trust, applications can engage in secure communication after establishing trust. BPEL is a language for web service composition that intends to provide convenient and effective means for application integration over the Internet. We address security considerations in BPEL and how to enforce them, as well as its interactions with other web services standards such as WS-Security and WS-Policy.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1927300
- Subject Headings
- Computational grids (Computer systems), Computer systems, Verification, Expert systems (Computer science), Computer network architectures, Web servers, Management, Electronic commerce, Computer programs
- Format
- Document (PDF)
- Title
- Visualization tool for molecular dynamics simulation.
- Creator
- Garg, Meha., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A study of molecular dynamics using computational methods and modeling provides understanding of atomic interactions, properties, structure, and motion, and of the phenomena being modeled. Numerous commercial tools are available for simulation, analysis and visualization; however, no single tool provides all the required functionality. The main objective of this work is the development of a visualization tool customized to our research needs: one that displays the three-dimensional orientation of the atoms, processes simulation results offline, handles large volumes of data, renders complete frames and atomic trails, and responds at runtime to researchers' queries with low processing time. This thesis forms the basis for the development of such an in-house tool for analysis and display of simulation results, based on OpenGL and MFC. Advantages, limitations, capabilities and future aspects are also discussed. The result is a system capable of processing a large amount of simulation result data in 11 minutes, with query response and display in less than 1 second.
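The sub-second query response described above depends on not re-reading the data linearly; one common way to achieve this is to build a byte-offset index over the trajectory file during the offline pass. The sketch below assumes a hypothetical text format with `FRAME` header lines; it is not the thesis's actual file layout.

```python
import io

def build_frame_index(f, header_prefix="FRAME"):
    """Scan once, mapping frame number -> byte offset of its header line."""
    index = {}
    frame_no = 0
    offset = f.tell()
    for line in iter(f.readline, ""):
        if line.startswith(header_prefix):
            index[frame_no] = offset
            frame_no += 1
        offset = f.tell()
    return index

def read_frame(f, index, frame_no):
    """Seek directly to a frame's header instead of re-reading the file."""
    f.seek(index[frame_no])
    return f.readline().rstrip("\n")

# Tiny in-memory stand-in for a large trajectory file on disk.
trajectory = io.StringIO("FRAME 0\n1.0 2.0\nFRAME 1\n3.0 4.0\nFRAME 2\n5.0 6.0\n")
idx = build_frame_index(trajectory)
```

After the one-time indexing pass, any frame lookup is a single seek plus read, which is how an interactive viewer can stay responsive over gigabytes of simulation output.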
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1927308
- Subject Headings
- Molecular dynamics, Computer simulation, Condensed matter, Computer simulation, Intermolecular forces, Computer simulation, Molecules, Mathematical models
- Format
- Document (PDF)
- Title
- Integrated platform for coordination of emergency medical response system using mobile devices.
- Creator
- Chakrabarty, Nabarun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents a framework for a platform that integrates various infrastructural services and facilities in an automated manner to improve and coordinate the processes of a medical emergency response system (MERS). It aims to improve the quality of healthcare system infrastructure by improving the quality of service of MERS. Presently, the processes of MERS and their coordination are semi-automated, which complicates service availability and information exchange among participating systems, thereby adversely affecting the MERS' quality of service. An integrated platform for the coordination of MERS processes can help improve its quality of service and ensure better control of data and process flow. The improvements to MERS service quality can significantly contribute to the improvement of the quality of healthcare infrastructure. The integrated platform framework presented here resolves the problems of data flow and process coordination to achieve the desired goal.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1927611
- Subject Headings
- Mobile communication systems, Radio paging, Health services accessibility
- Format
- Document (PDF)
- Title
- Individual profiling of perceived tinnitus by developing tinnitus analyzer software.
- Creator
- Chaudbury, Baishali., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Tinnitus is a conscious perception of phantom sounds in the absence of external acoustic stimuli, and masking is one of the popular ways to treat it. Because the perceived tinnitus sound varies from patient to patient, the usefulness of masking therapy cannot be generalized. Thus, it is important to first determine the feasibility of masking therapy for a particular patient by quantifying the tinnitus sound, and then to generate an appropriate masking signal. This paper aims to achieve this kind of individual profiling by developing interactive software, Tinnitus Analyzer, based on a clinical approach. The developed software is proposed for use in place of traditional clinical methods and, as part of future work, will be evaluated in a practical scenario involving real tinnitus patients.
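The "quantify, then generate a masker" step described above can be sketched as synthesizing narrowband noise centered on the matched tinnitus pitch. The filter, sample rate, and the 4 kHz example pitch below are all assumptions for illustration, not the thesis's method.

```python
import math
import random

# Illustrative sketch of masker generation: once a patient's tinnitus
# pitch has been matched (assumed here to be a pure tone at pitch_hz),
# build a narrowband masker by amplitude-modulating a sine carrier at
# that pitch with low-pass-filtered noise, so the masking energy is
# concentrated around the matched frequency.
def narrowband_masker(pitch_hz, seconds=1.0, rate=8000, seed=0):
    """Return `seconds` of narrowband noise samples centered at pitch_hz."""
    rng = random.Random(seed)
    n = int(seconds * rate)
    samples, envelope = [], 0.0
    for i in range(n):
        # One-pole low-pass filter turns white noise into a slow envelope.
        envelope = 0.99 * envelope + 0.01 * rng.uniform(-1.0, 1.0)
        carrier = math.sin(2.0 * math.pi * pitch_hz * i / rate)
        samples.append(envelope * carrier)
    return samples

masker = narrowband_masker(4000.0)  # pitch_hz would come from the matching step
```

In a real profiling session the pitch and bandwidth would be tuned interactively against the patient's reported percept rather than fixed in code.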
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1927612
- Subject Headings
- Medical care, Technological innovations, Tinnitus, Diagnosis, Aids and devices, Hearing disorders, Diagnosis, Technological innovations, Psychoacoustics, Research
- Format
- Document (PDF)