Current Search: Department of Computer and Electrical Engineering and Computer Science (x)
View All Items
Pages
- Title
- Experimental implementation of the new prototype in Linux.
- Creator
- Han, Gee Won., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. In the wired network, TCP performs remarkably well due to its scalability and distributed end-to-end congestion control algorithms. However, many studies have shown that the unmodified standard TCP performs poorly in networks with large bandwidth-delay products and/or lossy wireless links. In this thesis, we analyze the problems TCP exhibits in the wireless communication and develop TCP...
Show moreThe Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. In the wired network, TCP performs remarkably well due to its scalability and distributed end-to-end congestion control algorithms. However, many studies have shown that the unmodified standard TCP performs poorly in networks with large bandwidth-delay products and/or lossy wireless links. In this thesis, we analyze the problems TCP exhibits in the wireless communication and develop TCP congestion control algorithm for mobile applications. We show that the optimal TCP congestion control and link scheduling scheme amounts to window-control oriented implicit primaldual solvers for underlying network utility maximization. Based on this idea, we used a scalable congestion control algorithm called QUeueIng-Control (QUIC) TCP where it utilizes queueing-delay based MaxWeight-type scheduler for wireless links developed in [34]. Simulation and test results are provided to evaluate the proposed schemes in practical networks.
Show less - Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3362375
- Subject Headings
- Ad hoc networks (Computer networks), Wireless sensor networks, Embedded computer systems, Programming, Operating systems (Computers), Network performance (Telecommunication), TCP/IP (Computer network protocol)
- Format
- Document (PDF)
- Title
- Evolving Legacy Software Systems with a Resource and Performance-Sensitive Autonomic Interaction Manager.
- Creator
- Mulcahy, James J., Huang, Shihong, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Retaining business value in a legacy commercial enterprise resource planning system today often entails more than just maintaining the software to preserve existing functionality. This type of system tends to represent a significant capital investment that may not be easily scrapped, replaced, or re-engineered without considerable expense. A legacy system may need to be frequently extended to impart new behavior as stakeholder business goals and technical requirements evolve. Legacy ERP...
Show moreRetaining business value in a legacy commercial enterprise resource planning system today often entails more than just maintaining the software to preserve existing functionality. This type of system tends to represent a significant capital investment that may not be easily scrapped, replaced, or re-engineered without considerable expense. A legacy system may need to be frequently extended to impart new behavior as stakeholder business goals and technical requirements evolve. Legacy ERP systems are growing in prevalence and are both expensive to maintain and risky to evolve. Humans are the driving factor behind the expense, from the engineering costs associated with evolving these types of systems to the labor costs required to operate the result. Autonomic computing is one approach that addresses these challenges by imparting self-adaptive behavior into the evolved system. The contribution of this dissertation aims to add to the body of knowledge in software engineering some insight and best practices for development approaches that are normally hidden from academia by the competitive nature of the retail industry. We present a formal architectural pattern that describes an asynchronous, low-complexity, and autonomic approach. We validate the pattern with two real-world commercial case studies and a reengineering simulation to demonstrate that the pattern is repeatable and agnostic with respect to the operating system, programming language, and communication protocols.
Show less - Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004527, http://purl.flvc.org/fau/fd/FA00004527
- Subject Headings
- Business logistics -- Automation, Electronic commerce -- Management, Enterprise application integration (Computer systems), Information resources management, Management information systems, Software reengineering
- Format
- Document (PDF)
- Title
- Event detection in surveillance video.
- Creator
- Castellanos Jimenez, Ricardo Augusto., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Digital video is being used widely in a variety of applications such as entertainment, surveillance and security. Large amount of video in surveillance and security requires systems capable to processing video to automatically detect and recognize events to alleviate the load on humans and enable preventive actions when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video...
Show moreDigital video is being used widely in a variety of applications such as entertainment, surveillance and security. Large amount of video in surveillance and security requires systems capable to processing video to automatically detect and recognize events to alleviate the load on humans and enable preventive actions when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction concepts to detect events based on a motion analysis, using an event probability zone definition. Advantages, limitations, capabilities and possible solution alternatives are also discussed. The result is a system capable of detecting events of objects moving in opposing direction to a predefined condition or running in the scene, with precision greater than 50% and recall greater than 80%.
Show less - Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870694
- Subject Headings
- Computer systems, Security measures, Image processing, Digital techniques, Imaging systems, Mathematical models, Pattern recognition systems, Computer vision, Digital video
- Format
- Document (PDF)
- Title
- Computer interaction system to identify learning patterns and improve performance in children with autism spectrum disorders.
- Creator
- Petersen, Jake Levi., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Autism Spectrum Disorders (ASD) affects one in every 110 children. Medical and educational research have demonstrated that ASD children's social skills and adaptation can be much improved, provided that interventions are early and intensive enough. The advancement of computer technologies and their ubiquitous penetration in people's life make them widely available to support intensive sociocognitive rehabilitation. Additionally, computer interactions are a natural choice for people with...
Show moreAutism Spectrum Disorders (ASD) affects one in every 110 children. Medical and educational research have demonstrated that ASD children's social skills and adaptation can be much improved, provided that interventions are early and intensive enough. The advancement of computer technologies and their ubiquitous penetration in people's life make them widely available to support intensive sociocognitive rehabilitation. Additionally, computer interactions are a natural choice for people with autism who value lawful and "systematizing" tools. A number of computer-aided approaches have been developed, showing effectiveness and generalization, but little quantitative research was conducted to identify the critical factors of engaging and improving the child's interest and performance. This thesis designs an adaptive computer interaction system, called Ying, which detects learning patterns in children with ASD and explores the computer interactive possibilities. The system tailors its content based on periodic performance assessments that offer a more effective learning path for children with ASD.
Show less - Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3356786
- Subject Headings
- Autism spectrum disorders, Treatment, Technological innovations, Optical pattern recognition, Children with disabilities, Education, Technological innovations, Assistive computer technology, Compter-assisted instruction, Computers and people with disabilities
- Format
- Document (PDF)
- Title
- Data mining heuristic-¬based malware detection for android applications.
- Creator
- Peiravian, Naser, Zhu, Xingquan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Google Android mobile phone platform is one of the dominant smartphone operating systems on the market. The open source Android platform allows developers to take full advantage of the mobile operation system, but also raises significant issues related to malicious applications (Apps). The popularity of Android platform draws attention of many developers which also attracts the attention of cybercriminals to develop different kinds of malware to be inserted into the Google Android Market...
Show moreThe Google Android mobile phone platform is one of the dominant smartphone operating systems on the market. The open source Android platform allows developers to take full advantage of the mobile operation system, but also raises significant issues related to malicious applications (Apps). The popularity of Android platform draws attention of many developers which also attracts the attention of cybercriminals to develop different kinds of malware to be inserted into the Google Android Market or other third party markets as safe applications. In this thesis, we propose to combine permission, API (Application Program Interface) calls and function calls to build a Heuristic-Based framework for the detection of malicious Android Apps. In our design, the permission is extracted from each App’s profile information and the APIs are extracted from the packed App file by using packages and classes to represent API calls. By using permissions, API calls and function calls as features to characterize each of Apps, we can develop a classifier by data mining techniques to identify whether an App is potentially malicious or not. An inherent advantage of our method is that it does not need to involve any dynamic tracking of the system calls but only uses simple static analysis to find system functions from each App. In addition, Our Method can be generalized to all mobile applications due to the fact that APIs and function calls are always present for mobile Apps. Experiments on real-world Apps with more than 1200 malwares and 1200 benign samples validate the algorithm performance. Research paper published based on the work reported in this thesis: Naser Peiravian, Xingquan Zhu, Machine Learning for Android Malware Detection Using Permission and API Calls, in Proc. of the 25th IEEE International Conference on Tools with Artificial Intelligence (ICTAI) – Washington D.C, November 4-6, 2013.
Show less - Date Issued
- 2013
- PURL
- http://purl.flvc.org/fau/fd/FA0004045
- Subject Headings
- Computer networks -- Security measures, Data encryption (Computer science), Data structures (Computer science), Internet -- Security measures
- Format
- Document (PDF)
- Title
- Data gateway for prognostic health monitoring of ocean-based power generation.
- Creator
- Gundel, Joseph., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
On August 5, 2010 the U.S. Department of Energy (DOE) has designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and development. Their focus is the research and development of open-ocean current systems and associated infrastructure needed to development and testing prototypes. The generation of power is achieved by using a specialized electric generator with a rotor called a turbine. As with all machines,...
Show moreOn August 5, 2010 the U.S. Department of Energy (DOE) has designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and development. Their focus is the research and development of open-ocean current systems and associated infrastructure needed to development and testing prototypes. The generation of power is achieved by using a specialized electric generator with a rotor called a turbine. As with all machines, the turbines will need maintenance and replacement as they near the end of their lifecycle. This prognostic health monitoring (PHM) requires data to be collected, stored, and analyzed in order to maximize the lifespan, reduce downtime and predict when failure is eminent. This thesis explores the use of a data gateway which will separate high level software with low level hardware including sensors and actuators. The gateway will v standardize and store the data collected from various sensors with different speeds, formats, and interfaces allowing an easy and uniform transition to a database system for analysis.
Show less - Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342111
- Subject Headings
- Machinery, Monitoring, Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics
- Format
- Document (PDF)
- Title
- DATA COLLECTION FRAMEWORK AND MACHINE LEARNING ALGORITHMS FOR THE ANALYSIS OF CYBER SECURITY ATTACKS.
- Creator
- Calvert, Chad, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The integrity of network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors...
Show moreThe integrity of network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors of persistent and emerging threats. When collecting modern day network traffic for intrusion detection, substantial amounts of traffic can be collected, much of which consists of relatively few attack instances as compared to normal traffic. This skewed distribution between normal and attack data can lead to high levels of class imbalance. Machine learning techniques can be used to aid in attack detection, but large levels of imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results.
Show less - Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013289
- Subject Headings
- Machine learning, Algorithms, Anomaly detection (Computer security), Intrusion detection systems (Computer security), Big data
- Format
- Document (PDF)
- Title
- Cytogenic bioinformatics of chromosomal aberrations and genetic disorders: data-mining of relevant biostatistical features.
- Creator
- Karri, Jagadeshwari., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Cytogenetics is a study on the genetic considerations associated with structural and functional aspects of the cells with reference to chromosomal inclusions. Chromosomes are structures within the cells containing body's information in the form of strings of DNA. When atypical version or structural abnormality in one or more chromosomes prevails, it is defined as chromosomal aberrations (CA) depicting certain genetic pathogeny (known as genetic disorders). The present study assumes the...
Show moreCytogenetics is a study on the genetic considerations associated with structural and functional aspects of the cells with reference to chromosomal inclusions. Chromosomes are structures within the cells containing body's information in the form of strings of DNA. When atypical version or structural abnormality in one or more chromosomes prevails, it is defined as chromosomal aberrations (CA) depicting certain genetic pathogeny (known as genetic disorders). The present study assumes the presence of normal and abnormal chromosomal sets in varying proportions in the cytogenetic complex ; and, stochastical mixture theory is invoked to ascertain the information redundancy as a function of fractional abnormal chromosome population. This bioinformatic measure of redundancy is indicated as a track-parameter towards the progression of genetic disorder, for example, the growth of cancer. Lastly, using the results obtained, conclusions are enumerated, inferences are outlined and directions for future studies are considered.
Show less - Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3358597
- Subject Headings
- Medical genetics, Chromosome abnormalities, Cancer, Genetic aspects, Mutation (Biology), DNA damage
- Format
- Document (PDF)
- Title
- Cytogenetic of chromosomal synteny evaluation: bioinformatic applications towards screening of chromosomal aberrations/ genetic disorder.
- Creator
- Sharma, Sandhya, Neelakanta, Perambur S., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The research efforts refer to tracking homologus loci in the chromosomes of a pair of a species. The purpose is to infer the extent of maximum syntenic correlation when an exhaustive set of orthologs of the species are searched. Relevant bioinformatic analyses use comparative mapping of conserved synteny via Oxford grid. In medical diagnostic efforts, deducing such synteny correlation can help screening chromosomal aberration in genetic disorder pathology. Objectively, the present study...
Show moreThe research efforts refer to tracking homologus loci in the chromosomes of a pair of a species. The purpose is to infer the extent of maximum syntenic correlation when an exhaustive set of orthologs of the species are searched. Relevant bioinformatic analyses use comparative mapping of conserved synteny via Oxford grid. In medical diagnostic efforts, deducing such synteny correlation can help screening chromosomal aberration in genetic disorder pathology. Objectively, the present study addresses: (i) Cytogenetic framework of syntenic correlation and, (ii) applying information-theoretics to determine entropy-dictated synteny across an exhaustive set of orthologs of the test pairs of species.
Show less - Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004331, http://purl.flvc.org/fau/fd/FA00004331
- Subject Headings
- Cytogenetics, Genetic screening, Human chromosome abnormalities, Medical genetics, Molecular biology, Molecular diagnosis, Molecular genetics, Mutation (Biology)
- Format
- Document (PDF)
- Title
- DEVELOPMENT OF AN ALGORITHM TO GUIDE A MULTI-POLE DIAGNOSTIC CATHETER FOR IDENTIFYING THE LOCATION OF ATRIAL FIBRILLATION SOURCES.
- Creator
- Ganesan, Prasanth, Ghoraani, Behnaz, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Atrial Fibrillation (AF) is a debilitating heart rhythm disorder affecting over 2.7 million people in the US and over 30 million people worldwide annually. It has a high correlation with causing a stroke and several other risk factors, resulting in increased mortality and morbidity rate. Currently, the non-pharmocological therapy followed to control AF is catheter ablation, in which the tissue surrounding the pulmonary veins (PVs) is cauterized (called the PV isolation - PVI procedure) aims...
Show moreAtrial Fibrillation (AF) is a debilitating heart rhythm disorder affecting over 2.7 million people in the US and over 30 million people worldwide annually. It has a high correlation with causing a stroke and several other risk factors, resulting in increased mortality and morbidity rate. Currently, the non-pharmocological therapy followed to control AF is catheter ablation, in which the tissue surrounding the pulmonary veins (PVs) is cauterized (called the PV isolation - PVI procedure) aims to block the ectopic triggers originating from the PVs from entering the atrium. However, the success rate of PVI with or without other anatomy-based lesions is only 50%-60%. A major reason for the suboptimal success rate is the failure to eliminate patientspecific non-PV sources present in the left atrium (LA), namely reentry source (a.k.a. rotor source) and focal source (a.k.a. point source). It has been shown from several animal and human studies that locating and ablating these sources significantly improves the long-term success rate of the ablation procedure. However, current technologies to locate these sources posses limitations with resolution, additional/special hardware requirements, etc. In this dissertation, the goal is to develop an efficient algorithm to locate AF reentry and focal sources using electrograms recorded from a conventionally used high-resolution multi-pole diagnostic catheter.
Show less - Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013310
- Subject Headings
- Atrial Fibrillation--diagnosis, Algorithm, Catheter ablation
- Format
- Document (PDF)
- Title
- DEVELOPMENT OF POINT-OF-CARE ASSAYS FOR DISEASE DIAGNOSTIC AND TREATMENT MONITORING FOR RESOURCE CONSTRAINED SETTINGS.
- Creator
- Sher, Mazhar, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
This thesis aims to address the challenges of the development of cost-effective and rapid assays for the accurate counting of CD4+ T cells and quantification of HIV-1 viral load for resource-constrained settings. The lack of such assays has severely affected people living in disease prevalent areas. CD4+ T cells count information plays a vital role in the effective management of HIV-1 disease. Here, we present a flow-free magnetic actuation platform that uses antibody-coated magnetic beads to...
Show moreThis thesis aims to address the challenges of the development of cost-effective and rapid assays for the accurate counting of CD4+ T cells and quantification of HIV-1 viral load for resource-constrained settings. The lack of such assays has severely affected people living in disease prevalent areas. CD4+ T cells count information plays a vital role in the effective management of HIV-1 disease. Here, we present a flow-free magnetic actuation platform that uses antibody-coated magnetic beads to efficiently capture CD4+ T cells from a 30 μL drop of whole blood. On-chip cell lysate electrical impedance spectroscopy has been utilized to quantify the isolated CD4 cells. The developed assay has a limit of detection of 25 cells per μL and provides accurate CD4 counts in the range of 25–800 cells per μL. The whole immunoassay along with the enumeration process is very rapid and provides CD4 quantification results within 5 min time frame. The assay does not require off-chip sample preparation steps and minimizes human involvement to a greater extent. The developed impedance-based immunoassay has the potential to significantly improve the CD4 enumeration process especially for POC settings.
Show less - Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013495
- Subject Headings
- Point-of-care testing, Diagnostic tests, Immunoassay, HIV-1, Microfluidic devices
- Format
- Document (PDF)
- Title
- DEEP MAXOUT NETWORKS FOR CLASSIFICATION PROBLEMS ACROSS MULTIPLE DOMAINS.
- Creator
- Castaneda, Gabriel, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Machine learning techniques such as deep neural networks have become an indispensable tool for a wide range of applications such as image classification, speech recognition, and sentiment analysis in text. An activation function is a mathematical equation that determines the output of each neuron in the neural network. In deep learning architectures the choice of activation functions is very important to the network’s performance. Activation functions determine the output of the model, its...
Show moreMachine learning techniques such as deep neural networks have become an indispensable tool for a wide range of applications such as image classification, speech recognition, and sentiment analysis in text. An activation function is a mathematical equation that determines the output of each neuron in the neural network. In deep learning architectures the choice of activation functions is very important to the network’s performance. Activation functions determine the output of the model, its computational efficiency, and its ability to train and converge after multiple iterations of training epochs. The selection of an activation function is critical to building and training an effective and efficient neural network. In real-world applications of deep neural networks, the activation function is a hyperparameter. We have observed a lack of consensus on how to select a good activation function for a deep neural network, and that a specific function may not be suitable for all domain-specific applications.
Show less - Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013362
- Subject Headings
- Classification, Machine learning--Technique, Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- Channel Assignment in Cognitive Radio Wireless Networks.
- Creator
- Wu, Yueshi, Cardei, Mihaela, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Cognitive radio technology that enables dynamic spectrum access has been a promising solution for the spectrum scarcity problem. Cognitive radio networks enable the communication on both licensed and unlicensed channels, having the potential to better solve the interference and collision issues. Channel assignment is of great importance in cognitive radio networks. When operating on licensed channels, the objective is to exploit spectrum holes through cognitive communication, giving priority...
Show moreCognitive radio technology that enables dynamic spectrum access has been a promising solution for the spectrum scarcity problem. Cognitive radio networks enable the communication on both licensed and unlicensed channels, having the potential to better solve the interference and collision issues. Channel assignment is of great importance in cognitive radio networks. When operating on licensed channels, the objective is to exploit spectrum holes through cognitive communication, giving priority to the primary users. In this dissertation, we focus on the development of efficient channel assignment algorithms and protocols to improve network performance for cognitive radio wireless networks. The first contribution is on channel assignment for cognitive radio wireless sensor networks aiming to provide robust topology control, as well as to increase network throughput and data delivery rate. The approach is then extended to specific cognitive radio network applications achieving improved performances.
Show less - Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004750, http://purl.flvc.org/fau/fd/FA00004750
- Subject Headings
- Cognitive radio networks--Technological innovations., Wireless communication systems--Technological innovations., Ad hoc networks (Computer networks), Routing protocols (Computer network protocols)
- Format
- Document (PDF)
- Title
- Context-based Image Concept Detection and Annotation.
- Creator
- Zolghadr, Esfandiar, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Scene understanding attempts to produce a textual description of visible and latent concepts in an image to describe the real meaning of the scene. Concepts are either objects, events or relations depicted in an image. To recognize concepts, the decision of object detection algorithm must be further enhanced from visual similarity to semantical compatibility. Semantically relevant concepts convey the most consistent meaning of the scene. Object detectors analyze visual properties (e.g., pixel...
Show moreScene understanding attempts to produce a textual description of visible and latent concepts in an image to describe the real meaning of the scene. Concepts are either objects, events or relations depicted in an image. To recognize concepts, the decision of object detection algorithm must be further enhanced from visual similarity to semantical compatibility. Semantically relevant concepts convey the most consistent meaning of the scene. Object detectors analyze visual properties (e.g., pixel intensities, texture, color gradient) of sub-regions of an image to identify objects. The initially assigned objects names must be further examined to ensure they are compatible with each other and the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial and semantical priors) and object to scene constraints as background information, a concept classifier predicts the most semantically consistent set of names for discovered objects. The additional background information that describes concepts is called context. In this dissertation, a framework for building context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the result of underlying feature-based object detectors to produce most semantically compatible concepts. In addition to the lack of ability to capture semantical dependencies, object detectors suffer from high dimensionality of feature space that impairs them. Variances in the image (i.e., quality, pose, articulation, illumination, and occlusion) can also result in low-quality visual features that impact the accuracy of detected concepts. The object detectors used to build context-based framework experiments in this study are based on the state-of-the-art generative and discriminative graphical models. The relationships between model variables can be easily described using graphical models and the dependencies and precisely characterized using these representations. 
The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective in reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allows efficient and precise construction of the model by specifying and including only the cases that are related to and influence it. The dataset used for training and evaluation is MIT SUN397. The results of the experiments show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts.
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004745
- Subject Headings
- Computer vision--Mathematical models., Pattern recognition systems., Information visualization., Natural language processing (Computer science), Multimodal user interfaces (Computer systems), Latent structure analysis., Expert systems (Computer science)
- Format
- Document (PDF)
- Title
- Developing a photovoltaic MPPT system.
- Creator
- Bennett, Thomas, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Many issues related to the design and implementation of a maximum power point tracking (MPPT) converter as part of a photovoltaic (PV) system are addressed. To begin with, variations of the single-diode model for a PV module are compared to determine whether the simplest variation may be used for modeling and analyzing MPPT PV systems. As part of this determination, four different DC/DC converters are used in conjunction with these different PV models to verify consistent behavior across the different PV models as well as across the different converter topologies. Consistent results across the different PV models will allow a simpler model to be used for simulation and analysis. Consistent results with the different converters will verify that MPPT algorithms are converter independent. Next, MPPT algorithms are discussed. In particular, the differences between the perturb-and-observe and the incremental conductance algorithms are explained and illustrated. A new MPPT algorithm is then proposed based on the deficiencies of the other algorithms. The proposed algorithm's parameters are optimized, and results for different PV modules are obtained. Realistic system losses are then considered, and their effect on the PV system is analyzed, especially with regard to the MPPT algorithm. Finally, a PV system is implemented, and the theoretical results, as well as the behavior of the newly proposed MPPT algorithm, are verified.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3356887
- Subject Headings
- Photovoltaic power systems, Design, Electronic circuits, Electric current converters, Power (Mechanics), Renewable energy sources
- Format
- Document (PDF)
- Title
- Design considerations in high-throughput automation for biotechnology protocols.
- Creator
- Cardona, Aura, Roth, Zvi S., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In this dissertation, a computer-aided automation design methodology for biotechnology applications is proposed that leads to several design guidelines. Because of the biological nature of the samples that propagate through the automation line, a very specific set of environmental and maximum-allowed-shelf-time conditions must be followed to obtain good yield. In addition, all biotechnology protocols require a precise sequence of steps, the samples are scarce, and the reagents are costly, so no waste can be afforded.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004272
- Subject Headings
- Biotechnological process control, Biotechnological process monitoring, Molecular biology -- Automation, Molecular biology -- Technique, Molecular cloning -- Technique, Pharmacognosy
- Format
- Document (PDF)
- Title
- Development of Smart Phone-based Automated Microfluidic-ELISA for Human Immunodeficiency Virus 1.
- Creator
- Coarsey, Chad Thomas, Asghar, Waseem, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The majority of HIV prevalence is found in Sub-Saharan Africa, with 36.9 million people living with HIV/AIDS. Cultural factors such as patient non-compliance or denial of available routine medical care can limit the effectiveness of detecting such virulent pathogens and managing chronic disease. The lack of access to healthcare and further socioeconomic impacts hinder the ability to adequately diagnose and treat infection in resource-limited settings. Intervention through diagnosis and treatment helps prevent the spread of transmission where pre-exposure prophylaxis or active disease prevention measures are not readily available. The current gold standard for HIV detection is molecular detection; Reverse-Transcription Polymerase Chain Reaction (RT-PCR) is widely used, employing temperature cycling that requires a thermal cycling platform and typically dedicated laboratory space, with RNA extraction performed separately from the RT-PCR workspace. Serological detection can be advantageous for surveillance and screening; Lateral Flow Assays and the Enzyme-Linked Immunosorbent Assay (ELISA) can detect a viral protein (antigen) or antibodies. An ELISA can require at least 12 hours of assay preparation and consumes many diagnostic laboratory resources. There is a need to develop Point-of-Care (POC) testing for decentralized use that leverages existing technologies, such as smart phone capabilities and routine medical or diagnostic tests, with cutting-edge applications of microfluidics, nanotechnology, and integrated circuit design. Such technologies allow for automated, rapid-turnaround, and cost-effective diagnosis of HIV, and these assays could potentially be readily deployed. This kind of technology can change the way diagnostics are performed, as POC technology can be rapidly disseminated, enables decentralized testing, and is user-friendly.
To meet this demand, a novel smart phone-enabled automated magnetic bead-based platform was developed for a microfluidic ELISA for HIV-1 detection at the POC.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00005945
- Subject Headings
- Dissertations, Academic -- Florida Atlantic University
- Format
- Document (PDF)
- Title
- Development of a Wearable Device to Detect Epilepsy.
- Creator
- Khandnor Bakappa, Pradeepkumar, Agarwal, Ankur, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This paper evaluates the effectiveness of a wearable device, developed by the author, to detect different types of epileptic seizures and monitor epileptic patients. The device uses GSR, pulse, EMG, body temperature, and 3-axis accelerometer sensors to detect epilepsy. The device first learns the signal patterns of the epileptic patient under ideal conditions. The signal patterns generated during an epileptic seizure, which are distinct from other signal patterns, are detected and analyzed by algorithms developed by the author. Based on this analysis, the device successfully detected different types of epileptic seizures. The author conducted an experiment on himself to determine the effectiveness of the device and the algorithms. Based on the simulation results, the algorithms are 100 percent accurate in detecting different types of epileptic seizures.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004937
- Subject Headings
- Epilepsy--Diagnosis--Technological innovations., Patient monitoring., Signal processing--Digital techniques., Wearable computers--Industrial applications.
- Format
- Document (PDF)
- Title
- Finite safety models for high-assurance systems.
- Creator
- Sloan, John C., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Preventing bad things from happening to engineered systems demands improvements to how we model their operation with regard to safety. Safety-critical and fiscally-critical systems both demand automated and exhaustive verification, which is only possible if the models of these systems, along with the number of scenarios spawned from these models, are tractably finite. To this end, this dissertation addresses problems of a model's tractability and usefulness. It addresses the state space minimization problem by initially considering tradeoffs between state space size and level of detail or fidelity. It then considers the problem of human interpretation in model capture from system artifacts by seeking to automate model capture. It introduces human control over the level of detail, and hence the state space size, during model capture. Rendering that model in a manner that can guide human decision making is also addressed, as is an automated assessment of system timeliness. Finally, it addresses state compression and abstraction using logical fault models such as fault trees, which enable exhaustive verification of larger systems through subsequent use of transition fault models such as Petri nets, timed automata, and process-algebraic expressions. To illustrate these ideas, this dissertation considers two very different applications: web service compositions and submerged ocean machinery.
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2683206
- Subject Headings
- System failures (Engineering), Prevention, Sustainable engineering, Finite element method, Expert systems (Computer science)
- Format
- Document (PDF)
- Title
- Face Processing Using Mobile Devices.
- Creator
- James, Jhanon, Marques, Oge, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Image Processing and Computer Vision solutions have become commodities for software developers, thanks to the growing availability of Application Programming Interfaces (APIs) that encapsulate rich functionality powered by advanced algorithms. To understand and create an efficient method for computers to process faces in images, one must understand how the human visual system processes them. Face processing by computers has been an active research area for about 50 years. Face detection has become a commodity and is now incorporated into simple devices such as digital cameras and smartphones. An iOS app was implemented in Objective-C using Microsoft Cognitive Services APIs as a tool for human vision and face processing research. Experimental work on image compression, upside-down orientation, the Thatcher effect, negative inversion, high frequency, facial artifacts, caricatures, and image degradation was completed on the Radboud and 10k US Adult Faces Databases, along with other images.
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004770
- Subject Headings
- Image processing--Digital techniques., Mobile communication systems., Mobile computing., Artificial intelligence., Human face recognition (Computer science), Computer vision., Optical pattern recognition.
- Format
- Document (PDF)