Current Search: College of Engineering and Computer Science
Pages
- Title
- Collaborative filtering using machine learning and statistical techniques.
- Creator
- Su, Xiaoyuan., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Collaborative filtering (CF), a very successful recommender system technique, is one of the applications of data mining to incomplete data. The main objective of CF is to make accurate recommendations from highly sparse user rating data. Our contributions to this research topic include proposing the frameworks of imputation-boosted collaborative filtering (IBCF) and imputed neighborhood-based collaborative filtering (INCF). We also proposed a model-based CF technique, TAN-ELR CF, and two hybrid CF algorithms, sequential mixture CF and joint mixture CF. Empirical results show that our proposed CF algorithms have very good predictive performance. In investigating the application of imputation techniques to mining incomplete data, we proposed imputation-helped classifiers and VCI predictors (voting on classifications from imputed learning sets), both of which resulted in significant improvements in classification performance on incomplete data over conventional machine-learned classifiers, including kNN, neural network, one rule, decision table, SVM, logistic regression, decision tree (C4.5), random forest, and decision list (PART), and the well-known Bagging predictors. The main imputation techniques involved in these algorithms are EM (expectation maximization) and BMI (Bayesian multiple imputation).
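A minimal sketch of the imputation-then-neighborhood idea behind IBCF, with simple item-mean imputation standing in for the EM/BMI techniques named in the abstract (the toy rating matrix, function names, and neighborhood size are illustrative assumptions):

```python
import numpy as np

def impute_item_means(R):
    """Fill missing ratings (NaN) with each item's mean rating.
    A simple stand-in for the EM/BMI imputation used in IBCF."""
    R = R.astype(float).copy()
    item_means = np.nanmean(R, axis=0)
    rows, cols = np.where(np.isnan(R))
    R[rows, cols] = item_means[cols]
    return R

def predict_rating(R_full, user, item, k=2):
    """User-based CF on the imputed matrix: average the target item's
    rating over the k most similar users (cosine similarity)."""
    u = R_full[user]
    sims = R_full @ u / (np.linalg.norm(R_full, axis=1) * np.linalg.norm(u) + 1e-12)
    sims[user] = -np.inf                  # exclude the target user itself
    neighbors = np.argsort(sims)[-k:]     # indices of the k nearest users
    return R_full[neighbors, item].mean()

# Toy 4-user x 3-item rating matrix with missing entries (NaN)
R = np.array([[5, np.nan, 1],
              [4, 2, np.nan],
              [np.nan, 3, 4],
              [5, 1, 2]])
R_full = impute_item_means(R)
pred = predict_rating(R_full, user=0, item=1)
```

The real IBCF work replaces the mean-imputation step with statistically grounded imputers; the neighborhood prediction step is unchanged in spirit.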
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/186301
- Subject Headings
- Filters (Mathematics), Machine learning, Data mining, Technological innovations, Database management, Combinatorial group theory
- Format
- Document (PDF)
- Title
- CEREBROSPINAL FLUID SHUNT SYSTEM WITH AUTO-FLOW REGULATION.
- Creator
- Mutlu, Caner, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
A cerebrospinal fluid (CSF) shunt system is used for the treatment of hydrocephalus and abnormal intracranial pressure (ICP) conditions. A shunt system is usually placed under the skin to create a low-resistance pathway between the intracranial space and an appropriate discharge site within the body, so that excess CSF volume can exit the intracranial space. Displaced intracranial CSF volume normally results in lowered ICP; a CSF shunt can thereby manage ICP. In a healthy person, normal ICP is maintained primarily by the body's natural balance of CSF production and reabsorption. If intracranial CSF volume starts increasing due to insufficient reabsorption, the result is usually raised ICP. Abnormal ICP can be treated by discharging excess CSF volume through a shunt system. Once a shunt system is placed subcutaneously, the patient is expected to live a normal life. However, shunt failure and flow-regulation problems are major issues with current passive shunt systems, leaving patients with the serious consequences of CSF under- or over-drainage. In this research, a shunt system is developed that is resistant to most shunt-related causes of under- or over-drainage. This is made possible by an on-board medical monitoring (diagnostic) and active flow-control mechanism. The shunt system developed in this research has full external ventricular drainage (EVD) capability; further miniaturization will make an implantable shunt possible.
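As a toy illustration of the active flow-control idea, a valve could be driven by a threshold-with-hysteresis rule on measured ICP. The thresholds and the simulated pressure dynamics below are invented for illustration and are not clinical values or the thesis's actual controller:

```python
def valve_command(icp_mmHg, valve_open, upper=15.0, lower=10.0):
    """Bang-bang control with hysteresis: open the drainage valve when
    ICP rises above `upper`, close it once ICP falls below `lower`.
    Between the two thresholds, keep the current valve state."""
    if icp_mmHg > upper:
        return True
    if icp_mmHg < lower:
        return False
    return valve_open  # inside the hysteresis band: no change

# Simulate a slowly rising ICP and record the valve state over time
states = []
icp, valve = 8.0, False
for _ in range(20):
    icp += 1.0 if not valve else -2.0   # toy dynamics: drainage lowers ICP
    valve = valve_command(icp, valve)
    states.append((round(icp, 1), valve))
```

The hysteresis band prevents the valve from chattering open and closed around a single setpoint, which is the usual reason such controllers use two thresholds rather than one.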
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013489
- Subject Headings
- Cerebrospinal Fluid Shunts
- Format
- Document (PDF)
- Title
- Classification techniques for noisy and imbalanced data.
- Creator
- Napolitano, Amri E., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Machine learning techniques allow useful insight to be distilled from the increasingly massive repositories of data being stored. As these data mining techniques can only learn patterns actually present in the data, it is important that the desired knowledge be faithfully and discernibly contained therein. Two common data quality issues that often affect important real life classification applications are class noise and class imbalance. Class noise, where dependent attribute values are recorded erroneously, misleads a classifier and reduces predictive performance. Class imbalance occurs when one class represents only a small portion of the examples in a dataset, and, in such cases, classifiers often display poor accuracy on the minority class. The reduction in classification performance becomes even worse when the two issues occur simultaneously. To address the magnified difficulty caused by this interaction, this dissertation performs thorough empirical investigations of several techniques for dealing with class noise and imbalanced data. Comprehensive experiments are performed to assess the effects of the classification techniques on classifier performance, as well as how the level of class imbalance, level of class noise, and distribution of class noise among the classes affects results. An empirical analysis of classifier based noise detection efficiency appears first. Subsequently, an intelligent data sampling technique, based on noise detection, is proposed and tested. Several hybrid classifier ensemble techniques for addressing class noise and imbalance are introduced. Finally, a detailed empirical investigation of classification filtering is performed to determine best practices.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/369201
- Subject Headings
- Combinatorial group theory, Data mining, Technological innovations, Decision trees, Machine learning, Filters (Mathematics)
- Format
- Document (PDF)
- Title
- Cloud-based Skin Lesion Diagnosis System using Convolutional Neural Networks.
- Creator
- Akar, Esad, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Skin cancer is a major medical problem. If not detected early enough, skin cancers like melanoma can turn fatal. As a result, early detection of skin cancer, as with other types of cancer, is key for survival. In recent times, deep learning methods have been explored to create improved skin lesion diagnosis tools. In some cases, the accuracy of these methods has reached dermatologist-level accuracy. For this thesis, a full-fledged cloud-based diagnosis system powered by convolutional neural networks (CNNs) with near-dermatologist-level accuracy has been designed and implemented, in part to increase early detection of skin cancer. A large range of client devices can connect to the system to upload digital lesion images and request diagnosis results from the diagnosis pipeline. The diagnosis is handled by a two-stage CNN pipeline hosted on a server, where a preliminary CNN performs a quality check on user requests and a diagnosis CNN outputs lesion predictions.
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013150
- Subject Headings
- Skin Diseases--diagnosis, Skin--Cancer--Diagnosis, Diagnosis--Methodology, Neural networks, Cloud computing
- Format
- Document (PDF)
- Title
- Context-aware hybrid data dissemination in vehicular networks.
- Creator
- Rathod, Monika M., Mahgoub, Imad, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This work presents the development of the Context-Aware Hybrid Data Dissemination protocol for vehicular networks. The importance of developing vehicular networking data dissemination protocols is exemplified by the recent announcement by the U.S. Department of Transportation (DOT) National Highway Traffic Safety Administration (NHTSA) to enable vehicle-to-vehicle (V2V) communication technology. Beyond the emphasis on safety, other useful applications of V2V communication include, but are not limited to, traffic and routing, weather, construction and road hazard alerts, as well as advertisement and entertainment. The core of V2V communication relies on the efficient dispersion of relevant data through wireless broadcast protocols for these varied applications. The challenges of vehicular networks demand an adaptive broadcast protocol capable of handling diverse applications. This research work illustrates the design of a wireless broadcast protocol that is context-aware and adaptive to vehicular environments, taking into consideration vehicle density, road topology, and the type of data to be disseminated. The context-aware hybrid data dissemination scheme combines store-and-forward and multi-hop broadcasts, capitalizing on the strengths of both categories and mitigating their weaknesses to deliver data with maximum efficiency to the widest possible reach. The protocol is designed to work in both urban and highway mobility models. The behavior and performance of the hybrid data dissemination scheme is studied by varying the broadcast zone radius, aggregation ratio, data message size, and frequency of the broadcast messages. Optimal parameters are determined, and the protocol is then made adaptive to node density by keeping the field size constant and increasing the number of nodes. Adding message priority levels, so that safety messages propagate faster and farther than non-safety messages, is the next context we add to our adaptive protocol.
We dynamically set the broadcast region to use multi-hop, which has lower latency, to propagate safety-related messages. Extensive simulation results have been obtained using realistic vehicular network scenarios. Results show that the Context-Aware Hybrid Data Dissemination Protocol benefits from the low-latency characteristics of multi-hop broadcast and the low bandwidth consumption of store-and-forward. The protocol is adaptive to both urban and highway mobility models.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004152
- Subject Headings
- Context aware computing, Convergence (Telecommunication), Intelligent transportation systems, Internetworking (Telecommunication), Routing (Computer network management), Routing protocols (Computer network protocols), Vehicular ad hoc networks (Computer networks)
- Format
- Document (PDF)
- Title
- Content identification using video tomography.
- Creator
- Leon, Gustavo A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Video identification, or copy detection, is a challenging problem that is becoming increasingly important with the popularity of online video services. The problem addressed in this thesis is the identification of a given video clip in a given set of videos. For a given query video, the system returns all instances of the video in the data set. The identification system uses video signatures based on video tomography. A robust, low-complexity video signature is designed and implemented. The nature of the signature makes it robust to the most common video transformations. Signatures are generated for video shots rather than individual frames, resulting in a compact signature of 64 bytes per video shot. Signatures are matched using a simple Euclidean distance metric. The results show that videos can be identified with 100% recall and over 93% precision. The experiments included several transformations on videos.
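The shot-level matching step can be sketched as a nearest-neighbor search under Euclidean distance. The 64-byte signature is treated here as a 64-element vector, and the distance threshold and toy database are assumptions for illustration, not values from the thesis:

```python
import numpy as np

def find_matches(query_sig, shot_sigs, threshold=20.0):
    """Return indices of database shots whose tomography signature lies
    within `threshold` Euclidean distance of the query signature."""
    query = np.asarray(query_sig, dtype=np.float64)
    db = np.asarray(shot_sigs, dtype=np.float64)
    dists = np.linalg.norm(db - query, axis=1)  # one distance per shot
    return np.flatnonzero(dists <= threshold).tolist()

# Toy database of 64-element shot signatures (one row per shot)
rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(5, 64))
query = db[2] + rng.integers(-2, 3, size=64)    # near-duplicate of shot 2
matches = find_matches(query, db, threshold=20.0)
```

Because the signature is only 64 bytes per shot, even a brute-force scan like this stays cheap for large video collections; the threshold trades recall against precision.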
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/2783207
- Subject Headings
- Biometric identification, High performance computing, Image processing, Digital techniques, Multimedia systems, Security measures
- Format
- Document (PDF)
- Title
- Deep Learning for Android Application Ransomware Detection.
- Creator
- Wongsupa, Panupong, Zhu, Xingquan, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Smartphones and mobile tablets are growing rapidly and are very important nowadays. The most popular mobile operating system since 2012 has been Android. Android is an open-source platform that allows developers to take full advantage of both the operating system and the applications themselves. However, because of the open-source nature of the Android platform, some developers have taken advantage of this and created countless malicious applications such as Trojans, malware, and ransomware, all of which are currently hidden among a large number of benign apps in official Android markets such as Google Play Store and Amazon. Ransomware is malware that, once it has infected a victim's device, encrypts files, locks the device or system, and displays a popup message asking the victim to pay a ransom to unlock the device or system, which may include medical devices that connect through the internet. In this research, we propose to combine permissions and API calls and then use deep learning techniques to detect ransomware apps in the Android market. Permission settings and API calls are extracted from each app file using a Python library called AndroGuard. We use permission and API-call features to characterize each application, which allows us to identify whether an application is potentially ransomware or benign. We implement our Android ransomware detection framework on Keras, using a multi-layer perceptron (MLP) with back-propagation and a supervised algorithm. We evaluated our method with experiments based on real-world applications, with over 2000 benign applications and 1000 ransomware applications. The dataset came from ARGUS's lab [1]; we validated algorithm performance and selected the best MLP architecture by training on our dataset with six different MLP structures. Our experiments and validations show that MLPs with more than three hidden layers and a medium number of neurons achieved good results, with both accuracy and AUC scores of 98%.
The worst scores, approximately 45% to 60%, came from MLPs with two hidden layers and a large number of neurons.
- Date Issued
- 2018
- PURL
- http://purl.flvc.org/fau/fd/FA00013151
- Subject Headings
- Deep learning, Android (Electronic resource)--Security measures, Malware (Computer software)--Prevention
- Format
- Document (PDF)
- Title
- Emulation of Safety Control Systems for Theme Park Rides.
- Creator
- Hirapara, Cole P., Alhalabi, Bassem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Emulation of safety control systems is a form of computerized design validation completed during the design or fabrication stage of an engineering project. Emulation, a method of validating a system between the typical cases of computer simulation and experimental testing, provides a means to test physical systems while removing the limitation of the requirement for physical equipment. The use of emulation could mitigate the unique risks of theme park attractions in engineering design and business operation. This thesis considers the unique risks associated with the engineering of safety-related control systems for theme park attractions and rides, and how emulation and the computerization of testing can change the industry.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013379
- Subject Headings
- Amusement rides--Safety measures, Amusement parks, Emulators (Computer programs), Safety
- Format
- Document (PDF)
- Title
- Exploiting audiovisual attention for visual coding.
- Creator
- Torres, Freddy., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Perceptual video coding has been a promising area in recent years. Increases in compression ratios have been reported by applying foveated video coding techniques, where the region of interest (ROI) is selected using a computational attention model. However, most approaches to perceptual video coding use only visual features, ignoring the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect our visual perception. In this work, we validate some of those physiological tests using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting different experiments, we observed that, in general, the reaction time to detect video artifacts was higher when video was presented with audio information. We observed that emotional information in audio guides human attention to particular ROIs. We also observed that sound frequency changes spatial-frequency perception in still images.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3361251
- Subject Headings
- Digital video, Image processing, Digital techniques, Visual perception, Coding theory, Human-computer interaction, Intersensory effects
- Format
- Document (PDF)
- Title
- Experimental implementation of the new prototype in Linux.
- Creator
- Han, Gee Won., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. In wired networks, TCP performs remarkably well due to its scalability and distributed end-to-end congestion control algorithms. However, many studies have shown that unmodified standard TCP performs poorly in networks with large bandwidth-delay products and/or lossy wireless links. In this thesis, we analyze the problems TCP exhibits in wireless communication and develop a TCP congestion control algorithm for mobile applications. We show that the optimal TCP congestion control and link scheduling scheme amounts to window-control-oriented implicit primal-dual solvers for the underlying network utility maximization. Based on this idea, we use a scalable congestion control algorithm called QUeueIng-Control (QUIC) TCP, which utilizes the queueing-delay-based MaxWeight-type scheduler for wireless links developed in [34]. Simulation and test results are provided to evaluate the proposed schemes in practical networks.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fcla/dt/3362375
- Subject Headings
- Ad hoc networks (Computer networks), Wireless sensor networks, Embedded computer systems, Programming, Operating systems (Computers), Network performance (Telecommunication), TCP/IP (Computer network protocol)
- Format
- Document (PDF)
- Title
- Evolving Legacy Software Systems with a Resource and Performance-Sensitive Autonomic Interaction Manager.
- Creator
- Mulcahy, James J., Huang, Shihong, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Retaining business value in a legacy commercial enterprise resource planning system today often entails more than just maintaining the software to preserve existing functionality. This type of system tends to represent a significant capital investment that may not be easily scrapped, replaced, or re-engineered without considerable expense. A legacy system may need to be frequently extended to impart new behavior as stakeholder business goals and technical requirements evolve. Legacy ERP systems are growing in prevalence and are both expensive to maintain and risky to evolve. Humans are the driving factor behind the expense, from the engineering costs associated with evolving these types of systems to the labor costs required to operate the result. Autonomic computing is one approach that addresses these challenges by imparting self-adaptive behavior into the evolved system. The contribution of this dissertation aims to add to the body of knowledge in software engineering some insight and best practices for development approaches that are normally hidden from academia by the competitive nature of the retail industry. We present a formal architectural pattern that describes an asynchronous, low-complexity, and autonomic approach. We validate the pattern with two real-world commercial case studies and a reengineering simulation to demonstrate that the pattern is repeatable and agnostic with respect to the operating system, programming language, and communication protocols.
- Date Issued
- 2015
- PURL
- http://purl.flvc.org/fau/fd/FA00004527
- Subject Headings
- Business logistics -- Automation, Electronic commerce -- Management, Enterprise application integration (Computer systems), Information resources management, Management information systems, Software reengineering
- Format
- Document (PDF)
- Title
- Event detection in surveillance video.
- Creator
- Castellanos Jimenez, Ricardo Augusto., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Digital video is used widely in a variety of applications such as entertainment, surveillance, and security. The large amount of video in surveillance and security applications requires systems capable of processing video to automatically detect and recognize events, in order to alleviate the load on humans and enable preventive actions when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction concepts to detect events through motion analysis, using an event probability zone definition. Advantages, limitations, capabilities, and possible solution alternatives are also discussed. The result is a system capable of detecting events in which objects move in a direction opposing a predefined condition, or run in the scene, with precision greater than 50% and recall greater than 80%.
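The wrong-direction rule in such a system reduces to comparing each object's motion vector against an allowed direction. A minimal sketch, assuming per-object motion vectors have already been extracted (e.g., by optical flow) and using an illustrative angle threshold:

```python
import math

def is_wrong_way(motion_vec, allowed_dir, max_angle_deg=90.0):
    """Flag an object whose motion opposes the allowed direction: the
    angle between its motion vector and `allowed_dir` exceeds
    `max_angle_deg`. Stationary objects are never flagged."""
    mx, my = motion_vec
    ax, ay = allowed_dir
    dot = mx * ax + my * ay
    norm = math.hypot(mx, my) * math.hypot(ax, ay)
    if norm == 0:
        return False                     # zero-length vector: no event
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle)) > max_angle_deg

allowed = (1.0, 0.0)                     # traffic should move in +x
events = [is_wrong_way(v, allowed) for v in [(2.0, 0.1), (-1.5, 0.2), (0.0, 0.0)]]
```

A production system would additionally smooth motion vectors over several frames and restrict the test to the event probability zones mentioned above, so momentary flow noise does not trigger false alarms.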
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/1870694
- Subject Headings
- Computer systems, Security measures, Image processing, Digital techniques, Imaging systems, Mathematical models, Pattern recognition systems, Computer vision, Digital video
- Format
- Document (PDF)
- Title
- Computer interaction system to identify learning patterns and improve performance in children with autism spectrum disorders.
- Creator
- Petersen, Jake Levi., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Autism Spectrum Disorder (ASD) affects one in every 110 children. Medical and educational research has demonstrated that ASD children's social skills and adaptation can be much improved, provided that interventions are early and intensive enough. The advancement of computer technologies and their ubiquitous penetration into people's lives make them widely available to support intensive sociocognitive rehabilitation. Additionally, computer interactions are a natural choice for people with autism, who value lawful and "systematizing" tools. A number of computer-aided approaches have been developed, showing effectiveness and generalization, but little quantitative research has been conducted to identify the critical factors in engaging and improving the child's interest and performance. This thesis designs an adaptive computer interaction system, called Ying, which detects learning patterns in children with ASD and explores the computer interactive possibilities. The system tailors its content based on periodic performance assessments that offer a more effective learning path for children with ASD.
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3356786
- Subject Headings
- Autism spectrum disorders, Treatment, Technological innovations, Optical pattern recognition, Children with disabilities, Education, Technological innovations, Assistive computer technology, Computer-assisted instruction, Computers and people with disabilities
- Format
- Document (PDF)
- Title
- Data mining heuristic-¬based malware detection for android applications.
- Creator
- Peiravian, Naser, Zhu, Xingquan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Google Android mobile phone platform is one of the dominant smartphone operating systems on the market. The open-source Android platform allows developers to take full advantage of the mobile operating system, but it also raises significant issues related to malicious applications (Apps). The popularity of the Android platform draws the attention of many developers, and it also attracts cybercriminals, who develop different kinds of malware to be inserted into the Google Android Market or other third-party markets as seemingly safe applications. In this thesis, we propose to combine permissions, API (Application Program Interface) calls, and function calls to build a heuristic-based framework for the detection of malicious Android Apps. In our design, the permissions are extracted from each App's profile information, and the APIs are extracted from the packed App file by using packages and classes to represent API calls. By using permissions, API calls, and function calls as features to characterize each App, we can develop a classifier with data mining techniques to identify whether an App is potentially malicious or not. An inherent advantage of our method is that it does not need to involve any dynamic tracking of system calls, but only uses simple static analysis to find system functions in each App. In addition, our method can be generalized to all mobile applications, because APIs and function calls are always present in mobile Apps. Experiments on real-world Apps, with more than 1200 malware samples and 1200 benign samples, validate the algorithm's performance. Research paper published based on the work reported in this thesis: Naser Peiravian, Xingquan Zhu, Machine Learning for Android Malware Detection Using Permission and API Calls, in Proc. of the 25th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Washington D.C., November 4-6, 2013.
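The feature construction described above, representing each App as a binary vector over a vocabulary of permissions and API calls, might be sketched as follows. The vocabulary entries and App profiles are toy examples, and the static extraction step (which the thesis performs on packed App files) is replaced here by ready-made string sets:

```python
def to_feature_vector(app_features, vocabulary):
    """Binary encoding: 1 if the App requests the permission or calls
    the API named at that vocabulary position, else 0."""
    present = set(app_features)
    return [1 if name in present else 0 for name in vocabulary]

# Toy vocabulary mixing permissions and API-call names (illustrative)
vocab = ["android.permission.SEND_SMS",
         "android.permission.INTERNET",
         "android.permission.READ_CONTACTS",
         "Landroid/telephony/SmsManager;->sendTextMessage"]

benign_app = {"android.permission.INTERNET"}
suspect_app = {"android.permission.SEND_SMS",
               "android.permission.INTERNET",
               "Landroid/telephony/SmsManager;->sendTextMessage"}

# Rows of X are the per-App feature vectors fed to a classifier
X = [to_feature_vector(a, vocab) for a in (benign_app, suspect_app)]
```

Each row of `X` then becomes one training example for whatever classifier the data mining stage uses; the encoding itself is purely static, matching the abstract's point that no dynamic system-call tracing is required.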
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fau/fd/FA0004045
- Subject Headings
- Computer networks -- Security measures, Data encryption (Computer science), Data structures (Computer science), Internet -- Security measures
- Format
- Document (PDF)
- Title
- Data gateway for prognostic health monitoring of ocean-based power generation.
- Creator
- Gundel, Joseph., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
On August 5, 2010, the U.S. Department of Energy (DOE) designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and development. Its focus is the research and development of open-ocean current systems and the associated infrastructure needed to develop and test prototypes. Power is generated by using a specialized electric generator with a rotor called a turbine. As with all machines, the turbines will need maintenance and replacement as they near the end of their lifecycle. This prognostic health monitoring (PHM) requires data to be collected, stored, and analyzed in order to maximize lifespan, reduce downtime, and predict when failure is imminent. This thesis explores the use of a data gateway that separates high-level software from low-level hardware, including sensors and actuators. The gateway will standardize and store the data collected from various sensors with different speeds, formats, and interfaces, allowing an easy and uniform transition to a database system for analysis.
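The standardization step the abstract describes, taking readings from sensors with different formats and reducing them to one uniform record for the database, might look like the following sketch. This is an assumed design for illustration, not COET's actual gateway; the sensor frame formats and unit conversions are invented.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """Uniform record the gateway hands to the database layer."""
    sensor_id: str
    quantity: str     # e.g. "rpm", "temperature_c"
    value: float
    timestamp: float  # seconds since epoch

def from_csv_sensor(line, timestamp):
    """Parse a hypothetical 'id,quantity,value' ASCII sensor frame."""
    sensor_id, quantity, value = line.strip().split(",")
    return Reading(sensor_id, quantity, float(value), timestamp)

def from_binary_sensor(sensor_id, raw_mv, timestamp):
    """Convert a hypothetical raw millivolt reading to degrees Celsius."""
    return Reading(sensor_id, "temperature_c", raw_mv / 10.0, timestamp)

# Two differently formatted sources collapse to the same record type.
r1 = from_csv_sensor("turbine1,rpm,1450.5", 1700000000.0)
r2 = from_binary_sensor("temp3", 215, 1700000000.0)
print(r1.quantity, r2.value)  # rpm 21.5
```

The point of the design is that everything downstream of the gateway (storage, PHM analysis) only ever sees `Reading` records, regardless of each sensor's native speed, format, or interface.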
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342111
- Subject Headings
- Machinery, Monitoring, Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics
- Format
- Document (PDF)
- Title
- DATA COLLECTION FRAMEWORK AND MACHINE LEARNING ALGORITHMS FOR THE ANALYSIS OF CYBER SECURITY ATTACKS.
- Creator
- Calvert, Chad, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The integrity of network communications is constantly being challenged by more sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors of persistent and emerging threats. When collecting modern-day network traffic for intrusion detection, substantial amounts of traffic can be collected, much of which consists of relatively few attack instances as compared to normal traffic. This skewed distribution between normal and attack data can lead to high levels of class imbalance. Machine learning techniques can be used to aid in attack detection, but large levels of imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results.
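One common remedy for the majority/minority imbalance described above is random undersampling: keep every attack (minority) instance and a same-size random subset of the normal (majority) traffic. The sketch below illustrates the idea on synthetic data; it is a generic technique, not necessarily the specific sampling method used in this dissertation.

```python
import random

def undersample(instances, labels, majority_label, seed=42):
    """Keep all minority instances and an equal-size random subset
    of the majority class, returning a balanced dataset."""
    rng = random.Random(seed)
    minority = [(x, y) for x, y in zip(instances, labels) if y != majority_label]
    majority = [(x, y) for x, y in zip(instances, labels) if y == majority_label]
    kept = rng.sample(majority, k=len(minority))  # discard excess majority
    balanced = minority + kept
    rng.shuffle(balanced)
    return [x for x, _ in balanced], [y for _, y in balanced]

X = list(range(100))
y = ["normal"] * 95 + ["attack"] * 5   # 95:5 imbalance
Xb, yb = undersample(X, y, majority_label="normal")
print(yb.count("normal"), yb.count("attack"))  # 5 5
```

Undersampling trades discarded majority data for a balanced training distribution; alternatives such as minority oversampling avoid the data loss at the cost of possible overfitting.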
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013289
- Subject Headings
- Machine learning, Algorithms, Anomaly detection (Computer security), Intrusion detection systems (Computer security), Big data
- Format
- Document (PDF)
- Title
- Cytogenetic bioinformatics of chromosomal aberrations and genetic disorders: data-mining of relevant biostatistical features.
- Creator
- Karri, Jagadeshwari., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Cytogenetics is the study of the genetic considerations associated with the structural and functional aspects of cells, with reference to chromosomal inclusions. Chromosomes are structures within cells that carry the body's genetic information in the form of strings of DNA. When an atypical version of, or a structural abnormality in, one or more chromosomes prevails, it is termed a chromosomal aberration (CA), depicting certain genetic pathogeny (known as a genetic disorder). The present study assumes the presence of normal and abnormal chromosomal sets in varying proportions in the cytogenetic complex; stochastic mixture theory is then invoked to ascertain the information redundancy as a function of the fractional abnormal chromosome population. This bioinformatic measure of redundancy is indicated as a track parameter for the progression of a genetic disorder, for example, the growth of cancer. Lastly, using the results obtained, conclusions are enumerated, inferences are outlined, and directions for future studies are considered.
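A hedged sketch of the kind of computation the abstract suggests: if the fraction p of abnormal chromosomes is treated as a binary source, its information redundancy can be written R = 1 - H(p)/H_max with H_max = 1 bit. The exact mixture-theory formulation in the thesis may differ; this is only the textbook form of redundancy applied to the stated binary mixture.

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a binary source with P(abnormal) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def redundancy(p):
    """Redundancy R = 1 - H(p)/H_max, with H_max = 1 bit.
    R = 1 at p = 0 or 1 (a pure population), R = 0 at p = 0.5."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.1, 0.5):
    print(f"p={p:.1f}  R={redundancy(p):.3f}")
```

Under this reading, redundancy falls from 1 toward 0 as the abnormal fraction approaches one half, which is why it can serve as a track parameter for disorder progression.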
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3358597
- Subject Headings
- Medical genetics, Chromosome abnormalities, Cancer, Genetic aspects, Mutation (Biology), DNA damage
- Format
- Document (PDF)
- Title
- Cytogenetic of chromosomal synteny evaluation: bioinformatic applications towards screening of chromosomal aberrations/ genetic disorder.
- Creator
- Sharma, Sandhya, Neelakanta, Perambur S., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The research efforts refer to tracking homologous loci in the chromosomes of a pair of species. The purpose is to infer the extent of maximum syntenic correlation when an exhaustive set of orthologs of the species is searched. The relevant bioinformatic analyses use comparative mapping of conserved synteny via an Oxford grid. In medical diagnostic efforts, deducing such synteny correlation can help in screening for chromosomal aberrations in genetic disorder pathology. Objectively, the present study addresses: (i) the cytogenetic framework of syntenic correlation and (ii) the application of information-theoretics to determine entropy-dictated synteny across an exhaustive set of orthologs of the test pairs of species.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004331
- Subject Headings
- Cytogenetics, Genetic screening, Human chromosome abnormalities, Medical genetics, Molecular biology, Molecular diagnosis, Molecular genetics, Mutation (Biology)
- Format
- Document (PDF)
- Title
- DEVELOPMENT OF AN ALGORITHM TO GUIDE A MULTI-POLE DIAGNOSTIC CATHETER FOR IDENTIFYING THE LOCATION OF ATRIAL FIBRILLATION SOURCES.
- Creator
- Ganesan, Prasanth, Ghoraani, Behnaz, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Atrial Fibrillation (AF) is a debilitating heart rhythm disorder affecting over 2.7 million people in the US and over 30 million people worldwide annually. It is highly correlated with stroke and several other risk factors, resulting in increased mortality and morbidity. Currently, the non-pharmacological therapy used to control AF is catheter ablation, in which the tissue surrounding the pulmonary veins (PVs) is cauterized (the PV isolation, or PVI, procedure) with the aim of blocking the ectopic triggers originating from the PVs from entering the atrium. However, the success rate of PVI, with or without other anatomy-based lesions, is only 50%-60%. A major reason for the suboptimal success rate is the failure to eliminate patient-specific non-PV sources present in the left atrium (LA), namely reentry sources (a.k.a. rotor sources) and focal sources (a.k.a. point sources). Several animal and human studies have shown that locating and ablating these sources significantly improves the long-term success rate of the ablation procedure. However, current technologies for locating these sources have limitations in resolution, additional/special hardware requirements, etc. In this dissertation, the goal is to develop an efficient algorithm to locate AF reentry and focal sources using electrograms recorded from a conventionally used high-resolution multi-pole diagnostic catheter.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013310
- Subject Headings
- Atrial Fibrillation--diagnosis, Algorithm, Catheter ablation
- Format
- Document (PDF)
- Title
- DEVELOPMENT OF POINT-OF-CARE ASSAYS FOR DISEASE DIAGNOSTIC AND TREATMENT MONITORING FOR RESOURCE CONSTRAINED SETTINGS.
- Creator
- Sher, Mazhar, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
This thesis aims to address the challenges of developing cost-effective and rapid assays for the accurate counting of CD4+ T cells and the quantification of HIV-1 viral load in resource-constrained settings. The lack of such assays has severely affected people living in disease-prevalent areas. CD4+ T cell count information plays a vital role in the effective management of HIV-1 disease. Here, we present a flow-free magnetic actuation platform that uses antibody-coated magnetic beads to efficiently capture CD4+ T cells from a 30 μL drop of whole blood. On-chip cell lysate electrical impedance spectroscopy is utilized to quantify the isolated CD4 cells. The developed assay has a limit of detection of 25 cells per μL and provides accurate CD4 counts in the range of 25-800 cells per μL. The whole immunoassay, along with the enumeration process, is very rapid, providing CD4 quantification results within a 5 min time frame. The assay does not require off-chip sample preparation steps and greatly minimizes human involvement. The developed impedance-based immunoassay has the potential to significantly improve the CD4 enumeration process, especially in point-of-care (POC) settings.
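The quantification step described above, converting a measured impedance signal into a CD4 count within the assay's validated 25-800 cells per μL range, can be sketched as interpolation over a calibration curve. The calibration points below are invented for illustration; the thesis's actual calibration model may differ.

```python
import bisect

# Invented calibration points: (impedance change, CD4 cells per uL).
CALIBRATION = [(5.0, 25), (40.0, 200), (80.0, 400), (160.0, 800)]

def cd4_count(impedance_change):
    """Linearly interpolate a CD4 count from the calibration curve,
    clamped to the assay's validated 25-800 cells/uL range."""
    xs = [x for x, _ in CALIBRATION]
    ys = [y for _, y in CALIBRATION]
    if impedance_change <= xs[0]:
        return float(ys[0])
    if impedance_change >= xs[-1]:
        return float(ys[-1])
    i = bisect.bisect_right(xs, impedance_change)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (impedance_change - x0) / (x1 - x0)

print(cd4_count(60.0))  # midway between the 200 and 400 points -> 300.0
```

Clamping at the ends mirrors the reported limit of detection (25 cells per μL) and the upper bound of the validated range.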
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013495
- Subject Headings
- Point-of-care testing, Diagnostic tests, Immunoassay, HIV-1, Microfluidic devices
- Format
- Document (PDF)