Current Search: FAU » Department of Computer and Electrical Engineering and Computer Science
- Title
- A REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION.
- Creator
- Alwakeel, Ahmed M., Fernandez, Eduardo B., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Cloud computing has provided many services to potential consumers, one of these services being the provision of network functions using virtualization. Network Function Virtualization (NFV) is a new technology that aims to improve the way we consume network services. Legacy networking solutions differ in that consumers must buy and install dedicated hardware. In NFV, networks are provided to users as Software as a Service (SaaS). Implementing NFV brings many benefits, including faster development of network function modules, more rapid deployment, enhancement of the network on cloud infrastructures, and a lower overall cost of operating a network system. All these benefits can be achieved in NFV by turning physical network functions into Virtual Network Functions (VNFs). However, because this technology is still a new network paradigm, integrating the virtual environment with a legacy environment, or moving entirely to NFV, adds to the complexity of adopting an NFV system. Also, a network service may be composed of several components provided by different service providers, which further increases the complexity and heterogeneity of the system. We apply abstract architectural modeling to describe and analyze the NFV architecture. We use architectural patterns to build a flexible Reference Architecture (RA) for NFV that describes the system and how it works. RAs have proven to be a powerful way to abstract complex systems that lack precise semantics. Having an RA for NFV helps us understand the system and how it functions. It also helps us expose the possible vulnerabilities that may lead to threats against the system. In the future, this RA could be extended into a Security Reference Architecture (SRA) by adding misuse and security patterns to cover potential threats and vulnerabilities in the system. Our audience includes system designers, system architects, and security professionals interested in building a secure NFV system. (A minimal code sketch of the NFV component vocabulary follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013434
- Subject Headings
- Virtual computer systems, Cloud computing, Computer network architectures, Computer networks
- Format
- Document (PDF)
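To make the component vocabulary of the NFV reference architecture above concrete, here is a minimal Python sketch that models the main ETSI NFV elements (VNF, NFVI, MANO) as plain classes. The classes and methods are illustrative assumptions for exposition, not the dissertation's actual models.

```python
# Minimal sketch of an NFV reference-architecture component model.
# Component names (VNF, NFVI, MANO) follow the ETSI NFV framework; the
# classes and methods are illustrative assumptions, not the thesis's RA.
from dataclasses import dataclass, field


@dataclass
class VNF:
    """A Virtual Network Function, e.g. a virtual firewall or router."""
    name: str
    vendor: str


@dataclass
class NFVI:
    """NFV Infrastructure: the virtualized compute/storage/network pool."""
    hosted_vnfs: list[VNF] = field(default_factory=list)

    def instantiate(self, vnf: VNF) -> None:
        # In a full RA this would be mediated by the VIM within MANO.
        self.hosted_vnfs.append(vnf)


@dataclass
class MANO:
    """Management and Orchestration: lifecycle control of VNFs."""
    infrastructure: NFVI

    def deploy_service(self, vnfs: list[VNF]) -> None:
        # A network service chains VNFs, possibly from different
        # providers: the heterogeneity the RA is meant to abstract.
        for vnf in vnfs:
            self.infrastructure.instantiate(vnf)


mano = MANO(NFVI())
mano.deploy_service([VNF("vFirewall", "providerA"), VNF("vRouter", "providerB")])
print([v.name for v in mano.infrastructure.hosted_vnfs])
```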
- Title
- META-LEARNING AND ENSEMBLE METHODS FOR DEEP NEURAL NETWORKS.
- Creator
- Liu, Feng, Dingding, Wang, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Deep Neural Networks have been widely applied in many different applications and achieve significant improvements over classical machine learning techniques. However, training a neural network usually requires a large amount of data, which is not guaranteed in some applications such as medical image classification. To address this issue, researchers have proposed meta-learning and ensemble-learning techniques to make deep learning models more powerful. This thesis focuses on using deep learning equipped with meta learning and ensemble learning to study specific problems. We first propose a new deep-learning-based method for suggestion mining. The major challenges of suggestion mining include the cross-domain issue and the issues caused by unstructured and highly imbalanced data. To overcome these challenges, we propose to apply Random Multi-model Deep Learning (RMDL), which combines three different deep learning architectures (DNNs, RNNs, and CNNs) and automatically selects optimal hyperparameters to improve the robustness and flexibility of the model. Our experimental results on the SemEval-2019 Task 9 competition datasets demonstrate that our proposed RMDL outperforms most of the existing suggestion mining methods. (An illustrative sketch of the ensemble idea follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013481
- Subject Headings
- Neural networks (Computer science), Deep learning, Neural Networks in Applications, Machine learning--Technique
- Format
- Document (PDF)
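The RMDL approach described above can be pictured as several randomly configured models voting on each prediction. The sketch below uses stub "models" in place of real DNN/RNN/CNN training; the hyperparameter names and the stubs are assumptions for illustration only.

```python
# Sketch of the RMDL idea: train several models with randomly drawn
# hyperparameters and combine their predictions by majority vote. The
# three "models" below are random stubs, not the thesis's architectures.
import random
from collections import Counter

def random_hyperparams() -> dict:
    # RMDL samples architecture hyperparameters at random per model.
    return {"layers": random.randint(1, 4), "units": random.choice([64, 128, 256])}

def train_stub_model(kind: str, hp: dict):
    # Placeholder for training a DNN/RNN/CNN with a deep learning library.
    return lambda x: hash((kind, hp["layers"], x)) % 2  # fake binary prediction

models = [train_stub_model(kind, random_hyperparams())
          for kind in ("dnn", "rnn", "cnn")]

def predict(x) -> int:
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]   # majority vote across the ensemble

print(predict("this app could use a dark mode"))  # e.g. 1 = suggestion
```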
- Title
- DEVELOPMENT OF POINT-OF-CARE ASSAYS FOR DISEASE DIAGNOSTIC AND TREATMENT MONITORING FOR RESOURCE CONSTRAINED SETTINGS.
- Creator
- Sher, Mazhar, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- This thesis aims to address the challenges of developing cost-effective and rapid assays for the accurate counting of CD4+ T cells and quantification of HIV-1 viral load in resource-constrained settings. The lack of such assays has severely affected people living in disease-prevalent areas. CD4+ T-cell count information plays a vital role in the effective management of HIV-1 disease. Here, we present a flow-free magnetic actuation platform that uses antibody-coated magnetic beads to efficiently capture CD4+ T cells from a 30 μL drop of whole blood. On-chip cell lysate electrical impedance spectroscopy is used to quantify the isolated CD4 cells. The developed assay has a limit of detection of 25 cells per μL and provides accurate CD4 counts in the range of 25–800 cells per μL. The whole immunoassay, including the enumeration process, is very rapid and provides CD4 quantification results within a 5-minute time frame. The assay does not require off-chip sample preparation steps and largely minimizes human involvement. The developed impedance-based immunoassay has the potential to significantly improve the CD4 enumeration process, especially in point-of-care (POC) settings. (A sketch of a hypothetical impedance-to-count calibration follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013495
- Subject Headings
- Point-of-care testing, Diagnostic tests, Immunoassay, HIV-1, Microfluidic devices
- Format
- Document (PDF)
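The enumeration step above maps an impedance reading to a cell count against a calibration curve. The sketch below shows that mapping under a hypothetical linear calibration: only the 25 cells/μL detection limit and the 25–800 cells/μL validated range come from the abstract, while the slope and intercept are invented placeholders.

```python
# Illustrative mapping from an impedance-spectroscopy reading to a CD4
# count via a calibration curve. The linear model and its constants are
# hypothetical placeholders; the thesis derives its own calibration.
LOD = 25          # limit of detection reported in the abstract (cells/uL)
SLOPE = 0.8       # hypothetical: cells/uL per ohm of impedance change
INTERCEPT = 5.0   # hypothetical baseline offset

def cd4_count(delta_impedance_ohms: float) -> float | None:
    """Map a lysate impedance change to a CD4 count, or None below LOD."""
    count = SLOPE * delta_impedance_ohms + INTERCEPT
    if count < LOD:
        return None           # below the assay's limit of detection
    return min(count, 800.0)  # assay validated in the 25-800 cells/uL range

print(cd4_count(400.0))  # -> 325.0 cells/uL under the assumed calibration
```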
- Title
- CEREBROSPINAL FLUID SHUNT SYSTEM WITH AUTO-FLOW REGULATION.
- Creator
- Mutlu, Caner, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- A cerebrospinal fluid (CSF) shunt system is used for the treatment of hydrocephalus and abnormal intracranial pressure (ICP) conditions. Typically, a shunt system is placed under the skin to create a low-resistance pathway between the intracranial space and an appropriate discharge site within the body, so that excess CSF volume can exit the intracranial space. Displacing intracranial CSF volume normally lowers ICP; a CSF shunt can thereby manage ICP. In a healthy person, normal ICP is primarily maintained by the body's natural balance of CSF production and reabsorption rates. If intracranial CSF volume increases due to under-reabsorption, ICP usually rises. Abnormal ICP can be treated by discharging excess CSF volume via a shunt system. Once a shunt system is placed subcutaneously, a patient is expected to live a normal life. However, shunt failure and flow-regulation problems are major issues with current passive shunt systems, leaving patients with the serious consequences of under- or over-drainage of CSF. In this research, a shunt system is developed that is resistant to most shunt-related causes of under- or over-drainage. This is made possible by an on-board medical monitoring (diagnostic) and active flow-control mechanism. The developed shunt system has full external ventricular drainage (EVD) capability. Further miniaturization would make an implantable shunt possible. (A sketch of a threshold-based drainage loop follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013489
- Subject Headings
- Cerebrospinal Fluid Shunts
- Format
- Document (PDF)
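The active flow regulation described above can be pictured as a simple feedback loop with hysteresis: drain when ICP is high, stop once it falls back into range. The thresholds and the bang-bang policy below are illustrative assumptions, not the thesis's actual controller.

```python
# Sketch of an auto-flow-regulation loop: open a drainage valve when ICP
# exceeds an upper threshold and close it below a lower one. Thresholds
# and policy are illustrative assumptions only.
ICP_HIGH_MMHG = 20.0   # hypothetical upper threshold
ICP_LOW_MMHG = 10.0    # hypothetical lower threshold (hysteresis band)

def valve_command(icp_mmhg: float, valve_open: bool) -> bool:
    """Return the next valve state given the current ICP reading."""
    if icp_mmhg > ICP_HIGH_MMHG:
        return True            # drain excess CSF
    if icp_mmhg < ICP_LOW_MMHG:
        return False           # stop draining to avoid over-drainage
    return valve_open          # inside the band: hold the current state

state = False
for reading in [12.0, 22.5, 18.0, 9.5]:   # simulated ICP samples (mmHg)
    state = valve_command(reading, state)
    print(reading, "->", "open" if state else "closed")
```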
- Title
- COMPARISON OF PRE-TRAINED CONVOLUTIONAL NEURAL NETWORK PERFORMANCE ON GLIOMA CLASSIFICATION.
- Creator
- Andrews, Whitney Angelica Johanna, Furht, Borko, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Gliomas are an aggressive class of brain tumors that are associated with a better prognosis at a lower grade level. Effective differentiation and classification are imperative for early treatment. MRI is a popular medical imaging modality for detecting and diagnosing brain tumors due to its capability to non-invasively highlight the tumor region. With the rise of deep learning, researchers have used convolutional neural networks for classification purposes in this domain, specifically pre-trained networks to reduce computational costs. However, variations in MRI modalities, MRI machines, and image scan quality cause different network structures to yield different performance. Each pre-trained network is designed with a different structure that allows robust results under specific problem conditions. This thesis aims to fill a gap in the literature by comparing the performance of popular pre-trained networks on a controlled dataset that differs from the networks' training domain. (A sketch of the backbone-comparison workflow follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013450
- Subject Headings
- Gliomas, Neural networks (Computer science), Deep Learning, Convolutional neural networks
- Format
- Document (PDF)
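A typical way to run such a comparison is to give each pre-trained backbone the same new classification head and fine-tune each under identical conditions. The PyTorch/torchvision sketch below shows that setup for two ResNet variants; dataset loading and the training loop are omitted, and the protocol is an assumption rather than the thesis's exact one.

```python
# Sketch of comparing pre-trained CNN backbones by swapping in a new
# classification head and fine-tuning each identically. The two ResNet
# variants and the 2-class head are illustrative assumptions.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g. low-grade vs. high-grade glioma

def build(backbone: str) -> nn.Module:
    if backbone == "resnet18":
        net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    else:
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)  # replace the head
    return net

for name in ("resnet18", "resnet50"):
    net = build(name)
    n_params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
    # Fine-tune each backbone identically, then compare accuracy/F1 on
    # the same held-out MRI test split.
```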
- Title
- SMARTPHONE BASED SICKLE CELL DISEASE DETECTION AND ITS TREATMENT MONITORING FOR POINT-OF-CARE SETTINGS.
- Creator
- Ilyas, Shazia, Asghar, Waseem, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- The majority of Sickle Cell Disease (SCD) prevalence is found in Sub-Saharan Africa, where 80% of the world's population who suffer from this disease are born. Due to a lack of diagnosis and early treatment, 50–90% of these children will die before they reach the age of five. Current methods for diagnosing SCD are based on hemoglobin analysis, such as capillary electrophoresis, ion-exchange high-performance liquid chromatography, and isoelectric focusing. They require expensive laboratory equipment and are not feasible in these low-resource countries. It is therefore imperative to develop an alternative, cost-effective method for diagnosing and monitoring SCD. This thesis addresses the development and evaluation of a smartphone-based optical setup for the detection of SCD. This innovative technique can potentially provide low-cost and accurate diagnosis of SCD and improve disease management in resource-limited settings where the disease exhibits a high prevalence. This Point-of-Care (POC) device offers the potential to improve SCD diagnosis and patient care by providing a portable, cost-effective device that requires minimal training to operate and analyze.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013475
- Subject Headings
- Anemia, Sickle Cell, Point-of-Care Systems, Sickle cell anemia--Treatment, Sickle cell anemia--Diagnosis, Smartphones
- Format
- Document (PDF)
- Title
- TOWARDS A SECURITY REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION.
- Creator
- Alnaim, Abdulrahman K., Fernandez, Eduardo B., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Network Function Virtualization (NFV) is an emerging technology that transforms legacy hardware-based network infrastructure into software-based virtualized networks. Instead of using dedicated hardware and network equipment, NFV relies on cloud and virtualization technologies to deliver network services to its users. These virtualized network services are considered better solutions than hardware-based network functions because their resources can be dynamically increased upon the consumer's request. While their usefulness cannot be denied, they also have security implications. In a complex system like NFV, threats can come from a variety of domains because its infrastructure contains both hardware and virtualized entities. Also, since it relies on software, a network service in NFV can be manipulated by external entities such as third-party providers or consumers. This gives NFV a larger attack surface than traditional network infrastructure. In addition to its own threats, NFV also inherits security threats from its underlying cloud infrastructure. Therefore, to design a secure NFV system and utilize its full potential, we must have a good understanding of its underlying architecture and its possible security threats. Until now, only imprecise models of this architecture existed. We try to improve this situation by using architectural modeling to describe and analyze the threats to NFV. Architectural modeling using Patterns and Reference Architectures (RAs) applies abstraction, which helps reduce the complexity of NFV systems by defining their components at the highest level. The literature lacks attempts to apply this approach to analyzing NFV threats. We started by enumerating the possible threats that may jeopardize the NFV system. Then, we performed an analysis of the threats to identify the possible misuses that could be performed through them. These threats are realized in the form of misuse patterns that show how an attack is performed from the point of view of the attacker. Some of the most important threats are privilege escalation, virtual machine escape, and distributed denial of service. We used a reference architecture of NFV to determine where to add security mechanisms in order to mitigate the identified threats. This leads to our ultimate goal: building a Security Reference Architecture for NFV. (A sketch of a misuse-pattern data structure follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013435
- Subject Headings
- Computer network architectures--Safety measures, Virtual computer systems, Computer networks, Modeling, Computer
- Format
- Document (PDF)
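A misuse pattern can be represented as structured data: the threat, the attacker's steps, and candidate countermeasures to place in the security reference architecture. The field names below are assumptions made for illustration, not the template used in the dissertation; the VM-escape example follows the threats the abstract lists.

```python
# Sketch of a misuse pattern as structured data. Field names are
# illustrative assumptions, not the dissertation's actual template.
from dataclasses import dataclass


@dataclass(frozen=True)
class MisusePattern:
    name: str
    attacker_goal: str
    steps: tuple[str, ...]
    countermeasures: tuple[str, ...]


VM_ESCAPE = MisusePattern(
    name="Virtual machine escape",
    attacker_goal="Run code on the host from inside a guest VNF",
    steps=(
        "Rent or compromise a VNF on a shared NFVI host",
        "Exploit a hypervisor vulnerability to break isolation",
        "Access co-resident VNFs or the host itself",
    ),
    countermeasures=(
        "Hypervisor hardening and timely patching",
        "Placement policies separating tenants of different trust levels",
    ),
)

print(VM_ESCAPE.name, "->", VM_ESCAPE.countermeasures[0])
```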
- Title
- MACHINE LEARNING DEMODULATOR ARCHITECTURES FOR POWER-LIMITED COMMUNICATIONS.
- Creator
- Gorday, Paul E., Nurgun, Erdol, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- The success of deep learning has renewed interest in applying neural networks and other machine learning techniques to most fields of data and signal processing, including communications. Advances in architecture and training have led us to consider new modem architectures that allow flexibility in design, continued learning in the field, and improved waveform coding. This dissertation examines neural network architectures and training methods suitable for demodulation in power-limited communication systems, such as those found in wireless sensor networks. Such networks will provide greater connection to the world around us and are expected to contain orders of magnitude more devices than cellular networks. A number of standard and proprietary protocols span this space, with modulations such as frequency-shift keying (FSK), Gaussian FSK (GFSK), minimum-shift keying (MSK), on-off keying (OOK), and M-ary orthogonal modulation (M-orth). These modulations enable low-cost radio hardware with efficient nonlinear amplification in the transmitter and noncoherent demodulation in the receiver. (A sketch of classical noncoherent FSK demodulation follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013511
- Subject Headings
- Deep learning, Machine learning--Technique, Demodulators, Wireless sensor networks, Computer network architectures
- Format
- Document (PDF)
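For context, the classical noncoherent binary-FSK receiver correlates each symbol with quadrature references at the two tone frequencies and picks the tone with more energy; a learned demodulator would replace this decision rule with a neural network. The parameters below (sample rate, tones, symbol length) are illustrative.

```python
# Classical noncoherent binary-FSK demodulation: compare phase-
# independent tone energies and pick the larger. Parameters are
# illustrative, not taken from the dissertation.
import numpy as np

FS = 8000.0               # sample rate (Hz)
F0, F1 = 1000.0, 2000.0   # tone frequencies for bits 0 and 1
N = 80                    # samples per symbol

def tone_energy(x: np.ndarray, f: float) -> float:
    t = np.arange(len(x)) / FS
    i = np.dot(x, np.cos(2 * np.pi * f * t))   # in-phase correlation
    q = np.dot(x, np.sin(2 * np.pi * f * t))   # quadrature correlation
    return i * i + q * q    # energy is independent of carrier phase

def demodulate(symbol: np.ndarray) -> int:
    return int(tone_energy(symbol, F1) > tone_energy(symbol, F0))

# Transmit bit 1 with a random carrier phase plus noise, then demodulate.
rng = np.random.default_rng(0)
t = np.arange(N) / FS
rx = np.cos(2 * np.pi * F1 * t + rng.uniform(0, 2 * np.pi)) \
     + 0.3 * rng.standard_normal(N)
print(demodulate(rx))   # -> 1
```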
- Title
- MULTIFACETED EMBEDDING LEARNING FOR NETWORKED DATA AND SYSTEMS.
- Creator
- Shi, Min, Tang, Yufei, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Network embedding, or representation learning, is important for analyzing many real-world applications and systems, e.g., social networks, citation networks, and communication networks. It aims to learn low-dimensional vector representations of nodes that preserve graph structure (e.g., link relations) and content (e.g., text) information. The derived node representations can be directly applied in many downstream applications, including node classification, clustering, and visualization. In addition to the complex network structure, nodes may have rich non-structural information such as labels and content. Therefore, structure, labels, and content constitute different aspects of the entire network system that reflect node similarities from multiple complementary facets. This thesis focuses on multifaceted network embedding learning, which aims to efficiently incorporate distinct aspects of information, such as node labels and node content, for cooperative low-dimensional representation learning together with node topology. (A sketch of a structure-only embedding baseline follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013516
- Subject Headings
- Embedded computer systems, Neural networks (Computer science), Network embedding, Machine learning
- Format
- Document (PDF)
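As a structure-only baseline for the embedding problem above, one can factorize the adjacency matrix with a truncated SVD so that structurally similar nodes receive nearby vectors. Multifaceted methods such as those in this thesis would additionally fuse content and label information; this toy sketch uses link structure alone.

```python
# Baseline structural node embedding: truncated SVD of the adjacency
# matrix. Toy graph and dimensionality chosen for illustration.
import numpy as np

# Toy undirected graph: two triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
n, dim = 6, 2
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

U, S, _ = np.linalg.svd(A)
embedding = U[:, :dim] * np.sqrt(S[:dim])   # one dim-d vector per node

d_same = np.linalg.norm(embedding[0] - embedding[1])    # same triangle
d_cross = np.linalg.norm(embedding[0] - embedding[5])   # across the bridge
print(f"d(0,1)={d_same:.2f}  d(0,5)={d_cross:.2f}")
# Nodes sharing a cluster typically land closer than nodes across it.
```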
- Title
- NEURALSYNTH - A NEURAL NETWORK TO FPGA COMPILATION FRAMEWORK FOR RUNTIME EVALUATION.
- Creator
- Lanham, Grant Jr, Hallstrom, Jason O., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Artificial neural networks are increasing in power, with attendant increases in demand for efficient processing. Performance is limited by clock speed and the degree of parallelization available through multi-core processors and GPUs. With a design tailored to a specific network, a field-programmable gate array (FPGA) can be used to minimize latency without the need for geographically distributed computing. However, the task of programming an FPGA is outside the realm of most data scientists. There are tools to program FPGAs from a high-level description of a network, but there is no unified interface for programmers across these tools. In this thesis, I present the design and implementation of NeuralSynth, a prototype Python framework which aims to bridge the gap between data scientists and FPGA programming for neural networks. My method relies on creating an extensible Python framework that automates programming and interaction with an FPGA. The implementation includes a digital design for the FPGA that is completed by the Python framework. Programming and interacting with the FPGA does not require leaving the Python environment. The extensible approach allows multiple implementations, resulting in a similar workflow for each implementation. For evaluation, I compare the results of my implementation with a known neural network framework. (A sketch of a unified backend interface follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013533
- Subject Headings
- Artificial neural networks, Neural networks (Computer science)--Design, Field programmable gate arrays, Python (Computer program language)
- Format
- Document (PDF)
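The "no unified interface" problem described above is commonly solved with a single abstract backend API that each FPGA toolchain implements, so the user never leaves Python. The class and method names below are assumptions for illustration, not NeuralSynth's actual API.

```python
# Sketch of a unified FPGA-backend contract: each toolchain implements
# the same Python interface. Names are illustrative assumptions.
from abc import ABC, abstractmethod


class FpgaBackend(ABC):
    """Common contract for compiling and running a network on an FPGA."""

    @abstractmethod
    def compile(self, layer_sizes: list[int]) -> str:
        """Generate a bitstream (or HLS project) for the given topology."""

    @abstractmethod
    def run(self, bitstream: str, inputs: list[float]) -> list[float]:
        """Program the device and evaluate one input vector."""


class SimulatedBackend(FpgaBackend):
    """Stand-in backend so the workflow is runnable without hardware."""

    def compile(self, layer_sizes: list[int]) -> str:
        return f"sim-bitstream-{'x'.join(map(str, layer_sizes))}"

    def run(self, bitstream: str, inputs: list[float]) -> list[float]:
        return [sum(inputs)]    # placeholder for the synthesized network


backend: FpgaBackend = SimulatedBackend()
bs = backend.compile([4, 8, 1])
print(backend.run(bs, [0.1, 0.2, 0.3, 0.4]))
```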
- Title
- HPCC based Platform for COPD Readmission Risk Analysis with implementation of Dimensionality reduction and balancing techniques.
- Creator
- Jain, Piyush, Agarwal, Ankur, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Hospital readmission rates are considered an important indicator of quality of care because readmissions may be a consequence of actions of commission or omission made during the initial hospitalization of the patient, or a consequence of a poorly managed transition of the patient back into the community. The negative impact on patient quality of life and the huge burden on the healthcare system have made reducing hospital readmissions a central goal of healthcare delivery and payment reform efforts. In this study, we propose a framework for how readmission analysis and other healthcare models could be deployed in the real world, along with a machine-learning-based solution that uses patients' discharge summaries as the dataset to train and test the model. Current systems neglect a very important aspect of the readmission problem: Big Data. This study therefore also addresses the Big Data aspects of solutions that can be deployed in the field for real-world use. We use the HPCC compute platform, which provides a distributed parallel-programming environment to create, run, and manage applications involving large amounts of data. We also propose feature engineering and data balancing techniques that have been shown to greatly enhance machine learning model performance; this was achieved by reducing the dimensionality of the data and fixing the imbalance in the dataset. The system presented in this study provides real-world, machine-learning-based predictive modeling for reducing readmissions, and it could be templatized for other diseases. (A sketch of the balancing and feature-selection steps follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013560
- Subject Headings
- Machine learning, Big data, Patient Readmission, Hospitals--Admission and discharge--Data processing, High performance computing
- Format
- Document (PDF)
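The two preprocessing steps the study highlights, dimensionality reduction and class balancing, look roughly like the following on synthetic data. scikit-learn and imbalanced-learn stand in here for the HPCC/ECL pipeline actually used.

```python
# Dimensionality reduction + class balancing on a synthetic stand-in for
# a discharge-summary feature matrix. Library pipeline is illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from imblearn.under_sampling import RandomUnderSampler

# Imbalanced synthetic dataset: ~10% positive (readmitted) class.
X, y = make_classification(n_samples=2000, n_features=300, n_informative=20,
                           weights=[0.9, 0.1], random_state=0)

X_red = SelectKBest(f_classif, k=20).fit_transform(X, y)     # reduce dimensionality
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_red, y)  # fix imbalance

clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("balanced training size:", X_bal.shape,
      "accuracy:", round(clf.score(X_red, y), 3))
```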
- Title
- SPATIAL NETWORK BIG DATA APPROACHES TO EMERGENCY MANAGEMENT INFORMATION SYSTEMS.
- Creator
- Herschelman, Roxana M., Yang, KwangSoo, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Emergency Management Information Systems (EMIS) are defined as a set of tools that aid decision-makers in risk assessment and response for significant multi-hazard threats and disasters. Over the past three decades, EMIS have grown in importance as a major component for understanding, managing, and governing transportation-related systems. To increase resilience against potential threats, the main goal of EMIS is to make timely use of spatial and network datasets about (1) locations of hazard areas, (2) shelters and resources, and (3) how to respond to emergencies. The main concern about these datasets has always been the very large size, variety, and update rate required to ensure the timely delivery of useful emergency information and response during disastrous events. Another key issue is that the information should be concise and easy to understand, yet at the same time very descriptive and useful in the case of an emergency or disaster. Advancement in EMIS is urgently needed to develop fundamental data processing components for advanced spatial network queries that clearly and succinctly deliver critical information in emergencies. To address these challenges, we investigate Spatial Network Database Systems and study three challenging Transportation Resilience problems: producing large-scale evacuation plans, identifying major traffic patterns during emergency evacuations, and identifying the areas in greatest need of resources. (A sketch of a shortest-path shelter query follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013576
- Subject Headings
- Emergency management, Big data, Emergency management--Information technology
- Format
- Document (PDF)
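A basic building block of such spatial network queries is a shortest path from a population site to a shelter over a weighted road graph. The toy Dijkstra sketch below stands in for the large-scale evacuation planning the dissertation studies.

```python
# Dijkstra shortest path on a toy road network; a stand-in for the
# large-scale evacuation-routing queries described above.
import heapq

def dijkstra(graph: dict, src: str, dst: str) -> tuple[float, list[str]]:
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Toy road network; edge weights are travel minutes.
roads = {"home": {"junctionA": 5, "junctionB": 9},
         "junctionA": {"shelter": 7},
         "junctionB": {"shelter": 2}}
print(dijkstra(roads, "home", "shelter"))  # -> (11.0, ['home', 'junctionB', 'shelter'])
```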
- Title
- ASSESSING METHODS AND TOOLS TO IMPROVE REPORTING, INCREASE TRANSPARENCY, AND REDUCE FAILURES IN MACHINE LEARNING APPLICATIONS IN HEALTHCARE.
- Creator
- Garbin, Christian, Marques, Oge, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Artificial intelligence (AI) had a few false starts: the AI winters of the 1970s and 1980s. We are now in what looks like an AI summer. There are many useful applications of AI in the field, but there are still unfulfilled promises and outright failures. From self-driving cars that work only in constrained cases, to medical image analysis products that were to replace radiologists but never did, we still struggle to translate successful research into successful real-world applications. The software engineering community has accumulated a large body of knowledge over the decades on how to develop, release, and maintain products. AI products, being software products, benefit from some of that accumulated knowledge, but not all of it. AI products diverge from traditional software products in fundamental ways: their main component is not a specific piece of code written for a specific purpose, but a generic piece of code, a model, customized by a training process driven by hyperparameters and a dataset. Datasets are usually large, and models are opaque. We cannot directly inspect them as we can inspect the code of traditional software products. We need other methods to detect failures in AI products.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013580
- Subject Headings
- Machine learning, Artificial intelligence, Healthcare
- Format
- Document (PDF)
- Title
- CONNECTED MULTI-DOMAIN AUTONOMY AND ARTIFICIAL INTELLIGENCE: AUTONOMOUS LOCALIZATION, NETWORKING, AND DATA CONFORMITY EVALUATION.
- Creator
- Tountas, Konstantinos, Pados, Dimitris, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- The objective of this dissertation work is the development of a solid theoretical and algorithmic framework for three of the most important aspects of autonomous/artificial-intelligence (AI) systems, namely data quality assurance, localization, and communications. In the era of AI and machine learning (ML), data reign supreme. During learning tasks, we need to ensure that the training dataset is correct and complete. During operation, faulty data need to be discovered and dealt with to protect from potentially catastrophic system failures. With our research in data quality assurance, we develop new mathematical theory and algorithms for outlier-resistant decomposition of high-dimensional matrices (tensors) based on L1-norm principal-component analysis (PCA). L1-norm PCA has been proven to be resistant to irregular data points and will drive critical real-world AI learning and autonomous systems operations in the future. At the same time, one of the most important tasks of autonomous systems is self-localization. In GPS-deprived environments, localization becomes a fundamental technical problem. State-of-the-art solutions frequently utilize power-hungry or expensive architectures, making them difficult to deploy. In this dissertation work, we develop and implement a robust, variable-precision localization technique for autonomous systems based on direction-of-arrival (DoA) estimation theory, which is cost- and power-efficient. Finally, communication between autonomous systems is paramount for mission success in many applications. In the era of 5G and beyond, smart spectrum utilization is key. In this work, we develop physical (PHY) and medium-access-control (MAC) layer techniques that autonomously optimize spectrum usage and minimize intra- and inter-network interference. (A sketch of exact rank-1 L1-norm PCA follows this record.)
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013617
- Subject Headings
- Artificial intelligence, Machine learning, Tensor algebra
- Format
- Document (PDF)
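For small problems, rank-1 L1-norm PCA can be solved exactly by searching over binary sign vectors: per the L1-PCA literature the abstract builds on, the L1 principal component is q = X b* / ||X b*||, where b* maximizes ||X b|| over all b in {-1,+1}^N. The sketch below applies this exhaustively to a toy matrix; the dissertation develops algorithms for realistic sizes.

```python
# Exact rank-1 L1-norm PCA by exhaustive search over sign vectors.
# Feasible only for small N (2^N candidates); toy sizes chosen here.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 8))      # D=3 features, N=8 samples

best_b, best_val = None, -np.inf
for signs in itertools.product((-1.0, 1.0), repeat=X.shape[1]):
    b = np.asarray(signs)
    val = np.linalg.norm(X @ b)      # ||X b||_2, maximized over signs
    if val > best_val:
        best_b, best_val = b, val

q_l1 = X @ best_b / np.linalg.norm(X @ best_b)   # L1 principal component
q_l2 = np.linalg.svd(X)[0][:, 0]                 # ordinary (L2) component
print("L1 PC:", q_l1.round(2))
print("L2 PC:", q_l2.round(2))
# On clean data the two nearly coincide (up to sign); under gross
# outliers the L1 component degrades far more gracefully than the L2 one.
```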
- Title
- NETWORK FEATURE ENGINEERING AND DATA SCIENCE ANALYTICS FOR CYBER THREAT INTELLIGENCE.
- Creator
- Wheelus, Charles, Zhu, Xingquan, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- While it is evident that network services continue to play an ever-increasing role in our daily lives, it is less evident that our information infrastructure requires a concerted, well-conceived, and fastidiously executed strategy to remain viable. Government agencies, Non-Governmental Organizations ("NGOs"), and private organizations are all targets for malicious online activity. Security has deservedly become a serious focus for organizations that seek to assume a more proactive posture in order to deal with the many facets of securing their infrastructure. At the same time, the discipline of data science has rapidly grown into a prominent role, as once purely theoretical machine learning algorithms have become practical to implement. This is especially noteworthy, as principles that now fall neatly into the field of data science have been contemplated for quite some time, some as much as two hundred years ago. Visionaries like Thomas Bayes [18], Andrey Andreyevich Markov [65], Frank Rosenblatt [88], and so many others made incredible contributions to the field long before the impact of Moore's law [92] would make such theoretical work commonplace for practical use, giving rise to what has come to be known as "Data Science".
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013620
- Subject Headings
- Cyber security, Computer security, Information infrastructure, Predictive analytics
- Format
- Document (PDF)
- Title
- PREDICTING MELANOMA RISK FROM ELECTRONIC HEALTH RECORDS WITH MACHINE LEARNING TECHNIQUES.
- Creator
- Richter, Aaron N., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Melanoma is one of the fastest-growing cancers in the world and can affect patients earlier in life than most other cancers. Therefore, it is imperative to be able to identify patients at high risk for melanoma and enroll them in screening programs to detect the cancer early. Electronic health records collect an enormous amount of data about real-world patient encounters, treatments, and outcomes. This data can be mined to increase our understanding of melanoma as well as to build personalized models to predict risk of developing the cancer. Cancer risk models built from structured clinical data are limited in current research, with most studies involving just a few variables from institutional databases or registries. This dissertation presents data processing and machine learning approaches to build melanoma risk models from a large database of de-identified electronic health records. The database contains consistently captured structured data, enabling the extraction of hundreds of thousands of data points each from millions of patient records. Several experiments are performed to build effective models, particularly to predict sentinel lymph node metastasis in known melanoma patients and to predict individual risk of developing melanoma. Data for these models suffer from high dimensionality and class imbalance. Thus, classifiers such as logistic regression, support vector machines, random forest, and XGBoost are combined with advanced modeling techniques such as feature selection and data sampling. Risk factors are evaluated using regression model weights and decision trees, while personalized predictions are provided through random forest decomposition and Shapley additive explanations. Random undersampling on the melanoma risk dataset shows that many majority samples can be removed without a decrease in model performance. To determine how much data is truly needed, we explore learning curve approximation methods on the melanoma data and three publicly available large-scale biomedical datasets. We apply an inverse power law model as well as introduce a novel semi-supervised curve creation method that utilizes a small amount of labeled data. (A sketch of the inverse-power-law curve fit follows this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013342
- Subject Headings
- Melanoma, Electronic Health Records, Machine learning--Technique, Big Data
- Format
- Document (PDF)
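The inverse power law mentioned above models error as a function of training-set size, error(n) ≈ a·n^(-b) + c, and is fitted to a few measured points to extrapolate how much data is needed. The sample points below are synthetic.

```python
# Fitting an inverse power-law learning curve to (training size, error)
# points in order to extrapolate data requirements. Points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(n, a, b, c):
    return a * n ** (-b) + c

train_sizes = np.array([500, 1000, 2000, 4000, 8000], dtype=float)
error_rates = np.array([0.31, 0.24, 0.19, 0.165, 0.15])   # synthetic

params, _ = curve_fit(inv_power_law, train_sizes, error_rates,
                      p0=(1.0, 0.5, 0.1))
a, b, c = params
print(f"fitted: error(n) = {a:.2f} * n^(-{b:.2f}) + {c:.2f}")
print("predicted error at n=32000:",
      round(float(inv_power_law(32000, *params)), 3))
```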
- Title
- INVESTIGATING MACHINE LEARNING ALGORITHMS WITH IMBALANCED BIG DATA.
- Creator
- Hasanin, Tawfiq, Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Recent technological developments have engendered an expeditious production of big data and also enabled machine learning algorithms to produce high-performance models from such data. Nonetheless, class imbalance (in binary classification) between the majority and minority classes in big data can skew the predictive performance of classification algorithms toward the majority (negative) class, whereas the minority (positive) class usually holds greater value for decision makers. Such bias may lead to adverse consequences, some of them even life-threatening, since false negatives are generally costlier than false positives. The size of the minority class can vary from fair to extraordinarily small, which can lead to different performance scores for machine learning algorithms. Class imbalance is a well-studied area for traditional data, i.e., not big data. However, there is limited research focusing on both rarity and severe class imbalance in big data. (A sketch of why accuracy misleads under imbalance follows this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013316
- Subject Headings
- Algorithms, Machine learning, Big data--Data processing, Big data
- Format
- Document (PDF)
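A one-screen illustration of the core problem above: under severe imbalance, a classifier that always predicts the majority class looks excellent by accuracy while catching zero positives, which is why rate-based metrics are preferred. The labels below are synthetic.

```python
# Why accuracy misleads under severe class imbalance: an "always
# negative" model scores ~99% accuracy yet finds zero positives.
import numpy as np

rng = np.random.default_rng(0)
y_true = (rng.random(100_000) < 0.01).astype(int)   # ~1% positive class
y_pred = np.zeros_like(y_true)                      # always-negative model

accuracy = (y_pred == y_true).mean()
tpr = y_pred[y_true == 1].mean()                    # true-positive rate
tnr = 1 - y_pred[y_true == 0].mean()                # true-negative rate
gmean = np.sqrt(tpr * tnr)                          # geometric mean of rates
print(f"accuracy={accuracy:.3f}  TPR={tpr:.3f}  G-mean={gmean:.3f}")
```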
- Title
- Models and Implementations of Online Laboratories; A Definition of a Standard Architecture to Integrate Distributed Remote Experiments.
- Creator
- Zapata Rivera, Luis Felipe, Larrondo Petrie, Maria M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Hands-on laboratory experiences are a key part of all engineering programs. Currently there is high demand for online engineering courses, but offering lab experiences online still remains a great challenge. Remote laboratories have been under development for more than 20 years and are part of a bigger category, called online laboratories, which also includes virtual laboratories. Development of remote laboratories in academic settings has been held back by the lack of standardization of technology, processes, and operation, and of their integration with formal educational environments. Remote laboratories can be used in educational settings for a variety of reasons: for instance, when the equipment is not available in the physical laboratory; when the physical laboratory space available is not sufficient to either set up the experiments or permit access to all on-site students in the course; or when the teacher needs to provide online laboratory experiences to students taking courses via distance education. This dissertation proposes a new approach for the development and deployment of online laboratories over online platforms. The research activities performed include: the design and implementation of an architecture of a system for Smart Adaptive Remote Laboratories (SARL) integrated with educational environments to improve the remote laboratory user experience through the implementation of a modular architecture and the use of context information about the users and laboratory activities; the design pattern and implementation for the Remote Laboratory Management System (RLMS); the definition and implementation of an xAPI-based activity tracking system for online laboratories with support for both centralized and distributed architectures of Learning Record Stores (LRS); the definition of a Smart Laboratory Learning Object (SLLO) capable of being integrated in different educational environments, including the implementation of a Lab Authoring module; and finally, the definition of a reliability model to detect and report failures and possible causes and countermeasures applying rule-based systems. The proposed architecture complies with the recently approved IEEE 1876 Standard for Networked Smart Learning for Online Laboratories and supports virtual, remote, hybrid, and mobile laboratories. A full set of low-cost online laboratory experiment stations was designed and implemented to support the Introduction to Logic Design course, providing true hands-on lab experience to students through a low-cost, student-built mobile laboratory platform connected via USB to the SARL system. The SARL prototype has been successfully integrated with a Virtual Learning Environment (VLE), and a variety of configurations that can support the privacy and security requirements of different stakeholders have been tested. The prototype online laboratory experiments developed have contributed to and been featured in the IEEE 1876 standard, and have been integrated into an Industry Connections Actionable Data Book (ADB) that was featured at the Frankfurt Book Fair in 2017. SARL is being developed as the infrastructure to support a Latin American and Caribbean network of online laboratories. (A sketch of an xAPI lab-activity statement follows this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013282
- Subject Headings
- Remote laboratories, Online laboratories, Engineering Education, Software architecture
- Format
- Document (PDF)
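An xAPI-based tracker like the one described emits JSON "statements" (actor, verb, object) to a Learning Record Store. The sketch below builds one such statement for a completed lab experiment; the activity IRI, actor, and score are illustrative placeholders, though the verb IRI is a standard ADL one.

```python
# Building an xAPI statement for a completed lab activity, of the kind a
# SARL-style tracker would POST to an LRS. IDs are illustrative.
import json
from datetime import datetime, timezone

statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Student A"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.edu/labs/logic-design/experiment-3",
        "definition": {"name": {"en-US": "Logic Design Experiment 3"}},
    },
    "result": {"score": {"scaled": 0.9}, "completion": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
# POST this JSON to the LRS's /statements endpoint (centralized or
# distributed, per the tracking architecture described above).
print(json.dumps(statement, indent=2))
```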
- Title
- MODELING AND SECURITY IN CLOUD AND RELATED ECOSYSTEMS.
- Creator
- Syed, Madiha Haider, Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Software systems increasingly interact with each other, forming ecosystems. Cloud is one such ecosystem that has evolved and enabled other technologies like IoT and containers. Such systems are very complex and heterogeneous because their components can have diverse origins, functions, security policies, and communication protocols, which makes it difficult to comprehend, utilize, and consequently secure them. Abstract architectural models can be used to handle this complexity and heterogeneity, but there is a lack of work on precise, implementation/vendor-neutral, and holistic models that represent ecosystem components and their mutual interactions. We attempted to find similarities in systems and generalize them to create abstract models for adding security. We represented the ecosystem as a Reference Architecture (RA) and the ecosystem units as patterns. We started with a pattern diagram that showed all the components involved along with their mutual interactions and dependencies. We added components to the existing Cloud Security Reference Architecture (SRA). Containers, being a relatively new virtualization technology, did not have a precise and holistic reference architecture. We built a partial RA for containers by identifying and modeling the components of the ecosystem. Container security issues were identified from the literature as well as from analysis of our patterns. We added the corresponding security countermeasures to the container RA as security patterns to build a container SRA. Finally, using the container SRA as an example, we demonstrated an approach for RA validation. We also built a composite pattern for fog computing, an intermediate platform between the Cloud and IoT devices. We represented an attack, Distributed Denial of Service (DDoS) using IoT devices, in the form of a misuse pattern that explains it from the attacker's perspective. We found this model-based approach useful for building RAs in a flexible and incremental way, as components can be identified and added as the ecosystems expand. This gave us better insight for analyzing security issues across the boundaries of individual ecosystems. A unified, precise, and holistic view of the system is not just useful for adding or evaluating security; this approach can also be used to ensure compliance, privacy, safety, reliability, and/or governance for cloud and related ecosystems. This is the first work we know of in which patterns and RAs are used to represent ecosystems and analyze their security.
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013345
- Subject Headings
- Software ecosystems, Cloud computing--Security measures, Internet of things, Software architecture--Security measures, Computer modeling
- Format
- Document (PDF)
- Title
- THE EFFECT OF LANE CHANGE VOLATILITY ON REAL TIME ACCIDENT PREDICTION.
- Creator
- Tesheira, Hamilton, Mahgoub, Imad, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- According to a March 2019 publication by the National Highway Traffic Safety Administration (NHTSA), 62% of all police-reported accidents in the United States between 2011 and 2015 could have been prevented or mitigated with the use of five groups of collision avoidance technologies in passenger vehicles: (1) forward collision prevention, (2) lane keeping, (3) blind zone detection, (4) forward pedestrian impact, and (5) backing collision avoidance. These technologies work mostly by reducing or removing the risks involved in a lane change maneuver; yet the Broward transportation management system does not directly address these risks. We therefore propose a machine-learning-based approach to real-time accident prediction for Broward I-95 using the C5.1 decision tree and the multi-layer perceptron neural network. To do this, we design a new measure of volatility, Lane Change Volatility (LCV), which measures the potential for a lane change in a segment of the highway. Our research found that LCV is an important predictor of accidents in an exit zone, and when considered in tandem with current system variables, such as lighting conditions, the machine learning classifiers are able to predict accidents in the exit zone with an accuracy rate of over 98%. (A sketch of an LCV-based classifier follows this record.)
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013420
- Subject Headings
- Traffic accidents, Traffic accidents--Forecasting, Automobile driving--Lane changing, Perceptrons, Neural networks (Computer science)
- Format
- Document (PDF)
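One way to picture the pipeline above: compute an LCV feature per highway segment, combine it with context variables such as lighting, and train a classifier. The LCV definition below (standard deviation of lane-change counts across time windows) and the synthetic labels are stand-in assumptions, not the thesis's definitions.

```python
# Sketch: derive a lane-change-volatility feature per segment and feed
# it, with a lighting flag, to a decision tree. Data are synthetic and
# the LCV formula is an illustrative assumption.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def lcv(lane_changes_per_window: list[int]) -> float:
    return float(np.std(lane_changes_per_window))

rng = np.random.default_rng(0)
# Synthetic training set: [LCV, night(0/1)] -> accident within horizon?
X = rng.random((500, 2)) * [10.0, 1.0]
X[:, 1] = (X[:, 1] > 0.5).astype(float)
y = ((X[:, 0] > 6.0) & (X[:, 1] == 1.0)).astype(int)   # toy ground truth

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
segment = [[lcv([0, 14, 1, 13, 0]), 1.0]]   # volatile segment at night
print("accident risk:", clf.predict_proba(segment)[0][1])
```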