- Title
- MACHINE LEARNING ALGORITHMS FOR THE DETECTION AND ANALYSIS OF WEB ATTACKS.
- Creator
- Zuech, Richard, Khoshgoftaar, Taghi M., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
The Internet has provided humanity with many great benefits, but it has also introduced new risks and dangers. E-commerce and other web portals have become large industries with big data. Criminals and other bad actors constantly seek to exploit these web properties through web attacks. Properly detecting these web attacks is a crucial component of the overall cybersecurity landscape. Machine learning is one tool that can assist in detecting web attacks. However, properly using machine learning to detect web attacks does not come without its challenges. Classification algorithms can have difficulty with severe levels of class imbalance, which occurs when one class label disproportionately outnumbers another. For example, in cybersecurity, it is common for the negative (normal) label to severely outnumber the positive (attack) label. Another difficulty encountered in machine learning is that models can be complex, making it difficult for even subject matter experts to truly understand a model's detection process. Moreover, it is important for practitioners to determine which input features to include or exclude in their models for optimal detection performance. This dissertation studies machine learning algorithms for detecting web attacks with big data. Severe class imbalance is a common problem in cybersecurity, and mainstream machine learning research does not sufficiently consider it in the context of web attacks. Our research first investigates the problems associated with severe class imbalance and rarity. Rarity is an extreme form of class imbalance in which the positive class has an extremely low instance count, making it difficult for classifiers to discriminate. In reducing imbalance, we demonstrate that random undersampling can effectively mitigate the class imbalance and rarity problems associated with web attacks.
Furthermore, our research introduces a novel feature popularity technique which produces easier-to-understand models by including only the fewer, most popular features. Feature popularity granted us new insights into the web attack detection process, even though we had already studied it intensely. Even so, we proceed cautiously in selecting the best input features, as we determined that the "most important" Destination Port feature might be contaminated by lopsided traffic distributions.
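The random undersampling that the abstract credits with mitigating imbalance can be sketched in a few lines. The 1:1 target ratio and binary 0/1 labels below are illustrative choices, not details taken from the dissertation:

```python
import random

def random_undersample(samples, labels, ratio=1.0, seed=42):
    """Balance a binary dataset by discarding majority-class (label 0)
    samples until at most ratio * minority_count of them remain."""
    rng = random.Random(seed)
    minority = [(s, l) for s, l in zip(samples, labels) if l == 1]
    majority = [(s, l) for s, l in zip(samples, labels) if l == 0]
    keep = min(len(majority), int(ratio * len(minority)))
    balanced = minority + rng.sample(majority, keep)
    rng.shuffle(balanced)
    xs, ys = zip(*balanced)
    return list(xs), list(ys)
```

With 5 attack instances against 95 normal ones, the sketch discards 90 normal instances, leaving a balanced 5:5 training set.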
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013823
- Subject Headings
- Machine learning, Computer security, Algorithms, Cybersecurity
- Format
- Document (PDF)
- Title
- Context-based Image Concept Detection and Annotation.
- Creator
- Zolghadr, Esfandiar, Furht, Borko, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Scene understanding attempts to produce a textual description of the visible and latent concepts in an image to describe the real meaning of the scene. Concepts are objects, events, or relations depicted in an image. To recognize concepts, the decisions of the object detection algorithm must be further enhanced from visual similarity to semantic compatibility. Semantically relevant concepts convey the most consistent meaning of the scene. Object detectors analyze visual properties (e.g., pixel intensities, texture, color gradient) of sub-regions of an image to identify objects. The initially assigned object names must be further examined to ensure they are compatible with each other and with the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial and semantic priors) and object-to-scene constraints as background information, a concept classifier predicts the most semantically consistent set of names for the discovered objects. The additional background information that describes concepts is called context. In this dissertation, a framework for context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the results of underlying feature-based object detectors and produce the most semantically compatible concepts. In addition to their inability to capture semantic dependencies, object detectors suffer from the high dimensionality of the feature space. Variances in the image (i.e., quality, pose, articulation, illumination, and occlusion) can also result in low-quality visual features that impact the accuracy of the detected concepts. The object detectors used to build the context-based framework experiments in this study are based on state-of-the-art generative and discriminative graphical models. The relationships between model variables can be easily described, and their dependencies precisely characterized, using these graphical representations.
The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective in reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allow efficient and precise construction of the model by specifying and including only the cases that are related to and influence it. The dataset used for training and evaluation is MIT SUN397. The results of the experiments show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts.
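The core idea of refining a detector's visually best labels with co-occurrence context can be illustrated with a toy greedy pass. The label names, scores, and priors below are invented for illustration, and this is far simpler than the LDA- and CRF-based models the dissertation actually develops:

```python
def refine_labels(candidates, cooccur):
    """candidates: one dict {label: visual_score} per image region.
    cooccur: dict {(label_a, label_b): co-occurrence prior in [0, 1]}.
    Greedy context refinement: start from the visually best label per
    region, then re-pick each label so it is also compatible with the
    current labels of the other regions."""
    labels = [max(c, key=c.get) for c in candidates]
    for i, cands in enumerate(candidates):
        def score(lab):
            prior = 1.0
            for j, other in enumerate(labels):
                if j == i:
                    continue
                # Look the pair up in either order; 0.5 = neutral prior.
                prior *= cooccur.get((lab, other),
                                     cooccur.get((other, lab), 0.5))
            return cands[lab] * prior
        labels[i] = max(cands, key=score)
    return labels
```

In the test below, "car" wins on visual score alone, but the strong "boat"/"water" co-occurrence prior overturns it, which is exactly the kind of semantic correction the abstract describes.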
- Date Issued
- 2016
- PURL
- http://purl.flvc.org/fau/fd/FA00004745
- Subject Headings
- Computer vision--Mathematical models., Pattern recognition systems., Information visualization., Natural language processing (Computer science), Multimodal user interfaces (Computer systems), Latent structure analysis., Expert systems (Computer science)
- Format
- Document (PDF)
- Title
- Application of wavelets to image and video coding.
- Creator
- Zolghadr, Esfandiar, Florida Atlantic University, Furht, Borko, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In this thesis we applied wavelet transforms to image and video coding. First, a survey of various wavelets and their features is presented, including continuous, discrete, and orthogonal wavelets. The theories and concepts underlying one- and two-dimensional wavelet transforms are introduced and compared to the Fourier transform and sub-band coding. The core of the thesis is the implementation of two-dimensional and three-dimensional codec architectures and their application to coding images and videos, respectively. We studied the performance of the wavelet codec by comparing it to DCT and JPEG coding techniques, applying these techniques to the compression of a variety of test images and videos. We also analyzed the adaptability and scalability of the 2D and 3D codecs. Experimental results presented in the thesis illustrate the superior performance of wavelets compared to other coding techniques.
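The pairwise averaging and differencing at the heart of the discrete wavelet transforms surveyed here can be shown with one level of the 1-D Haar transform. This is generic textbook material, not the thesis's codec:

```python
import math

def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise sums
    (approximation) and pairwise differences (detail), scaled by 1/sqrt(2)
    so the transform is orthonormal and preserves signal energy."""
    assert len(signal) % 2 == 0, "signal length must be even"
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail
```

On a locally smooth signal the detail coefficients are near zero, which is why quantizing them cheaply is effective for compression.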
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13050
- Subject Headings
- Wavelets (Mathematics), Image compression, JPEG (Image coding standard)
- Format
- Document (PDF)
- Title
- Application of MoM: Scattering calculations using condition number.
- Creator
- Zhuang, Zhijun., Florida Atlantic University, Bagby, Jonathan S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Computational accuracy is widely recognized as a critical issue in applied electromagnetics. Increasing computational power is being applied to solve more complex electromagnetic systems, with an emphasis on computational accuracy. The work of this thesis is focused on the implementation of the Method of Moments (MoM) for integral equation formulations. The goal of this effort is to use what is known as the condition number, together with a heuristic rule-of-thumb, to investigate the computational accuracy of MoM in numerical electromagnetics. Other possible applications of the condition number of the MoM matrix are also indicated.
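The heuristic rule-of-thumb referred to is commonly stated as: roughly log10 of the condition number decimal digits of accuracy are lost when solving the linear system. A minimal sketch for a 2x2 matrix in the infinity norm (the 2x2 size and the norm choice are ours, not necessarily the thesis's):

```python
def cond_inf_2x2(a, b, c, d):
    """Infinity-norm condition number of [[a, b], [c, d]]:
    cond(A) = ||A||_inf * ||A^-1||_inf. The rule-of-thumb says roughly
    log10(cond(A)) decimal digits are lost when solving A x = y."""
    det = a * d - b * c
    if det == 0:
        return float('inf')  # singular matrix: no reliable solution
    # Inverse of a 2x2 matrix is (1/det) * [[d, -b], [-c, a]].
    inv = [[d / det, -b / det], [-c / det, a / det]]
    norm_a = max(abs(a) + abs(b), abs(c) + abs(d))
    norm_inv = max(abs(inv[0][0]) + abs(inv[0][1]),
                   abs(inv[1][0]) + abs(inv[1][1]))
    return norm_a * norm_inv
```

A nearly singular MoM matrix (e.g., [[1, 1], [1, 1.0001]]) yields a condition number above 10^4, signaling roughly four decimal digits of lost accuracy.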
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15719
- Subject Headings
- Electromagnetism, Moments method (Statistics), Electromagnetic theory, Integral equations--Numerical solutions
- Format
- Document (PDF)
- Title
- NEIGHBORING NEAR MINIMUM-TIME CONTROLS WITH DISCONTINUITIES AND THE APPLICATION TO THE CONTROL OF MANIPULATORS (PATH-PLANNING, TRACKING, FEEDBACK).
- Creator
- Zhuang, Hanqi, Florida Atlantic University, Hamano, Fumio, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents several algorithms to treat the problem of closed-loop near minimum-time control with discontinuities. First, a neighboring control algorithm is developed to solve the problem in which controls are bounded by constant constraints. Second, the scheme is extended to account for state-dependent control constraints. Finally, a path tracking algorithm for robotic manipulators is presented, which is also a neighboring control algorithm. These algorithms are suitable for real-time control because the online computations involved are relatively simple. Simulation results show that these algorithms work well despite the fact that the prescribed final points cannot be reached exactly.
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14326
- Subject Headings
- Manipulators (Mechanism), Control theory, Algorithms
- Format
- Document (PDF)
- Title
- Kinematic modeling, identification and compensation of robot manipulators.
- Creator
- Zhuang, Hanqi, Florida Atlantic University, Hamano, Fumio, Roth, Zvi S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Theoretical and practical issues of kinematic modeling, measurement, identification and compensation are addressed in this dissertation. A comprehensive robot calibration methodology using a new Complete and Parametrically Continuous (CPC) kinematic model is presented. The dissertation focuses on model-based robot calibration techniques. Parametric continuity of a kinematic model is defined and discussed to characterize model singularity. Irreducibility is defined to facilitate error model reduction. Issues of kinematic parameter identification are addressed by utilizing generic forms of linearized kinematic error models. The CPC model is a complete and parametrically continuous kinematic model capable of describing the geometry and motion of a robot manipulator. Owing to the completeness of the CPC model, the transformations from the base frame to the world frame and from the tool frame to the last link frame can be modeled with the same modeling convention as the one used for internal link transformations. Due to the parametric continuity of the CPC model, numerical difficulties in kinematic parameter identification using error models are reduced. The CPC model construction, computation of the link parameters from a given link transformation, inverse kinematics, transformations between the CPC model and the Denavit-Hartenberg model, and linearized CPC error model construction are investigated. New methods for self-calibration of a laser tracking coordinate-measuring machine are reported. Two calibration methods, one based on a four-tracker system and the other based on three trackers with a precision plane, are proposed. Iterative estimation algorithms along with simulation results are presented. Linear quadratic regulator (LQR) theory is applied to design robot accuracy compensators.
In the LQR algorithm, additive corrections of joint commands are found without explicitly solving the inverse kinematic problem for an actual robot; a weighting matrix and coefficients in the cost function can be chosen systematically to achieve specific objectives, such as emphasizing the positioning accuracy of the end-effector over its orientation accuracy (or vice versa), while taking into account joint travel limits as well as singularity zones of the robot. The results of the kinematic identification and compensation experiments using the PUMA robot show that the CPC modeling technique presented in this dissertation is a convenient and effective means of improving the accuracy of industrial robots.
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/12243
- Subject Headings
- Robotics, Manipulators (Mechanism)
- Format
- Document (PDF)
- Title
- Implementation of a fuzzy-logic-based trust model.
- Creator
- Zhao, Yuanhui., Florida Atlantic University, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In the last ten years, due to rapid developments in computers and the Internet, electronic commerce has advanced significantly. More and more companies have shifted their business activities to the Internet. However, the popular use of e-commerce has also raised serious security problems, so developing security schemes has become a key issue in both academic and industrial research. Since the Internet is open to the public, the associated security issues are challenging. A good security strategy should not only protect the vendors' interests, but also enhance the mutual trust between vendors and customers; as a result, people will feel more confident in conducting e-commerce. This thesis is dedicated to developing a fuzzy-logic-based trust model. In general, e-commerce transactions need a costly verification and authentication process. In some cases, it is not cost effective to verify and authenticate each transaction, especially for transactions involving only small amounts of money and for customers with an excellent transaction history. In view of this, a model that distinguishes potentially safe transactions from unsafe transactions is developed; only the potentially unsafe transactions need to be verified and authenticated. The model takes a number of fuzzy variables as inputs. However, this poses problems in constructing the trust table, since the number of fuzzy rules increases exponentially as the number of fuzzy variables increases. To make the problem more tractable, the variables are divided into several groups, two for each table. Each table produces a decision on trust, and the final decision is made based on the "intersection" of all these outputs. Simulation studies have been conducted to validate the effectiveness of the proposed trust model. These simulations, however, need to be followed by tests in a real business environment using real data.
Relevant limitations of the proposed model are therefore discussed, and future research directions are indicated.
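The two ideas the abstract combines — fuzzy input variables and an "intersection" of per-table decisions — can be sketched with triangular membership functions and the min operator as fuzzy intersection. The variable names, membership ranges, and rule below are invented for illustration, not the thesis's actual trust tables:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def transaction_trust(amount, history_score):
    """Toy two-input trust rule: a transaction is 'potentially safe' to the
    degree that the amount is small AND the customer's history is good.
    min() plays the role of fuzzy intersection across the two decisions."""
    small_amount = tri(amount, -1.0, 0.0, 100.0)   # fully 'small' near $0
    good_history = tri(history_score, 0.0, 1.0, 2.0)  # fully 'good' at 1.0
    return min(small_amount, good_history)
```

A $50 purchase by a customer with a perfect history gets trust 0.5 here; a threshold on this value would decide whether full verification is required.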
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12812
- Subject Headings
- Fuzzy logic, Electronic commerce--Security measures
- Format
- Document (PDF)
- Title
- A VLSI implementable thinning algorithm.
- Creator
- Zhang, Wei, Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Thinning is a very important step in a character recognition system. This thesis develops a thinning algorithm that can be implemented in hardware to improve the speed of the process. The software thinning algorithm features a simple set of rules that can be applied to both hexagonal and orthogonal character images. The hardware architecture features an SIMD structure, simple processing elements, and near-neighbor communications. The algorithm was simulated against the U.S. Postal Service Character Database. The architecture, evolved with consideration of both the software constraints and the physical layout limitations, was simulated using the VHDL hardware description language. Subsequent to VLSI design and simulation, the chip was fabricated. The project provides a feasibility study of utilizing a parallel processor architecture for the implementation of a parallel image thinning algorithm. It is hoped that such a hardware implementation will speed up the processing and eventually lead to a real-time system.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14837
- Subject Headings
- Optical character recognition devices--Computer simulation, Algorithms, Integrated circuits--Very large scale integration
- Format
- Document (PDF)
- Title
- A parallel and reliable robot controller system.
- Creator
- Zhang, Ruiguang., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In recent years robots have become increasingly important in many areas. Along with the development of robot-arm control theory, there has been an increased demand for faster and more reliable control systems. In this thesis, a parallel technique is applied to all of the units of a robot control system. Software fault-tolerance mechanisms such as timeout, conversation, and exception handling, together with their Occam implementations, are also considered. A simulation study shows that pipelining, together with a multiprocessing system, increases the performance of this real-time system and is a convenient way to speed up robot controller execution. While we have not evaluated the increase in reliability, we have shown that these fault-tolerance mechanisms can be conveniently implemented in this type of application.
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/14566
- Subject Headings
- Control theory, Robots--Programming
- Format
- Document (PDF)
- Title
- Design and modeling of hybrid software fault-tolerant systems.
- Creator
- Zhang, Man-xia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Fault-tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault-tolerant systems. Probability models based on fault trees are developed for the recovery block (RB) scheme, N-version programming (NVP), and hybrid schemes that combine RB and NVP. Two heuristic methods are developed to construct hybrid fault-tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault-tolerant systems.
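Under the standard (and strong) assumption of independent version failures, the textbook reliability expressions for NVP majority voting and a recovery block can be written down directly. These are generic formulas, not the fault-tree models developed in the thesis:

```python
from math import comb

def nvp_reliability(p, n=3):
    """Reliability of an n-version system with majority voting, assuming
    independent versions that each succeed with probability p: the
    probability that at least floor(n/2)+1 versions agree correctly."""
    m = n // 2 + 1  # votes needed for a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

def rb_reliability(p, t, n=2):
    """Recovery-block reliability with n alternates, each succeeding with
    probability p, and an acceptance test that accepts a correct result
    with probability t; the block fails only if every alternate fails."""
    fail_one = 1 - p * t  # an alternate fails or its result is rejected
    return 1 - fail_one**n
```

With p = 0.9, three-version majority voting gives 0.972, i.e., redundancy buys reliability above any single version's — the trade against development cost that the thesis's heuristics navigate.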
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14783
- Subject Headings
- Computer software--Reliability, Fault-tolerant computing, Algorithms
- Format
- Document (PDF)
- Title
- Performance analysis of wireless communication systems in multipath fading environments with correlated shadowing and co-channel interference.
- Creator
- Zhang, Jingjun., Florida Atlantic University, Aalo, Valentine A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The performance aspects of conventional cellular (FDMA/TDMA) and CDMA systems with micro- and macroscopic diversity reception are investigated in a severe mobile communication environment characterized by path loss, correlated lognormal shadowing, multipath fading, background noise, and interference. Under a co-channel interference-limited assumption, an exact analytical expression for the co-channel interference (CCI) probability is presented for a macroscopic diversity system with an arbitrary number of correlated macroscopic branches. For noise-limited systems, the average bit-error-rate (BER) and outage probability performances of a narrowband mobile communication system with micro- and macrodiversity reception are evaluated. In the relevant analysis, both Nakagami and Rician fading channels are considered. When co-channel interference and noise coexist, the results for a Nakagami fading channel show that diversity reception can be used to reduce the effects of interference while combating fading and shadowing. Micro- and macroscopic diversity are also applied to a multicell DS-CDMA system. In a conventional cellular system with macroscopic diversity, the mobile user is usually connected to the closest base station; however, a base-station selection scheme based on a least-attenuation criterion is shown to provide a significant performance improvement over the conventional system. In this case, the system performance is examined in terms of BER and outage probability, while accounting for the effects of path loss, correlated shadowing, multipath fading, multiple-access interference, and imperfect power control.
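As a simple point of reference for the outage-probability analyses described, the i.i.d. Rayleigh selection-diversity case has a closed form, P_out = (1 - exp(-gamma_th/gamma_mean))^L. The thesis's correlated-shadowing and Nakagami/Rician cases are considerably more involved than this textbook special case:

```python
import math

def outage_probability(threshold_db, mean_snr_db, branches):
    """Outage probability of L-branch selection diversity in independent
    Rayleigh fading: the SNR on each branch is exponentially distributed,
    and an outage occurs only when ALL branches fall below the threshold."""
    gth = 10 ** (threshold_db / 10)    # threshold SNR, linear scale
    gbar = 10 ** (mean_snr_db / 10)    # mean branch SNR, linear scale
    return (1 - math.exp(-gth / gbar)) ** branches
```

Doubling the branch count squares the single-branch outage term, which is why even two-branch diversity improves outage by roughly an order of magnitude at typical operating points.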
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/12600
- Subject Headings
- Wireless communication systems, Code division multiple access
- Format
- Document (PDF)
- Title
- XYZ: A scalable, partially centralized lookup service for large-scale peer-to-peer systems.
- Creator
- Zhang, Jianying., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Peer-to-Peer (P2P) systems are characterized by direct access between peer computers, rather than through a centralized server. File sharing is the dominant P2P application on the Internet, allowing users to easily contribute, search, and obtain content. The objective of this thesis was to design XYZ, a partially centralized, scalable, and self-organizing lookup service for wide-area P2P systems. The XYZ system is based on a distributed hash table (DHT). A unique ID and a color are assigned to each node and each file. The author uses a clustering method to create the system backbone by connecting the cluster heads together, and a color-clustering method to create color overlays. A lookup for a file of a given color is forwarded only within the overlay of the same color, so that the search space is minimized. Simulations and analysis are also provided in this thesis.
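The color-overlay idea can be sketched with a deterministic hash-based color assignment. The SHA-1 mapping and the eight-color choice here are illustrative assumptions, not necessarily the thesis's exact scheme:

```python
import hashlib

NUM_COLORS = 8  # illustrative; more colors shrink each overlay further

def color_of(key):
    """Deterministically map a node or file ID to one of NUM_COLORS colors
    by hashing the ID, so every peer computes the same assignment."""
    digest = hashlib.sha1(key.encode()).digest()
    return digest[0] % NUM_COLORS

def overlay_for_lookup(nodes, file_id):
    """Restrict a lookup's search space to the overlay of nodes whose
    color matches the file's color."""
    c = color_of(file_id)
    return [n for n in nodes if color_of(n) == c]
```

Because colors partition the nodes, a lookup touches on the order of 1/NUM_COLORS of the system, which is the search-space reduction the abstract describes.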
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13263
- Subject Headings
- Wireless communication systems, Peer-to-peer architecture (Computer networks), Computational grids (Computer systems)
- Format
- Document (PDF)
- Title
- Models and Implementations of Online Laboratories; A Definition of a Standard Architecture to Integrate Distributed Remote Experiments.
- Creator
- Zapata Rivera, Luis Felipe, Larrondo Petrie, Maria M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Hands-on laboratory experiences are a key part of all engineering programs. Currently there is high demand for online engineering courses, but offering lab experiences online still remain a great challenge. Remote laboratories have been under development for more than 20 years and are part of a bigger category, called online laboratories, which includes also virtual laboratories. Development of remote laboratories in academic settings has been held back because of the lack of standardization...
Show moreHands-on laboratory experiences are a key part of all engineering programs. Currently there is high demand for online engineering courses, but offering lab experiences online still remain a great challenge. Remote laboratories have been under development for more than 20 years and are part of a bigger category, called online laboratories, which includes also virtual laboratories. Development of remote laboratories in academic settings has been held back because of the lack of standardization of technology, processes, operation and their integration with formal educational environments. Remote laboratories can be used in educational settings for a variety of reasons, for instance, when the equipment is not available in the physical laboratory; when the physical laboratory space available is not sufficient to either set up the experiments or permit access to all on-site students in the course; or when the teacher needs to provide online laboratory experiences to students taking courses via distance education. This dissertation proposes a new approach for the development and deployment of online laboratories over online platforms. 
The research activities performed include: The design and implementation of an architecture of a system for Smart Adaptive Remote Laboratories (SARL) integrated to educational environments to improve the remote laboratory users experience through the implementation of a modular architecture and the use of context information about the users and laboratory activities; the design pattern and implementation for the Remote Laboratory Management System (RLMS); the definition and implementation of an xAPI-based activity tracking system for online laboratories with support for both centralized and distributed architectures of Learning Record Stores (LRS); the definition of Smart Laboratory Learning Object (SLLO) capable of being integrated in different educational environments, including the implementation of a Lab Authoring module; and finally, the definition of a reliability model to detect and report failures and possible causes and countermeasures applying ruled based systems. The architecture proposed complies with the just approved IEEE 1876 Standard for Networked Smart Learning for Online Laboratories and supports virtual, remote, hybrid and mobile laboratories. A full set of low-cost online laboratory experiment stations were designed and implemented to support the Introduction to Logic Design course, providing true hands-on lab experience to students through the a low-cost, student-built mobile laboratory platform connected via USB to the SARL System. The SARL prototype have been successfully integrated to a Virtual Learning Environment (VLE) and a variety of configurations tested that can support privacy and security requirements of different stakeholders. The prototype online laboratory experiments developed have contributed and been featured in IEEE 1876 standard, as well as been integrated into an Industry Connections Actionable Data Book (ADB) that was featured in the Frankfurt Book Fair in 2017. 
SARL is being developed as the infrastructure to support a Latin American and Caribbean network of online laboratories.
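The xAPI-based activity tracking described above records each laboratory action as a statement delivered to a Learning Record Store. A minimal sketch in Python of how such a statement can be assembled (the student address, activity identifier, and activity name shown are illustrative assumptions, not identifiers defined by SARL; the verb URI is a standard ADL vocabulary entry):

```python
from datetime import datetime, timezone

def build_lab_statement(student_email, verb_id, verb_name, activity_id, activity_name):
    """Assemble a minimal xAPI statement (actor, verb, object, timestamp)
    recording one laboratory activity."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{student_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = build_lab_statement(
    "student@example.edu",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.edu/labs/logic-design/exp1", "Logic Design Experiment 1",
)
```

In a deployment, the resulting dictionary would be serialized to JSON and POSTed to the LRS statements endpoint, whether the LRS is centralized or distributed.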
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013282
- Subject Headings
- Remote laboratories, Online laboratories, Engineering Education, Software architecture
- Format
- Document (PDF)
- Title
- Campus driver assistant on an Android platform.
- Creator
- Zankina, Iana., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
College campuses can be large, confusing, and intimidating for new students and visitors. Finding the campus may be easy using a GPS unit or Google Maps directions, but this is not the case once you are actually on the campus. There is no service that provides directional assistance for the campus itself. This thesis proposes a driver assistant application running on an Android platform that can direct drivers to different buildings and parking lots on the campus. The application's user interface lets the user select a user type, a campus, and a destination through drop-down menus and buttons. Once the user submits the needed information, the next portion of the application runs in the background. The app retrieves the Campus Map XML created by the mapping tool that was constructed for this project. The XML data containing all the map elements is then parsed and stored in a hierarchical data structure. The resulting objects are then used to construct a campus graph, on which an altered version of Dijkstra's shortest-path algorithm is executed. When the path to the destination has been discovered, the campus map with the computed path overlaid is displayed on the user's device, showing the route to the desired destination.
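The routing step — building a graph from the parsed map and running a shortest-path search — can be sketched as follows (the thesis targets Android and alters Dijkstra's algorithm; this is plain Dijkstra in Python over a hypothetical campus graph, with made-up node names and edge weights):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over an adjacency dict {node: [(neighbor, weight), ...]}.
    Returns (total_distance, [start, ..., goal]), or (inf, []) if unreachable."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:           # reconstruct the route by walking back
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

# Hypothetical campus graph: entrance gate, parking lot, library, engineering building.
campus = {
    "Gate": [("LotA", 2), ("Library", 5)],
    "LotA": [("Library", 2), ("EngBldg", 4)],
    "Library": [("EngBldg", 1)],
}
distance, route = shortest_path(campus, "Gate", "EngBldg")
# → (5, ['Gate', 'LotA', 'Library', 'EngBldg'])
```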
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3359159
- Subject Headings
- Mobile computing, Software engineering, Application software, Development
- Format
- Document (PDF)
- Title
- OPTIMIZED DEEP LEARNING ARCHITECTURES AND TECHNIQUES FOR EDGE AI.
- Creator
- Zaniolo, Luiz, Marques, Oge, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
The recent rise of artificial intelligence (AI) using deep learning networks has allowed the development of automatic solutions for many tasks that, in the past, were seen as impossible for a machine to perform. However, deep learning models are getting larger, need significant processing power to train, and require powerful machines to run. As deep learning applications become ubiquitous, another trend is taking place: the growing use of edge devices. This dissertation addresses selected technical issues associated with edge AI, proposes novel solutions to them, and demonstrates the effectiveness of the proposed approaches. The technical contributions of this dissertation include: (i) architectural optimizations to deep neural networks, particularly the use of patterned stride in convolutional neural networks used for image classification; (ii) the use of weight quantization to reduce model size without hurting accuracy; (iii) a systematic evaluation of the impact of image imperfections on skin lesion classifiers' performance in the context of teledermatology; and (iv) a new approach for code prediction using natural language processing techniques, targeted at edge devices.
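Contribution (ii) relies on weight quantization. A minimal pure-Python sketch of symmetric uniform 8-bit quantization, one common form of the technique (the dissertation's exact scheme is not specified here and may differ):

```python
def quantize_int8(weights):
    """Symmetric uniform quantization: map each float weight to an integer in
    [-127, 127] plus a single float scale, cutting storage roughly 4x vs. float32."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error per weight is at most scale / 2."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.001, 1.0]
q, s = quantize_int8(w)        # q == [50, -127, 0, 100]
w_hat = dequantize(q, s)
```

The key trade-off for edge devices is that the integer tensor plus one scale factor preserves accuracy for well-spread weight distributions while shrinking the model enough to fit constrained memory.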
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013822
- Subject Headings
- Artificial intelligence, Deep learning (Machine learning), Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- APPLICATION OF BLOCKCHAIN NETWORK FOR THE USE OF INFORMATION SHARING.
- Creator
- Zamir, Linir, Liu, Feng-Hao, Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The Blockchain concept was originally developed to provide security in the Bitcoin cryptocurrency network, where trust is achieved through the provision of an agreed-upon and immutable record of transactions between parties. The use of a Blockchain as a secure, publicly distributed ledger is applicable to fields beyond finance and is an emerging area of research across many other fields in industry. This thesis considers the feasibility of using a Blockchain to facilitate secured information sharing between parties, where a lack of trust and absence of central control are common characteristics. A Blockchain information sharing system is designed on an existing Blockchain network, with party members communicating and sharing secured information. The benefits and risks associated with using a public Blockchain for information sharing are also discussed.
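The immutability property underpinning this approach comes from hash-linking: each record commits to its predecessor, so shared information cannot be silently altered. A toy sketch in Python (illustrative only; the thesis builds on an existing Blockchain network, not this simplified chain):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose SHA-256 hash commits to its payload, its timestamp,
    and the hash of the preceding block."""
    block = {"data": data, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Valid iff every block's hash matches its contents and links back correctly."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"msg": "genesis"}, "0" * 64)
chain = [genesis, make_block({"msg": "shared report"}, genesis["hash"])]
```

Tampering with any earlier block invalidates every hash from that point forward, which is what lets mutually distrusting parties audit the shared record.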
- Date Issued
- 2019
- PURL
- http://purl.flvc.org/fau/fd/FA00013351
- Subject Headings
- Blockchains (Databases), Blockchains (Databases)--Industrial applications, Data encryption (Computer science), Personal data protection, Bitcoin
- Format
- Document (PDF)
- Title
- DECENTRALIZED SYSTEMS FOR INFORMATION SHARING IN DYNAMIC ENVIRONMENT USING LOCALIZED CONSENSUS.
- Creator
- Zamir, Linir, Nojoumian, Mehrdad, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
Achieving a consensus among a large number of nodes has always been a challenge for any decentralized system. Consensus algorithms are the building blocks for any decentralized network that is susceptible to malicious activities from authorized and unauthorized nodes. Proof-of-Work is one of the first modern approaches to achieving at least a 51% consensus, and ever since, many new consensus algorithms have been introduced with different approaches to consensus achievement. These decentralized systems, also called blockchain systems, have been implemented in many applications such as supply chains, the medical industry, and authentication, although they are mostly used as a cryptocurrency foundation for token exchange. For these systems to operate properly, they are required to be robust, scalable, and secure. This dissertation provides a different approach to using consensus algorithms, allowing information sharing among nodes in a secured fashion while maintaining the security and immutability of the consensus algorithm. The consensus algorithm proposed in this dissertation utilizes a trust parameter to enforce cooperation, i.e., a trust value is assigned to each node and monitored to prevent malicious activities over time. This dissertation also proposes a new solution, named localized consensus, a method that allows nodes in small groups to achieve consensus on information that is only relevant to that small group of nodes, thus reducing the bandwidth of the system. The proposed models can be practical solutions for immense and highly dynamic environments, with validation through trust and reputation values. One application of such localized consensus is communication among autonomous vehicles, where traffic data is relevant to only a small group of vehicles rather than the entire system.
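The trust parameter and localized consensus might be sketched as follows. This is an illustrative Python model only: the update constants, the eligibility cut-off, and the trust-weighted vote are assumptions for exposition, not the dissertation's actual algorithm.

```python
def update_trust(trust, node, honest, reward=0.05, penalty=0.2):
    """Raise a node's trust after a validated contribution, cut it sharply after
    misbehavior; penalizing faster than rewarding discourages oscillating attackers."""
    t = trust.get(node, 0.5)                       # new nodes start at neutral trust
    t = min(1.0, t + reward) if honest else max(0.0, t - penalty)
    trust[node] = t
    return t

def localized_consensus(trust, votes, threshold=0.6):
    """Consensus within one small group: only nodes above an eligibility cut-off
    vote, each vote is weighted by the voter's trust, and the proposal passes if
    the trusted-weight share of 'yes' votes meets the threshold."""
    eligible = {n: v for n, v in votes.items() if trust.get(n, 0.0) >= 0.5}
    if not eligible:
        return False
    weight_yes = sum(trust[n] for n, v in eligible.items() if v)
    weight_all = sum(trust[n] for n in eligible)
    return weight_yes / weight_all >= threshold

trust = {"a": 0.9, "b": 0.8, "c": 0.4}
decided = localized_consensus(trust, {"a": True, "b": True, "c": False})  # c is excluded
```

In the autonomous-vehicle example, `trust` would cover only the handful of vehicles near an intersection, so the vote never has to propagate through the whole network.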
- Date Issued
- 2022
- PURL
- http://purl.flvc.org/fau/fd/FA00014028
- Subject Headings
- Blockchain, Consensus algorithms
- Format
- Document (PDF)
- Title
- Model reduction of large-scale systems using perturbed frequency-domain balanced structure.
- Creator
- Zadegan, Abbas Hassan., Florida Atlantic University, Zilouchian, Ali, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Model reduction of large-scale systems over a specified frequency range of operation is studied in this research and reported in this dissertation. Frequency-domain balanced structures with the integration of singular perturbation are proposed for model reduction of large-scale continuous-time as well as discrete-time systems. The method is applied to both open-loop and closed-loop systems. It is shown that the responses of the reduced systems closely resemble those of the full-order systems within a specified frequency range of operation. Simulation experiments for the model reduction of several large-scale continuous- and discrete-time systems demonstrate the superiority of the proposed technique over previously available methods.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fcla/dt/12114
- Subject Headings
- System analysis, Large scale systems--Mathematical models, System design, Control theory, Mathematical optimization
- Format
- Document (PDF)
- Title
- Modeling software quality with TREEDISC algorithm.
- Creator
- Yuan, Xiaojing, Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software quality is crucial both to software makers and customers. However, in reality, improvement of quality and reduction of costs are often at odds. Software modeling can help us detect fault-prone software modules based on software metrics, so that we can focus our limited resources on fewer modules and lower the cost while still achieving high quality. In the present study, a tree classification modeling technique, TREEDISC, was applied to three case studies. Several major contributions have been made. First, preprocessing of raw data was adopted to solve the computer memory problem and improve the models. Second, TREEDISC was thoroughly explored by examining the roles of important parameters in modeling. Third, a generalized classification rule was introduced to balance misclassification rates and decrease type II error, which is considered more costly than type I error. Fourth, certainty of classification was addressed. Fifth, TREEDISC modeling was validated over multiple releases of a software product.
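The generalized classification rule in the third contribution trades type I against type II error. One standard way to formalize such a rule (shown here as an assumption for illustration; the dissertation's exact rule may differ) is to lower the fault-prone threshold as the relative cost of a type II error grows:

```python
def classify_fault_prone(p_fault, cost_ratio=5.0):
    """Label a module fault-prone when its estimated fault probability exceeds a
    threshold derived from c = cost(type II) / cost(type I). The minimum-expected-
    cost rule gives threshold = 1 / (1 + c), so a costlier type II error (missing
    a fault-prone module) lowers the bar for flagging a module."""
    threshold = 1.0 / (1.0 + cost_ratio)
    return p_fault >= threshold

flag_default = classify_fault_prone(0.2)                  # threshold 1/6, so flagged
flag_equal = classify_fault_prone(0.2, cost_ratio=1.0)    # threshold 0.5, not flagged
```

With equal costs the rule reduces to the familiar 0.5 cut-off; with type II errors five times as costly, a module needs only about a 17% estimated fault probability to be flagged for inspection.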
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15718
- Subject Headings
- Computer software--Quality control, Computer simulation, Software engineering
- Format
- Document (PDF)
- Title
- A study on ATM multiplexing and threshold-based connection admission control in connection-oriented packet networks.
- Creator
- Yuan, Xiaohong., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The research reported in this dissertation studies ATM multiplexing and connection admission control schemes for traffic management in connection-oriented packet networks. A new threshold-based connection admission control scheme is proposed and analyzed. The scheme uses effective bandwidth to decide whether to accept or reject a connection request. This threshold-based effective-bandwidth method is first simulated on a simple 4-node connection-oriented packet network model, and then extended to a more complex 8-node network model under a variety of environments. To reduce the cell-loss ratio when the arrival rates of connection requests are large, a dynamic effective-bandwidth mechanism is proposed and the relevant simulations are conducted on the two network models. The traffic used in the simulation is a multiplexed stream of cells from video, voice, and data sources, which is typical in ATM environments. The multiplexed traffic is generated using a discrete event scheduling method. The simulation programs for the 4-node and 8-node network models are verified against the theoretical values of the blocking probabilities of the connection requests and Little's Theorem. Simulations on the two network models show similar results. For a network supplying several service categories, the threshold-based connection admission control is shown to affect the blocking probabilities of each type of traffic. In some environments, having a threshold is advantageous over the case without a threshold in terms of cell-loss ratio, cell transfer delay, and power (throughput divided by cell transfer delay). The simulation results also show that the dynamic effective-bandwidth method helps to reduce the cell-loss ratio significantly when the arrival rates of connection requests are large.
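The core admission decision can be sketched as follows (a simplified illustration in Python; the capacity, effective-bandwidth values, and threshold fraction are assumed numbers, and the dissertation evaluates the scheme through full discrete-event network simulation rather than a single inequality):

```python
def admit(current_load, effective_bw, capacity, threshold=0.85):
    """Threshold-based CAC sketch: accept a new connection only if the total
    effective bandwidth after admission stays within a threshold fraction of
    link capacity. The reserved headroom trades a slightly higher blocking
    probability for a lower cell-loss ratio under bursty arrivals."""
    return current_load + effective_bw <= threshold * capacity

# Link of capacity 150 Mb/s with 100 Mb/s of effective bandwidth already admitted:
ok = admit(100, 20, 150)        # 120 <= 127.5, accepted
rejected = admit(120, 20, 150)  # 140 > 127.5, rejected
```

A dynamic variant, in the spirit of the dissertation's dynamic effective-bandwidth mechanism, would recompute `effective_bw` from recent traffic measurements instead of using a fixed per-class value.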
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12647
- Subject Headings
- Asynchronous transfer mode, Packet switching (Data transmission), Telecommunication--Traffic
- Format
- Document (PDF)