Current Search: Computer architecture
- Title
- A next generation computer network communications architecture.
- Creator
- Thor, Bernice Lynn., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A Next Generation Computer Network Communications Architecture (CNCA) is developed in this thesis. Existing communication techniques and available networking technologies are explored; this provides the background information for the development of the architecture. Hardware, protocol, and interface requirements are addressed to provide a practical architecture for supporting high-speed communications beyond current implementations. A reduction process is then performed to extract the optimal components for the CNCA platform. The resulting architecture describes a next-generation communications device that is capable of very fast switching and fast processing of information. The architecture interfaces with existing products and provides extensive flexibility, which protects existing equipment investments and supports future enhancements.
- Date Issued
- 1991
- PURL
- http://purl.flvc.org/fcla/dt/14726
- Subject Headings
- Computer network architectures, Computer networks
- Format
- Document (PDF)
- Title
- PERFORMANCE EVALUATION OF A RIDGE 32 COMPUTER SYSTEM (RISC (REDUCED INSTRUCTION SET COMPUTER)).
- Creator
- YOON, SEOK TAE., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- As a new trend in computer architecture design, Reduced Instruction Set Computers (RISC) have been proposed recently. This thesis reviews the design approach behind RISC and discusses the controversy between proponents of the RISC approach and those of the traditional Complex Instruction Set Computer (CISC) approach. The Ridge 32 is selected as a case study of RISC machines. Architectural parameters for evaluating computer performance are used to analyze the performance of the Ridge 32. A simulator for the Ridge 32 was implemented in PASCAL as a way of measuring those parameters. Measurement results on several selected benchmark programs are given and analyzed to evaluate the characteristics of the Ridge 32.
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14348
- Subject Headings
- Computer architecture, Microprocessors
- Format
- Document (PDF)
- Title
- Memory latency evaluation in cluster-based cache-coherent multiprocessor systems with different interconnection topologies.
- Creator
- Asaduzzaman, Abu Sadath Mohammad, Florida Atlantic University, Mahgoub, Imad
- Abstract/Description
- This research investigates the memory latency of cluster-based cache-coherent multiprocessor systems with different interconnection topologies. We focus on a cluster-based architecture that is a variation of the Stanford DASH architecture and that also has some similarities with the STiNG architecture from Sequent Computer Systems, Inc. In this architecture, a small number of processors and a portion of shared memory are connected through a bus inside each cluster. Because the number of processors per cluster is small, a snoopy protocol is used inside each cluster. Each processor has two levels of caches, and a separate directory is maintained for each cluster. Clusters are connected through an interconnection network using a directory-based scheme to make the system scalable. A trace-driven simulation has been developed to evaluate the overall memory latency of this architecture using three different network topologies: ring, mesh, and hypercube. For each network topology, the overall memory latency has been evaluated by running a representative set of SPLASH-2 applications. Simulation results show that the cluster-based multiprocessor system with the hypercube topology outperforms those with mesh and ring topologies. (A brief topology-comparison sketch follows this record.)
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15447
- Subject Headings
- Computer network architectures, Multiprocessors
- Format
- Document (PDF)
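The hypercube result above follows largely from how the worst-case hop count grows with the number of clusters. The sketch below is a rough illustration using standard diameter formulas for the three topologies; it is not taken from the thesis, and the cluster counts are arbitrary examples.

```python
# Rough, illustrative comparison (not from the thesis): worst-case hop counts
# for three interconnection topologies connecting the same number of clusters.
# Remote-memory latency grows with the number of network hops a request takes.
import math

def ring_diameter(n):          # bidirectional ring: farthest cluster is n // 2 hops away
    return n // 2

def mesh_diameter(n):          # square 2-D mesh of side s = sqrt(n): corner to corner
    s = math.isqrt(n)
    return 2 * (s - 1)

def hypercube_diameter(n):     # d-cube with n = 2^d clusters: at most d bit flips
    return int(math.log2(n))

for clusters in (16, 64, 256):
    print(f"{clusters:4d} clusters | ring {ring_diameter(clusters):3d} hops"
          f" | mesh {mesh_diameter(clusters):3d} hops"
          f" | hypercube {hypercube_diameter(clusters):2d} hops")
```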
- Title
- A REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION.
- Creator
- Alwakeel, Ahmed M., Fernandez, Eduardo B., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Cloud computing has provided many services to potential consumers, one of these services being the provision of network functions using virtualization. Network Function Virtualization (NFV) is a new technology that aims to improve the way we consume network services. Legacy networking solutions are different because consumers must buy and install various hardware equipment. In NFV, networks are provided to users as software as a service (SaaS). Implementing NFV comes with many benefits, including faster module development for network functions, more rapid deployment, enhancement of the network on cloud infrastructures, and lowering the overall cost of having a network system. All these benefits can be achieved in NFV by turning physical network functions into Virtual Network Functions (VNFs). However, since this technology is still a new network paradigm, integrating this virtual environment into a legacy environment, or even moving altogether into NFV, adds to the complexity of adopting an NFV system. Also, a network service could be composed of several components that are provided by different service providers, which further increases the complexity and heterogeneity of the system. We apply abstract architectural modeling to describe and analyze the NFV architecture, and we use architectural patterns to build a flexible Reference Architecture (RA) for NFV that describes the system and how it works. RAs have proven to be a powerful way to abstract complex systems that lack semantics. Having an RA for NFV helps us understand the system and how it functions. It also helps us expose the possible vulnerabilities that may lead to threats against the system. In the future, this RA could be enhanced into a Security Reference Architecture (SRA) by adding misuse and security patterns so that it covers potential threats and vulnerabilities in the system. Our audiences are system designers, system architects, and security professionals who are interested in building a secure NFV system.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013434
- Subject Headings
- Virtual computer systems, Cloud computing, Computer network architectures, Computer networks
- Format
- Document (PDF)
- Title
- A very high-performance neural network system architecture using grouped weight quantization.
- Creator
- Karaali, Orhan., Florida Atlantic University, Shankar, Ravi, Gluch, David P., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Recently, Artificial Neural Network (ANN) computing systems have become one of the most active and challenging areas of information processing. The successes of experimental neural computing systems in the fields of pattern recognition, process control, robotics, signal processing, expert systems, and functional analysis are most promising. However, due to a number of serious problems, only small, fully connected neural networks have been implemented to run in real time. The primary problem is that the execution time of neural networks increases exponentially as the neural network's size increases, because of the exponential increase in the number of multiplications and interconnections, which makes it extremely difficult to implement medium- or large-scale ANNs in hardware. The Modular Grouped Weight Quantization (MGWQ) presented in this dissertation is an ANN design which ensures that the number of multiplications and interconnections increases linearly as the neural network's size increases. The secondary problems are related to scale-up capability, modularity, memory requirements, flexibility, performance, fault tolerance, technological feasibility, and cost; the MGWQ architecture also resolves these problems. In this dissertation, neural network characteristics and existing implementations using different technologies are described. Their shortcomings and problems are addressed, and solutions to these problems using the MGWQ approach are illustrated. The theoretical and experimental justifications for MGWQ are presented. Performance calculations for the MGWQ architecture are given. The mappings of the most popular neural network models to the proposed architecture are demonstrated. System-level architecture considerations are discussed. The proposed ANN computing system is a flexible and realistic way to implement large, fully connected networks, and it offers very high performance using currently available technology. The performance of ANNs is measured in terms of interconnections per second (IC/S); the performance of the proposed system ranges from 10^11 to 10^14 IC/S. In comparison, SAIC's DELTA II ANN system achieves 10^7 IC/S, and a Cray X-MP achieves 5*10^7 IC/S. (A short arithmetic sketch of these throughput figures follows this record.)
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/12245
- Subject Headings
- Neural circuitry, Neural computers, Computer architecture
- Format
- Document (PDF)
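For context on the interconnections-per-second figures quoted above, the short calculation below simply turns the abstract's numbers into speedup ratios; the figures come from the abstract, and the arithmetic is illustrative only.

```python
# Simple arithmetic on the throughput figures quoted in the abstract
# (interconnections per second, IC/S); the systems and numbers come from the
# abstract, and the speedup ratios are derived here purely for illustration.
mgwq_low, mgwq_high = 1e11, 1e14   # proposed MGWQ system, quoted range
delta_ii = 1e7                     # SAIC DELTA II
cray_xmp = 5e7                     # Cray X-MP

for name, rate in (("DELTA II", delta_ii), ("Cray X-MP", cray_xmp)):
    print(f"MGWQ vs {name}: {mgwq_low / rate:.0e} to {mgwq_high / rate:.0e} x faster")
```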
- Title
- A unified methodology for software and hardware fault tolerance.
- Creator
- Wang, Yijun., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The growing demand for high availability of computer systems has led to a wide range of applications for fault-tolerant systems. Some real-time applications require ultrareliable computer systems, which should be capable of tolerating failures not only of their hardware components but also of their software components. This dissertation discusses three aspects of designing an ultrareliable system: (a) a hierarchical ultrareliable system structure; (b) a set of unified methods to tolerate both software and hardware faults in combination; and (c) formal specifications in the system structure. The proposed hierarchical structure has four layers: Application, Software Fault Tolerance, Combined Fault Tolerance, and Configuration. The Application Layer defines the structure of the application software in terms of its modular structure using a module interconnection language; the failure semantics of the service provided by the system is also defined at this layer. At the Software Fault Tolerance Layer, each module can use software fault tolerance methods. The implementation of software and hardware fault tolerance is achieved at the Combined Fault Tolerance Layer, which utilizes the combined software/hardware fault tolerance methods. The Configuration Layer performs the actual software and hardware resource management for fault identification and recovery requests from the Combined Fault Tolerance Layer. A combined software and hardware fault model is used as the system fault model; this model uses the concepts of fault pattern and fault set to abstract the various occurrences of software and hardware faults. We also discuss extended comparison models that consider faulty software as well. The combined software/hardware fault tolerance methods are based on recovery blocks, N-version programming, extended comparison methods, and both forward and backward recovery methods. Formal specifications and verifications are used in the system design process and the system structure to show that the design and implementation of a fault-tolerant system satisfy the functional and non-functional requirements. Brief discussions and examples of using formal specifications in the hierarchical structure are given. (A generic recovery-block sketch follows this record.)
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12424
- Subject Headings
- Fault-tolerant computing, Computer architecture
- Format
- Document (PDF)
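The abstract names recovery blocks among the combined fault-tolerance methods. The sketch below shows the classic recovery-block scheme in its generic form (primary and alternate variants guarded by an acceptance test); it is not the dissertation's implementation, and the square-root variants are made-up examples.

```python
# Generic recovery-block sketch (classic scheme, not the dissertation's code):
# try the primary variant, check its result with an acceptance test, and fall
# back to alternate variants if the test fails or the variant raises.
import math
from typing import Callable, Iterable

def recovery_block(variants: Iterable[Callable[[], float]],
                   acceptance_test: Callable[[float], bool]) -> float:
    for variant in variants:
        try:
            result = variant()
            if acceptance_test(result):   # result looks acceptable: commit it
                return result
        except Exception:
            pass                          # a crashing variant counts as a failure
    raise RuntimeError("all variants failed the acceptance test")

# Hypothetical usage: two independently written square-root routines.
value = 2.0
result = recovery_block(
    variants=[lambda: value ** 0.5, lambda: math.sqrt(value)],
    acceptance_test=lambda r: abs(r * r - value) < 1e-9,
)
print(result)
```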
- Title
- Concord: A proactive lightweight middleware to enable seamless connectivity in a pervasive environment.
- Creator
- Mutha, Mahesh., Florida Atlantic University, Hsu, Sam, Pandya, Abhijit S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- One of the major components of any pervasive system is its proactive behavior. Various models have been developed to provide system-wide changes that would enable proactive behavior. A major drawback of these approaches is that they do not address the need to make use of existing applications whose design cannot be changed. To overcome this drawback, a middleware architecture called "Concord" is proposed. Concord is based on a simple model which consists of a Lookup Server and a Database. The rewards for this simple model are many. First, Concord uses the existing computing infrastructure. Second, Concord standardizes the interfaces for all services and platforms. Third, new services can be added dynamically without any need for reconfiguration. Finally, Concord includes a Database that can maintain and publish the active set of available resources. Thus Concord provides a solid system for integrating various entities to provide seamless connectivity and enable proactive behavior. (A minimal lookup-registry sketch follows this record.)
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13234
- Subject Headings
- CONCORD (Computer architecture), Middleware, Computer architecture, Database management
- Format
- Document (PDF)
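As a rough illustration of the Lookup Server and Database model described above, the sketch below shows a generic service registry in which services register dynamically and clients query the active set. The class and method names are hypothetical and are not Concord's actual interfaces.

```python
# Generic lookup-server sketch: a table of currently registered services that
# can be queried for the active set of resources.  Illustrative only.
class LookupServer:
    def __init__(self):
        self._services = {}                     # service name -> endpoint ("database")

    def register(self, name, endpoint):
        self._services[name] = endpoint         # services can be added dynamically

    def unregister(self, name):
        self._services.pop(name, None)

    def lookup(self, name):
        return self._services.get(name)

    def active_services(self):
        return dict(self._services)             # published set of active resources

registry = LookupServer()
registry.register("lighting", "tcp://10.0.0.5:7001")
registry.register("printing", "tcp://10.0.0.9:9100")
print(registry.active_services())
```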
- Title
- Fault-tolerant multicasting in hypercube multicomputers.
- Creator
- Yao, Kejun., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Interprocessor communication plays an important role in the performance of multicomputer systems such as hypercube multicomputers. In this thesis, we consider the multicast problem for a hypercube system in the presence of faulty components. Two types of algorithms are proposed. Type 1 algorithms, which are developed based on local network information, can tolerate both node failures and link failures. Type 2 algorithms, which are developed based on limited global network information, ensure that each destination receives the message through a shortest path. Simulation results show that Type 2 algorithms achieve very good results on both time and traffic steps, the two main criteria in measuring the performance of interprocessor communication. (A brief hypercube shortest-path sketch follows this record.)
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14896
- Subject Headings
- Hypercube networks (Computer networks), Computer architecture, Fault-tolerant computing
- Format
- Document (PDF)
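For background on the shortest-path guarantee mentioned above: in a hypercube, node addresses are bit strings, and a shortest path corrects one differing address bit per hop, so its length equals the Hamming distance between source and destination. The sketch below illustrates only this standard property, not the thesis's Type 1 or Type 2 fault-tolerant algorithms.

```python
# Standard hypercube routing illustration (not the thesis's algorithms):
# walk from src to dst by flipping one differing address bit per hop.
def shortest_path(src: int, dst: int) -> list:
    path, current = [src], src
    diff, bit = src ^ dst, 0
    while diff:
        if diff & 1:                  # this address bit differs: flip it
            current ^= (1 << bit)
            path.append(current)
        diff >>= 1
        bit += 1
    return path

# Example in a 4-cube: route from node 0000 to node 1011 (3 hops).
print([format(n, "04b") for n in shortest_path(0b0000, 0b1011)])
```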
- Title
- Time-step optimal broadcasting in mesh networks with minimum total communication distance.
- Creator
- Cang, Songluan., Florida Atlantic University, Wu, Jie
- Abstract/Description
- We propose a new minimum total communication distance (TCD) algorithm and an optimal TCD algorithm for broadcast in a 2-dimensional mesh (2-D mesh). The former generates a minimum TCD from a given source node, and the latter guarantees a minimum TCD among all the possible source nodes. These algorithms are based on a divide-and-conquer approach in which a 2-D mesh is partitioned into four submeshes of equal size. The source node sends the broadcast message to a special node called an eye in each submesh, and the above procedure is then recursively applied in each submesh. These algorithms are extended to a 3-dimensional mesh (3-D mesh) and are generalized to a d-dimensional mesh or torus. In addition, the proposed approach can potentially be used to solve optimization problems in other collective communication operations. (A minimal recursive-partition sketch follows this record.)
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15647
- Subject Headings
- Computer algorithms, Parallel processing (Electronic computers), Computer architecture
- Format
- Document (PDF)
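The sketch below illustrates only the recursive four-way partitioning described above. The thesis defines a specific "eye" node per submesh to minimize the total communication distance; since that rule is not reproduced here, the submesh center is used as a hypothetical stand-in.

```python
# Structural sketch of divide-and-conquer broadcast in a 2-D mesh: split the
# current submesh into four quadrants, send one message into each quadrant the
# source does not occupy, then recurse.  The "eye" choice is a stand-in only.
def broadcast(x0, y0, x1, y1, source, sends):
    """Broadcast into the submesh [x0, x1) x [y0, y1) starting from `source`."""
    if (x1 - x0) * (y1 - y0) <= 1:
        return
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    for qx0, qy0, qx1, qy1 in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        if qx0 >= qx1 or qy0 >= qy1:                 # degenerate quadrant
            continue
        if qx0 <= source[0] < qx1 and qy0 <= source[1] < qy1:
            seed = source                            # source seeds its own quadrant
        else:
            seed = ((qx0 + qx1) // 2, (qy0 + qy1) // 2)  # stand-in for the "eye"
            sends.append((source, seed))             # one message into each other quadrant
        broadcast(qx0, qy0, qx1, qy1, seed, sends)

sends = []
broadcast(0, 0, 4, 4, (0, 0), sends)
print(len(sends), "messages reach the other", 4 * 4 - 1, "nodes of a 4 x 4 mesh")
```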
- Title
- Unifying the conceptual levels of network security through the use of patterns.
- Creator
- Kumar, Ajoy, Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Network architectures are described by the seven-layer reference model of the International Organization for Standardization (ISO). The Internet uses four of these layers, of which three are of interest to us: the Internet Protocol (IP) or Network Layer, the Transport Layer, and the Application Layer. We need to protect against attacks that may come through any of these layers. In the world of network security, systems are plagued by various attacks, internal and external, which could result in Denial of Service (DoS) and/or other damaging effects. Such attacks and loss of service can be devastating for the users of the system. The implementation of security devices such as Firewalls and Intrusion Detection Systems (IDS), the protection of network traffic with Virtual Private Networks (VPNs), and the use of secure protocols for the layers are important to enhance the security at each of these layers. We have done a survey of the existing network security patterns and have written the missing patterns. We have developed security patterns for an abstract IDS, behavior-based IDS, and rule-based IDS, as well as for the Internet Protocol Security (IPSec) and Transport Layer Security (TLS) protocols. We have also identified the need for a VPN pattern and have developed security patterns for an abstract VPN, an IPSec VPN, and a TLS VPN. We also evaluated these patterns with respect to several aspects in order to simplify their application by system designers. We have tried to unify the security of the network layers using security patterns by tying together security patterns for network transmission, network protocols, and network boundary devices.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004132
- Subject Headings
- Computer architecture, Computer network architectures, Computer network protocols, Computer networks -- Security measures, Expert systems (Computer science)
- Format
- Document (PDF)
- Title
- Modeling and analysis of security.
- Creator
- Ajaj, Ola, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Cloud Computing is a new computing model that consists of a large pool of hardware and software resources on remote datacenters accessed through the Internet. Cloud Computing faces significant obstacles to its acceptance, such as security, virtualization, and lack of standardization. For Cloud standards, there is a long debate about their role, and more demands for Cloud standards are put on the table, yet the Cloud standardization landscape remains ambiguous. To model and analyze security standards for Cloud Computing and web services, we have surveyed Cloud standards, focusing on the standards for security, and we classified them by groups of interest. Cloud Computing leverages a number of technologies such as Web 2.0, virtualization, and Service-Oriented Architecture (SOA). SOA uses web services to facilitate the creation of SOA systems by adopting different technologies despite their differences in formats and protocols. Several organizations such as W3C and OASIS are developing standards for web services; their standards are rather complex and verbose. We have expressed web services security standards as patterns to make it easy for designers and users to understand their key points. We have written two patterns for two web services standards, WS-SecureConversation and WS-Federation; this completes earlier work we have done on web services standards. We showed relationships between web services security standards and used them to solve major Cloud security issues, such as authorization and access control, trust, and identity management. Close to web services, we investigated the Business Process Execution Language (BPEL), and we addressed security considerations in BPEL and how to enforce them. To see how Cloud vendors look at web services standards, we took Amazon Web Services (AWS) as a case study. In reviewing AWS documentation, web services security standards are barely mentioned. We highlighted some areas where web services security standards could address AWS limitations and improve the AWS security process. Finally, we studied the security guidance of two major Cloud-developing organizations, CSA and NIST. Both missed the quality attributes offered by web services security standards. We expanded their work and added the benefits of adopting web services security standards in securing the Cloud.
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fau/fd/FA0004001
- Subject Headings
- Cloud Computing, Computational grids (Computer systems), Computer network architectures, Expert systems (Computer science), Web services -- Management
- Format
- Document (PDF)
- Title
- Towards a portal and search engine to facilitate academic and research collaboration in engineering and.
- Creator
- Bonilla Villarreal, Isaura Nathaly, Larrondo-Petrie, Maria M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- While international academic and research collaborations are of great importance at this time, it is not easy to find researchers in the engineering field who publish in languages other than English. Because of this disconnect, there exists a need for a portal to find Who’s Who in Engineering Education in the Americas. The objective of this thesis is to build an object-oriented architecture for this proposed portal. The Unified Modeling Language (UML) model developed in this thesis incorporates the basic structure of a social network for academic purposes. Reverse engineering of three social network portals yielded important aspects of their structures that have been incorporated in the proposed UML model. Furthermore, the present work includes a pattern for academic social networks.
- Date Issued
- 2014
- PURL
- http://purl.flvc.org/fau/fd/FA00004179
- Subject Headings
- Computer network architecture, Critical theory, Embedded computer systems, Interdisciplinary research, Software architecture, UML (Computer science)
- Format
- Document (PDF)
- Title
- TOWARDS A SECURITY REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION.
- Creator
- Alnaim, Abdulrahman K., Fernandez, Eduardo B., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Network Function Virtualization (NFV) is an emerging technology that transforms legacy hardware-based network infrastructure into software-based virtualized networks. Instead of using dedicated hardware and network equipment, NFV relies on cloud and virtualization technologies to deliver network services to its users. These virtualized network services are considered better solutions than hardware-based network functions because their resources can be dynamically increased upon the consumer’s request. While their usefulness can’t be denied, they also have some security implications. In complex systems like NFV, threats can come from a variety of domains because the infrastructure contains both hardware and virtualized entities. Also, since it relies on software, the network service in NFV can be manipulated by external entities such as third-party providers or consumers. This gives NFV a larger attack surface than traditional network infrastructure. In addition to its own threats, NFV also inherits security threats from its underlying cloud infrastructure. Therefore, to design a secure NFV system and utilize its full potential, we must have a good understanding of its underlying architecture and its possible security threats. Up until now, only imprecise models of this architecture existed. We try to improve this situation by using architectural modeling to describe and analyze the threats to NFV. Architectural modeling using Patterns and Reference Architectures (RAs) applies abstraction, which helps to reduce the complexity of NFV systems by defining their components at the highest level. The literature lacks attempts to apply this approach to analyze NFV threats. We started by enumerating the possible threats that may jeopardize the NFV system. Then, we performed an analysis of the threats to identify the possible misuses that could be performed from them. These threats are realized in the form of misuse patterns that show how an attack is performed from the point of view of attackers. Some of the most important threats are privilege escalation, virtual machine escape, and distributed denial-of-service. We used a reference architecture of NFV to determine where to add security mechanisms in order to mitigate the identified threats. This produces our ultimate goal, which is building a security reference architecture for NFV.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013435
- Subject Headings
- Computer network architectures--Safety measures, Virtual computer systems, Computer networks, Modeling, Computer
- Format
- Document (PDF)
- Title
- Misuse Patterns for the SSL/TLS Protocol.
- Creator
- Alkazimi, Ali, Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- SSL/TLS is the main protocol used to provide a secure data connection between a client and a server. The main concern in using this protocol is to prevent the secure connection from being breached. Computer systems and their applications are becoming more complex, and keeping these connections secure between all the connected components is a challenge. To avoid new security flaws and protocol connection weaknesses, the SSL/TLS protocol keeps releasing newer versions after security bugs and vulnerabilities are discovered in its previous versions. We have described some of the common security flaws in the SSL/TLS protocol by identifying them in the literature and then by analyzing the activities from each of their use cases to find any possible threats. These threats are realized in the form of misuse cases to understand how an attack happens from the point of view of the attacker. This approach implies the development of some security patterns which will be added as a reference for designing secure systems using the SSL/TLS protocol. We finally evaluate its security level by using misuse patterns and considering the threat coverage of the models.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004873
- Subject Headings
- Computer networks--Security measures, Computer network protocols, Computer software--Development, Computer architecture
- Format
- Document (PDF)
- Title
- Analysis of a cluster-based architecture for hypercube multicomputers.
- Creator
- Obeng, Morrison Stephen., Florida Atlantic University, Mahgoub, Imad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation, we propose and analyze a cluster-based hypercube architecture in which each node of the hypercube is furnished with a cluster of n processors connected through a small crossbar switch to n memory modules. Topological analysis of the cluster-based hypercube architecture shows that it reduces the complexity of the basic hypercube architecture by reducing the diameter, the degree of a node, and the number of links in the hypercube. The proposed architecture uses the higher processing power furnished by the cluster of execution processors in each node to address the needs of computation-intensive parallel application programs. It provides a smaller-dimension hypercube with the same number of execution processors as a higher-dimension conventional hypercube architecture, and the scheme can be extended to meshes and other architectures. Mathematical analysis of the parallel simplex and parallel Gaussian elimination algorithms executing on the cluster-based hypercube shows the order of complexity of executing an n x n matrix problem to be O(n^2) for the parallel simplex algorithm and O(n^3) for the parallel Gaussian elimination algorithm. The timing analysis derived from these results indicates that, for the same number of processors in the cluster-based hypercube system as in the conventional hypercube system, the computation-to-communication ratio of the cluster-based hypercube executing a matrix problem with the parallel simplex algorithm increases when the number of nodes of the cluster-based hypercube is decreased. Self-driven simulations were developed to run parallel simplex and parallel Gaussian elimination algorithms on the proposed cluster-based hypercube architecture and on the Intel Personal Supercomputer (iPSC/860), which is a conventional hypercube. The simulation results show a response-time performance improvement of up to 30% in favor of the cluster-based hypercube. We also observe that for increased link delays, the performance gap increases significantly in favor of the cluster-based hypercube architecture when both the cluster-based hypercube and the Intel iPSC/860 execute the same parallel simplex and Gaussian elimination algorithms. (A short topological-comparison sketch follows this record.)
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12435
- Subject Headings
- Computer architecture, Cluster analysis--Computer programs, Hypercube networks (Computer networks), Parallel computers
- Format
- Document (PDF)
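The topological reduction claimed above can be made concrete with standard hypercube formulas: a d-cube has 2^d nodes, node degree d, diameter d, and d*2^(d-1) links, so packing n processors per node lets the same processor count fit in a (d - log2 n)-cube. The sketch below works through made-up example sizes; the numbers are not from the dissertation.

```python
# Back-of-the-envelope sketch of the topological argument in the abstract:
# a conventional d-cube hosting P processors (one per node) is compared with a
# cluster-based hypercube hosting n processors per node, which therefore only
# needs dimension d - log2(n).  Standard hypercube formulas; example sizes only.
import math

def hypercube_stats(dim):
    nodes = 2 ** dim
    return {"dimension": dim, "degree": dim, "diameter": dim,
            "links": dim * nodes // 2}

P = 256                                   # total execution processors (example)
for n_per_cluster in (1, 4, 16):          # 1 == conventional hypercube
    dim = int(math.log2(P // n_per_cluster))
    print(f"{n_per_cluster:2d} processors/node ->", hypercube_stats(dim))
```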
- Title
- Dual Bus R-Net: A new local/metropolitan area network.
- Creator
- Chauhan, Sanjeev Birbal., Florida Atlantic University, Ilyas, Mohammad
- Abstract/Description
- In this thesis we have proposed and analyzed a new architecture for high-speed fiber optic LANs/MANs, called the Dual Bus R-Net. The scheme is based on a slotted unidirectional dual-bus structure and uses a reservation mechanism to generate slotted frames on each bus. Frames consist of a reservation slot and one or more information slots. Stations reserve slots by transmitting reservation requests on the bus carrying information in the opposite direction. The scheme has the advantages of superior channel utilization, bounded delay, fair access for all stations, dynamic bandwidth allocation to network users, and implementation simplicity. Extensive simulations have been carried out to verify the characteristics of the network, and the results reinforce the initial claims of the advantages offered by Dual Bus R-Net. Performance analysis is presented in terms of network delay and channel utilization, and simulation results are compared with similar results for X-Net, R-Net, DQDB, and Expressnet.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15006
- Subject Headings
- Local area networks (Computer networks), Metropolitan area networks (Computer networks), Computer network architectures, Computer network protocols
- Format
- Document (PDF)
- Title
- Home automation and power conservation using ZigBee™.
- Creator
- DiBenedetto, Michael G., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The ZigBee standard is a wireless networking standard created and maintained by the ZigBee Alliance. The standard aims to provide an inexpensive, reliable, and efficient solution for wirelessly networked sensing and control products. The ZigBee Alliance is composed of over 300 member companies making use of the standard in different ways, ranging from energy management and efficiency, to RF remote controls, to health care products. Home automation is one market that greatly benefits from the use of ZigBee. With a focus on conserving home electricity use, a sample design is created to test a home automation network using Freescale's ZigBee platform. Multiple electrical designs are tested, utilizing sensors ranging from proximity sensors to current-sense transformers. Software is developed as well: a PC application that interacts with two ZigBee transceiver boards performing different home automation functions, such as air-conditioning and automatic lighting control.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/368609
- Subject Headings
- Sensor networks, Wireless LANs, Computer network architecture, Assistive computer technology
- Format
- Document (PDF)
- Title
- Distributed management of heterogeneous networks using hypermedia data repositories.
- Creator
- Anderson, James M., Florida Atlantic University, Ilyas, Mohammad, Hsu, Sam, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Current management architectures address portions of the problem of managing high speed distributed networks; however, they do not provide a scalable end-to-end solution that can be applied to both large LAN and WAN high speed distributed networks. A new management architecture, "Web Integrated Network for Distributed Management Including Logic" (WINDMIL), is proposed to address the challenges of managing complex heterogeneous networks. The three primary components of the system are the Network Management Server (NMS), the Network Element Web Server (NEWS), and the Operator's Logic and Processing Platform (OLAPP). The NMS stores the management functions used by both the NEWS and the user. The NEWS is a Web server which collects and processes network element data in order to support management functions. The OLAPP executes the management functions and interfaces with the user.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12502
- Subject Headings
- Computer network architectures, Internetworking (Telecommunication), Computer network protocols
- Format
- Document (PDF)
- Title
- A high-speed switching node architecture for ATM networks.
- Creator
- Syed, Majid Ali, Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research is aimed at the concept of a new switching node architecture for cell-switched Asynchronous Transfer Mode (ATM) networks. The proposed architecture has several distinguishing features when compared with the existing Banyan-based switching node. It has a cylindrical structure as opposed to the flat structure found in Banyans. The wrap-around property results in better link utilization than existing Banyans, besides reducing the average route length. Simplified digit-controlled routing is maintained, as in Banyans. The cylindrical nature of the architecture results in pipelined activity. Such an architecture tends to sort the traffic toward higher addresses, eliminating the need for a front-end preprocessing node. An approximate Markov chain analysis of the performance of the switching node with single input buffers is presented. The analysis is used to compute the time delay distribution of a cell leaving the node. A simulation tool is used to validate the analytical model; the simulation model is free from the critical assumptions that were necessary to develop the analytical model. It is shown that the analytical results closely match the simulation results, which confirms the validity of the simulation model. We then study the performance of the switching node for various input buffer sizes. Low throughput is observed with a single-input-buffered switching node; however, as the buffer size is increased from two to three, the throughput increases by more than 100%. No appreciable increase in node delay is noted when the buffer size is increased from two to three. We conclude that the optimum buffer size for large throughput is three, and the maximum throughput with an offered load of 0.9 and buffer size three is 0.75, because of the head-of-line blocking phenomenon. A technique to overcome this inherent problem is presented. Several delays which a cell faces are analyzed and summarized below. The wait delay with buffer sizes one and two is high; however, the wait delay is negligible when the buffer size is increased beyond two, because increasing the buffer size reduces head-of-line blocking and more cells can move forward. Node delay and switched delay are comparable when the buffer size is greater than two. The delay offered is within the threshold range noted for real-time traffic. The delay is clock-rate dependent and can be minimized by running the switching node at a higher clock speed. The worst delay noted for a switched cell, for a node operating at a clock rate of 200 MHz, is 0.5 microseconds.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/12309
- Subject Headings
- Computer networks, Computer architecture, Packet switching (Data transmission)
- Format
- Document (PDF)
- Title
- Hierarchical design, simulation and synthesis of a RISC processor using computer-aided design tools.
- Creator
- Freytag, Glenn A., Florida Atlantic University, Marcovitz, Alan B.
- Abstract/Description
- The techniques employed in integrated circuit (IC) design have advanced significantly in the past decade. Design automation tools now offer hardware description languages (HDLs) for modeling and testing new designs, and some tools can even synthesize an IC from a model written in an HDL. Such design tools promise to greatly facilitate the development of new IC designs. They also make it possible for engineering students to learn advanced techniques of IC design and computer architecture in a classroom setting. Two examples of such state-of-the-art design tools are Design Framework and Epoch. In this work, we present a hierarchical design for a reduced-instruction-set computer (RISC) processor, which we implemented using Design Framework and Epoch. The processor is based on the DLX architecture proposed by Hennessy and Patterson. We implemented our design according to a top-down methodology, which worked very well in these design tools.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15220
- Subject Headings
- RISC microprocessors, Computer architecture, Computer-aided design
- Format
- Document (PDF)