Current Search: Computer networks
- Title
- Probabilistic predictor-based routing in disruption-tolerant networks.
- Creator
- Yuan, Quan., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Disruption-Tolerant Networks (DTNs) are networks comprised of a set of wireless nodes; they experience unstable connectivity and frequent connection disruption because of the limitations of radio range, power, network density, device failure, and noise. DTNs are characterized by their lack of infrastructure, device limitations, and intermittent connectivity. Such characteristics make conventional wireless network routing protocols fail, as they are designed with the assumption that the network stays connected. Thus, routing in DTNs becomes a challenging problem, due to the temporal scheduling element in a dynamic topology. One of the solutions is prediction-based, where node mobility is estimated from a history of observations. Then, the decision of forwarding messages during data delivery can be made with that predicted information. Current prediction-based routing protocols can be divided into two sub-categories according to whether they are probability related: probabilistic and non-probabilistic. This dissertation focuses on probabilistic prediction-based (PPB) routing schemes in DTNs. We find that most of these protocols are designed for a specified topology or scenario, so almost every protocol has some drawbacks when applied to a different scenario. Because every scenario has its own particular features, there could hardly exist a universal protocol which can suit all DTN scenarios. Based on the above motivation, we investigate and divide the current DTN scenarios into three categories: Voronoi-based, landmark-based, and random moving DTNs. For each category, we design and implement a corresponding PPB routing protocol for either basic routing or a specified application, taking its unique features into consideration. Specifically, we introduce a Predict and Relay routing protocol for Voronoi-based DTNs, present a single-copy and a multi-copy PPB routing protocol for landmark-based DTNs, and propose DRIP, a dynamic Voronoi region-based publish/subscribe protocol, to adapt publish/subscribe systems to random moving DTNs. New concepts, approaches, and algorithms are introduced during our work. (An illustrative sketch of the prediction-based forwarding idea follows this record.)
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/359928
- Subject Headings
- Routers (Computer networks), Computer network protocols, Computer networks, Reliability, Computer algorithms, Wireless communication systems, Technological innovations
- Format
- Document (PDF)
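The prediction-based forwarding idea summarized in the preceding record (estimating node mobility from a history of encounters and forwarding a message only toward nodes with a better predicted chance of delivery) can be illustrated with a PRoPHET-style delivery-predictability table. This is a minimal generic sketch in Python, not the dissertation's Predict and Relay or DRIP protocols; the constants P_INIT, BETA, and GAMMA and the table layout are illustrative assumptions.

```python
# PRoPHET-style delivery predictability: a generic sketch, not the dissertation's protocols.
import time

P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # encounter boost, transitivity weight, aging base


class PredictorTable:
    def __init__(self):
        self.p = {}                       # destination -> delivery predictability in [0, 1]
        self.last_aged = time.time()

    def age(self):
        """Decay all predictabilities according to the time elapsed since the last update."""
        elapsed = time.time() - self.last_aged
        for dest in self.p:
            self.p[dest] *= GAMMA ** elapsed
        self.last_aged = time.time()

    def on_encounter(self, peer, peer_table):
        """Update predictabilities when meeting `peer`, which shares its own table."""
        self.age()
        self.p[peer] = self.p.get(peer, 0.0) + (1 - self.p.get(peer, 0.0)) * P_INIT
        for dest, p_peer in peer_table.items():      # transitive update through the peer
            if dest != peer:
                old = self.p.get(dest, 0.0)
                self.p[dest] = old + (1 - old) * self.p[peer] * p_peer * BETA

    def should_forward(self, dest, peer_p_for_dest):
        """Hand a message over only if the encountered node predicts better delivery."""
        return peer_p_for_dest > self.p.get(dest, 0.0)


# Hypothetical example: A meets B, which frequently meets C.
a, b = PredictorTable(), PredictorTable()
b.p["C"] = 0.9
a.on_encounter("B", b.p)
print(a.should_forward("C", b.p["C"]))   # True: B is the better custodian for messages to C
```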
- Title
- Home automation and power conservation using ZigBee®.
- Creator
- DiBenedetto, Michael G., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The ZigBee standard is a wireless networking standard created and maintained by the ZigBee Alliance. The standard aims to provide an inexpensive, reliable, and efficient solution for wirelessly networked sensing and control products. The ZigBee Alliance is composed of over 300 member companies making use of the standard in different ways, ranging from energy management and efficiency, to RF remote controls, to health care products. Home automation is one market that greatly benefits from the use of ZigBee. With a focus on conserving home electricity use, a sample design is created to test a home automation network using Freescale's ZigBee platform. Multiple electrical designs are tested utilizing sensors ranging from proximity sensors to current sense transformers. Software is fashioned as well, creating a PC application that interacts with two ZigBee transceiver boards performing different home automation functions such as air conditioner and automatic lighting control.
- Date Issued
- 2009
- PURL
- http://purl.flvc.org/FAU/368609
- Subject Headings
- Sensor networks, Wireless LANs, Computer network architecture, Assistive computer technology
- Format
- Document (PDF)
- Title
- Dynamic routing in grid-connected networks.
- Creator
- Jiang, Zhen., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This dissertation describes the effect of collection and distribution of fault information on routing capacity in grid-connected networks with faults occurring during the routing process. The grid-connected network, such as hypercubes, 2-D meshes, and 3-D meshes, is one of the simplest and least expensive structures to build a system using hundreds and even thousands of processors. In such a system, efficient communication among the processors is critical to performance. Hence, the routing of messages is an important issue that needs to be addressed. As the number of nodes in the networks increases, the chance of failure also increases. The complex nature of networks also makes them vulnerable to disturbances. Therefore, the ability to route messages efficiently in the presence of faulty components, especially those that might occur during the routing process, is becoming increasingly important. A central issue in designing a fault-tolerant routing algorithm is the way fault information is collected and used. The safety level model is a special coded fault information model in hypercubes which is more cost effective and more efficient than other information models. In this model, each node is associated with an integer, called the safety level, which is an approximated measure of the number and distribution of faulty nodes in the neighborhood. The safety level of each node in an n-dimensional hypercube can be easily calculated through (n - 1) rounds of information exchange among neighboring nodes. A k-safe node indicates the existence of at least one Hamming distance path (also called optimal path or minimal path) from this node to any node with Hamming distance k. We focus on routing capacity using safety levels in a dynamic system. In this case, the update of safety levels and the routing process proceed hand-in-hand. During the converging period, the routing process may experience extra hops based on unstable (inconsistent) information. Under the assumption that the total number of faults is less than n, we provide an upper bound on extra hops and show its accuracy and effectiveness. After that, we extend the results to meshes. Our simulation results show the effectiveness of our information model and the scalability of our fault-information-based routing in grid-connected networks with dynamic faults. Because our information is easy to update and maintain and optimality is still preserved, it is more cost effective than the others. (A sketch of one safety-level computation follows this record.)
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12002
- Subject Headings
- Fault-tolerant computing, Hypercube networks (Computer networks)
- Format
- Document (PDF)
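The safety level model described in the preceding record assigns each node an integer computed through (n - 1) rounds of exchanges with its neighbors. The sketch below follows one common formulation of that computation (a node's level is determined by whether its neighbors' sorted levels dominate the sequence 0, 1, ..., n-1); the exact rule and the static, global view taken here are assumptions and differ from the dissertation's dynamic setting, where faults appear while routing is in progress.

```python
# Iterative safety-level computation for an n-cube with faulty nodes.
# One common formulation, shown for illustration only.
def safety_levels(n, faulty):
    """Return a dict mapping node id (0 .. 2**n - 1) to its safety level (0 .. n)."""
    nodes = range(2 ** n)
    level = {v: 0 if v in faulty else n for v in nodes}
    for _ in range(n - 1):                           # (n - 1) rounds of neighbor exchange
        new_level = {}
        for v in nodes:
            if v in faulty:
                new_level[v] = 0
                continue
            neigh = sorted(level[v ^ (1 << d)] for d in range(n))   # neighbors' levels, ascending
            k = n
            for i, s in enumerate(neigh):
                if s < i:                            # sequence must dominate (0, 1, ..., n-1)
                    k = i
                    break
            new_level[v] = k
        level = new_level
    return level


# Example: a 3-cube with node 5 faulty; every fault-free node remains 3-safe.
print(safety_levels(3, {5}))
```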
- Title
- Distributed management of heterogeneous networks using hypermedia data repositories.
- Creator
- Anderson, James M., Florida Atlantic University, Ilyas, Mohammad, Hsu, Sam, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Current management architectures address portions of the problem of managing high speed distributed networks; however, they do not provide a scalable end-to-end solution that can be applied to both large LAN and WAN high speed distributed networks. A new management architecture, "Web Integrated Network for Distributed Management Including Logic" (WINDMIL), is proposed to address the challenges of managing complex heterogeneous networks. The three primary components of the system are the Network Management Server (NMS), the Network Element Web Server (NEWS), and the Operator's Logic and Processing Platform (OLAPP). The NMS stores the management functions used by both the NEWS and the user. The NEWS is a Web server which collects and processes network element data in order to support management functions. The OLAPP executes the management functions and interfaces with the user.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12502
- Subject Headings
- Computer network architectures, Internetworking (Telecommunication), Computer network protocols
- Format
- Document (PDF)
- Title
- Specialized communications processor for layered protocols.
- Creator
- Mandalia, Baiju Dhirajlal., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This dissertation describes an architecture for a special purpose communications protocol processor (CPP) that has been developed for open systems interconnection (OSI) type layered protocol processing. There exists a performance problem with the implementation and processing of communication protocols, and the problem can have an impact on the throughput of future network interfaces. This problem revolves around two issues: (i) communication processing bottlenecks that prevent full utilization of high speed transmission media; (ii) the mechanisms used in the implementation of communications functions. It is the objective of this work to address this problem and develop a first of its kind processor that is dedicated to protocol processing. First, trends in computer communications technology are discussed along with issues that influence throughput in front end controllers for network interfaces that support OSI. Network interface requirements and a survey of existing technology are presented, the state of the art of layered communication is evaluated, and specific parameters that contribute to the performance of communications processors are identified. Based on this evaluation, a new set of instructions is developed to support the necessary functions. Each component of the new architecture is explained with respect to the mechanism for implementation. The CPP contains special-purpose circuits dedicated to quick performance (e.g., single machine cycle execution) of functions needed to process header and frame information, functions which are repeatedly encountered in all protocol layers, and instructions designed to take advantage of these circuits. The header processing functions include priority branch determination functions, register bit reshaping (rearranging) functions, and instruction address processing functions. Frame processing functions include CRC (cyclic redundancy check) computations, bit insertion/deletion operations, and special character detection operations. Justifications for new techniques are provided and their advantages over existing technology are discussed. A hardware register transfer level model is developed to simulate the new architecture for path length computations. A performance queueing model is also developed to analyze the processor characteristics with various load parameters. Finally, a brief discussion indicates how such a processor would apply to future network interfaces along with possible trends. (A sample CRC routine follows this record as an illustration of one frame processing function.)
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/11933
- Subject Headings
- Computer network protocols, Computer networks, Data transmission systems
- Format
- Document (PDF)
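Among the frame processing functions listed in the preceding record (CRC computation, bit insertion/deletion, special character detection), the CRC is easy to show in software. Below is a plain bit-serial CRC-16/CCITT routine, the kind of per-frame computation the CPP performs in dedicated circuitry; the choice of polynomial and initial value here is an assumption, not taken from the dissertation.

```python
# Bit-serial CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF).
# Software illustration of one frame processing function; parameters are assumed.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


print(hex(crc16_ccitt(b"123456789")))   # 0x29b1, the standard check value for this variant
```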
- Title
- The balanced hypercube: A versatile cube-based multicomputer system.
- Creator
- Huang, Ke., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- We propose the balanced hypercube (BH), which is a variant of the standard hypercube (Q), as a multicomputer topological structure. An n-dimensional balanced hypercube BHn has the same desirable topological properties as the 2n-dimensional standard hypercube Q2n, such as size (2^2n nodes and n2^2n edges), regularity and symmetry, connectivity (2n node-disjoint paths between any pair of nodes), and diameter (2n when n = 1 or n is even). Moreover, BHn has a smaller diameter (2n-1) than Q2n's (2n) when n is odd and greater than 1. In addition, BHn is load balanced, i.e., for every node v of BHn, there exists another node v', called v's matching node, such that v and v' share the same adjacent node set. Therefore, BHn has a desirable fault tolerance feature: when a node v fails, we can simply shift the job execution on v to its matching node v' and the communication pattern between jobs remains the same. In this dissertation, we study the topological properties of BHn and explore its fault tolerance feature. Other design issues are considered, such as communication primitives, the capability of simulating other multicomputer systems through graph embedding, resource placement, and VLSI/WSI layout. Finally, the use of BHn is illustrated by an application. (A sketch of the matching-node job remapping follows this record.)
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12519
- Subject Headings
- Hypercube networks (Computer networks), Fault-tolerant computing
- Format
- Document (PDF)
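The fault-tolerance feature described in the preceding record (every node v has a matching node v' with an identical neighbor set, so a failed node's job can simply be shifted to its matching node) can be sketched as below. The adjacency structure of BHn itself is abstracted away; the `matching` map and the small example are assumptions used only for illustration.

```python
# Job remapping via matching nodes: if a node fails, move its job to the matching
# node, which shares the same neighbor set, so communication patterns are unchanged.
# The BH_n topology is abstracted; `matching` is assumed to be given.
def remap_jobs(job_placement, failed_nodes, matching):
    """job_placement: job -> node; matching: node -> its matching node."""
    new_placement = {}
    for job, node in job_placement.items():
        if node in failed_nodes:
            partner = matching[node]
            if partner in failed_nodes:
                raise RuntimeError(f"node {node} and its matching node {partner} both failed")
            new_placement[job] = partner
        else:
            new_placement[job] = node
    return new_placement


# Hypothetical example with two matched pairs (0, 2) and (1, 3).
matching = {0: 2, 2: 0, 1: 3, 3: 1}
print(remap_jobs({"A": 0, "B": 1}, failed_nodes={0}, matching=matching))   # job A moves to node 2
```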
- Title
- Load balancing on multiprocessor systems.
- Creator
- More, Hemant B., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The utilization of a multiprocessor system is enhanced when idle time of processors is reduced. Allocation of processes from overloaded processors to idle processors can balance the load on multiprocessor systems and increase system throughput by reducing the process execution time. This thesis presents a study of parameters, issues and existing algorithms related to load balancing. The performance of load balancing on hypercubes using three new algorithms is explored and analyzed. A new algorithm to balance load on hypercubes in the presence of link faults is presented and analyzed here. Another algorithm to balance load on hypercube systems containing faulty processors is proposed and studied. The applicability of load balancing to real life problems is demonstrated by showing that the execution of a branch-and-bound problem on hypercubes speeds up when load balancing is used. (A textbook load-balancing sketch follows this record.)
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14957
- Subject Headings
- Hypercube networks (Computer networks), Multiprocessors, Fault-tolerant computing
- Format
- Document (PDF)
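As background for the load balancing problem studied in the preceding record, the sketch below shows the classic dimension-exchange scheme on a hypercube: in round d, each node averages its load with its neighbor across dimension d, reaching a balanced state after n rounds. This is a standard textbook method given only to illustrate the setting; it is not claimed to be one of the three algorithms proposed in the thesis.

```python
# Dimension-exchange load balancing on an n-cube (textbook method, for illustration).
def dimension_exchange(load, n):
    """load: list of 2**n work amounts indexed by node id; balanced in place over n rounds."""
    for d in range(n):
        for v in range(2 ** n):
            u = v ^ (1 << d)                 # neighbor across dimension d
            if v < u:                        # handle each pair once per round
                avg = (load[v] + load[u]) / 2
                load[v] = load[u] = avg
    return load


print(dimension_exchange([8, 0, 4, 0, 0, 0, 2, 2], 3))   # every node ends up with 2.0
```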
- Title
- MULTI-LEVEL FLOW CONTROL IN COMPUTER NETWORKS (SIMULATION).
- Creator
- ROY, KAUSHIK., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Several different flow control methods for computer communications networks have been analyzed in this thesis. Isarithmic, end-to-end, and link level flow control have been dealt with in detail. A simulation program has been written to compare the performance of these flow control techniques. It has been found that the system performance of computer networks degrades at high values of data traffic when no flow control is implemented. This is due to congestion and buffer overflow problems. One flow control technique implemented alone does not generally produce optimal results. A judicious mixture of several flow control techniques should be implemented together so as to get the desired throughput and delay performance in a computer communication network. The simulation program is quite versatile. It uses the event-scheduling method for faster computation. Several simulation results have been presented to illustrate the effects of multi-level flow control on delay and throughput performance of a network under different traffic load conditions. (A minimal event-scheduling skeleton follows this record.)
- Date Issued
- 1985
- PURL
- http://purl.flvc.org/fcla/dt/14286
- Subject Headings
- Computer networks--Testing, Computer networks--Evaluation
- Format
- Document (PDF)
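The preceding record mentions that its simulator uses the event-scheduling method. The fragment below is a minimal single-queue example of that method: pending events sit in a time-ordered heap and the simulation clock jumps from one event to the next. It is only a skeleton; the actual multi-level flow control model (isarithmic, end-to-end, and link level) is far more elaborate, and the traffic parameters here are assumptions.

```python
# Minimal event-scheduled simulation of a single M/M/1 queue (illustrative skeleton only).
import heapq
import random


def simulate(arrival_rate, service_rate, horizon):
    events = [(random.expovariate(arrival_rate), "arrival")]   # (time, kind), ordered by time
    delays, busy_until = [], 0.0
    while events:
        clock, kind = heapq.heappop(events)
        if clock > horizon:
            break
        if kind == "arrival":
            start = max(clock, busy_until)                     # wait if the server is busy
            busy_until = start + random.expovariate(service_rate)
            delays.append(busy_until - clock)                  # total time in system
            heapq.heappush(events, (clock + random.expovariate(arrival_rate), "arrival"))
    return sum(delays) / len(delays)


print(simulate(arrival_rate=0.8, service_rate=1.0, horizon=10_000))   # close to 1/(1 - 0.8) = 5
```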
- Title
- A reduced overhead routing protocol for ad hoc wireless networks.
- Creator
- Ibriq, Jamil, Florida Atlantic University, Wu, Jie
- Abstract/Description
- This document describes the Reduced Overhead Routing (ROR) protocol for use in mobile wireless ad hoc networks. The protocol is highly bandwidth-efficient. The protocol has three distinguishing features. First, it maintains multiple paths for each destination. Second, routing table updates are localized; updates are initiated only when the update table is not empty and the update frequency has not exceeded a specified rate. Third, ROR uses a threshold routing technique: it allows an intermediate node to deliver a data packet via a longer sub-optimal route that is within the distance threshold. To prevent frequent updates, at most one update is initiated every predefined period of time. A node transmits each update with a propagation radius that is determined on the basis of the node's network region using a novel probabilistic technique. Threshold routing and localized probabilistic updates greatly reduce routing overhead and network congestion and improve bandwidth efficiency. (An illustrative threshold-routing sketch follows this record.)
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12703
- Subject Headings
- Mobile computing, Computer networks, Wireless communication systems
- Format
- Document (PDF)
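The threshold routing idea in the preceding record (an intermediate node may keep forwarding over a somewhat longer sub-optimal path instead of triggering a routing update) might look roughly like the next-hop selection below. The route table layout and the 1.5x bound are purely illustrative assumptions; ROR's actual rules are not reproduced here.

```python
# Illustrative next-hop selection with a path-length threshold (parameters are assumed).
THRESHOLD = 1.5


def pick_next_hop(routes, dest):
    """routes: dest -> list of (next_hop, hop_count, up) entries, one per known path."""
    alive = [(nh, hops) for nh, hops, up in routes.get(dest, []) if up]
    if not alive:
        return None                                   # no usable path: a routing update is needed
    best = min(hops for _, hops in alive)
    # Any live path within the threshold of the best one is acceptable, which lets the
    # node keep delivering packets without initiating a routing update.
    acceptable = [nh for nh, hops in alive if hops <= THRESHOLD * best]
    return acceptable[0]


routes = {"D": [("B", 3, True), ("C", 4, True), ("E", 7, True)]}
print(pick_next_hop(routes, "D"))   # "B"; "C" (4 <= 1.5 * 3) would also be acceptable
```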
- Title
- Analysis of a new protocol for Bluetooth network formation.
- Creator
- Madhusoodanan, Vishakh., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- This research work deals with analyzing the performance of a new protocol for distributed network formation and re-formation in Bluetooth ad-hoc networks. The Bluetooth network may be a piconet or a scatternet consisting of a number of piconets. One of the salient features of this protocol is that there is no leader election process involved. Multiple simultaneous connections can be established between masters and slaves. The delay in formation of the network is analyzed with respect to the number of devices in the network. Also, the network diameter, the number of piconets formed, and the impact of device failures are analyzed. Simulation results show that the protocol handles the failures of the devices with minimum delay except at the time of failure of a master. The scope for further improvement of this protocol is also discussed.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13036
- Subject Headings
- Bluetooth technology, Computer network protocols
- Format
- Document (PDF)
- Title
- PERFORMANCE EVALUATION OF THE EFFECTS OF MESSAGE SEGMENTATION IN TANDEM NODE COMPUTER NETWORKS.
- Creator
- LAMANNA, PETER JOHN., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Analytical and simulation performance evaluation results are presented on the effects of message segmentation and the validity of the Independence Assumption when applied to analytically modeling tandem node computer networks. Simulation results indicate that increasing the message segmentation threshold will increase the network traffic intensity and consequently the total packet delay. Simulation and analytical results for total packet delay compared well only at low traffic intensities. At higher traffic intensities the discrepancy is due to the Independence Assumption, since it does not account for the increasing dependency of interarrival times and service times as packets are made to wait at the nodes. (The standard delay formula used with this assumption is quoted after this record.)
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14327
- Subject Headings
- Computer networks, Data transmission systems
- Format
- Document (PDF)
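The Independence Assumption discussed in the preceding record is usually invoked so that each node in the tandem can be modeled as an independent M/M/1 queue despite the dependence between interarrival and service times. Under that approximation (often attributed to Kleinrock), the mean end-to-end packet delay takes the standard form below, where λ_i is the packet flow on link i, C_i its capacity, 1/μ the mean packet length, and γ the total external arrival rate. This is the textbook expression, quoted only as background; it is not a result taken from the thesis.

```latex
% Mean packet delay under the Kleinrock Independence Assumption (textbook form).
T \;=\; \frac{1}{\gamma}\sum_{i} \frac{\lambda_i}{\mu C_i - \lambda_i}
```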
- Title
- A communication protocol for acoustic ad-hoc networks of autonomous underwater vehicles.
- Creator
- Baud, Bertrand., Florida Atlantic University, An, Edgar
- Abstract/Description
- This thesis presents the design and implementation of an underwater network communication protocol. The goal is to enable several autonomous underwater vehicles (AUVs) to form a communication network and to exchange information during at-sea missions. The focus of this work is on the upper layers of the protocol: the Network and Transport layers. Routing is a critical issue since all the nodes forming the network are moving. A study and comparison of existing routing algorithms is presented. Two routing algorithms have been chosen and implemented in the network layer of the protocol: Flooding and Destination-Sequenced Distance-Vector routing. The protocol has been tested on several types of simulated missions. An analysis of the results is proposed for each mission. (The core DSDV update rule is sketched after this record.)
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12774
- Subject Headings
- Underwater acoustics, Submersibles, Computer networks
- Format
- Document (PDF)
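One of the two routing algorithms implemented in the protocol above is Destination-Sequenced Distance-Vector (DSDV) routing. Its core table-update rule is small enough to sketch: an advertised route replaces the stored entry only if it carries a newer destination sequence number, or the same sequence number with a smaller metric. The table layout below is an illustrative assumption, not the thesis's implementation.

```python
# Core DSDV update rule (illustrative table layout).
def dsdv_update(table, dest, next_hop, metric, seq_no):
    """table: dest -> (next_hop, metric, seq_no). Returns True if the entry changed."""
    current = table.get(dest)
    if current is None:
        table[dest] = (next_hop, metric, seq_no)
        return True
    _, cur_metric, cur_seq = current
    if seq_no > cur_seq or (seq_no == cur_seq and metric < cur_metric):
        table[dest] = (next_hop, metric, seq_no)     # fresher, or equally fresh but shorter
        return True
    return False


table = {}
dsdv_update(table, "AUV-2", next_hop="AUV-3", metric=2, seq_no=10)
dsdv_update(table, "AUV-2", next_hop="AUV-4", metric=3, seq_no=10)   # rejected: same seq, worse metric
print(table)   # {'AUV-2': ('AUV-3', 2, 10)}
```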
- Title
- GENERIC NETWORK EXECUTIVE.
- Creator
- SARMIENTO, JESUS LEOPOLDO., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- A Generic Network Executive (GNE) package is presented in this thesis. It encompasses the strategy and methodology to follow when implementing data communication software. GNE was designed for portability and high utilization of available resources (efficiency). It does not impose implementation constraints because it does not include features specific to any system (hardware or operating system). It uses a highly concurrent process model with a pipelined structure. It is not protocol dependent; rather, it is meant to be used to implement low level services for higher level communication protocols. It is intended to provide interprocess communication in distributed systems by coupling application programs with a general purpose packet delivery system, i.e., a datagram service.
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14321
- Subject Headings
- Computer networks, Data transmission systems
- Format
- Document (PDF)
- Title
- Design and performance analysis of FDDI and DQDB network architectures.
- Creator
- Khera, Harbinder Singh., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The primary emphasis of this thesis is to study the behavioral characteristics of Fiber Distributed Data Interface (FDDI) and Distributed Queue Dual Bus (DQDB) High Speed Local Area Networks (HSLANs). An FDDI architecture with passive interfaces is proposed to provide a reliable and efficient network topology. This network architecture outperforms the existing FDDI architecture with active interfaces in terms of small asynchronous packet delays and high asynchronous packet throughput. The design and implementation issues involved in the design of the hierarchical (multi-level) DQDB and FDDI networks are also presented. The hierarchical network architecture provides modularity and scalability with respect to speed and the number of users. Simulation models are developed for each of these network architectures to study their performance. Simulation results are presented in terms of medium access delay, throughput, and packet delays.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14976
- Subject Headings
- Fiber Distributed Data Interface (Computer network standard), Computer architecture, Local area networks (Computer networks)
- Format
- Document (PDF)
- Title
- Reputation-based system for encouraging cooperation of nodes in mobile ad hoc networks.
- Creator
- Anantvalee, Tiranuch., Florida Atlantic University, Wu, Jie
- Abstract/Description
- In a mobile ad hoc network, node cooperation in packet forwarding is required for the network to function properly. However, since nodes in this network usually have limited resources, some selfish nodes might intend not to forward packets in order to save resources for their own use. To discourage such behavior, we propose RMS, a reputation-based system, to detect selfish nodes and respond to them by showing that being cooperative will benefit them more than being selfish. We also detect, to some degree, nodes that forward only the necessary amount of packets to avoid being detected as selfish. We introduce the use of a state model to decide what to do and how to respond to nodes in each state. In addition, we introduce the use of a timing period to control when the reputation should be updated and to use as a timeout for each state. The simulation results show that RMS can identify selfish nodes and punish them accordingly, which provides selfish nodes with an incentive to behave more cooperatively. (An illustrative reputation update follows this record.)
- Date Issued
- 2006
- PURL
- http://purl.flvc.org/fcla/dt/13406
- Subject Headings
- Computer networks--Security measures, Wireless communication systems, Routers (Computer networks), Computer network architectures
- Format
- Document (PDF)
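The state model and timing period mentioned in the preceding record suggest an update of the following shape: once per timing period, a node's reputation is adjusted from the forwarding behavior its neighbors observed, and thresholds map the reputation to a state that determines how the node is treated. The state names, weights, and thresholds below are assumptions for illustration only, not RMS's actual parameters.

```python
# Periodic reputation update with a small state model (all parameters are assumed).
from enum import Enum


class State(Enum):
    COOPERATIVE = 1       # served normally
    SUSPICIOUS = 2        # watched more closely
    SELFISH = 3           # its own packets may be refused service


def end_of_period(reputation, forwarded, dropped):
    """Apply one timing period's observations and return (new reputation, state)."""
    reputation += forwarded - 2 * dropped            # dropping is penalized more heavily
    if reputation >= 0:
        state = State.COOPERATIVE
    elif reputation >= -10:
        state = State.SUSPICIOUS
    else:
        state = State.SELFISH
    return reputation, state


print(end_of_period(reputation=3, forwarded=1, dropped=5))   # (-6, State.SUSPICIOUS)
```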
- Title
- Design and analysis of key establishment protocols.
- Creator
- Neupane, Kashi., Charles E. Schmidt College of Science, Department of Mathematical Sciences
- Abstract/Description
- Consider a scenario where a server S shares a symmetric key kU with each user U. Building on a 2-party solution of Bohli et al., we describe an authenticated 3-party key establishment which remains secure if a computational Bilinear Diffie-Hellman problem is hard or the server is uncorrupted. If the BDH assumption holds during a protocol execution, but is invalidated later, entity authentication and integrity of the protocol are still guaranteed. Key establishment protocols based on hardness assumptions such as the discrete logarithm problem (DLP) and the integer factorization problem (IFP) are vulnerable to quantum computer attacks, whereas protocols based on other hardness assumptions, such as the conjugacy search problem and the decomposition search problem, can resist such attacks. The existing protocols based on the hardness assumptions which can resist quantum computer attacks are only passively secure. Compilers are used to convert a passively secure protocol to an actively secure protocol. Compilers involve some tools such as a signature scheme and a collision-resistant hash function. If there are only passively secure protocols but no signature scheme based on the same assumption, then the application of existing compilers requires the use of such tools based on different assumptions. But the introduction of new tools, based on different assumptions, makes the new actively secure protocol rely on more than one hardness assumption. We offer an approach to derive an actively secure two-party protocol from a passively secure two-party protocol without introducing further hardness assumptions. This serves as a useful formal tool to transform any basic algebraic method of public key cryptography into a cryptographic scheme applicable in the real world. In a recent preprint, Vivek et al. propose a compiler to transform a passively secure 3-party key establishment to a passively secure group key establishment. To achieve active security, they apply this compiler to Joux's protocol and apply a construction by Katz and Yung, resulting in a 3-round group key establishment. In this research, we show how Joux's protocol can be extended to an actively secure group key establishment with two rounds. The resulting solution is in the standard model, builds on a bilinear Diffie-Hellman assumption, and offers forward security as well as strong entity authentication. If strong entity authentication is not required, then one half of the participants does not have to send any message in the second round, which may be of interest for scenarios where communication efficiency is a main concern. (Joux's underlying one-round protocol is sketched after this record.)
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342239
- Subject Headings
- Computer networks, Security measures, Computer network protocols, Data encryption (Computer science), Public key infrastructure (Computer security)
- Format
- Document (PDF)
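The constructions described in the preceding record build on Joux's one-round tripartite key agreement and the bilinear Diffie-Hellman assumption. For reference, the unauthenticated core of Joux's protocol is shown below; G_1 = <g> is a group with a bilinear pairing e : G_1 x G_1 -> G_2, and a, b, c are the parties' secret exponents. The authenticated, two-round group extension proposed in the dissertation is not reproduced here.

```latex
% Joux's one-round tripartite key agreement (unauthenticated core).
A \;\to\; B, C:\; g^{a} \qquad
B \;\to\; A, C:\; g^{b} \qquad
C \;\to\; A, B:\; g^{c}
\qquad\Longrightarrow\qquad
K \;=\; e(g^{b}, g^{c})^{a} \;=\; e(g^{a}, g^{c})^{b} \;=\; e(g^{a}, g^{b})^{c} \;=\; e(g, g)^{abc}
```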
- Title
- TOWARDS A SECURITY REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION.
- Creator
- Alnaim, Abdulrahman K., Fernandez, Eduardo B., Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
- Network Function Virtualization (NFV) is an emerging technology that transforms legacy hardware-based network infrastructure into software-based virtualized networks. Instead of using dedicated hardware and network equipment, NFV relies on cloud and virtualization technologies to deliver network services to its users. These virtualized network services are considered better solutions than hardware-based network functions because their resources can be dynamically increased upon the consumer’s request. While their usefulness can’t be denied, they also have some security implications. In complex systems like NFV, threats can come from a variety of domains because its infrastructure contains both hardware and virtualized entities. Also, since it relies on software, the network service in NFV can be manipulated by external entities like third-party providers or consumers. This gives NFV a larger attack surface than the traditional network infrastructure. In addition to its own threats, NFV also inherits security threats from its underlying cloud infrastructure. Therefore, to design a secure NFV system and utilize its full potential, we must have a good understanding of its underlying architecture and its possible security threats. Up until now, only imprecise models of this architecture existed. We try to improve this situation by using architectural modeling to describe and analyze the threats to NFV. Architectural modeling using Patterns and Reference Architectures (RAs) applies abstraction, which helps to reduce the complexity of NFV systems by defining their components at their highest level. The literature lacks attempts to implement this approach to analyze NFV threats. We started by enumerating the possible threats that may jeopardize the NFV system. Then, we performed an analysis of the threats to identify the possible misuses that could be performed from them. These threats are realized in the form of misuse patterns that show how an attack is performed from the point of view of attackers. Some of the most important threats are privilege escalation, virtual machine escape, and distributed denial-of-service. We used a reference architecture of NFV to determine where to add security mechanisms in order to mitigate the identified threats. This produces our ultimate goal, which is building a security reference architecture for NFV.
- Date Issued
- 2020
- PURL
- http://purl.flvc.org/fau/fd/FA00013435
- Subject Headings
- Computer network architectures--Safety measures, Virtual computer systems, Computer networks, Modeling, Computer
- Format
- Document (PDF)
- Title
- Misuse Patterns for the SSL/TLS Protocol.
- Creator
- Alkazimi, Ali, Fernandez, Eduardo B., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- SSL/TLS is the main protocol used to provide a secure data connection between a client and a server. The main concern of using this protocol is to prevent the secure connection from being breached. Computer systems and their applications are becoming more complex, and keeping these secure connections between all the connected components is a challenge. To avoid new security flaws and protocol connection weaknesses, the SSL/TLS protocol keeps releasing newer versions after security bugs and vulnerabilities are discovered in its previous versions. We have described some of the common security flaws in the SSL/TLS protocol by identifying them in the literature and then by analyzing the activities from each of their use cases to find any possible threats. These threats are realized in the form of misuse cases to understand how an attack happens from the point of view of the attacker. This approach implies the development of some security patterns which will be added as a reference for designing secure systems using the SSL/TLS protocol. We finally evaluate its security level by using misuse patterns and considering the threat coverage of the models.
- Date Issued
- 2017
- PURL
- http://purl.flvc.org/fau/fd/FA00004873
- Subject Headings
- Computer networks--Security measures, Computer network protocols, Computer software--Development, Computer architecture
- Format
- Document (PDF)
- Title
- Analysis of a cluster-based architecture for hypercube multicomputers.
- Creator
- Obeng, Morrison Stephen., Florida Atlantic University, Mahgoub, Imad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation, we propose and analyze a cluster-based hypercube architecture in which each node of the hypercube is furnished with a cluster of n processors connected through a small crossbar switch with n memory modules. Topological analysis of the cluster-based hypercube architecture shows that it reduces the complexity of the basic hypercube architecture by reducing the diameter, the degree of a node, and the number of links in the hypercube. The proposed architecture uses the higher processing power furnished by the cluster of execution processors in each node to address the needs of computation-intensive parallel application programs. It provides a smaller dimension hypercube with the same number of execution processors as a higher dimension conventional hypercube architecture. This scheme can be extended to meshes and other architectures. Mathematical analysis of the parallel simplex and parallel Gaussian elimination algorithms executing on the cluster-based hypercube shows the order of complexity of executing an n x n matrix problem on the cluster-based hypercube using the parallel simplex algorithm to be O(n^2) and that of the parallel Gaussian elimination algorithm to be O(n^3). The timing analysis derived from the mathematical analysis results indicates that for the same number of processors in the cluster-based hypercube system as the conventional hypercube system, the computation to communication ratio of the cluster-based hypercube executing a matrix problem by the parallel simplex algorithm increases when the number of nodes of the cluster-based hypercube is decreased. Self-driven simulations were developed to run parallel simplex and parallel Gaussian elimination algorithms on the proposed cluster-based hypercube architecture and on the Intel Personal Supercomputer (iPSC/860), which is a conventional hypercube. The simulation results show a response time performance improvement of up to 30% in favor of the cluster-based hypercube. We also observe that for increased link delays, the performance gap increases significantly in favor of the cluster-based hypercube architecture when both the cluster-based hypercube and the Intel iPSC/860, a conventional hypercube, execute the same parallel simplex and Gaussian elimination algorithms.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12435
- Subject Headings
- Computer architecture, Cluster analysis--Computer programs, Hypercube networks (Computer networks), Parallel computers
- Format
- Document (PDF)
- Title
- Power based wide collision attacks on AES.
- Creator
- Ye, Xin, Eisenbarth, Thomas, Graduate College
- Date Issued
- 2011-04-08
- PURL
- http://purl.flvc.org/fcla/dt/3164806
- Subject Headings
- Computer networks, Data encryption (Computer science), Computer security
- Format
- Document (PDF)