- Title
- Feature selection techniques and applications in bioinformatics.
- Creator
- Dittman, David, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Possibly the largest problem in bioinformatics is the sheer volume of data that must be sifted through to find useful information. This thesis shows that feature selection (a method of removing irrelevant and redundant information from the dataset) is a useful and even necessary technique for these large datasets. The thesis also presents a new method of comparing classes to each other through their features, and provides a thorough analysis of various feature selection techniques and classifiers in different bioinformatics scenarios. Overall, this thesis shows the importance of feature selection in bioinformatics.
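As a minimal illustration of the filter-style feature selection this abstract describes, the sketch below ranks features by a simple signal-to-noise score and keeps the best ones. The scoring function and toy data are assumptions for illustration, not the thesis's actual method.

```python
# Hypothetical sketch of filter-based feature ranking, a common
# bioinformatics feature selection approach (not the thesis's exact method).

def snr_score(values, labels):
    """Signal-to-noise score of one feature for a two-class problem."""
    a = [v for v, y in zip(values, labels) if y == 0]
    b = [v for v, y in zip(values, labels) if y == 1]
    mean = lambda xs: sum(xs) / len(xs)
    std = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / len(xs)) ** 0.5
    return abs(mean(a) - mean(b)) / (std(a) + std(b) + 1e-12)

def rank_features(dataset, labels, keep):
    """Score every feature (column) and keep the `keep` best ones."""
    n_features = len(dataset[0])
    scores = [snr_score([row[j] for row in dataset], labels)
              for j in range(n_features)]
    ranked = sorted(range(n_features), key=lambda j: scores[j], reverse=True)
    return ranked[:keep]

# Toy dataset: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 4.9], [0.9, 5.1], [1.0, 5.0]]
y = [0, 0, 1, 1]
print(rank_features(X, y, keep=1))  # feature 0 should rank first: [0]
```

Filter methods like this score each feature independently of any classifier, which is what makes them tractable on very wide bioinformatics datasets.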
- Date Issued
- 2011
- PURL
- http://purl.flvc.org/FAU/3175016
- Subject Headings
- Bioinformatics, Data mining, Technological innovations, Computational biology, Combinatorial group theory, Filters (Mathematics), Ranking and selection (Statistics)
- Format
- Document (PDF)
- Title
- Software reliability engineering with genetic programming.
- Creator
- Liu, Yi., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software reliability engineering plays a vital role in managing and controlling software quality. As an important method of software reliability engineering, software quality estimation modeling is useful in defining a cost-effective strategy to achieve a reliable software system. By predicting the faults in a software system, software quality models can identify high-risk modules, and these high-risk modules can then be targeted for reliability enhancements. Strictly speaking, software quality modeling not only aims at lowering the misclassification rate, but also takes into account the costs of different misclassifications and the available resources of a project. As a search-based algorithm, Genetic Programming (GP) can build a model without assuming its size, shape, or structure. It can flexibly tailor the fitness functions to the objectives chosen by the customers. Moreover, it can optimize several objectives simultaneously in the modeling process, yielding a set of multi-objective optimization solutions. This research focuses on building software quality estimation models using GP. Several GP-based models for predicting the class membership of each software module and for ranking the modules by a quality factor were proposed. The first model, which categorizes modules as fault-prone or not fault-prone, was designed around the distinguishing features of the software quality classification task and of GP. The second model provided quality-based ranking information for fault-prone modules. A decision-tree-based software classification model was also proposed that considers accuracy and simplicity simultaneously. This technique provides a new multi-objective optimization algorithm for building decision trees for real-world engineering problems, in which several trade-off objectives usually have to be taken into account at the same time.
The fourth model was built to find multi-objective optimization solutions by considering both the expected cost of misclassification and the available resources. Also, a new goal-oriented technique for building module-order models was proposed that directly optimizes several goals chosen by project analysts. Two known issues of GP, bloating and overfitting, were also addressed in our research. Data collected from three industrial projects were used to validate the performance of the models. Results indicate that the proposed methods achieve useful performance, and some can simultaneously optimize several different objectives of a software project management team.
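The expected-cost-of-misclassification objective mentioned above can be illustrated with a small sketch; the cost values and module data below are invented for illustration and are not from the dissertation.

```python
# Hypothetical sketch of the expected-cost-of-misclassification idea used
# as a quality-model objective (costs and data assumed for illustration).

def expected_cost(predictions, actual, c_fp=1.0, c_fn=10.0):
    """Average cost over modules: a missed fault-prone module (false
    negative) is usually far costlier than a false alarm (false positive)."""
    fp = sum(1 for p, a in zip(predictions, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predictions, actual) if p == 0 and a == 1)
    return (c_fp * fp + c_fn * fn) / len(actual)

# Two candidate models on the same five modules (1 = fault-prone):
actual = [1, 0, 0, 1, 0]
model_a = [1, 1, 0, 0, 0]   # one false alarm, one missed fault
model_b = [1, 1, 1, 1, 0]   # two false alarms, no missed faults
print(expected_cost(model_a, actual))  # (1 + 10) / 5 = 2.2
print(expected_cost(model_b, actual))  # (2 + 0) / 5 = 0.4
```

Under asymmetric costs, the model that raises more false alarms but misses no faults scores better, which is why plain misclassification rate is an insufficient objective.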
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fau/fd/FADT12047
- Subject Headings
- Computer software--Quality control, Genetic programming (Computer science), Software engineering
- Format
- Document (PDF)
- Title
- Software quality classification using rule-based modeling.
- Creator
- Mao, Meihui., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software-based products are part of our daily life; they can be encountered in most of the systems we interact with. This reliance on software generates a strong need for better software reliability, reducing the cost associated with potential failures. Reliability in software systems may be achieved through additional testing, but extensive software testing is expensive and time consuming. Software quality classification models provide an early prediction of a module's quality. Boolean Discriminant Functions (BDF), Generalized Boolean Discriminant Functions (GBDF), and Rule-Based Modeling (RBM) can be used as classification models. This thesis demonstrates the ability of GBDF and RBM to correctly classify modules. The introduction of the AND operator in the GBDF model and the customizable outcomes for the rules in RBM enhanced the discriminating quality of GBDF and RBM as compared to BDF. Furthermore, they also yielded better balances between the misclassification rates.
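The Boolean-discriminant idea can be sketched as threshold rules over software metrics. The metric names and thresholds below are invented for illustration; only the OR-versus-AND structure reflects the BDF/GBDF distinction the abstract describes.

```python
# Hypothetical sketch of threshold-rule classification: an OR-only form
# mimics BDF; adding an AND clause mimics the GBDF extension. Metric
# names and thresholds are invented for illustration.

def bdf(module):
    # BDF-style: any single metric over its threshold flags the module.
    return module["loc"] > 500 or module["complexity"] > 20

def gbdf(module):
    # GBDF-style: an AND clause lets two weaker signals combine.
    return bdf(module) or (module["loc"] > 300 and module["churn"] > 10)

m = {"loc": 350, "complexity": 12, "churn": 15}
print(bdf(m), gbdf(m))  # False True: only the combined AND rule fires
```

The example shows why the AND operator improves discrimination: a module that trips no single threshold can still be flagged when two weak indicators co-occur.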
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12886
- Subject Headings
- Computer software--Quality control, Software measurement
- Format
- Document (PDF)
- Title
- Software-implemented fault tolerance in a hypercube multiprocessor.
- Creator
- Sahai, Shankar., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis analyzes how software fault tolerance can be implemented in a hypercube multiprocessor. For concreteness, we consider a multiprocessor using Intel 80286/386/486 processors. The Recovery Metaprogram approach (an architecture that isolates all fault tolerance functions in a special layer) has been used to implement application-transparent and application-specific fault tolerance techniques such as recovery blocks, N-version programming, and exceptions. A fault-tolerant routing algorithm is also described. While the details are specific to the 80286/386/486 processors, the results also apply to any other processor with similar features.
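The recovery-block technique named in this abstract is a classic pattern: run a primary routine, check the result with an acceptance test, and fall back to an alternate on failure. The sketch below is a language-neutral illustration of the pattern, not code from the thesis; the function names are assumptions.

```python
# Hedged sketch of the recovery-block pattern: try each alternate in order
# until one produces a result that passes the acceptance test.

def recovery_block(alternates, acceptance_test, *args):
    """Return the first alternate's result that passes the acceptance test."""
    for attempt in alternates:
        try:
            result = attempt(*args)
        except Exception:
            continue  # a raised exception also triggers the next alternate
        if acceptance_test(result):
            return result
    raise RuntimeError("all alternates failed the acceptance test")

# Primary divides; the alternate returns a safe default.
primary = lambda x, y: x / y
alternate = lambda x, y: 0.0
acceptable = lambda r: isinstance(r, float)
print(recovery_block([primary, alternate], acceptable, 1, 0))  # -> 0.0
```

The acceptance test is the crux of the technique: it must be cheap yet strong enough to catch a faulty primary before its result propagates.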
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/14633
- Subject Headings
- Hypercube networks (Computer networks), Intel 80x86 (Microprocessor)
- Format
- Document (PDF)
- Title
- Survey of design techniques for signal integrity.
- Creator
- Karnati, Raghuveer., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Signal integrity is a major bottleneck for deep submicron (DSM) designs. The term refers to a wide variety of problems, which leads to misconception. Signal integrity issues appear as delay or noise at the high level, but they boil down to resistance, capacitance, and inductance (RLC) at the circuit level. Several analysis and reduction techniques have been proposed for mitigating these effects. This work resolves the misconception by surveying the different problems that affect signal integrity, and can serve as a good reference for an integrated circuit designer. The objective is to analyze these modeling methods, reduction techniques, and tools, and to make recommendations that aid in developing a methodology for design closure with an emphasis on signal integrity. These recommendations would form a basis for developing a methodology to analyze interference effects at higher levels of abstraction.
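One standard first-order way the "RLC at the circuit level" view is made quantitative is the Elmore delay of an RC ladder; the sketch below illustrates it with invented segment values, as a generic signal-integrity estimate rather than anything specific to this survey.

```python
# Hedged sketch of Elmore delay for an RC interconnect ladder: each
# capacitor's contribution is weighted by the total resistance between
# it and the driver. Segment values are illustrative.

def elmore_delay(r_segments, c_segments):
    """First-order delay estimate of an RC ladder (seconds)."""
    delay = 0.0
    r_total = 0.0
    for r, c in zip(r_segments, c_segments):
        r_total += r          # resistance accumulated up to this node
        delay += r_total * c  # this node's capacitor charges through it
    return delay

# Three identical wire segments of 100 ohm and 10 fF each:
# 100*10fF + 200*10fF + 300*10fF = 6 ps
print(elmore_delay([100.0] * 3, [10e-15] * 3))
```

The quadratic growth of this sum with wire length is one reason techniques such as buffer insertion and wider ground spacing matter for design closure.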
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13065
- Subject Headings
- Integrated circuits--Design and construction, Signal processing, Electronic circuit design
- Format
- Document (PDF)
- Title
- Subspace detection and scale evolutionary eigendecomposition.
- Creator
- Kyperountas, Spyros C., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
A measure of the potential of a receiver for detection is detectability. Detectability is a function of the signal and the noise, and given any one of them the detectability is fixed. In addition, complete transforms of the signal and noise cannot change detectability. Throughout this work we show that "subspace methods" as defined here can improve detectability in specific subspaces, resulting in improved Receiver Operating Characteristic (ROC) curves and thus better detection in arbitrary noise environments. Our method is tested and verified on various signals and noises, both simulated and real. The optimum detection of signals in noise requires the computation of noise eigenvalues and eigenvectors (EVD). This process is neither trivial nor computationally cheap, especially for non-stationary noise, and can result in numerical instabilities when the covariance matrix is large. This work addresses this problem and provides solutions that take advantage of the subspace structure through plane rotations to improve on existing EVD algorithms, improving their convergence rate and reducing their computational expense for given thresholds.
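The plane rotations mentioned here are the building block of the classical Jacobi family of eigendecomposition methods. The sketch below is the textbook cyclic Jacobi sweep for a small symmetric matrix, shown only as background for the family of algorithms the dissertation improves on; it is not the dissertation's algorithm.

```python
import math

# Hedged sketch: cyclic Jacobi EVD by plane rotations. Each rotation
# zeroes one off-diagonal entry via an orthogonal similarity transform.

def jacobi_eigenvalues(A, sweeps=20):
    """Diagonalize a small symmetric matrix in place; return eigenvalues."""
    n = len(A)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue
                # Rotation angle that zeroes A[p][q] in the (p, q) plane.
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):   # rotate columns p and q
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
                for k in range(n):   # rotate rows p and q
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

vals = jacobi_eigenvalues([[2.0, 1.0], [1.0, 2.0]])
print(vals)  # eigenvalues of [[2, 1], [1, 2]] are 1 and 3
```

Because each rotation touches only two rows and columns, such methods parallelize well and can exploit subspace structure, which is the property the dissertation builds on.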
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11965
- Subject Headings
- Eigenvalues, Eigenvectors, Wavelets (Mathematics)
- Format
- Document (PDF)
- Title
- Subband coding of images using binomial QMF and vector quantization.
- Creator
- Rajamani, Kannan., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents an image coding system using binomial-QMF-based subband decomposition and vector quantization. An attempt was made to compress a still 256 x 256 image, represented at a resolution of 8 bits/pixel, to a bit rate of 0.5 bits/pixel using 16-channel subband decomposition with binomial QMFs and coding the subbands with a full-search LBG vector quantizer (VQ). Simulations were run on a Sun workstation, and the quality of the image was evaluated by computing the signal-to-noise ratio (SNR) between the original image and the reconstructed image.
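The SNR figure of merit used in the evaluation can be sketched directly: signal power of the original image over the power of the coding error, in decibels. The pixel values below are invented for illustration.

```python
import math

# Hedged sketch of the SNR quality measure described in the abstract.

def snr_db(original, reconstructed):
    """SNR in dB between two equal-size images given as flat pixel lists."""
    signal = sum(p * p for p in original)
    noise = sum((p - q) ** 2 for p, q in zip(original, reconstructed))
    return 10 * math.log10(signal / noise)

orig = [100, 120, 130, 110]
recon = [101, 119, 131, 109]   # reconstruction off by 1 per pixel
print(round(snr_db(orig, recon), 1))  # ~41.3 dB
```

A higher SNR means the reconstruction error is small relative to the image energy, which is how the 0.5 bits/pixel result would be judged.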
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15203
- Subject Headings
- Image compression--Digital techniques, Image processing--Digital techniques, Image transmission--Digital techniques, Coding theory, Vector fields
- Format
- Document (PDF)
- Title
- Studies on cross-talk in microstriplines.
- Creator
- Kopp, Markus Benjamin., Florida Atlantic University, Ungvichian, Vichate, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis is concerned with the analysis of the current distributions in coplanar parallel microstripline structures and the calculation of crosstalk in these structures, accomplished using a Finite Element Method approach. Two parallel strips, a right-angle bend junction, and a T junction are studied in order to gain insight into the current distributions and the primary causes of crosstalk. The control of crosstalk is also investigated with alternative geometries for microstrip designs. It is seen that the finite element method can yield results comparable with other accepted methods and with other perceivable physical models of the test structures. The present study also shows that crosstalk can be reduced by decreasing the trace-to-ground-plane separation.
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/14954
- Subject Headings
- Microwave wiring, Crosstalk, Finite element method, Strip transmission lines
- Format
- Document (PDF)
- Title
- Self-calibration of parallel-link mechanisms.
- Creator
- Liu, Lixin., Florida Atlantic University, Zhuang, Hanqi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Self-calibration is a desirable feature for an intelligent machine, such as a robot, that must function outside of controlled laboratory conditions, because variations in the kinematic model inevitably arise from imperfections in the manufacturing process and changes in environmental conditions. Self-calibration has the potential of (a) removing the dependence on external pose sensing, (b) producing high-accuracy measurement data over the entire workspace of the system at an extremely fast measurement rate, (c) being automated and completely non-invasive, (d) facilitating on-line accuracy compensation, and (e) being cost effective. This dissertation concentrates on the study of self-calibrating parallel-link mechanisms. A framework for self-calibration of a parallel-link mechanism is created, based on kinematic analysis and on the construction of measurement residuals from the information provided by redundant sensors embedded in the system. Forward and inverse kinematic measurement residuals of the mechanisms are proposed. To avoid estimating redundant kinematic parameters of the mechanism, the concept of relative residuals is introduced. Guidelines for the placement of sensors for self-calibration are presented, along with an approach to determining the number of independent kinematic parameters of the mechanism. Extensive simulation and experimental studies conducted on a parallel-link mechanism, the Stewart platform built in the Robotics Center at Florida Atlantic University, confirm the effectiveness of the proposed approach.
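The measurement-residual idea can be illustrated in miniature: a redundant sensor reading is compared against the value the kinematic model predicts, and the model parameter is adjusted to shrink the residual. This one-parameter toy is an assumption-laden illustration of the concept, nothing like the dissertation's actual multi-parameter kinematics.

```python
# Hedged one-parameter toy of residual-based calibration: estimate a
# single gain so the model matches the redundant-sensor readings.

def residual(param, commanded, measured):
    """Differences between sensor readings and model predictions."""
    return [m - param * c for c, m in zip(commanded, measured)]

def calibrate(commanded, measured):
    """Least-squares estimate of the single model parameter."""
    num = sum(c * m for c, m in zip(commanded, measured))
    den = sum(c * c for c in commanded)
    return num / den

cmd = [1.0, 2.0, 3.0]
meas = [1.02, 2.04, 3.06]   # readings consistent with a true gain of 1.02
p = calibrate(cmd, meas)
print(round(p, 3), residual(p, cmd, meas))
```

In the real mechanism, the residual is a vector function of many kinematic parameters and the estimation is iterative, but the driving principle is the same: calibration succeeds when the residuals vanish.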
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12539
- Subject Headings
- Manipulators (Mechanism)--Calibration, Robots--Control systems, Robotics
- Format
- Document (PDF)
- Title
- Selective texture characterization using Gabor filters.
- Creator
- Boutros, George., Florida Atlantic University, Sudhakar, Raghavan, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The objective of this dissertation is to develop effective algorithms for texture characterization, segmentation, and labeling that operate selectively to label image textures using the Gabor representation of signals. These representations are an analog of the spatial-frequency tuning characteristics of the visual cortex cells. Of all spatial/spectral signal representations, the Gabor function provides optimal joint resolution between the two domains. A discussion of spatial/spectral representations focuses on the Gabor function and the biological analog that exists between it and the simple cells of the striate cortex. A simulation generates examples of the use of the Gabor filter as a line detector with synthetic data. Simulations are then presented using Gabor filters for real texture characterization. The spatial and spectral attributes of the Gabor filters are selectively chosen, based on information from a scale-space image, to maximize the resolution of the characterization process. A variation of probabilistic relaxation that exploits the Gabor filter's spatial and spectral attributes is devised and used to force a consensus of the filter responses for texture characterization. We then segment the image using the concept of isolating low-energy states within an image. This iterative smoothing algorithm, operating as a Gabor filter post-processing stage, depends on a line-process discontinuity threshold, which is selected from the modes of the histogram of the relaxed Gabor filter responses, using probabilistic relaxation to detect the significant modes. We test our algorithm on simple synthetic and real textures, and then use a more complex natural texture image to test the entire algorithm. Limitations on textural resolution are noted, as well as limitations on the resolution of the image segmentation process.
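The Gabor function itself, a Gaussian envelope modulating a sinusoid, can be sketched directly. The parameter values below are illustrative, not those chosen in the dissertation.

```python
import math

# Hedged sketch of a 2-D Gabor kernel (real part): a circular Gaussian
# envelope times a cosine carrier oriented along angle theta.

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter as a size x size list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the sinusoid runs along angle theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
print(k[3][3])  # center tap: envelope 1, carrier cos(0) = 1
```

Convolving an image with a bank of such kernels at several wavelengths and orientations yields the filter responses on which the characterization and relaxation stages operate.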
- Date Issued
- 1993
- PURL
- http://purl.flvc.org/fcla/dt/12342
- Subject Headings
- Image processing--Digital techniques, Computer vision
- Format
- Document (PDF)
- Title
- Software design using case based reasoning.
- Creator
- Smith, Nancy T., Florida Atlantic University, Ganesan, Krishnamurthy, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The project created for this thesis is a Case Based Reasoning application to be used in high-level software design for Siemens' telecommunications software. Currently, design engineers search for existing subtasks in the software that are similar to subtasks in their new designs by reading documentation and consulting with other engineers. The prototype for Software Design Using Case Based Reasoning (SDUCBR) stores these subtasks in a case library and enables the design engineer to locate relevant subtasks via three different indexing techniques. This thesis addresses knowledge representation and indexing mechanisms appropriate for this application. SDUCBR is domain-dependent. Cases are stored in a relational hierarchy to facilitate analyzing the existing implementation from various perspectives. The indexing mechanisms were designed to give the software design engineer the flexibility to describe a problem differently based on the objective, level of granularity, and special characteristics of the subtask.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15198
- Subject Headings
- Computer software--Development, Case-based reasoning, Artificial intelligence--Data processing, System design
- Format
- Document (PDF)
- Title
- Static and dynamic load balance strategies for a class of applications.
- Creator
- Pan, Jianping., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Parallel/distributed systems offer tremendous processing capacity, but good load distributions are needed to take full advantage of it. We study the task graph partition problem for a parallel/distributed application that is modeled using data parallelism and implemented on a transputer system in a mesh configuration. Our approach uses domain partitioning to separate a given domain into a set of subdomains of equal size, each of which has at most four neighbors. We devise three methods to partition a given domain and compare them based on several criteria. The impact of the number of processors used is also investigated, based on several parameters, including processor speed, communication speed, and the amount of computation and communication per data point. We discuss the implementation of our approach using the existing features of the transputer system, and compare different versions of the application by running a simulation system.
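A minimal sketch of equal-size domain partitioning: splitting an n-row mesh into p horizontal strips gives each subdomain at most two neighbors (a split in both grid directions would give at most four, the bound stated above). The specific partition rule is an illustration, not necessarily one of the thesis's three methods.

```python
# Hedged sketch of strip-style domain partitioning into equal subdomains.

def strip_partition(n_rows, n_parts):
    """Return (first_row, last_row) for each of n_parts equal strips."""
    assert n_rows % n_parts == 0, "equal-size subdomains assumed"
    rows_per = n_rows // n_parts
    return [(i * rows_per, (i + 1) * rows_per - 1) for i in range(n_parts)]

print(strip_partition(12, 4))  # [(0, 2), (3, 5), (6, 8), (9, 11)]
```

Equal subdomain sizes balance computation per processor, while the small, fixed neighbor count bounds the communication each processor must perform per step.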
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15298
- Subject Headings
- Parallel processing (Electronic computers), Electronic data processing--Distributed processing, Computer capacity--Management, Transputers
- Format
- Document (PDF)
- Title
- Simulation-based performance evaluation of AODV routing protocol for ad hoc mobile wireless networks.
- Creator
- Suryaprasad, Deepa., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
An ad hoc wireless network is composed of mobile communication devices and is designed to provide communication capability of a temporary nature in an infrastructure-less environment. A routing protocol is necessary in ad hoc networks to ensure effective communication among nodes. This thesis presents a simulation-based study of the performance of the Ad hoc On-demand Distance Vector (AODV) routing protocol, one of the core routing protocols promoted by the Mobile Ad Hoc Networking (MANET) working group of the Internet Engineering Task Force. An event-advance simulation program was developed in C++ to simulate an ad hoc wireless network implementing the AODV protocol. The performance metrics evaluated were throughput, average delay, route acquisition time, and routing overhead. The network traffic was monitored in terms of the data packets created and successfully delivered within the simulation time. A discussion of the effect of different network parameters, such as the mobility of the nodes, the number of nodes in the network, the size of the network, and the data packet size, on the performance characteristics of the AODV protocol is also presented.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12989
- Subject Headings
- Computer network protocols, Wireless communication systems
- Format
- Document (PDF)
- Title
- SEMI-CUSTOM DESIGN OF A MICROPROGRAMMED TESTABLE REDUCED INSTRUCTION SET COMPUTER.
- Creator
- POENATEETAI, VIWAT., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The concept of a Reduced Instruction Set Computer (RISC) has evolved out of a desire to enhance the performance of a computer. We present here a detailed design of a Testable Reduced Instruction Set Computer (TRISC) that utilizes a multiple register set. Level Sensitive Scan Design (LSSD) is used to incorporate testability into our design. We first developed a functional description of the design using Digital Design Language (DDL), a hardware description language. We then entered the schematic of the design into Daisy's Logician V, a CAD/CAE workstation, using the NCR CMOSII Digital Standard Cell Library. Finally, we performed a unit-delay simulation on the hierarchical design database to verify the logical functioning of the system.
- Date Issued
- 1986
- PURL
- http://purl.flvc.org/fcla/dt/14284
- Subject Headings
- Computer architecture, Integrated circuits--Very large scale integration
- Format
- Document (PDF)
- Title
- SIMULATION STUDY OF TOKEN-PASSING BUS LOCAL AREA NETWORK.
- Creator
- RAO, SUHAS PALAKURTI., Florida Atlantic University, Ilyas, Mohammad, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis deals with the simulation of a conflict-free token-passing protocol for local area bus networks. The primary emphasis of this simulation study is to observe the effects of token holding time on the performance of the network. The token holding time is adjusted to account for three types of service disciplines: purely non-exhaustive, non-exhaustive, and exhaustive. Network performance under these three service disciplines is compared to determine which one gives relatively better performance. Besides throughput and delay, a more compact performance measure called "power", which is simply the ratio of throughput to delay, has also been used in this study. This study has shown that the token holding time has a significant effect on the performance of a local area network. Simulation results are presented in terms of throughput, delay, power, logical ring size, token circulation time, and efficiency/overhead versus offered load and token holding time. Some results are also presented as histograms.
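The "power" measure defined above is straightforward to compute; the throughput and delay numbers below are invented purely to show why it is a useful single figure of merit.

```python
# The "power" figure of merit from the study: throughput divided by delay.
# The two service-discipline operating points below are illustrative.

def power(throughput, mean_delay):
    return throughput / mean_delay

discipline_a = power(throughput=0.85, mean_delay=12.0)  # higher throughput
discipline_b = power(throughput=0.80, mean_delay=6.0)   # lower delay
print(discipline_a < discipline_b)  # True: lower delay wins here
```

Because power rewards throughput and penalizes delay simultaneously, a discipline with slightly lower throughput can still score higher when its delay is much smaller.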
- Date Issued
- 1985
- PURL
- http://purl.flvc.org/fcla/dt/14275
- Subject Headings
- Local area networks (Computer networks)
- Format
- Document (PDF)
- Title
- Routing in mobile ad-hoc wireless networks.
- Creator
- Li, Hailan., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis describes routing in mobile ad hoc wireless networks. Ad hoc networks lack a wired backbone to maintain routes as mobile hosts move and switch on or off. Therefore, the hosts in an ad hoc network must cooperate with each other to determine routes in a distributed manner. Routing based on a connected dominating set is a frequently used approach, in which the search space for a route is reduced to the nodes of a small connected dominating set subnetwork. We propose a simple and efficient distributed algorithm for calculating a connected dominating set in a given undirected ad hoc network, and then evaluate the proposed algorithm through simulation. We also discuss update/recalculation algorithms for the connected dominating set when the topology of the ad hoc network changes, and explore a possible extension using a hierarchical connected dominating set. Shortest path routing and dynamic source routing, both based on the connected dominating set subnetwork, are discussed.
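To give the flavor of such algorithms, here is a simple marking heuristic for building a connected dominating set, a well-known rule in this line of work (the thesis's exact algorithm may differ): a host marks itself if it has two neighbors that are not directly connected. The computation is shown centrally; in an ad hoc network each host would decide from its 2-hop neighborhood information.

```python
from itertools import combinations

def marking_process(adj):
    """Mark node v if some pair of v's neighbors are not
    directly connected; the marked nodes form a (connected)
    dominating set in a connected graph."""
    marked = set()
    for v, nbrs in adj.items():
        for u, w in combinations(nbrs, 2):
            if w not in adj[u]:
                marked.add(v)
                break
    return marked

# Path graph 0-1-2-3: only the interior nodes get marked.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(marking_process(adj)))  # [1, 2]
```

Route search can then be restricted to the marked subnetwork, which is what makes the dominating-set approach efficient for flat ad hoc topologies.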
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15695
- Subject Headings
- Mobile computing, Computer algorithms, Computer networks
- Format
- Document (PDF)
- Title
- SHINE: An integrated environment for software hardware co-design.
- Creator
- Jayadevappa, Suryaprasad., Florida Atlantic University, Shankar, Ravi, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The rapid evolution of silicon technology has brought exponential benefits in cost, scale of integration, power per function, size per function, and speed. The ability to place multiple functional "systems" on a single silicon chip shortens the development cycle while increasing product functionality, performance, and quality. With this increased complexity, the ability to model at a high level of abstraction becomes crucial. The lack of any complete system-on-chip design package with mature tools, models, and formalisms further slows and complicates development. This dissertation provides an integrated environment for hardware/software co-design at a high level of abstraction. We have developed a SystemC-based cockpit for this purpose. The cockpit, known as SHINE, consists of many components, including architectural components, operating system components, and application software components. The ability to represent and manipulate these components at high levels of abstraction is a major challenge. To address this challenge we have developed a set of principles; the important ones are synergy of separation of concerns, reusability, flexibility, ease of use, and support for multiple levels of abstraction. 'Synergy of separation of concerns' maintains transparency throughout the development of the integrated environment: one application is transparent to another application and, in turn, to the system architecture, and within the system architecture each module is designed independently of the others. Well-defined interfaces enable this transparency and make the components easier to integrate, which also enhances component reuse and overall design-environment modularity. 'Ease of use' shortens the user's learning curve. In SHINE, 'flexibility' is addressed via support for plug-and-play of components in the design environment. We provide results to show the implementation of these principles.
SHINE provides a cost-effective mechanism to develop a system co-design infrastructure. This leads to early system verification and performance estimation, resulting in a shorter time-to-market. The design flow developed is structured and easily extended. This is an exploratory study that is the result of a long-term industrial collaboration to enhance design productivity. Significantly more work lies ahead in developing an industry-standard tool and methodology.
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fau/fd/FADT12065
- Subject Headings
- Computer architecture, System design, Systems software, Multiprocessors
- Format
- Document (PDF)
- Title
- Three-group software quality classification modeling with TREEDISC algorithm.
- Creator
- Liu, Yongbin., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Maintaining superior quality and reliability of software systems is important. Software quality modeling detects fault-prone modules, enabling high quality to be achieved by focusing limited resources and budget on fewer modules. Tree-based modeling is a simple and effective method for predicting fault proneness in software systems. In this thesis, we introduce the TREEDISC modeling technique with a three-group classification rule to predict the quality of software modules. A general classification rule is applied and validated. The three impact parameters (group number, minimum leaf size, and significance level) are thoroughly evaluated. An optimization procedure is conducted and empirical results are presented. Conclusions are drawn about the impact factors as well as the robustness of the research. The TREEDISC modeling technique with three-group classification has proved to be an efficient and convincing method for software quality control.
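A three-group classification rule of the kind described can be sketched as follows; the cutoffs are hypothetical and the fault likelihood would come from a tree leaf, so this illustrates only the shape of the rule, not the thesis's exact TREEDISC procedure:

```python
def classify_module(fault_likelihood, low_cut=0.2, high_cut=0.6):
    """Illustrative three-group rule (hypothetical cutoffs, not
    from the thesis): map a tree leaf's predicted fault
    likelihood to a risk group, so limited inspection effort
    can concentrate on the high-risk group."""
    if fault_likelihood >= high_cut:
        return "high-risk"
    if fault_likelihood >= low_cut:
        return "medium-risk"
    return "low-risk"

for p in (0.05, 0.35, 0.80):
    print(p, classify_module(p))
```

Tuning the two cutoffs trades off how many modules land in each group against the cost of inspecting them, which mirrors the parameter evaluation described in the abstract.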
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13008
- Subject Headings
- Computer software--Quality control, Software measurement, Decision trees
- Format
- Document (PDF)
- Title
- Time-frequency estimation for cyclostationary signals.
- Creator
- Frederick, Thomas James., Florida Atlantic University, Erdol, Nurgun, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis provides detailed analysis and design techniques for Wigner-Ville spectrum (WVS) estimators for use with cyclostationary signals. The resulting class of estimators represents a newly defined subset of Cohen's class, characterized by a mixed discrete-time/continuous-frequency smoothing kernel. Although both time-variant and shift-invariant versions of the estimator are developed, emphasis is placed on the shift-invariant version, which is designed to estimate the WVS over an entire period from a single observation. Bias and variance expressions are derived for the new estimator and compared with those of the general estimator. For this development, we also derive mean and covariance expressions for the general quasi-stationary estimators, both for the autocorrelation estimator and for the WVS estimator. The concept of quasi-stationarity is extended to cyclostationary models, and we develop a novel measure of kernel smoothing and variance reduction termed the time-bandwidth area. This generalizes the time-bandwidth product to describe arbitrary kernel functions, even those not governed by the uncertainty principle (such as the newly proposed estimators). The properties of the estimator are examined in terms of constraints on the smoothing kernel. In sharp contrast to the conventional estimators based on the quasi-stationary assumption, the low-bias and low-variance constraints for the new class of estimators do not contradict one another. The relationship between time-dependent spectral estimation for nonstationary processes and classical Blackman-Tukey spectral estimation for stationary processes is developed next. Examples demonstrate the utility of the new estimator kernels. It is seen that in random or noisy environments it may be difficult to achieve a reasonable trade-off between variance reduction and bias using conventional estimators.
In the examples, any assumption of quasi-stationarity sufficient to produce a low-variance estimate would destroy many or all of the nonstationary features of the signal. However, since the signals are cyclostationary, we can employ the new class of estimators to achieve an excellent balance between bias and variance reduction.
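For reference, the Wigner-Ville spectrum of a random process x(t), which the estimators above target, is conventionally defined as the Fourier transform of the time-varying autocorrelation along the lag variable:

```latex
W_x(t, f) = \int_{-\infty}^{\infty}
  \mathbb{E}\!\left[ x\!\left(t + \tfrac{\tau}{2}\right)
                     x^{*}\!\left(t - \tfrac{\tau}{2}\right) \right]
  e^{-j 2 \pi f \tau}\, d\tau
```

For a cyclostationary process this quantity is periodic in t, which is what allows a shift-invariant estimator to recover the WVS over an entire period from a single observation.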
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/12537
- Subject Headings
- Signal processing, Wigner distribution
- Format
- Document (PDF)
- Title
- Transforming directed graphs into uncertain rules.
- Creator
- Lantigua, Jose Salvador., Florida Atlantic University, Hoffman, Frederick, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The intent of this thesis is to show how rule structures can be derived from influence diagrams and how these structures can be mapped to existing rule-based shell paradigms. We demonstrate this mapping with an existing shell having the Evidence (E) --> Hypothesis (H), Certainty Factor (CF) paradigm structure. Influence diagrams are graphical representations of hypothesis-to-evidence relationships, directed forms of Bayesian influence networks, which allow inference about both diagnostic and predictive (or causal) behavior based on uncertain evidence. We show how this can be implemented through a Probability (P) to CF mapping algorithm and a rule-set conflict resolution methodology. The thesis discusses applying the probabilistic semantics of Bayesian networks and decision theory to derive qualitative assertions about the likelihood of an occurrence, the sensitivity of a conclusion, and other indicators of usefulness. We illustrate this capability by adding a probability range function for the premise clause in our shell's rule structure.
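One commonly cited probability-to-CF mapping in the MYCIN tradition can be sketched as follows; the thesis's own P-to-CF algorithm may differ, so this only illustrates the idea of converting probabilities into signed certainty factors:

```python
def certainty_factor(p_h_given_e, p_h):
    """Illustrative P -> CF mapping (not necessarily the thesis's
    exact algorithm): evidence that raises P(H) above the prior
    yields a positive CF; evidence that lowers it, a negative CF."""
    if not 0 < p_h < 1:
        raise ValueError("prior P(H) must be strictly between 0 and 1")
    if p_h_given_e >= p_h:
        return (p_h_given_e - p_h) / (1 - p_h)
    return (p_h_given_e - p_h) / p_h

print(certainty_factor(0.9, 0.5))   # confirming evidence, CF near +0.8
print(certainty_factor(0.25, 0.5))  # disconfirming evidence, CF near -0.5
```

A mapping of this shape keeps CF in [-1, +1], which is what lets the resulting rules slot into an E --> H, CF shell paradigm like the one described above.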
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/14570
- Subject Headings
- Decision-making--Mathematical models, Probabilities, Expert systems (Computer science)
- Format
- Document (PDF)