Current Search: Computer software--Reliability
- Title
- Choosing software reliability models.
- Creator
- Woodcock, Timothy G., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
- One of the important problems software engineers face is determining which software reliability model should be used for a particular system. Some recent attempts to compare different models used complementary graphical and analytical techniques. These techniques require an excessive amount of time for plotting the data and running the analyses, and they remain rather subjective as to which model is best. A simpler technique is therefore needed, one that yields a less subjective measure of goodness of fit. The Akaike Information Criterion (AIC) is proposed as a new approach for selecting the best model. The performance of the AIC is measured by Monte Carlo simulation and by comparison to published data sets. The AIC chooses the correct model 95% of the time.
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/14561
- Subject Headings
- Computer software--Testing, Computer software--Reliability
- Format
- Document (PDF)
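As a worked illustration of the selection criterion described in the abstract above: the AIC scores a fitted model as 2k - 2 ln(L), where k is the number of parameters and L the maximized likelihood, and the model with the smallest score wins. A minimal Python sketch; the model names and log-likelihood values below are hypothetical, not the thesis's actual candidates or data:

```python
def aic(num_params: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * num_params - 2 * log_likelihood

# Hypothetical fitted log-likelihoods for two candidate reliability models.
candidates = {
    "model A (2 parameters)": aic(2, -142.7),  # 289.4
    "model B (3 parameters)": aic(3, -141.9),  # 289.8
}
best = min(candidates, key=candidates.get)
print(best, candidates[best])  # model A wins: extra parameter not worth it
```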
- Title
- Design and modeling of hybrid software fault-tolerant systems.
- Creator
- Zhang, Man-xia Maria., Florida Atlantic University, Wu, Jie, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Fault-tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, on the other hand, escalate the cost of software design and development. In this thesis, we study the reliability of hybrid fault-tolerant systems. Probability models based on fault trees are developed for the recovery block (RB) scheme, N-version programming (NVP), and hybrid schemes that combine RB and NVP. Two heuristic methods are developed to construct hybrid fault-tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault-tolerant systems.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14783
- Subject Headings
- Computer software--Reliability, Fault-tolerant computing, Algorithms
- Format
- Document (PDF)
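The thesis above builds its probability models from fault trees, which are not reproduced here. Under the textbook simplifying assumptions of independent versions, a perfect majority voter (NVP), and a perfect acceptance test (RB), the closed-form reliabilities can be sketched as follows; this is an illustration of the schemes, not the thesis's models:

```python
from math import comb

def nvp_reliability(p: float, n: int = 3) -> float:
    """N-version programming with a perfect majority voter: the system
    succeeds when a strict majority of the n independent versions succeed."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

def rb_reliability(version_ps: list[float]) -> float:
    """Recovery block with a perfect acceptance test: the system fails
    only if the primary and every alternate all fail."""
    prob_all_fail = 1.0
    for p in version_ps:
        prob_all_fail *= (1 - p)
    return 1 - prob_all_fail

print(nvp_reliability(0.95))        # ~0.99275 for 3 versions
print(rb_reliability([0.95, 0.9]))  # 0.995 for primary + one alternate
```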
- Title
- Reliability modeling of fault-tolerant software.
- Creator
- Leu, Shao-Wei., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- We have developed reliability models for a variety of fault-tolerant software constructs, including those based on two well-known methodologies, recovery block and N-version programming, and their variations. We also developed models for the conversation scheme, which provides fault tolerance for concurrent software, and for a newly proposed system architecture, the recovery metaprogram, which attempts to unify most existing fault-tolerance strategies. Each model is evaluated using either GSPN, a software package based on Generalized Stochastic Petri Nets, or Sharpe, an evaluation tool for Markov models. The numerical results are then analyzed and compared. Major results derived from this process include the identification of critical parameters for each model, comparisons of relative performance among different software constructs, the justification of a preliminary approach to modeling complex conversations, and the justification of the recovery metaprogram's reliability improvement.
- Date Issued
- 1990
- PURL
- http://purl.flvc.org/fcla/dt/12256
- Subject Headings
- Fault-tolerant computing, Computer software--Reliability
- Format
- Document (PDF)
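GSPN and Sharpe themselves are not reproduced here, but the simplest Markov building blocks such tools evaluate can be sketched directly. Assuming a two-state (up/down) model with constant failure rate lambda and repair rate mu:

```python
import math

def availability(failure_rate: float, repair_rate: float) -> float:
    """Steady-state availability of a two-state repairable Markov model:
    A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def reliability(failure_rate: float, t: float) -> float:
    """Reliability of a non-repairable component with a constant failure
    rate (a one-transition Markov chain): R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

print(availability(0.001, 0.1))  # ~0.9901
print(reliability(0.001, 100))   # ~0.9048
```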
- Title
- The design of reliable decentralized computer systems.
- Creator
- Wu, Jie., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- As applications of computer technology increase, so does the demand for computer systems in real-time and critical applications. Reliability and performance are fundamental design requirements for these applications. In this dissertation, we develop specific aspects of a fault-tolerant decentralized system architecture. This system can execute concurrent processes and is composed of processing elements that have only local memories, with point-to-point communication. A model using hierarchical layers describes this system. Fault tolerance techniques are discussed for the application, software, operating system, and hardware layers of the model. Scheduling of communicating tasks to increase performance is also addressed, and special problems such as the Byzantine Generals problem are considered. We have shown that, by combining reliability techniques on different layers and with consideration of system performance, one can provide a system with a very high level of reliability as well as performance.
- Date Issued
- 1989
- PURL
- http://purl.flvc.org/fcla/dt/12237
- Subject Headings
- Electronic digital computers--Reliability, Fault-tolerant computing, System design, Computer software--Reliability
- Format
- Document (PDF)
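One concrete bound behind the Byzantine Generals problem mentioned in the abstract above: with unauthenticated (oral) messages, the classic Lamport-Shostak-Pease result says agreement needs at least 3f + 1 processors to tolerate f arbitrary (Byzantine) faults. A tiny sketch of that arithmetic, offered only as background to the dissertation's topic:

```python
def min_processors(max_faulty: int) -> int:
    """Oral-message Byzantine agreement needs n >= 3f + 1 processors
    to tolerate f Byzantine faults (Lamport, Shostak, Pease)."""
    return 3 * max_faulty + 1

def max_tolerable_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

print(min_processors(1))       # 4 processors to tolerate 1 traitor
print(max_tolerable_faults(7)) # 7 processors tolerate up to 2
```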
- Title
- Rough Set-Based Software Quality Models and Quality of Data.
- Creator
- Bullard, Lofton A., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation we address two significant issues of concern: software quality modeling and data quality assessment. Software quality can be measured by software reliability, which is often measured in terms of the time between system failures. A failure is caused by a fault, which is a defect in the executable software product. The time between system failures depends on both the presence of faults and the usage pattern of the software. Finding faulty components in the development cycle of a software system can lead to a more reliable final system and will reduce development and maintenance costs. The issue of software quality is investigated by proposing a new approach, the rule-based classification model (RBCM), which uses rough set theory to generate decision rules to predict software quality. The new model minimizes over-fitting by balancing the Type I and Type II misclassification error rates. We also propose a model selection technique for rule-based models called rule-based model selection (RBMS). The proposed technique utilizes the complete and partial matching rule sets of candidate RBCMs to determine the model with the least amount of over-fitting. In the experiments that were performed, the RBCMs were effective at identifying faulty software modules, and the RBMS technique was able to identify RBCMs that minimized over-fitting. Good data quality is a critical component for building effective software quality models. We address the significance of data quality for the classification performance of learners by conducting a comprehensive comparative study. Several trends were observed in the experiments. Class and attribute noise had the greatest impact on the performance of learners when they occurred simultaneously in the data. Class noise had a significant impact on the performance of learners, while attribute noise had no impact when it occurred in less than 40% of the most significant independent attributes. Random Forest (RF100), a group of 100 decision trees, was the most accurate and robust learner in all the experiments with noisy data.
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012567
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering, Computer arithmetic
- Format
- Document (PDF)
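The rough-set rule generation in the dissertation above is not reproduced here, but the Type I / Type II balancing step it describes can be sketched: given per-module scores from any classifier, pick the cutoff whose two error rates are closest. All names and data below are hypothetical illustrations:

```python
def error_rates(y_true, y_pred):
    """Type I: not-fault-prone module flagged fault-prone (false positive rate).
    Type II: fault-prone module missed (false negative rate)."""
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    neg = max(sum(t == 0 for t in y_true), 1)
    pos = max(sum(t == 1 for t in y_true), 1)
    return fp / neg, fn / pos

def balanced_cutoff(y_true, scores):
    """Pick the threshold whose Type I and Type II rates are closest,
    mimicking the balancing criterion the abstract describes."""
    def gap(th):
        pred = [1 if s >= th else 0 for s in scores]
        t1, t2 = error_rates(y_true, pred)
        return abs(t1 - t2)
    return min((i / 100 for i in range(1, 100)), key=gap)

# Hypothetical labels and classifier scores for six modules.
print(balanced_cutoff([0, 0, 0, 1, 1, 1], [0.1, 0.3, 0.6, 0.4, 0.8, 0.9]))
```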
- Title
- Tree-based classification models for analyzing a very large software system.
- Creator
- Bullard, Lofton A., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
- Software systems that control military radar systems must be highly reliable: a fault can compromise safety and security, and even cause the death of military personnel. In this experiment, we identify fault-prone software modules in a subsystem of a military radar system called the Joint Surveillance Target Attack Radar System (JSTARS); an earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected for different iterations of an operational prototype of the subsystem over a period of approximately three years. We used these metrics to train a decision tree model and to fit a discriminant model, each classifying a module as fault-prone or not fault-prone. The algorithm used to generate the decision tree model was TREEDISC, developed by the SAS Institute. The decision tree model is compared to the discriminant model.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15315
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering
- Format
- Document (PDF)
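TREEDISC is a SAS procedure and is not reproduced here. As a hedged stand-in, the same tree-versus-discriminant comparison can be sketched with scikit-learn; the data below is synthetic, standing in for the JSTARS module metrics, which are not available:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for module-level product metrics and fault-prone labels.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
disc = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("decision tree accuracy:", tree.score(X_te, y_te))
print("discriminant accuracy: ", disc.score(X_te, y_te))
```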
- Title
- A selectively redundant file system.
- Creator
- Veradt, Joy L., Florida Atlantic University, Fernandez, Eduardo B., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Disk arrays have been proposed as a means of achieving high performance, reliability, and availability in computer systems. This study looks at the RAID (Redundant Array of Inexpensive Disks) architecture and its advantages and disadvantages for use in personal computer environments, specifically in terms of how data is protected (redundant information) and the tradeoff required to achieve that protection (sacrificed disk capacity). It then proposes an alternative for protecting a user's data in real time: modifying an operating system's file system to implement selective redundancy at the file level. This approach, based on modified RAIDs, is shown to be considerably more efficient in using the capacity of the available disks. It also gives users the flexibility to trade off space for reliability.
- Date Issued
- 1992
- PURL
- http://purl.flvc.org/fcla/dt/14844
- Subject Headings
- Computer files--Reliability, Systems software--Reliability, Databases--Reliability
- Format
- Document (PDF)
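A minimal user-level sketch of the selective-redundancy idea from the thesis above (the thesis modifies the file system itself; the policy, patterns, and paths below are hypothetical): every file is stored once on the primary disk, and only files the user marks as important are also mirrored.

```python
import shutil
from pathlib import Path

# Hypothetical user policy: only files matching these patterns are mirrored.
REDUNDANT_PATTERNS = ("*.tex", "*.db", "important-*")

def needs_mirror(path: Path) -> bool:
    return any(path.match(pat) for pat in REDUNDANT_PATTERNS)

def store(src: Path, primary: Path, mirror: Path) -> None:
    """Store every file once on the primary disk; duplicate only the
    selected files, trading disk capacity for reliability per file."""
    shutil.copy2(src, primary / src.name)
    if needs_mirror(src):
        shutil.copy2(src, mirror / src.name)
```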
- Title
- Software quality modeling and analysis with limited or without defect data.
- Creator
- Seliya, Naeem A., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The key to developing high-quality software is the measurement and modeling of software quality. In practice, software measurements are often used as a resource to model and comprehend the quality of software. This is accomplished by a software quality model that is trained using software metrics and defect data from similar, previously developed systems; the model is then applied to estimate the quality of the target software project. Such an approach assumes that defect data is available for all program modules in the training data. Various practical issues can cause defect data from previously developed systems to be limited or unavailable. This dissertation presents innovative and practical techniques for software quality analysis when defect data is limited or completely absent. The proposed techniques for analysis without defect data include an expert-based approach with unsupervised clustering and an expert-based approach with semi-supervised clustering. The proposed techniques for analysis with limited defect data include a semi-supervised classification approach with the Expectation-Maximization algorithm and an expert-based approach with semi-supervised clustering. Empirical case studies of software measurement datasets obtained from multiple NASA software projects are used to present and evaluate the different techniques. The empirical results demonstrate the attractiveness, benefit, and definite promise of the proposed techniques, which are invaluable to the software quality practitioner challenged by the absence or limited availability of defect data from previous software development experiences.
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/12151
- Subject Headings
- Software measurement, Computer software--Quality control, Computer software--Reliability--Mathematical models, Software engineering--Quality control
- Format
- Document (PDF)
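A hedged sketch of the semi-supervised classification idea with Expectation-Maximization described in the abstract above, using a naive Bayes learner and hard pseudo-labels as a simplification of the E-step; the dissertation's actual learner and the NASA data are not reproduced here:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def em_classify(X_lab, y_lab, X_unlab, iters=10):
    """Fit on the labeled modules, then alternate:
    E-step: label the unlabeled modules with the current model;
    M-step: refit on labeled plus pseudo-labeled data."""
    clf = GaussianNB().fit(X_lab, y_lab)
    for _ in range(iters):
        pseudo = clf.predict(X_unlab)            # E-step (hard labels)
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        clf = GaussianNB().fit(X_all, y_all)     # M-step
    return clf
```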