Current Search: Computer software--Quality control
- Title
- An improved neural net-based approach for predicting software quality.
- Creator
- Guasti, Peter John., Florida Atlantic University, Khoshgoftaar, Taghi M., Pandya, Abhijit S.
- Abstract/Description
-
Accurately predicting the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics, which allow them to take action against emerging quality problems. Most often the predictive models are based upon multiple regression analysis, which becomes unstable when certain data assumptions are not met. Since neural networks require no data assumptions, they are more appropriate for predicting software quality. This study proposes an improved neural network architecture that significantly outperforms multiple regression and other neural network attempts at modeling software quality. This is demonstrated by applying this approach to several large commercial software systems. After developing neural network models, we develop regression models on the same data. We find that the neural network models surpass the regression models in terms of predictive quality on the data sets considered.
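The abstract does not reproduce the proposed architecture. Purely as an illustrative sketch, a minimal single-hidden-layer regression network of the general kind compared against multiple regression can be written in plain Python; the toy module data and metric scaling below are invented for the example, not taken from the study:

```python
import math, random

def train_net(data, hidden=3, lr=0.02, epochs=2000, seed=0):
    """Train a 2-input, one-hidden-layer regression net by per-sample gradient descent."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(hidden)]
            pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = pred - y
            # backpropagate the squared-error gradient through the tanh units
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                w1[j][0] -= lr * grad_h * x[0]
                w1[j][1] -= lr * grad_h * x[1]
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Invented toy data: (scaled lines of code, scaled complexity) -> observed faults
modules = [((0.5, 0.2), 1.0), ((2.0, 1.5), 6.0), ((1.0, 0.8), 3.0), ((3.0, 2.2), 9.0)]
predict = train_net(modules)
mse = sum((predict(x) - y) ** 2 for x, y in modules) / len(modules)
```

Because the hidden units are nonlinear, no distributional assumptions about the predictors are needed, which is the property the abstract highlights.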
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15134
- Subject Headings
- Neural networks (Computer science), Computer software--Development, Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- Rough Set-Based Software Quality Models and Quality of Data.
- Creator
- Bullard, Lofton A., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In this dissertation we address two significant issues of concern: software quality modeling and data quality assessment. Software quality can be measured by software reliability. Reliability is often measured in terms of the time between system failures. A failure is caused by a fault, which is a defect in the executable software product. The time between system failures depends both on the presence of faults and the usage pattern of the software. Finding faulty components in the development cycle of a software system can lead to a more reliable final system and will reduce development and maintenance costs. The issue of software quality is investigated by proposing a new approach, the rule-based classification model (RBCM), which uses rough set theory to generate decision rules to predict software quality. The new model minimizes over-fitting by balancing the Type I and Type II misclassification error rates. We also propose a model selection technique for rule-based models called rule-based model selection (RBMS). The proposed rule-based model selection technique utilizes the complete and partial matching rule sets of candidate RBCMs to determine the model with the least amount of over-fitting. In the experiments that were performed, the RBCMs were effective at identifying faulty software modules, and the RBMS technique was able to identify RBCMs that minimized over-fitting. Good data quality is a critical component for building effective software quality models. We address the significance of the quality of data on the classification performance of learners by conducting a comprehensive comparative study. Several trends were observed in the experiments. Class and attribute noise had the greatest impact on the performance of learners when they occurred simultaneously in the data. Class noise had a significant impact on the performance of learners, while attribute noise had no impact when it occurred in less than 40% of the most significant independent attributes. Random Forest (RF100), a group of 100 decision trees, was the most accurate and robust learner in all the experiments with noisy data.
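The dissertation's RBCM derives its decision rules from rough set theory, which is not reproduced here. As a loose illustration of the Type I/Type II balancing idea only, a hypothetical single-metric rule can be chosen so that the two error rates are as close as possible; the data and threshold search below are invented:

```python
def balanced_threshold(modules):
    """Pick the metric cutoff whose Type I and Type II error rates are closest.
    modules: list of (metric_value, is_fault_prone)."""
    candidates = sorted({m for m, _ in modules})
    n_neg = sum(1 for _, y in modules if not y)
    n_pos = sum(1 for _, y in modules if y)
    best, best_gap = None, float("inf")
    for t in candidates:
        fp = sum(1 for m, y in modules if m >= t and not y)  # Type I: false alarms
        fn = sum(1 for m, y in modules if m < t and y)       # Type II: missed faults
        gap = abs(fp / n_neg - fn / n_pos)
        if gap < best_gap:
            best, best_gap = t, gap
    return best

# Invented data: (metric value, fault-prone?)
data = [(5, False), (8, False), (12, False), (20, True), (25, True), (30, True)]
t = balanced_threshold(data)
```

The resulting rule "predict fault-prone when the metric is at least `t`" is the simplest possible stand-in for a rule set; it only illustrates the balancing criterion the abstract describes.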
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012567
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering, Computer arithmetic
- Format
- Document (PDF)
- Title
- Classification of software quality using tree modeling with the S-Plus algorithm.
- Creator
- Deng, Jianyu., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
In today's competitive environment for software products, quality has become an increasingly important asset to software development organizations. Software quality models are tools for focusing efforts to find faults early in the development. Delaying corrections can lead to higher costs. In this research, the classification tree modeling technique was used to predict the software quality by classifying program modules either as fault-prone or not fault-prone. The S-Plus regression tree algorithm and a general classification rule were applied to yield classification tree models. Two classification tree models were developed based on four consecutive releases of a very large legacy telecommunications system. The first release was used as the training data set and the subsequent three releases were used as evaluation data sets. The first model used twenty-four product metrics and four execution metrics as candidate predictors. The second model added fourteen process metrics as candidate predictors.
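The S-Plus algorithm itself is not shown in the abstract. A classification tree's basic building block, the single best split separating fault-prone from not fault-prone modules, can be sketched as follows; the metrics and training data are invented for the example:

```python
def best_split(samples):
    """Find the (feature index, threshold) single split that minimizes
    misclassifications; samples: list of (features_tuple, is_fault_prone)."""
    n_feat = len(samples[0][0])
    best = None
    for f in range(n_feat):
        for x, _ in samples:
            t = x[f]
            # rule under test: predict fault-prone when feature f >= t
            errs = sum(1 for xs, y in samples if (xs[f] >= t) != y)
            if best is None or errs < best[2]:
                best = (f, t, errs)
    return best

# Invented data: (lines of code, number of changes) per module
train = [((100, 1), False), ((150, 2), False), ((900, 9), True), ((700, 7), True)]
feat, thr, errors = best_split(train)
```

A real tree algorithm applies this search recursively to each resulting partition; the stump above shows only one level of that process.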
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15707
- Subject Headings
- Computer software--Quality control, Software measurement, Computer software--Evaluation
- Format
- Document (PDF)
- Title
- Developing accurate software quality models using a faster, easier, and cheaper method.
- Creator
- Lim, Linda., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Managers of software development need to know which components of a system are fault-prone. If this can be determined early in the development cycle then resources can be more effectively allocated and significant costs can be reduced. Case-Based Reasoning (CBR) is a simple and efficient methodology for building software quality models that can provide early information to managers. Our research focuses on two case studies. The first study analyzes source files and classifies them as fault-prone or not fault-prone. It also predicts the number of faults in each file. The second study analyzes the fault removal process, and creates models that predict the outcome of software inspections.
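CBR classifies a new module by retrieving its most similar past cases and reusing their outcomes. A minimal nearest-neighbor sketch of that retrieval step, with invented metrics and a Euclidean similarity measure standing in for whatever the studies actually used, might look like:

```python
import math

def cbr_classify(case_base, target, k=3):
    """Classify a module by majority vote of its k most similar past cases.
    case_base: list of (metrics_tuple, is_fault_prone)."""
    dist = lambda a, b: math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(case_base, key=lambda c: dist(c[0], target))[:k]
    votes = sum(1 for _, y in nearest if y)
    return votes > k / 2

# Invented case base: (lines of code, change count) -> fault-prone?
cases = [((10, 1), False), ((12, 2), False), ((15, 1), False),
         ((80, 9), True), ((90, 8), True), ((85, 10), True)]
```

Averaging the fault counts of the retrieved cases instead of voting would give the companion regression use (predicting the number of faults per file) mentioned in the abstract.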
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12746
- Subject Headings
- Computer software--Development, Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- Prediction of software quality using classification tree modeling.
- Creator
- Naik, Archana B., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Reliability of software systems is one of the major concerns in today's world, as computers have become an integral part of our lives. Society has become so dependent on reliable software systems that failures can be dangerous, harming a company's business, human relationships, or even human lives. Software quality models are tools for focusing efforts to find faults early in the development. In this experiment, we used classification tree modeling techniques to predict the software quality by classifying program modules either as fault-prone or not fault-prone. We introduced the Classification And Regression Trees (CART) algorithm as a tool to generate classification trees. We focused our experiments on a very large telecommunications system to build quality models using a set of product and process metrics as independent variables.
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/15600
- Subject Headings
- Computer software--Quality control, Computer software--Evaluation, Software measurement
- Format
- Document (PDF)
- Title
- Multivariate modeling of software engineering measures.
- Creator
- Lanning, David Lee., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
One goal of software engineers is to produce software products. An additional goal, that the software production must lead to profit, releases the power of the software product market. This market demands high quality products and tight cycles in the delivery of new and enhanced products. These market conditions motivate the search for engineering methods that help software producers ship products quicker, at lower cost, and with fewer defects. The control of software defects is key to meeting these market conditions. Thus, many software engineering tasks are concerned with software defects. This study considers two sources of variation in the distribution of software defects: software complexity and enhancement activity. Multivariate techniques treat defect activity, software complexity, and enhancement activity as related multivariate concepts. Applied techniques include principal components analysis, canonical correlation analysis, discriminant analysis, and multiple regression analysis. The objective of this study is to improve our understanding of software complexity and software enhancement activity as sources of variation in defect activity, and to apply this understanding to produce predictive and discriminant models useful during testing and maintenance tasks. These models serve to support critical software engineering decisions.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12383
- Subject Headings
- Software engineering, Computer software--Testing, Computer software--Quality control
- Format
- Document (PDF)
- Title
- Cost of misclassification in software quality models.
- Creator
- Guan, Xin., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
Reliability has become a very important and competitive factor for software products. Using software quality models based on software measurements provides a systematic and scientific way to detect software faults early and to improve software reliability. This thesis considers several classification techniques including Generalized Classification Rule, MetaCost algorithm, Cost-Boosting algorithm and AdaCost algorithm. We also introduce the weighted logistic regression algorithm, and a new method to evaluate the performance of classification models---ROC Analysis. We focus our experiments on a very large legacy telecommunications system (LLTS) to build software quality models with principal components analysis. Two other data sets, CCCS and LTS, are also used in our experiments.
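ROC analysis evaluates a classifier across all decision thresholds, and the expected cost of misclassification weights the two error types differently, since missing a faulty module is usually costlier than a false alarm. A minimal sketch with invented scores and cost ratios:

```python
def roc_points(scored):
    """ROC points (FPR, TPR), sweeping a threshold over predicted scores.
    scored: list of (score, is_fault_prone)."""
    pos = sum(1 for _, y in scored if y)
    neg = len(scored) - pos
    pts = []
    for t in sorted({s for s, _ in scored}, reverse=True):
        tp = sum(1 for s, y in scored if s >= t and y)
        fp = sum(1 for s, y in scored if s >= t and not y)
        pts.append((fp / neg, tp / pos))
    return pts

def expected_cost(scored, t, c_fp=1.0, c_fn=10.0):
    """Expected misclassification cost at threshold t; the 10:1 cost ratio is invented."""
    fp = sum(1 for s, y in scored if s >= t and not y)   # false alarms
    fn = sum(1 for s, y in scored if s < t and y)        # missed faults
    return (c_fp * fp + c_fn * fn) / len(scored)

# Invented classifier scores for six modules
preds = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.3, False), (0.1, False)]
```

Choosing the threshold that minimizes `expected_cost` rather than raw error rate is the essential cost-sensitive idea the thesis examines.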
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/15762
- Subject Headings
- Computer software--Quality control, Software measurement, Computer software--Testing
- Format
- Document (PDF)
- Title
- Software reliability engineering with genetic programming.
- Creator
- Liu, Yi., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software reliability engineering plays a vital role in managing and controlling software quality. As an important method of software reliability engineering, software quality estimation modeling is useful in defining a cost-effective strategy to achieve a reliable software system. By predicting the faults in a software system, the software quality models can identify high-risk modules, and thus, these high-risk modules can be targeted for reliability enhancements. Strictly speaking, software quality modeling not only aims at lowering the misclassification rate, but also takes into account the costs of different misclassifications and the available resources of a project. As a new search-based algorithm, Genetic Programming (GP) can build a model without assuming the size, shape, or structure of a model. It can flexibly tailor the fitness functions to the objectives chosen by the customers. Moreover, it can optimize several objectives simultaneously in the modeling process, and thus, a set of multi-objective optimization solutions can be obtained. This research focuses on building software quality estimation models using GP. Several GP-based models of predicting the class membership of each software module and ranking the modules by a quality factor were proposed. The first model of categorizing the modules into fault-prone or not fault-prone was proposed by considering the distinguished features of the software quality classification task and GP. The second model provided quality-based ranking information for fault-prone modules. A decision tree-based software classification model was also proposed by considering accuracy and simplicity simultaneously. This new technique provides a new multi-objective optimization algorithm to build decision trees for real-world engineering problems, in which several trade-off objectives usually have to be taken into account at the same time. The fourth model was built to find multi-objective optimization solutions by considering both the expected cost of misclassification and available resources. Also, a new goal-oriented technique of building module-order models was proposed by directly optimizing several goals chosen by project analysts. The issues of GP, bloating and overfitting, were also addressed in our research. Data were collected from three industrial projects and applied to validate the performance of the models. Results indicate that our proposed methods can achieve useful performance results. Moreover, some proposed methods can simultaneously optimize several different objectives of a software project management team.
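The dissertation's GP models are far richer than can be shown here. As a loose illustration only, the sketch below evolves small expression trees over two module metrics with a fitness that folds accuracy and tree size (a crude anti-bloat penalty) into one score; the operators, toy data, parsimony weight, and mutation-only search are all invented simplifications:

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def rand_tree(rng, depth=2):
    """Random expression tree over inputs x0, x1 and constants."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(['x0', 'x1', rng.uniform(-2, 2)])
    op = rng.choice(list(OPS))
    return (op, rand_tree(rng, depth - 1), rand_tree(rng, depth - 1))

def evaluate(tree, x):
    if tree == 'x0': return x[0]
    if tree == 'x1': return x[1]
    if isinstance(tree, float): return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def fitness(tree, data):
    # two objectives folded into one score: prediction error plus a size penalty
    err = sum((evaluate(tree, x) - y) ** 2 for x, y in data)
    return err + 0.01 * size(tree)

def mutate(rng, tree):
    """Replace a randomly chosen subtree with a fresh random subtree."""
    if not isinstance(tree, tuple) or rng.random() < 0.3:
        return rand_tree(rng, depth=2)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(rng, left), right)
    return (op, left, mutate(rng, right))

def evolve(data, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    population = [rand_tree(rng) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: fitness(t, data))
        survivors = population[:pop // 2]           # truncation selection
        population = survivors + [mutate(rng, rng.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return min(population, key=lambda t: fitness(t, data))

# Invented target concept: faults grow with size times change rate (x0 * x1)
data = [((i / 2, j / 3), (i / 2) * (j / 3)) for i in range(1, 4) for j in range(1, 4)]
best = evolve(data)
err = sum((evaluate(best, x) - y) ** 2 for x, y in data)
```

A genuine multi-objective GP would keep a Pareto front of (error, cost, size) trade-offs rather than collapsing them into a single weighted score.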
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fau/fd/FADT12047
- Subject Headings
- Computer software--Quality control, Genetic programming (Computer science), Software engineering
- Format
- Document (PDF)
- Title
- Software quality prediction using case-based reasoning.
- Creator
- Berkovich, Yevgeniy., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
The ability to efficiently prevent faults in large software systems is a very important concern of software project managers. Successful testing allows us to build quality software systems. Unfortunately, it is not always possible to effectively test a system due to time, resources, or other constraints. A critical bug may cause catastrophic consequences, such as loss of life or very expensive equipment. We can facilitate testing by finding where faults are more likely to be hidden. Case-Based Reasoning (CBR) is one of many methodologies that make this process faster and cheaper by discovering faults early in the software life cycle; here it is used to predict the software quality of the system by discovering fault-prone modules. We employ the SMART tool to facilitate CBR, using product and process metrics as independent variables. The study found that CBR is a robust tool capable of carrying out software quality prediction on its own with acceptable results. We also show that CBR's weaknesses do not hinder its effectiveness in finding misclassified modules.
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12671
- Subject Headings
- Computer software--Quality control, Computer software--Evaluation, Software measurement
- Format
- Document (PDF)
- Title
- Tree-based classification models for analyzing a very large software system.
- Creator
- Bullard, Lofton A., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
Software systems that control military radar systems must be highly reliable. A fault can compromise safety and security, and even cause death of military personnel. In this experiment we identify fault-prone software modules in a subsystem of a military radar system called the Joint Surveillance Target Attack Radar System, JSTARS. An earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected for different iterations of an operational prototype of the subsystem over a period of approximately three years. We used these metrics to train a decision tree model and to fit a discriminant model to classify each module as fault-prone or not fault-prone. The algorithm used to generate the decision tree model was TREEDISC, developed by the SAS Institute. The decision tree model is compared to the discriminant model.
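TREEDISC is a SAS algorithm and is not reproduced here. The discriminant model it is compared against can be loosely illustrated by the simplest discriminant rule, assigning a module to the class with the nearest mean metric vector; the metrics and data below are invented, and a real discriminant analysis would also account for the covariance structure:

```python
def class_means(samples):
    """Mean metric vector per class; samples: list of (metrics_tuple, label)."""
    sums, counts = {}, {}
    for x, y in samples:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [v / counts[y] for v in sums[y]] for y in sums}

def discriminant_classify(means, x):
    """Assign x to the class with the nearest mean (a minimal discriminant rule)."""
    sq = lambda m: sum((a - b) ** 2 for a, b in zip(m, x))
    return min(means, key=lambda y: sq(means[y]))

# Invented data: (lines of code, operator count) per module
train = [((120, 3), 'fault-prone'), ((140, 4), 'fault-prone'),
         ((20, 1), 'not fault-prone'), ((40, 2), 'not fault-prone')]
means = class_means(train)
```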
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15315
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering
- Format
- Document (PDF)
- Title
- Correcting noisy data and expert analysis of the correction process.
- Creator
- Seiffert, Christopher N., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis expands upon an existing noise cleansing technique, polishing, enabling it to be used in the Software Quality Prediction domain, as well as any other domain where the data contains continuous values, as opposed to categorical data for which the technique was originally designed. The procedure is applied to a real world dataset with real (as opposed to injected) noise as determined by an expert in the domain. This, in combination with expert assessment of the changes made to the data, provides not only a more realistic dataset than one in which the noise (or even the entire dataset) is artificial, but also a better understanding of whether the procedure is successful in cleansing the data. Lastly, this thesis provides a more in-depth view of the process than previously available, in that it gives results for different parameters and classifier building techniques. This allows the reader to gain a better understanding of the significance of both model generation and parameter selection.
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/13223
- Subject Headings
- Computer interfaces--Software--Quality control, Acoustical engineering, Noise control--Computer programs, Expert systems (Computer science), Software documentation
- Format
- Document (PDF)
- Title
- Classification of software quality using Bayesian belief networks.
- Creator
- Dong, Yuhong., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
In today's competitive environment for software products, quality has become an increasingly important asset to software development organizations. Software quality models are tools for focusing efforts to find faults early in the development. Delaying corrections can lead to higher costs. In this research, the classification Bayesian Networks modelling technique was used to predict software quality by classifying program modules either as fault-prone or not fault-prone. A general classification rule was applied to yield classification Bayesian Belief Network models. Six classification Bayesian Belief Network models were developed based on quality metrics data records of two very large window application systems. The fit data set was used to build each model and the test data set was used to evaluate it. The first two models used a median-based data clustering technique; the second two used the median as the critical value to cluster metrics with the Generalized Boolean Discriminant Function; and the third two used the Kolmogorov-Smirnov test to select the critical value to cluster metrics with the Generalized Boolean Discriminant Function. All six models used the product metrics (FAULT or CDCHURN) as predictors.
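The models in this thesis are full Bayesian Belief Networks. A much-simplified relative, a count-based classifier that assumes the discretized metrics are conditionally independent given the class, illustrates the underlying probabilistic classification step; the metric names, discretization, and data below are invented:

```python
from collections import Counter

def train_nb(samples):
    """Class priors and per-feature value counts with a naive independence assumption.
    samples: list of (tuple_of_discrete_metric_values, label)."""
    labels = Counter(y for _, y in samples)
    cond = {}  # (feature_index, value, label) -> count
    for x, y in samples:
        for i, v in enumerate(x):
            cond[(i, v, y)] = cond.get((i, v, y), 0) + 1
    return labels, cond, len(samples)

def classify_nb(model, x):
    labels, cond, n = model
    def score(y):
        p = labels[y] / n
        for i, v in enumerate(x):
            # Laplace smoothing keeps unseen values from zeroing the product
            p *= (cond.get((i, v, y), 0) + 1) / (labels[y] + 2)
        return p
    return max(labels, key=score)

# Invented discretized metrics: (size level, churn level) -> fault-prone label
train = [(('high', 'high'), 'fp'), (('high', 'low'), 'fp'),
         (('low', 'low'), 'nfp'), (('low', 'high'), 'nfp'), (('low', 'low'), 'nfp')]
model = train_nb(train)
```

A genuine belief network would additionally encode dependencies between the metrics rather than assuming independence.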
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12918
- Subject Headings
- Computer software--Quality control, Software measurement, Bayesian statistical decision theory
- Format
- Document (PDF)
- Title
- Partitioning filter approach to noise elimination: An empirical study in software quality classification.
- Creator
- Rebours, Pierre., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents two new noise filtering techniques which improve the quality of training datasets by removing noisy data. The training dataset is first split into subsets, and base learners are induced on each of these splits. The predictions are combined in such a way that an instance is identified as noisy if it is misclassified by a certain number of base learners. The Multiple-Partitioning Filter combines several classifiers on each split. The Iterative-Partitioning Filter only uses...
Show moreThis thesis presents two new noise filtering techniques which improve the quality of training datasets by removing noisy data. The training dataset is first split into subsets, and base learners are induced on each of these splits. The predictions are combined in such a way that an instance is identified as noisy if it is misclassified by a certain number of base learners. The Multiple-Partitioning Filter combines several classifiers on each split. The Iterative-Partitioning Filter only uses one base learner, but goes through multiple iterations. The amount of noise removed is varied by tuning the filtering level or the number of iterations. Empirical studies on a high assurance software project compare the effectiveness of our noise removal approaches with two other filters, the Cross-Validation Filter and the Ensemble Filter. Our studies suggest that using several base classifiers as well as performing several iterations with a conservative scheme may improve the efficiency of the filter.
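The filtering idea described above can be sketched concretely: split the data, induce a base learner per split, and flag instances that a quorum of the learners misclassifies. The threshold-stump learner and toy data below are invented stand-ins for the classifiers used in the thesis:

```python
def stump(train):
    """A deliberately weak base learner: best single-feature threshold rule."""
    best = None
    for f in range(len(train[0][0])):
        for x, _ in train:
            t = x[f]
            errs = sum(1 for xs, y in train if (xs[f] >= t) != y)
            if best is None or errs < best[2]:
                best = (f, t, errs)
    f, t, _ = best
    return lambda x: x[f] >= t

def partitioning_filter(data, n_splits=2, min_votes=2):
    """Train one base learner per split; flag instances misclassified
    by at least min_votes learners as suspected noise."""
    splits = [data[i::n_splits] for i in range(n_splits)]
    learners = [stump(s) for s in splits]
    noisy = []
    for x, y in data:
        wrong = sum(1 for learn in learners if learn(x) != y)
        if wrong >= min_votes:
            noisy.append((x, y))
    return noisy

# Clean rule: fault-prone iff metric >= 5; then inject one mislabeled instance
clean = [((i,), i >= 5) for i in range(10)]
data = clean + [((9,), False)]
flagged = partitioning_filter(data)
```

Requiring all learners to agree (`min_votes = n_splits`) is the conservative scheme the abstract mentions; lowering the quorum removes more data at the risk of discarding good instances.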
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fcla/dt/13110
- Subject Headings
- Software measurement, Computer software--Quality control, Decision trees, Recursive partitioning
- Format
- Document (PDF)
- Title
- Predicting decay in program modules of legacy software systems.
- Creator
- Joshi, Dhaval Kunvarabhai., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
Legacy software systems may go through many releases. It is important to ensure that the reliability of a system improves with subsequent releases. Methods are needed to identify decaying software modules, i.e., modules for which quality decreases with each system release. Early identification of such modules during the software life cycle allows us to focus quality improvement efforts in a more productive manner, by reducing resources wasted for testing and improving the entire system. We present a scheme to classify modules in three groups---Decayed, Improved, and Unchanged---based on a three-group software quality classification method. This scheme is applied to three different case studies, using a case-based reasoning three-group classification model. The model identifies decayed modules, and is validated over different releases. The main goal of this work is to focus on the evolution of program modules of a legacy software system to identify modules that are difficult to maintain and may need to be reengineered.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12899
- Subject Headings
- Software reengineering, Computer software--Quality control, Software measurement, Software maintenance
- Format
- Document (PDF)
- Title
- A comparative study of attribute selection techniques for CBR-based software quality classification models.
- Creator
- Nguyen, Laurent Quoc Viet., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
To achieve high reliability in software-based systems, software metrics-based quality classification models have been explored in the literature. However, the collection of software metrics may be a hard and long process, and some metrics may not be helpful or may be harmful to the classification models, deteriorating the models' accuracies. Hence, methodologies have been developed to select the most significant metrics in order to build accurate and efficient classification models. Case-Based Reasoning is the classification technique used in this thesis. Since it does not provide any metric selection mechanisms, some metric selection techniques were studied. In the context of CBR, this thesis presents a comparative evaluation of metric selection methodologies, for raw and discretized data. Three attribute selection techniques have been studied: Kolmogorov-Smirnov Two-Sample Test, Kruskal-Wallis Test, and Information Gain. These techniques resulted in classification models that are useful for software quality improvement.
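One of the three techniques, the Kolmogorov-Smirnov two-sample test, scores an attribute by how far apart its distributions are in the fault-prone and not fault-prone groups; the statistic itself, the largest gap between the two empirical distribution functions, is simple to compute (sample values invented):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    empirical distribution functions of samples a and b."""
    values = sorted(set(a) | set(b))
    d = 0.0
    for v in values:
        fa = sum(1 for x in a if x <= v) / len(a)
        fb = sum(1 for x in b if x <= v) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Invented metric values for not-fault-prone vs fault-prone modules
nfp = [1, 2, 2, 3, 4]
fp = [6, 7, 8, 8, 9]
d_separating = ks_statistic(nfp, fp)          # distributions barely overlap
d_useless = ks_statistic([1, 2, 3], [1, 2, 3])  # identical distributions
```

Ranking attributes by this statistic and keeping the top few is the selection step the thesis evaluates against Kruskal-Wallis and Information Gain.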
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12944
- Subject Headings
- Case-based reasoning, Software engineering, Computer software--Quality control
- Format
- Document (PDF)
- Title
- Fuzzy logic techniques for software reliability engineering.
- Creator
- Xu, Zhiwei., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Modern people are becoming more and more dependent on computers in their daily lives. Most industries, from automobile, avionics, oil, and telecommunications to banking, stocks, and pharmaceuticals, require computers to function. As the tasks required become more complex, the complexity of computer software and hardware has increased dramatically, and with it the possibility of failure. As the requirements for and dependence on computers increase, so does the possibility of crises caused by computer failures. High reliability is an important attribute for almost any software system. Consequently, software developers are seeking ways to forecast and improve quality before release. Since many quality factors cannot be measured until after the software becomes operational, software quality models are developed to predict quality factors based on measurements collected earlier in the life cycle. Because information is incomplete in the early life cycle of software development, software quality models with fuzzy characteristics usually perform better, since fuzzy concepts deal with phenomena that are vague in nature. This study focuses on the use of fuzzy logic in software reliability engineering. The discussion includes fuzzy expert systems and their application in early risk assessment; interval prediction using fuzzy regression modeling; fuzzy rule extraction for fuzzy classification and its use in software quality models; and fuzzy identification, including the extraction of both rules and membership functions from fuzzy data and the application of the technique to software project cost estimation.
The following methodologies were considered: nonparametric discriminant analysis, Z-test and paired t-test, neural networks, fuzzy linear regression, fuzzy nonlinear regression, fuzzy classification with maximum matched method, fuzzy identification with fuzzy clustering, and fuzzy projection. Commercial software systems and the COCOMO database are used throughout this dissertation to demonstrate the usefulness of concepts and to validate new ideas.
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/11948
- Subject Headings
- Software engineering, Fuzzy logic, Computer software--Quality control, Fuzzy systems
- Format
- Document (PDF)
- Title
- A metrics-based software quality modeling tool.
- Creator
- Rajeevalochanam, Jayanth Munikote., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In today's world, high reliability has become an essential component of almost every software system. However, since reliability-enhancement activities entail enormous costs, software quality models, based on metrics collected early in the development life cycle, serve as handy tools for cost-effectively directing such activities toward the software modules that are likely to be faulty. Case-Based Reasoning (CBR) is an attractive technique for software quality modeling. The Software Measurement Analysis and Reliability Toolkit (SMART) is a CBR tool customized for metrics-based software quality modeling. Developed for the NASA IV&V Facility, SMART supports three types of software quality models: quantitative quality prediction, classification, and module-order models. It also supports goal-oriented selection of classification models. An empirical case study of a military command, control, and communication system demonstrates the accuracy and usefulness of SMART, and also serves as a user guide for the tool.
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12967
- Subject Headings
- Software measurement, Computer software--Quality control, Case-based reasoning
- Format
- Document (PDF)
- Title
- An empirical study of combining techniques in software quality classification.
- Creator
- Eroglu, Cemal., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
In the literature, there has been limited research that systematically investigates the possibility of a hybrid approach that simply learns from the outputs of numerous base-level learners. We analyze a hybrid learning approach on systems that had previously been modeled with twenty-four different classifiers. Rather than relying on a single classifier's judgment, taking into account the opinions of several learners is expected to be a wiser decision. Moreover, using clustering techniques, some base-level classifiers were eliminated from the hybrid learner's input. We conducted three experiments, each with a different number of base-level classifiers. We show empirically that the hybrid learning approach generally yields better performance than the best selected base-level learners, and better than majority voting under some conditions.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fcla/dt/13162
- Subject Headings
- Computer software--Testing, Computer software--Quality control, Computational learning theory, Machine learning, Digital computer simulation
- Format
- Document (PDF)
- Title
- Ensemble-classifier approach to noise elimination: A case study in software quality classification.
- Creator
- Joshi, Vedang H., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
This thesis presents a noise handling technique that attempts to improve the quality of training data for classification purposes by eliminating instances that are likely to be noise. Our approach uses twenty-five different classification techniques to create an ensemble of classifiers that acts as a noise filter on real-world software measurement datasets. Using a relatively large number of base-level classifiers for the ensemble-classifier filter helps achieve the desired level of noise removal conservativeness, with several possible levels of filtering. It also provides a higher degree of confidence in the noise elimination procedure: with twenty-five base-level classifiers, the results are less likely to be influenced by the possibly inappropriate learning bias of a few algorithms than with a smaller number of base-level classifiers. Empirical case studies of two different high assurance software projects demonstrate the effectiveness of our noise elimination approach through the significant improvement achieved in classification accuracy at various levels of filtering.
- Date Issued
- 2004
- PURL
- http://purl.flvc.org/fcla/dt/13144
- Subject Headings
- Computer interfaces--Software--Quality control, Acoustical engineering, Noise control--Case studies, Expert systems (Computer science), Software documentation
- Format
- Document (PDF)
- Title
- Software quality modeling and analysis with limited or without defect data.
- Creator
- Seliya, Naeem A., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The key to developing high-quality software is the measurement and modeling of software quality. In practice, software measurements are often used as a resource for modeling and comprehending the quality of software. The use of software measurements to understand quality is accomplished by a software quality model that is trained using software metrics and defect data of similar, previously developed, systems. The model is then applied to estimate the quality of the target software project. Such an approach assumes that defect data is available for all program modules in the training data. Various practical issues can make defect data from previously developed systems unavailable or only partially available. This dissertation presents innovative and practical techniques for addressing the problem of software quality analysis when defect data is limited or completely absent. The proposed techniques for software quality analysis without defect data include an expert-based approach with unsupervised clustering and an expert-based approach with semi-supervised clustering. The proposed techniques for software quality analysis with limited defect data include a semi-supervised classification approach based on the Expectation-Maximization algorithm and an expert-based approach with semi-supervised clustering. Empirical case studies of software measurement datasets obtained from multiple NASA software projects are used to present and evaluate the different techniques. The empirical results demonstrate the attractiveness, benefit, and definite promise of the proposed techniques. The newly developed techniques presented in this dissertation are invaluable to the software quality practitioner challenged by the absence or limited availability of defect data from previous software development experiences.
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/12151
- Subject Headings
- Software measurement, Computer software--Quality control, Computer software--Reliability--Mathematical models, Software engineering--Quality control
- Format
- Document (PDF)