Current Search: Software engineering
- Title
- Modeling fault-prone modules of subsystems.
- Creator
- Thaker, Vishal Kirit., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In software engineering, software quality has become a topic of major concern. It has also been recognized that the role of the maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality is essential to successful maintenance management. With the growing collection of software in organizations, this cost is becoming substantial. In this research we compared two software quality models. We examined whether a model built on the entire system and used to predict a subsystem, and a model built on that subsystem and used to predict the same subsystem, yield similar, better, or worse classification results. We used the Classification And Regression Tree (CART) algorithm to build the classification models. The case study is based on a very large telecommunication system.
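
The CART comparison described above can be illustrated with a short sketch. The snippet below uses scikit-learn's DecisionTreeClassifier (a CART implementation) as a stand-in for the original tooling; the data file, metric columns, and the "SS1" subsystem label are hypothetical.

```python
# Sketch: compare a tree trained on the whole system vs. one trained only on
# the target subsystem, both evaluated on that subsystem's modules.
# Assumes a metrics table with per-module numeric metrics, a 'subsystem'
# column, and a binary 'fault_prone' label (all hypothetical names).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

modules = pd.read_csv("module_metrics.csv")           # hypothetical data file
features = [c for c in modules.columns if c not in ("subsystem", "fault_prone")]

target = modules[modules.subsystem == "SS1"]          # the subsystem under study
whole  = modules                                      # the entire system (includes SS1)

for name, train in [("system-level model", whole), ("subsystem-level model", target)]:
    tree = DecisionTreeClassifier(criterion="gini", min_samples_leaf=10, random_state=0)
    tree.fit(train[features], train["fault_prone"])
    # In practice the subsystem model would be scored on a held-out release,
    # not on its own training rows; this is only a structural sketch.
    print(name)
    print(classification_report(target["fault_prone"], tree.predict(target[features])))
```
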
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/12700
- Subject Headings
- Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- Classification of software quality using tree modeling with the SPRINT/SLIQ algorithm.
- Creator
- Mao, Wenlei., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Providing high-quality software products is the common goal of all software engineers. Finding faults early can produce large savings over the software life cycle. Therefore, software quality has become the main subject in our research field. This thesis presents a series of studies on a very large legacy telecommunication system. The system has significantly more than ten million lines of code written in a high-level language similar to Pascal. Software quality models were developed to predict the class of each module either as fault-prone or as not fault-prone. We used the SPRINT/SLIQ algorithm to build the classification tree models. We found that SPRINT/SLIQ, as an improved CART algorithm, can give us tree models with more accuracy, more balance, and less overfitting. We also found that software process metrics can significantly improve the predictive accuracy of software quality models.
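
As a rough illustration of the overfitting and balance comparison discussed above, the sketch below contrasts training and holdout accuracy of a classification tree at several pruning strengths. SPRINT/SLIQ has no common Python implementation, so scikit-learn's CART stands in; the data file and label name are assumptions.

```python
# Sketch: gauge overfitting and class balance for a tree model by comparing
# training vs. holdout accuracy under cost-complexity pruning.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("module_metrics.csv")              # hypothetical numeric metrics + 0/1 label
X, y = data.drop(columns="fault_prone"), data["fault_prone"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for alpha in (0.0, 0.001, 0.005, 0.01):               # pruning strengths
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    gap = tree.score(X_tr, y_tr) - tree.score(X_te, y_te)   # large gap => overfitting
    share_fp = tree.predict(X_te).mean()                    # fraction predicted fault-prone
    print(f"alpha={alpha}: overfit gap={gap:.3f}, predicted fault-prone share={share_fp:.3f}")
```
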
- Date Issued
- 2000
- PURL
- http://purl.flvc.org/fcla/dt/15767
- Subject Headings
- Computer software--Quality control, Software engineering, Software measurement
- Format
- Document (PDF)
- Title
- Impact of best manufacturing engineering practices on software engineering practices.
- Creator
- Akhter, Shahina., Florida Atlantic University, Coulter, Neal S.
- Abstract/Description
-
This thesis involves original research in the area of semantic analysis of textual databases (content analysis). The main intention of this study is to examine how software engineering practices can benefit from the best manufacturing practices. There is a deliberate focus and emphasis on competitive effectiveness worldwide. The ultimate goal of the U.S. Navy's Best Manufacturing Practices Program is to strengthen the U.S. industrial base and reduce the cost of defense systems by solving manufacturing problems and improving quality and reliability. Best manufacturing practices can assist software engineering practices in that, when software companies use them, they can: (1) improve both software quality and staff productivity; (2) determine the current status of the organization's software process; (3) set goals for process improvement; (4) create effective plans for reaching those goals; and (5) implement the major elements of those plans.
- Date Issued
- 1997
- PURL
- http://purl.flvc.org/fcla/dt/15445
- Subject Headings
- Software engineering, Production engineering, Production management--Computer software
- Format
- Document (PDF)
- Title
- Detection of change-prone telecommunications software modules.
- Creator
- Weir, Ronald Eugene., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Accurately classifying the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics which allow them to take actions against emerging quality problems. The use of a neural network as a tool to classify programs as a low, medium, or high risk for errors or change is explored using multiple software metrics as input. It is demonstrated that a neural network, trained using the back-propagation supervised learning strategy, produced the desired mapping between the static software metrics and the software quality classes. The neural network classification methodology is compared to the discriminant analysis classification methodology in this experiment. The comparison is based on two and three class predictive models developed using variables resulting from principal component analysis of software metrics.
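
A minimal sketch of the modeling setup described above: a back-propagation network fed principal components of software metrics, next to a linear discriminant baseline. scikit-learn replaces the original tooling, and the data file, the "risk_class" label, and the component count are assumptions.

```python
# Sketch: back-prop network vs. discriminant analysis on PCA-derived variables.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

data = pd.read_csv("change_metrics.csv")              # hypothetical
X, y = data.drop(columns="risk_class"), data["risk_class"]   # low / medium / high

models = {
    "back-prop net": make_pipeline(StandardScaler(), PCA(n_components=5),
                                   MLPClassifier(hidden_layer_sizes=(8,),
                                                 max_iter=2000, random_state=0)),
    "discriminant":  make_pipeline(StandardScaler(), PCA(n_components=5),
                                   LinearDiscriminantAnalysis()),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```
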
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15183
- Subject Headings
- Computer software--Evaluation, Software engineering, Neural networks (Computer science)
- Format
- Document (PDF)
- Title
- Modeling software quality with classification trees using principal components analysis.
- Creator
- Shan, Ruqun., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software quality models often have raw software metrics as the input data for predicting quality. Raw metrics are usually highly correlated with one another and thus may result in unstable models. Principal components analysis is a statistical method to improve model stability. This thesis presents a series of studies on a very large legacy telecommunication system. The system has significantly more than ten million lines of code written in a high-level language similar to Pascal. Software quality models were developed to predict the class of each module either as fault-prone or as not fault-prone. We found that the models based on principal components analysis were more robust than those based on raw metrics. We also found that software process metrics can significantly improve the predictive accuracy of software quality models.
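
The sketch below illustrates the idea of the study above: trading correlated raw metrics for orthogonal principal components before fitting a classification tree. File and column names are assumptions, and scikit-learn stands in for the original tools.

```python
# Sketch: fit a classification tree on raw metrics and on principal components.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("module_metrics.csv")              # hypothetical
X_raw, y = data.drop(columns="fault_prone"), data["fault_prone"]

X_std = StandardScaler().fit_transform(X_raw)
pca = PCA(n_components=0.95)                          # keep components covering 95% of variance
X_pc = pca.fit_transform(X_std)
print("components kept:", pca.n_components_)

for label, X in [("raw metrics", X_std), ("principal components", X_pc)]:
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
    print(f"{label}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```
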
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15714
- Subject Headings
- Principal components analysis, Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- Modeling software quality with TREEDISC algorithm.
- Creator
- Yuan, Xiaojing, Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Software quality is crucial both to software makers and customers. However, in reality, improvement of quality and reduction of costs are often at odds. Software modeling can help us to detect fault-prone software modules based on software metrics, so that we can focus our limited resources on fewer modules and lower the cost but still achieve high quality. In the present study, a tree classification modeling technique, TREEDISC, was applied to three case studies. Several major contributions have been made. First, preprocessing of raw data was adopted to solve the computer memory problem and improve the models. Secondly, TREEDISC was thoroughly explored by examining the roles of important parameters in modeling. Thirdly, a generalized classification rule was introduced to balance misclassification rates and decrease Type II error, which is considered more costly than Type I error. Fourthly, certainty of classification was addressed. Fifthly, TREEDISC modeling was validated over multiple releases of the software product.
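
The generalized classification rule mentioned above can be sketched as a probability threshold chosen to trade Type I errors for fewer, costlier Type II errors. TREEDISC itself is a SAS procedure, so the example below works from hypothetical model probabilities.

```python
# Sketch: a generalized classification rule that lowers the fault-prone cutoff
# to reduce Type II errors (fault-prone modules that are missed).
import numpy as np

def classify(prob_fault_prone, threshold=0.3):
    """Lower thresholds trade more Type I errors for fewer Type II errors."""
    return (prob_fault_prone >= threshold).astype(int)

def error_rates(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    type1 = np.mean(y_pred[y_true == 0] == 1)   # not fault-prone flagged as fault-prone
    type2 = np.mean(y_pred[y_true == 1] == 0)   # fault-prone module missed
    return type1, type2

# toy labels and hypothetical model probabilities
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
probs  = np.array([0.1, 0.4, 0.2, 0.45, 0.8, 0.35, 0.55, 0.05])
for t in (0.5, 0.3):
    t1, t2 = error_rates(y_true, classify(probs, t))
    print(f"threshold={t}: Type I={t1:.2f}, Type II={t2:.2f}")
```
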
- Date Issued
- 1999
- PURL
- http://purl.flvc.org/fcla/dt/15718
- Subject Headings
- Computer software--Quality control, Computer simulation, Software engineering
- Format
- Document (PDF)
- Title
- A comprehensive comparative study of multiple classification techniques for software quality estimation.
- Creator
- Puppala, Kishore., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Reliability and quality are desired features in industrial software applications. In some cases, they are absolutely essential. When faced with limited resources, software project managers need to allocate those resources to the most fault-prone areas. The ability to accurately classify a software module as fault-prone or not fault-prone enables the manager to make an informed resource allocation decision. An accurate quality classification avoids wasting resources on modules that are not fault-prone. It also avoids missing the opportunity to correct faults relatively early in the development cycle, when they are less costly. This thesis introduces the classification algorithms (classifiers) implemented in the WEKA software tool. WEKA (Waikato Environment for Knowledge Analysis) was developed at the University of Waikato in New Zealand. An empirical investigation is performed using a case study of a real-world system.
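
A sketch of the comparative setup described above. WEKA is a Java toolkit, so scikit-learn classifiers stand in here as an assumption; the data file and label name are likewise illustrative.

```python
# Sketch: compare several classifiers on one module-metrics data set.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

data = pd.read_csv("module_metrics.csv")              # hypothetical
X, y = data.drop(columns="fault_prone"), data["fault_prone"]

classifiers = {
    "decision tree":       DecisionTreeClassifier(random_state=0),
    "naive Bayes":         GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:22s} mean accuracy {scores.mean():.3f}")
```
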
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/13039
- Subject Headings
- Software engineering, Computer software--Quality control, Decision trees
- Format
- Document (PDF)
- Title
- Information theory and software measurement.
- Creator
- Allen, Edward B., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Development of reliable, high-quality software requires study and understanding at each step of the development process. A basic assumption in the field of software measurement is that metrics of internal software attributes somehow relate to the intrinsic difficulty in understanding a program. Measuring the information content of a program attempts to indirectly quantify the comprehension task. Information theory based software metrics are attractive because they quantify the amount of information in a well-defined framework. However, most information theory based metrics have been proposed with little reference to measurement theory fundamentals, and empirical validation of predictive quality models has been lacking. This dissertation proves that representative information theory based software metrics can be "meaningful" components of software quality models in the context of measurement theory. To this end, members of a major class of metrics are shown to be regular representations of Minimum Description Length or Variety of software attributes, and to be on an interval scale. An empirical validation case study is presented that predicted faults in modules based on Operator Information. This metric is closely related to Harrison's Average Information Content Classification, which is the entropy of the operators. New general methods for calculating synthetic complexity at the system level and module level are presented, quantifying the joint information of an arbitrary set of primitive software measures. Since all kinds of information are not equally relevant to software quality factors, components of synthetic module complexity are also defined. Empirical case studies illustrate the potential usefulness of the proposed synthetic metrics. A metrics database is often the key to a successful ongoing software metrics program. The contribution of any proposed metric is defined in terms of measured variation using information theory, irrespective of the metric's usefulness in quality models. This is of interest when full validation is not practical. Case studies illustrate the method.
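
Operator Information is grounded in the entropy of a program's operator distribution, H = -sum_i p_i log2 p_i. A minimal sketch of that calculation follows; the operator list is a toy stand-in for a real extraction pass over source code.

```python
# Sketch: entropy of a module's operators (the quantity behind Harrison's
# Average Information Content Classification mentioned above).
import math
from collections import Counter

def operator_entropy(operators):
    """Shannon entropy (bits) of the empirical operator distribution."""
    counts = Counter(operators)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# hypothetical operator stream harvested from one module
ops = ["+", "*", ":=", "if", ":=", "+", "+", "while", ":=", "<"]
print(f"operator entropy = {operator_entropy(ops):.3f} bits")
```
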
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12412
- Subject Headings
- Software engineering, Computer software--Quality control, Information theory
- Format
- Document (PDF)
- Title
- Count models for software quality estimation.
- Creator
- Gao, Kehan, Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The primary aim of software engineering is to produce quality software that is delivered on time, within budget, and fulfils all its requirements. A timely estimation of software quality can serve as a prerequisite for achieving high reliability of software-based systems. More specifically, software quality assurance efforts can be prioritized to target the program modules that are most likely to have a high number of faults. Software quality estimation models are generally of two types: a classification model, which predicts the class membership of modules into two or more quality-based classes, and a quantitative prediction model, which estimates the number of faults (or some other software quality factor) likely to occur in software modules. In the literature, a variety of techniques have been developed for software quality estimation, most of which are suited to either prediction or classification but not both, e.g., multiple linear regression (prediction only) and logistic regression (classification only).
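
The two model families contrasted above can be sketched with a generalized linear model library: a Poisson regression for fault counts and a logistic regression for fault-proneness. statsmodels is used here as an assumption, as are the file and column names.

```python
# Sketch: a count model (Poisson) and a classification model (logistic)
# fit to the same hypothetical module metrics.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("module_metrics.csv")                   # hypothetical
X = sm.add_constant(data[["loc", "cyclomatic", "churn"]])  # hypothetical metric columns

count_model = sm.GLM(data["faults"], X, family=sm.families.Poisson()).fit()
class_model = sm.GLM((data["faults"] > 0).astype(int), X,
                     family=sm.families.Binomial()).fit()

print(count_model.summary())          # expected number of faults per module
print(class_model.summary())          # probability of being fault-prone
```
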
- Date Issued
- 2003
- PURL
- http://purl.flvc.org/fcla/dt/12042
- Subject Headings
- Computer software--Quality control, Software engineering, Econometrics, Regression analysis
- Format
- Document (PDF)
- Title
- Improved models of software quality.
- Creator
- Szabo, Robert Michael., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Though software development has been evolving for over 50 years, the development of computer software systems has largely remained an art. Through the application of measurable and repeatable processes, efforts have been made to slowly transform the software development art into a rigorous engineering discipline. The potential gains are tremendous. Computer software pervades modern society in many forms. For example, the automobile, radio, television, telephone, refrigerator, and still-camera have all been transformed by the introduction of computer based controls. The quality of these everyday products is in part determined by the quality of the computer software running inside them. Therefore, the timely delivery of low-cost and high-quality software to enable these mass market products becomes very important to the long term success of the companies building them. It is not surprising that managing the number of faults in computer software to competitive levels is a prime focus of the software engineering activity. In support of this activity, many models of software quality have been developed to help control the software development process and ensure that our goals of cost and quality are met on time. In this study, we focus on the software quality modeling activity. We improve existing static and dynamic methodologies and demonstrate new ones in a coordinated attempt to provide engineering methods applicable to the development of computer software. We will show how the power of separate predictive and classification models of software quality may be combined into one model; introduce a three group fault classification model in the object-oriented paradigm; demonstrate a dynamic modeling methodology of the testing process and show how software product measures and software process measures may be incorporated as input to such a model; demonstrate a relationship between software product measures and the testability of software. The following methodologies were considered: principal components analysis, multiple regression analysis, Poisson regression analysis, discriminant analysis, time series analysis, and neural networks. Commercial grade software systems are used throughout this dissertation to demonstrate concepts and validate new ideas. As a result, we hope to incrementally advance the state of the software engineering "art".
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/12409
- Subject Headings
- Software engineering--Standards, Software engineering--Management, Computer software--Development, Computer software--Quality control
- Format
- Document (PDF)
- Title
- Analytical study of capability maturity model using content analysis.
- Creator
- Sheth, Dhaval Ranjitlal., Florida Atlantic University, Coulter, Neal S., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Content analysis is used to investigate the essence of the Software Engineering Institute's Capability Maturity Model (CMM) through associated software process evaluation instruments. This study yields lexical maps of key terms from each questionnaire. The content analysis is carried out in three ways for each questionnaire: by question, by key process area, and by maturity level. The resulting maps are named accordingly. Super network and distribution maps are used for finding relations among the maps. Analyses of the key terms from the maps are compared to extract the essence of CMM and to evaluate the ability of the questionnaires to adequately assess an organization's process maturity.
- Date Issued
- 1998
- PURL
- http://purl.flvc.org/fcla/dt/15554
- Subject Headings
- Software engineering--Management, Computer software--Development, Computer software--Evaluation
- Format
- Document (PDF)
- Title
- Developing accurate software quality models using a faster, easier, and cheaper method.
- Creator
- Lim, Linda., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
Managers of software development need to know which components of a system are fault-prone. If this can be determined early in the development cycle then resources can be more effectively allocated and significant costs can be reduced. Case-Based Reasoning (CBR) is a simple and efficient methodology for building software quality models that can provide early information to managers. Our research focuses on two case studies. The first study analyzes source files and classifies them as fault-prone or not fault-prone. It also predicts the number of faults in each file. The second study analyzes the fault removal process, and creates models that predict the outcome of software inspections.
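
The core of the case-based reasoning approach described above is retrieval of similar past cases. A minimal nearest-neighbor sketch follows; the metrics, case base, and majority rule are illustrative, and real CBR systems add case adaptation and maintenance.

```python
# Sketch: classify a new source file by reusing the labels of its most
# similar previously measured files.
import numpy as np

def cbr_predict(case_base_X, case_base_y, query, k=3):
    """Return the majority fault-proneness label among the k most similar cases."""
    X = np.asarray(case_base_X, dtype=float)
    dists = np.linalg.norm(X - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    labels = np.asarray(case_base_y)[nearest]
    return int(labels.sum() * 2 >= k)      # 1 = fault-prone

# toy case base: [lines of code, changes, complexity] per past file
cases  = [[120, 2, 5], [900, 14, 31], [300, 4, 9], [1500, 22, 48]]
labels = [0, 1, 0, 1]
print(cbr_predict(cases, labels, query=[850, 12, 28]))   # likely 1 (fault-prone)
```
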
- Date Issued
- 2001
- PURL
- http://purl.flvc.org/fcla/dt/12746
- Subject Headings
- Computer software--Development, Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- Multivariate modeling of software engineering measures.
- Creator
- Lanning, David Lee., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
One goal of software engineers is to produce software products. An additional goal, that the software production must lead to profit, releases the power of the software product market. This market demands high quality products and tight cycles in the delivery of new and enhanced products. These market conditions motivate the search for engineering methods that help software producers ship products quicker, at lower cost, and with fewer defects. The control of software defects is key to meeting these market conditions. Thus, many software engineering tasks are concerned with software defects. This study considers two sources of variation in the distribution of software defects: software complexity and enhancement activity. Multivariate techniques treat defect activity, software complexity, and enhancement activity as related multivariate concepts. Applied techniques include principal components analysis, canonical correlation analysis, discriminant analysis, and multiple regression analysis. The objective of this study is to improve our understanding of software complexity and software enhancement activity as sources of variation in defect activity, and to apply this understanding to produce predictive and discriminant models useful during testing and maintenance tasks. These models serve to support critical software engineering decisions.
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/12383
- Subject Headings
- Software engineering, Computer software--Testing, Computer software--Quality control
- Format
- Document (PDF)
- Title
- An improved neural net-based approach for predicting software quality.
- Creator
- Guasti, Peter John., Florida Atlantic University, Khoshgoftaar, Taghi M., Pandya, Abhijit S.
- Abstract/Description
-
Accurately predicting the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics which allow them to take action against emerging quality problems. Most often the predictive models are based upon multiple regression analysis, which becomes unstable when certain data assumptions are not met. Since neural networks require no data assumptions, they are more appropriate for predicting software quality. This study proposes an improved neural network architecture that significantly outperforms multiple regression and other neural network attempts at modeling software quality. This is demonstrated by applying this approach to several large commercial software systems. After developing neural network models, we develop regression models on the same data. We find that the neural network models surpass the regression models in terms of predictive quality on the data sets considered.
- Date Issued
- 1995
- PURL
- http://purl.flvc.org/fcla/dt/15134
- Subject Headings
- Neural networks (Computer science), Computer software--Development, Computer software--Quality control, Software engineering
- Format
- Document (PDF)
- Title
- INTEGRATING DESIGN THINKING MODEL AND ITEMS PRIORITIZATION DECISION SUPPORT SYSTEMS INTO REQUIREMENTS MANAGEMENT IN SCRUM.
- Creator
- Alhazmi, Alhejab Shawqi, Huang, Shihong, Florida Atlantic University, Department of Computer and Electrical Engineering and Computer Science, College of Engineering and Computer Science
- Abstract/Description
-
The Agile methodologies have attracted the software development industry's attention due to their capability to overcome the limitations of traditional software development approaches and to cope with increasing complexity in system development. Scrum is one of the Agile software development processes broadly adopted by industry. Scrum promotes frequent customer involvement and incremental short releases. Despite its popular use, Scrum's requirements engineering stage is inadequately defined, which can lead to increased development time and cost, along with low quality or failure of the end products. This research shows the importance of activity planning of requirements engineering in improving product quality, cost, and scheduling, and it points out some drawbacks of Agile practices and available solutions. To improve Scrum requirements engineering by overcoming its challenges in some cases, such as providing a comprehensive understanding of the customer's needs, and by addressing the effects of those challenges in other cases, such as frequent changes of requirements, the Design Thinking model is integrated into the Scrum framework in the context of requirements engineering management. The use of the Design Thinking model, in the context of requirements engineering management, is validated through an in-depth scientific study of the IBM Design Thinking framework. In addition, this research presents an Items Prioritization dEcision Support System (IPESS), a tool to assist Product Owners with requirements prioritization. IPESS is built on information collected in the Design Thinking model. The IPESS tool adopts the Analytic Hierarchy Process (AHP) technique and the PageRank algorithm to deal with the specified factors and to achieve an optimal order of requirements items based on prioritization scores. IPESS is a flexible and comprehensive tool that focuses on different important aspects, including customer satisfaction and product quality. The IPESS tool is validated through an experiment conducted in a real-world project.
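
The AHP step that IPESS builds on can be sketched as deriving priority weights for backlog items from a pairwise comparison matrix via its principal eigenvector. The three-item matrix below is a toy example, not data from the thesis.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix.
import numpy as np

def ahp_priorities(pairwise):
    """Principal-eigenvector priority weights for an AHP pairwise matrix."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# entry (i, j): how much more important item i is than item j (Saaty's 1-9 scale)
comparisons = [[1,   3,   5],
               [1/3, 1,   2],
               [1/5, 1/2, 1]]
print(ahp_priorities(comparisons).round(3))   # roughly [0.65, 0.23, 0.12]
```
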
- Date Issued
- 2021
- PURL
- http://purl.flvc.org/fau/fd/FA00013699
- Subject Headings
- Scrum (Computer software development), Computer software--Development--Management, Software engineering
- Format
- Document (PDF)
- Title
- Rough Set-Based Software Quality Models and Quality of Data.
- Creator
- Bullard, Lofton A., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
In this dissertation we address two significant issues of concern: software quality modeling and data quality assessment. Software quality can be measured by software reliability. Reliability is often measured in terms of the time between system failures. A failure is caused by a fault, which is a defect in the executable software product. The time between system failures depends both on the presence and the usage pattern of the software. Finding faulty components in the development cycle of a software system can lead to a more reliable final system and will reduce development and maintenance costs. The issue of software quality is investigated by proposing a new approach, the rule-based classification model (RBCM), which uses rough set theory to generate decision rules to predict software quality. The new model minimizes over-fitting by balancing the Type I and Type II misclassification error rates. We also propose a model selection technique for rule-based models called rule-based model selection (RBMS). The proposed rule-based model selection technique utilizes the complete and partial matching rule sets of candidate RBCMs to determine the model with the least amount of over-fitting. In the experiments that were performed, the RBCMs were effective at identifying faulty software modules, and the RBMS technique was able to identify RBCMs that minimized over-fitting. Good data quality is a critical component for building effective software quality models. We address the significance of the quality of data on the classification performance of learners by conducting a comprehensive comparative study. Several trends were observed in the experiments. Class and attribute noise had the greatest impact on the performance of learners when they occurred simultaneously in the data. Class noise had a significant impact on the performance of learners, while attribute noise had no impact when it occurred in less than 40% of the most significant independent attributes. Random Forest (RF100), a group of 100 decision trees, was the most accurate and robust learner in all the experiments with noisy data.
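
The data-quality side of the study above can be sketched as a label-noise injection experiment around a 100-tree random forest (RF100). The data file and the exact evaluation protocol here are assumptions, not the dissertation's design.

```python
# Sketch: flip an increasing fraction of class labels and watch how a
# 100-tree random forest's cross-validated accuracy degrades.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
data = pd.read_csv("module_metrics.csv")                # hypothetical
X = data.drop(columns="fault_prone")
y = data["fault_prone"].to_numpy()                      # assumed 0/1 integer labels

for noise in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y.copy()
    flip = rng.random(len(y_noisy)) < noise             # corrupt this fraction of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    rf100 = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(rf100, X, y_noisy, cv=5).mean()
    print(f"class noise {noise:.0%}: accuracy {acc:.3f}")
```
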
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012567
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering, Computer arithmetic
- Format
- Document (PDF)
- Title
- Tree-based classification models for analyzing a very large software system.
- Creator
- Bullard, Lofton A., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
-
Software systems that control military radar systems must be highly reliable. A fault can compromise safety and security, and even cause death of military personnel. In this experiment we identify fault-prone software modules in a subsystem of a military radar system called the Joint Surveillance Target Attack Radar System, JSTARS. An earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected for different iterations of an operational prototype of the subsystem over a period of approximately three years. We used these metrics to train a decision tree model and to fit a discriminant model to classify each module as fault-prone or not fault-prone. The algorithm used to generate the decision tree model was TREEDISC, developed by the SAS Institute. The decision tree model is compared to the discriminant model.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15315
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering
- Format
- Document (PDF)
- Title
- Software quality modeling and analysis with limited or without defect data.
- Creator
- Seliya, Naeem A., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
The key to developing high-quality software is the measurement and modeling of software quality. In practice, software measurements are often used as a resource to model and comprehend the quality of software. The use of software measurements to understand quality is accomplished by a software quality model that is trained using software metrics and defect data of similar, previously developed, systems. The model is then applied to estimate quality of the target software project. Such an approach assumes that defect data is available for all program modules in the training data. Various practical issues can cause an unavailability or limited availability of defect data from the previously developed systems. This dissertation presents innovative and practical techniques for addressing the problem of software quality analysis when there is limited or completely absent defect data. The proposed techniques for software quality analysis without defect data include an expert-based approach with unsupervised clustering and an expert-based approach with semi-supervised clustering. The proposed techniques for software quality analysis with limited defect data include a semi-supervised classification approach with the Expectation-Maximization algorithm and an expert-based approach with semi-supervised clustering. Empirical case studies of software measurement datasets obtained from multiple NASA software projects are used to present and evaluate the different techniques. The empirical results demonstrate the attractiveness, benefit, and definite promise of the proposed techniques. The newly developed techniques presented in this dissertation are invaluable to the software quality practitioner challenged by the absence or limited availability of defect data from previous software development experiences.
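
A rough sketch of the limited-defect-data setting described above: most module labels are marked unknown and a self-training wrapper grows the labeled set iteratively. This generic semi-supervised stand-in is not the dissertation's EM or clustering procedure; file and column names are assumptions.

```python
# Sketch: semi-supervised classification with mostly missing defect labels.
import numpy as np
import pandas as pd
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("module_metrics.csv")                # hypothetical
X = data.drop(columns="fault_prone").to_numpy()
y = data["fault_prone"].to_numpy().copy()               # assumed 0/1 integer labels

rng = np.random.default_rng(0)
y[rng.random(len(y)) < 0.8] = -1                        # pretend 80% of labels are unknown

model = SelfTrainingClassifier(DecisionTreeClassifier(random_state=0), threshold=0.9)
model.fit(X, y)
print("modules labeled during self-training:", int((model.transduction_ != -1).sum()))
```
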
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/12151
- Subject Headings
- Software measurement, Computer software--Quality control, Computer software--Reliability--Mathematical models, Software engineering--Quality control
- Format
- Document (PDF)
- Title
- CAD design patterns.
- Creator
- Hayes, Joey C., Florida Atlantic University, Pandya, Abhijit S.
- Abstract/Description
-
Software reuse has been looked upon in recent years as a promising mechanism for achieving increased levels of software quality and productivity within an organization. A form of software reuse which has been gaining in popularity is the use of design patterns. Design patterns are a higher level of abstraction than source code and are proving to be a valuable resource for both software developers and new hires within a company. This thesis develops the idea of applying design patterns to the Computer Aided Design (CAD) software development environment. The benefits and costs associated with implementing a software reuse strategy are explained and the reasoning for developing design patterns is given. Design patterns are then described in detail and a potential method for applying design patterns within the CAD environment is demonstrated through the development of a CAD design pattern catalog.
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15358
- Subject Headings
- Computer-aided design, Computer-aided software engineering, Computer software--Development, Computer software--Reusability
- Format
- Document (PDF)
- Title
- Campus driver assistant on an Android platform.
- Creator
- Zankina, Iana., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
-
College campuses can be large, confusing, and intimidating for new students and visitors. Finding the campus may be easy using a GPS unit or Google Maps directions, but this is not the case when you are actually on the campus; there is no service that provides directional assistance for the campus itself. This thesis proposes a driver assistant application running on an Android platform that can direct drivers to different buildings and parking lots on the campus. The application's user interface lets the user select a user type, a campus, and a destination through drop-down menus and buttons. Once the user submits the needed information, the next portion of the application runs in the background. The app retrieves the campus map XML created by the mapping tool that was constructed for this project. The XML data containing all the map elements is then parsed and stored in a hierarchical data structure. The resulting objects are then used to construct a campus graph, on which an altered version of Dijkstra's shortest-path algorithm is executed. When the path to the destination has been discovered, the campus map with the computed path overlaid is displayed on the user's device, showing the route to the desired destination.
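
The routing core described above reduces to a shortest-path query over a weighted campus graph. The sketch below shows plain Dijkstra on a toy graph; node names and edge weights are invented, whereas the thesis derives them from the campus-map XML.

```python
# Sketch: Dijkstra's shortest path over a dict-of-dicts campus graph.
import heapq

def dijkstra(graph, start, goal):
    """Return (distance, path) from start to goal, or (inf, []) if unreachable."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (dist + w, nbr, path + [nbr]))
    return float("inf"), []

campus = {                                   # hypothetical intersections / buildings
    "Entrance": {"Lot5": 2, "Union": 4},
    "Lot5":     {"Union": 1, "Engineering": 5},
    "Union":    {"Engineering": 2, "Library": 3},
    "Engineering": {}, "Library": {},
}
print(dijkstra(campus, "Entrance", "Engineering"))
# (5, ['Entrance', 'Lot5', 'Union', 'Engineering'])
```
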
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3359159
- Subject Headings
- Mobile computing, Software engineering, Application software--Development
- Format
- Document (PDF)