Current Search: Reliability Engineering
- Title
- Reliability analyses and risk assessment techniques in nonparametric applications.
- Creator
- Ross, Robert Thomas, Florida Atlantic University, Mazouz, Abdel Kader, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- Reliability and risk assessment play an important role in product design, development, and production. For mass-produced items, data is abundant and testing during product development and certification is usually thorough. In contrast, for nonparametric applications, product cost is high, production is limited, and the product is often used only once and discarded. In this type of manufacturing, data is usually limited because of the cost of testing, which makes reliability and risk assessment difficult. To circumvent this shortfall in data and its analysis, this paper provides an alternative approach to models for reliability and risk analysis. This was accomplished by first surveying existing literature and models, then approaching the problem with a set of block diagrams for each of the required analyses. A full set of current models and failure-analysis tools was also incorporated. With these tools, the proposed methodology was demonstrated in case studies, which provided validation for the methodology. (A minimal sketch of the block-diagram arithmetic follows this record.)
- Date Issued
- 2002
- PURL
- http://purl.flvc.org/fcla/dt/12896
- Subject Headings
- Nonparametric statistics, New products, Reliability (Engineering)
- Format
- Document (PDF)
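The abstract above does not reproduce the thesis's block-diagram models, but the arithmetic that any reliability block diagram (RBD) analysis builds on is standard. A minimal sketch, assuming simple series/parallel composition of independent component reliabilities (the example system and numbers are illustrative):

```python
# Minimal reliability block diagram (RBD) sketch. The thesis's own models are
# not given in the abstract; this only illustrates the standard series/parallel
# composition rules that RBD-based analyses build on.

def series(*reliabilities: float) -> float:
    """All blocks must work: R = product of R_i."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(*reliabilities: float) -> float:
    """At least one block must work: R = 1 - product of (1 - R_i)."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

# Example: two redundant sensors (0.90 each) feeding one processor (0.99).
system_r = series(parallel(0.90, 0.90), 0.99)
print(f"System reliability: {system_r:.4f}")  # 0.9801
```

These two rules compose recursively, which is part of what makes block diagrams attractive when test data is too scarce for richer statistical models.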
- Title
- Fault tolerance and reliability patterns.
- Creator
- Buckley, Ingrid A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The need to achieve dependability in critical infrastructures has become indispensable for government and commercial enterprises, and more pressing with the proliferation of malicious attacks on critical systems such as healthcare, aerospace, and airline applications. Additionally, because of the widespread use of web services in critical systems, ensuring their reliability is paramount. We believe that patterns can be used to achieve dependability. We conducted a survey of fault tolerance, reliability, and web service products and patterns to better understand them. One objective of the survey was to evaluate the state of these patterns and to investigate which standards are being used in products and what tool support they have. The survey found that these patterns are insufficient and that many web services products do not use them. In light of this, we wrote several fault tolerance and web services reliability patterns and present an analysis of them. (A sketch of one classic fault-tolerance pattern follows this record.)
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/FAU/166447
- Subject Headings
- Fault-tolerant computing, Computer software, Reliability, Reliability (Engineering), Computer programs
- Format
- Document (PDF)
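The abstract does not name the individual patterns the thesis contributes, so the sketch below only illustrates the general shape of one well-known fault-tolerance pattern, retry with failover to a backup replica. The function names and retry policy are illustrative assumptions, not the thesis's patterns:

```python
# Sketch of one classic fault-tolerance pattern: retry with failover to a
# backup service. Names and policy here are assumptions for illustration.
from typing import Callable, Sequence

def invoke_with_failover(services: Sequence[Callable[[], str]],
                         retries_per_service: int = 2) -> str:
    """Try each replica in order, retrying transient failures, before giving up."""
    last_error: Exception | None = None
    for service in services:
        for _ in range(retries_per_service):
            try:
                return service()
            except Exception as err:  # a real pattern would narrow this
                last_error = err
    raise RuntimeError("all replicas failed") from last_error

# Hypothetical usage: a flaky primary and a healthy backup.
def primary() -> str:
    raise TimeoutError("primary unreachable")

def backup() -> str:
    return "response from backup"

print(invoke_with_failover([primary, backup]))
```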
- Title
- Towards a methodology for building reliable systems.
- Creator
- Buckley, Ingrid A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- Reliability is a key system characteristic and an increasing concern for current systems. Greater reliability is necessary due to the new ways in which services are delivered to the public; services are used by many industries, including health care, government, and telecommunications, and in tools and products. We have defined an approach to incorporate reliability along the stages of system development. We first surveyed existing dependability patterns to evaluate their possible use in this methodology. We then defined a systematic methodology that helps the designer apply reliability, in the form of patterns, at every step of the development life cycle. A systematic failure enumeration process to define corresponding countermeasures was proposed as a guideline for deciding where reliability is needed. We introduced the idea of failure patterns, which show how failures manifest and propagate in a system. We also looked at how to combine reliability and security. Finally, we defined an approach to certify the level of reliability of an implemented web service. All these steps lead towards a complete methodology.
- Date Issued
- 2012
- PURL
- http://purl.flvc.org/FAU/3342037
- Subject Headings
- Computer software, Reliability, Reliability (Engineering), Computer programs, Fault-tolerant computing
- Format
- Document (PDF)
- Title
- Resilient system design and efficient link management for the wireless communication of an ocean current turbine test bed.
- Creator
- Marcus, Anthony M., Cardei, Ionut E., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- To ensure that a system is robust and will continue operating even when facing disruptive or traumatic events, we created a methodology that system architects and designers can use to locate risks and hazards in a design and to develop more robust and resilient system architectures. It uncovers design vulnerabilities by exploring a system's component operational state space from multi-dimensional perspectives, and it conducts a quantitative design-space analysis by means of probabilistic risk assessment using Bayesian networks. We also developed a tool that automates this methodology and demonstrated its use in an assessment of the OCTT PHM communication system architecture. To boost the robustness of a wireless communication system and to efficiently allocate bandwidth, manage throughput, and ensure quality of service on a wireless link, we created a wireless link management architecture that applies sensor fusion to gather and store platform networked sensor metrics, uses time-series forecasting to predict the platform position, and manages data transmission for the links (class-based packet scheduling and capacity allocation). To validate the architecture, we developed a link management tool that forecasts link quality and uses cross-layer scheduling and allocation to modify capacity allocation at the IP layer for various packet flows (HTTP, SSH, RTP), preventing congestion and priority inversion. Wireless sensor networks (WSNs) are vulnerable to a plethora of fault types and external attacks after deployment. To maintain trust in these systems and increase WSN reliability in various scenarios, we developed a framework for node fault detection and prediction in WSNs. Individual wireless sensor nodes sense characteristics of an object or environment; after a smart device connects to a WSN's base station, these sensed metrics are gathered from each node in the network and stored on the device in real time. The framework issues alerts identifying nodes classified as faulty, and when specific sensors exceed a percentage of a threshold (normal range), it can discern between faulty sensor hardware and anomalous sensed conditions. Furthermore, we developed two proof-of-concept prototype applications based on this framework. (A minimal sketch of the threshold-based fault flagging follows this record.)
- Date Issued
- 2013
- PURL
- http://purl.flvc.org/fau/fd/FA0004035
- Subject Headings
- Fault tolerance (Engineering), Reliability (Engineering), Sensor networks -- Security measures, Systems engineering, Wireless communication systems -- Technological innovations
- Format
- Document (PDF)
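As a rough illustration of the threshold-based alerting this framework describes, here is a minimal sketch. The sensor names, normal ranges, and 10% guard band are assumptions, and the real framework additionally discerns faulty hardware from genuinely anomalous conditions, which this sketch does not attempt:

```python
# Sketch of threshold-based sensor fault flagging in the spirit of the WSN
# framework above. Field names, ranges, and the guard band are assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    node_id: str
    sensor: str
    value: float

# Hypothetical "normal ranges" per sensor type.
NORMAL_RANGE = {"temperature": (0.0, 50.0), "humidity": (10.0, 90.0)}
GUARD_BAND = 0.10  # alert when within 10% of a threshold, fault when beyond it

def classify(reading: Reading) -> str:
    lo, hi = NORMAL_RANGE[reading.sensor]
    span = hi - lo
    if reading.value < lo or reading.value > hi:
        return "fault"   # outside the normal range entirely
    if reading.value < lo + GUARD_BAND * span or reading.value > hi - GUARD_BAND * span:
        return "alert"   # approaching a threshold
    return "ok"

print(classify(Reading("n1", "temperature", 48.0)))  # alert
print(classify(Reading("n2", "humidity", 95.0)))     # fault
```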
- Title
- Solar cell degradation under ionizing radiation ambient: preemptive testing and evaluation via electrical overstressing.
- Creator
- Thengum Pallil, George A., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The efforts addressed in this thesis assay the degradation of modern solar cells used in space-borne and/or nuclear-environment applications. The study addresses the following: 1. modeling degradation in Si pn-junction solar cells (devices-under-test, or DUTs) under different ionizing radiation dosages; 2. preemptive and predictive testing to determine the aforesaid degradation, which decides the eventual reliability of the DUTs; and 3. using electrical overstressing (EOS) to emulate the fluence of ionizing radiation dosage on the DUT. Relevant analytical methods, computational efforts, and experimental studies are described. Forward/reverse characteristics as well as AC impedance performance of a set of DUTs are evaluated before and after electrical overstressing. Changes in observed DUT characteristics are correlated to equivalent ionizing-radiation dosages. The results are compiled, cause-effect considerations are discussed, and conclusions are enumerated with directions for future studies. (A sketch of the standard single-diode degradation model follows this record.)
- Date Issued
- 2010
- PURL
- http://purl.flvc.org/FAU/2979384
- Subject Headings
- Renewable energy sources, Solar cells, Effect of radiation on, Reliability (Engineering), Electric discharges, Ionizing radiation
- Format
- Document (PDF)
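The thesis's degradation models are not given in the abstract, but pn-junction solar cells are conventionally described by the single-diode equation, under which radiation or EOS damage typically appears as a higher saturation current and a lower shunt resistance. A minimal sketch with purely illustrative parameter values:

```python
# Sketch of the standard single-diode solar cell model, often used to study
# degradation. The thesis's actual model and parameters are not given in the
# abstract; all numbers below are illustrative only.
import math

def cell_current(v: float, i_ph: float, i0: float, rsh: float,
                 n: float = 1.5, vt: float = 0.02585) -> float:
    """I = Iph - I0*(exp(V/(n*Vt)) - 1) - V/Rsh (series resistance omitted)."""
    return i_ph - i0 * (math.exp(v / (n * vt)) - 1.0) - v / rsh

fresh = cell_current(0.5, i_ph=3.0, i0=1e-9, rsh=1000.0)
aged = cell_current(0.5, i_ph=3.0, i0=1e-7, rsh=100.0)  # degraded parameters
print(f"I at 0.5 V, fresh: {fresh:.3f} A, degraded: {aged:.3f} A")
```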
- Title
- Rough Set-Based Software Quality Models and Quality of Data.
- Creator
- Bullard, Lofton A., Khoshgoftaar, Taghi M., Florida Atlantic University, College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- In this dissertation we address two significant concerns: software quality modeling and data quality assessment. Software quality can be measured by software reliability, often expressed as the time between system failures. A failure is caused by a fault, which is a defect in the executable software product, and the time between system failures depends on both the presence of faults and the usage pattern of the software. Finding faulty components during the development cycle of a software system leads to a more reliable final system and reduces development and maintenance costs. We investigate software quality by proposing a new approach, the rule-based classification model (RBCM), which uses rough set theory to generate decision rules that predict software quality. The new model minimizes over-fitting by balancing the Type I and Type II misclassification error rates. We also propose a model selection technique for rule-based models, called rule-based model selection (RBMS), which utilizes the complete and partial matching rule sets of candidate RBCMs to determine the model with the least over-fitting. In the experiments performed, the RBCMs were effective at identifying faulty software modules, and the RBMS technique was able to identify RBCMs that minimized over-fitting. Good data quality is a critical component for building effective software quality models, so we also address the significance of data quality on the classification performance of learners in a comprehensive comparative study. Several trends were observed: class and attribute noise had the greatest impact on the performance of learners when they occurred simultaneously in the data; class noise had a significant impact on its own, while attribute noise had no impact when it occurred in less than 40% of the most significant independent attributes. Random Forest (RF100), an ensemble of 100 decision trees, was the most accurate and robust learner in all the experiments with noisy data. (A sketch of the Type I/Type II balancing criterion follows this record.)
- Date Issued
- 2008
- PURL
- http://purl.flvc.org/fau/fd/FA00012567
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering, Computer arithmetic
- Format
- Document (PDF)
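The rough-set rule machinery itself is not reproduced in the abstract, so the sketch below only illustrates the balancing criterion: compute Type I and Type II error rates, then choose the decision threshold where they are closest. The label convention (Type I = not-fault-prone flagged fault-prone) and the toy scores are assumptions:

```python
# Sketch of Type I / Type II error balancing for a generic scored classifier,
# standing in for the thesis's rough-set rules. Conventions vary; illustrative.

def error_rates(scores, labels, threshold):
    """labels: 1 = fault-prone. Returns (type_i_rate, type_ii_rate)."""
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

def balanced_threshold(scores, labels, candidates):
    """Pick the threshold where the two error rates are closest."""
    return min(candidates,
               key=lambda t: abs(error_rates(scores, labels, t)[0]
                                 - error_rates(scores, labels, t)[1]))

# Toy data: higher score = more evidence of fault-proneness.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t = balanced_threshold(scores, labels, [i / 10 for i in range(1, 10)])
print(t, error_rates(scores, labels, t))  # 0.5 (0.25, 0.25)
```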
- Title
- Tree-based classification models for analyzing a very large software system.
- Creator
- Bullard, Lofton A., Florida Atlantic University, Khoshgoftaar, Taghi M.
- Abstract/Description
- Software systems that control military radar systems must be highly reliable: a fault can compromise safety and security, and even cause the death of military personnel. In this experiment we identify fault-prone software modules in a subsystem of a military radar system, the Joint Surveillance Target Attack Radar System (JSTARS); an earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected for different iterations of an operational prototype of the subsystem over a period of approximately three years. We used these metrics to train a decision tree model and to fit a discriminant model, each classifying modules as fault-prone or not fault-prone. The decision tree model was generated with the TREEDISC algorithm, developed by the SAS Institute, and is compared to the discriminant model. (A sketch of tree-based fault-proneness classification follows this record.)
- Date Issued
- 1996
- PURL
- http://purl.flvc.org/fcla/dt/15315
- Subject Headings
- Computer software--Quality control, Computer software--Reliability, Software engineering
- Format
- Document (PDF)
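TREEDISC is a SAS procedure, so the sketch below uses scikit-learn's CART implementation as a stand-in for the same idea: train a small tree on product metrics to flag fault-prone modules. The metric columns and labels are fabricated for illustration:

```python
# Sketch of tree-based fault-proneness classification. CART stands in for
# SAS's TREEDISC; the module metrics and labels below are fabricated.
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: lines of code, cyclomatic complexity, number of changes.
X = [[120, 4, 1], [850, 22, 9], [300, 8, 2], [1500, 35, 14],
     [90, 3, 0], [640, 18, 7], [210, 6, 1], [1100, 28, 11]]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = fault-prone

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["loc", "complexity", "changes"]))
print(tree.predict([[700, 20, 8]]))  # likely flagged fault-prone
```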
- Title
- Software quality modeling and analysis with limited or without defect data.
- Creator
- Seliya, Naeem A., Florida Atlantic University, Khoshgoftaar, Taghi M., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
- Abstract/Description
- The key to developing high-quality software is the measurement and modeling of software quality. In practice, software measurements are used to model and comprehend software quality through a model trained on the software metrics and defect data of similar, previously developed systems; the model is then applied to estimate the quality of the target software project. Such an approach assumes that defect data is available for all program modules in the training data, but various practical issues can make defect data from previously developed systems limited or entirely unavailable. This dissertation presents innovative, practical techniques for software quality analysis when defect data is limited or completely absent. The proposed techniques for analysis without defect data are an expert-based approach with unsupervised clustering and an expert-based approach with semi-supervised clustering. The proposed techniques for analysis with limited defect data are a semi-supervised classification approach using the Expectation-Maximization algorithm and an expert-based approach with semi-supervised clustering. Empirical case studies of software measurement datasets from multiple NASA software projects are used to present and evaluate the techniques, and the empirical results demonstrate their benefit and promise. The techniques developed in this dissertation are invaluable to software quality practitioners challenged by absent or limited defect data from previous software development experiences. (A sketch of semi-supervised quality classification follows this record.)
- Date Issued
- 2005
- PURL
- http://purl.flvc.org/fcla/dt/12151
- Subject Headings
- Software measurement, Computer software--Quality control, Computer software--Reliability--Mathematical models, Software engineering--Quality control
- Format
- Document (PDF)
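The dissertation's approach is built on the Expectation-Maximization algorithm; the sketch below instead uses scikit-learn's self-training wrapper, a different but analogous semi-supervised scheme, just to show the overall workflow of learning from partially labeled module metrics. The data is fabricated and unlabeled modules carry the label -1:

```python
# Sketch of semi-supervised software quality modeling. Self-training stands in
# for the dissertation's EM-based method; the metric data is fabricated.
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.naive_bayes import GaussianNB

# Columns: lines of code, cyclomatic complexity.
X = [[120, 4], [850, 22], [300, 8], [1500, 35],
     [90, 3], [640, 18], [210, 6], [1100, 28]]
y = [0, 1, -1, 1, 0, -1, -1, 1]  # -1 = defect data unavailable

model = SelfTrainingClassifier(GaussianNB()).fit(X, y)
print(model.predict([[250, 7], [900, 25]]))  # expected: [0 1]
```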
- Title
- Model of preventive maintenance, reliability and replacement analysis.
- Creator
- Saenz, George, Florida Atlantic University, Mazouz, Abdel Kader, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
- Abstract/Description
- The main objective of this research is the development of a model that combines reliability engineering analysis with cost accounting techniques to automate the analysis of maintenance data. With it, a maintenance engineer can estimate when to schedule the next preventive maintenance for any repairable system under consideration, before failure occurs. The model also performs replacement analysis between the system and similar equipment in order to select an alternative based on reliability and maintenance history. This work develops the underlying mathematical procedures and the analysis of the statistical tests. A case study applies the developed model in the automotive environment. Because reliability and cost analysis are integrated in the same computer-based model, a maintenance plan is derived from the failure history of the equipment. (A sketch of the classic preventive-replacement cost calculation follows this record.)
- Date Issued
- 1994
- PURL
- http://purl.flvc.org/fcla/dt/15099
- Subject Headings
- Maintainability (Engineering)--Statistical methods, Replacement of industrial equipment--Statistical methods, Reliability (Engineering)--Statistical methods, Cost accounting--Data processing
- Format
- Document (PDF)
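The exact formulation is not given in the abstract, but models of this kind typically reduce to the classic age-replacement calculation: fit a lifetime distribution to the failure history, then pick the preventive-maintenance age that minimizes expected cost per unit time. A minimal sketch, with assumed Weibull parameters and costs:

```python
# Sketch of the classic age-replacement calculation such a model performs.
# The thesis's exact formulation is not given in the abstract; the Weibull
# parameters and costs below are illustrative assumptions.
import math

BETA, ETA = 2.5, 1000.0          # Weibull shape/scale (hours); beta > 1: wear-out
C_PREVENTIVE, C_FAILURE = 100.0, 1000.0

def reliability(t: float) -> float:
    return math.exp(-((t / ETA) ** BETA))

def cost_rate(T: float, steps: int = 2000) -> float:
    """Expected cost per hour: [Cp*R(T) + Cf*(1-R(T))] / E[min(life, T)]."""
    dt = T / steps
    expected_uptime = sum(reliability((i + 0.5) * dt) for i in range(steps)) * dt
    r = reliability(T)
    return (C_PREVENTIVE * r + C_FAILURE * (1.0 - r)) / expected_uptime

best_T = min(range(50, 2001, 10), key=cost_rate)
print(f"Replace preventively every ~{best_T} h "
      f"(cost rate {cost_rate(best_T):.4f} $/h)")
```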