SRC Forum - Message Replies
Forum: Reliability & Maintainability Questions and Answers
Topic: Reliability & Maintainability Questions and Answers
Topic Posted by: Reliability & Maintainability Forum
Organization: System Reliability Center
Date Posted: Mon Aug 31 12:47:36 US/Eastern 1998
Posted by: Jim Miller
Organization: United Defense, L.P.
Date posted: Wed Mar 28 14:16:21 US/Eastern 2001
Subject: Reliability Prediction Confidence Level
We are being asked to provide a confidence level (percent) for our current reliability prediction of a Navy mechanical/electrical/electronic system still in preliminary design. Several areas of the design are still in the "maybe it will be like this" stage. Prediction data sources currently include a mix of handbook data, data supplied by hopeful vendors, MIL-HDBK-217, and Navy/Army failure rate data for similar items we may be using in the new design. Can a confidence level be assigned or calculated at this stage of the prediction, or at any stage of the prediction? If so, is there an established methodology for doing so? We understand that as the design goes forward and actual hardware is specified and tested, and subassemblies are built and tested, we can incorporate vendor data, test data, FRACAS data, etc. into the prediction, thereby increasing our "confidence" in the prediction. But can this confidence ever be quantified?
Subject: Reliability Prediction Confidence Levels
Reply Posted by: B. W. Dudley
Organization: Reliability Analysis Center
Date Posted: Wed Mar 28 16:18:46 US/Eastern 2001
MIL-HDBK-217 models were based on data from various sources; complete models were seldom developed under a single contract, nor was the failure data from a single source. For example, all MIL-HDBK-217 environmental factors were developed under study efforts separate from the part failure models. Because of the fragmented nature of the data, and because it is often necessary to interpolate or extrapolate from many available data sources when developing new models, no statistical confidence intervals should be associated with the overall model results. In addition, adding vendor and Army/Navy failure rate data to the mix results in a combination prediction that may or may not represent the "new" design. Outside data sources usually come from units or components that were previously developed and may be similar to the new ones, but they may use different technologies; hence, their correlation to the "new" design is indeterminable.
If the similar units or components are representative of the technology of the "new" design and sufficient hours and failures are available, confidence levels can be developed using chi-square statistics for those "elements" alone. Combining those elements with MIL-HDBK-217 prediction models is not possible, as the 217 models have no built-in capability for incorporating add-on data, so a complete confidence limit assessment is lacking until actual confidence level testing can be performed on the "entire product".
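To illustrate the chi-square approach for those elements with observed hours and failures, the sketch below computes a one-sided lower confidence bound on MTBF for a time-truncated test. The unit-hours, failure count, and function name are hypothetical examples, not figures from the original question:

```python
from scipy.stats import chi2

def mtbf_lower_bound(total_hours, failures, confidence=0.90):
    """One-sided lower confidence bound on MTBF for a
    time-truncated test: theta_L = 2T / chi2(C, 2r + 2),
    where T is total unit-hours and r is the failure count."""
    dof = 2 * failures + 2
    return 2.0 * total_hours / chi2.ppf(confidence, dof)

# Hypothetical similar-item data: 10,000 unit-hours, 5 failures
point_estimate = 10000 / 5             # 2000-hour point estimate of MTBF
lower_90 = mtbf_lower_bound(10000, 5)  # roughly 1078 hours at 90% confidence
```

The lower bound is always below the point estimate; as hours accumulate with the same failure rate, the bound tightens toward the point estimate, which is the quantitative sense in which more test data increases "confidence."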
As many users of reliability prediction models know, the purpose of a reliability prediction is a feasibility evaluation that compares competing designs and identifies potential problem areas. Predictions should not be intended to demonstrate field operation. The reasons for this are numerous, including differing failure definitions for field problems that MIL-HDBK-217 and other models do not account for. These problems include maintenance-induced failures, intermittent failures (cannot duplicate), software problems, and design problems (i.e., overstressed parts operated beyond their ratings). It must be emphasized that this does not diminish the value of the handbook or the prediction process, since none of the purposes described above require an absolute prediction of field reliability.
For your information, our new PRISM reliability model does have some built-in capability for adding test and field data to change the model outputs. In addition, induced failures, software problems, and overstress conditions are included in the PRISM reliability model.