Accuracy is defined as the difference between the true value and the observed average of a measurement. A lack of accuracy reflects a systematic bias in the measurement, such as a gauge that is out of calibration, worn, or used improperly by the operator. Accuracy is measured as the amount of error in a measurement in proportion to the total size of the measurement. One measurement is more accurate than another if it has a smaller relative error.
Ex) Suppose that two instruments measure a dimension whose true value is 0.250 inch. Instrument A may read 0.248 inch, whereas instrument B may read 0.259 inch. The relative error of instrument A is (0.250 − 0.248)/0.250 = 0.8%; the relative error of instrument B is (0.259 − 0.250)/0.250 = 3.6%. Thus, instrument A is said to be more accurate than instrument B.
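The relative-error comparison above can be sketched in a few lines of Python (the instrument readings and true value are taken from the example; the function name is illustrative, not from any standard library):

```python
# Sketch: comparing the relative error of two instruments
# measuring a dimension whose true value is 0.250 inch.

TRUE_VALUE = 0.250  # true dimension, in inches


def relative_error(reading, true_value=TRUE_VALUE):
    """Relative error: measurement error in proportion to the true value."""
    return abs(reading - true_value) / true_value


error_a = relative_error(0.248)  # instrument A reads 0.248 inch
error_b = relative_error(0.259)  # instrument B reads 0.259 inch

print(f"Instrument A relative error: {error_a:.1%}")  # 0.8%
print(f"Instrument B relative error: {error_b:.1%}")  # 3.6%
# The smaller relative error means instrument A is more accurate.
```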
Precision is defined as the closeness of repeated measurements to each other. Precision, therefore, relates to the variance of repeated measurements. A measuring instrument with a low variance is more precise than another having a higher variance. Low precision is the result of random variation that is built into the instrument, such as friction among its parts. This random variation may be the result of a poor design or lack of maintenance.
Ex) Now suppose that each instrument measures the dimension three times. Instrument A records values of 0.248, 0.246, and 0.251; instrument B records values of 0.259, 0.258, and 0.259. Instrument B is more precise than instrument A because its values are clustered closer together.
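Since precision relates to the variance of repeated measurements, the comparison above can be checked numerically with Python's standard library (the readings are from the example; using the sample variance here is an assumption, as the text does not specify which variance estimator to use):

```python
# Sketch: comparing precision via the sample variance of repeated readings.
import statistics

readings_a = [0.248, 0.246, 0.251]  # instrument A's three measurements
readings_b = [0.259, 0.258, 0.259]  # instrument B's three measurements

var_a = statistics.variance(readings_a)  # sample variance of A
var_b = statistics.variance(readings_b)  # sample variance of B

print(f"Variance of A: {var_a:.2e}")
print(f"Variance of B: {var_b:.2e}")
# The instrument with the lower variance is the more precise one.
print("More precise instrument:", "B" if var_b < var_a else "A")
```

Note that B is more precise (lower variance) even though A is more accurate (smaller relative error), illustrating that the two properties are independent.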