
THE RANDOM ERRORS AND ESTIMATION OF THEIR INFLUENCE ON THE MEASUREMENT RESULT




While systematic errors can theoretically (and in most cases, to a considerable extent, also practically) be excluded from measurement results, random errors cannot be avoided experimentally.

It is necessary, however, to know how to estimate the influence of random errors on the measurement results in each case, using the statistical material of repeated observations of the physical quantity (PQ) under research. For this purpose the mathematical methods of probability theory are applied.

In the simplest case, the problem of estimating the influence of random errors is stated as follows.

Having made n equivalent observations of the same physical quantity, we have a series of results:

X_1, X_2, X_3, \ldots, X_n ,

where X_i is the result of the i-th observation and n is the number of observations in the series.

The systematic errors have already been excluded, that is, the measurement is correct. The task is to estimate the influence of random errors on the measurement result.

The most reliable value obtainable from a given series is the arithmetic mean:

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i .

The random error \Delta_i of each i-th observation is

\Delta_i = X_i - X_{\text{true}} ,

where X_true is the true value of the PQ under research.

Since X_true is inaccessible, the random error concept has only theoretical, not practical, sense. In practice the residual error v_i of the i-th observation is used instead:

v_i = X_i - \bar{X} .

As the number of observations grows without limit, the arithmetic mean tends to the true value, and the residual error tends to the random one.

To check the correctness of the calculations it is also necessary to remember that the algebraic sum of the residual errors equals zero (up to rounding):

\sum_{i=1}^{n} v_i = 0 .
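The bookkeeping above (mean, residuals, zero-sum check) can be sketched in a few lines of Python; the observation values are purely illustrative:

```python
# Illustrative series of n equivalent observations of the same quantity.
observations = [10.02, 9.98, 10.01, 9.97, 10.02]
n = len(observations)

# The arithmetic mean -- the most reliable value from the series.
mean = sum(observations) / n

# Residual errors v_i = X_i - mean, used in place of the unknowable
# random errors (X_true is inaccessible).
residuals = [x - mean for x in observations]

# Calculation check: the algebraic sum of the residuals is zero
# (up to floating-point rounding).
assert abs(sum(residuals)) < 1e-9
```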

What criterion should be used to characterize and compare the accuracy of different series of observations?

The most widespread one is the (experimental) standard deviation σ. It is a parameter of the distribution function of the observation results, which characterizes their dispersion. It is determined as the positive square root of the variance of the observation results and is expressed in the same units as the measurand. The variance is a less convenient parameter, because its unit is the square of the measurand unit.

Let us compare two targets (Fig. 12) showing the results of fire from two guns (an analogy to two series of measurements of the same PQ by different instruments; the analogue of a measurement error is the deflection of the hit point from the target centre).

Fig. 12. Comparison of the results of fire from two guns.

 

Concentration of the hits is characteristic of the left target, though noticeable deviations occasionally occur. The right target is characterized by a more even "dissipation" of the hits.

Comparing these targets, an experienced shooter will say at once that the gun which fired at the left target is the better one.

It is more difficult to make such an estimation having only two series of numbers, without a visual image. But having computed the experimental standard deviations for each of the series, we shall see that for the left target it is much smaller than for the right one:

\sigma_L < \sigma_R .
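A numeric counterpart of the two targets, with invented hit deflections (statistics.stdev uses the n − 1 divisor):

```python
import statistics

# Hypothetical deflections of the hit points from the target centre.
left_target = [0.1, -0.2, 0.0, 0.15, -0.1, 0.05]   # concentrated hits
right_target = [0.8, -0.9, 0.5, -0.6, 0.7, -0.4]   # even "dissipation"

s_left = statistics.stdev(left_target)
s_right = statistics.stdev(right_target)

# The smaller ESD identifies the better gun (or series of measurements).
assert s_left < s_right
```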

 

To calculate the experimental standard deviation (ESD), it is necessary to know the random error distribution law. This law relates the value of the random error to the probability of its appearance.

One of the laws most widespread in measurement practice is the normal distribution law of random errors, the so-called Gauss law (Fig. 13).

Fig. 13. The normal distribution law of random errors.

 

The probability density of the random error is plotted on the ordinate axis; the random error itself is plotted on the abscissa axis.

In Fig. 13 two curves are plotted. They characterize two series of observations with different ESDs, σ1 and σ2. The curve with ESD σ1 may characterize the results of fire at the left target (Fig. 12), and the one with σ2 at the right target. Both curves obey the Gauss law, which is based on two axioms.

1. Axiom of randomness: at a very large number of observations, random errors of the same magnitude but opposite sign occur equally often.

2. Axiom of distribution: small errors arise most often, and large ones occur the more rarely, the larger they are.

The graphs are a strong confirmation of these axioms.

Indeed, the axiom of randomness is confirmed by the symmetrical disposition of the curves with respect to the ordinate axis: on the right side are the positive random errors, and on the left side the negative ones.

The axiom of distribution is confirmed by the fact that the greatest probability of occurrence (the highest ordinate) belongs to the zero error and the small errors close to it. At the same time, large errors, both positive and negative, have small ordinates, that is, a small probability of occurrence.

The graph for σ1 is characterized by the fact that the larger part of the area bounded by it is concentrated in the zone of small errors, while the graph for σ2 is distinguished by a more even distribution of the error dissipation: there is no big difference between the occurrence probabilities of small and large errors.

When the random errors are distributed according to the normal law, 95 % of them fall within the limits ±2σ, and 99.73 % within the limits ±3σ.

These limits, between which the random errors are concentrated with a known probability, are called the confidence interval, and this probability is called the confidence level. For example, for the normal distribution law of errors the confidence interval from −3σ to +3σ corresponds to the confidence level P = 0.9973.

Errors larger than 3σ by modulus occur very seldom (not more often than 27 times per 10 000 measurements); therefore they are called gross errors and, as a rule, are rejected from the series of observation results.
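A sketch of gross-error rejection by the 3σ rule; the series is invented, with one deliberately spoiled reading:

```python
import statistics

# Twenty "good" readings plus one spoiled observation (11.00).
observations = [10.00, 10.01, 9.99, 10.02, 9.98] * 4 + [11.00]

mean = statistics.mean(observations)
sigma = statistics.stdev(observations)

# Reject observations whose residual exceeds 3*sigma by modulus.
kept = [x for x in observations if abs(x - mean) <= 3 * sigma]
rejected = [x for x in observations if abs(x - mean) > 3 * sigma]

assert rejected == [11.00]
assert len(kept) == 20
```

Note that with very few observations a single outlier inflates σ so much that it may stay below 3σ; the rule works reliably only on reasonably long series.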

 

 

The experimental standard deviation σ can be approximately calculated by the Bessel formula:

\sigma = \sqrt{ \frac{ \sum_{i=1}^{n} v_i^2 }{ n - 1 } } .

 
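A minimal check of the Bessel formula against the standard library (the observation values are illustrative):

```python
import math
import statistics

observations = [10.02, 9.98, 10.01, 9.97, 10.02]
n = len(observations)
mean = sum(observations) / n

# Bessel formula: square root of the sum of squared residuals over n - 1.
sigma = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))

# statistics.stdev implements the same n - 1 estimator.
assert math.isclose(sigma, statistics.stdev(observations))
```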

To improve the measurement accuracy, that is, to reduce σ, it is necessary to increase the number of observations. According to the theory, if each value of the series given above is considered as the arithmetic mean of n observations, the estimate of its ESD is diminished √n times:

\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}} .

A reliable, though labour-consuming, way of raising the measurement accuracy follows: to increase the measurement accuracy n times, it is necessary to increase the number of observations n² times.
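The √n rule can be checked by a simulation with normally distributed errors (all numbers below are illustrative, not from the text):

```python
import random
import statistics

# Simulated check of the sqrt(n) rule: averaging n observations shrinks
# the ESD of the result about sqrt(n) times.
random.seed(0)
n = 25          # observations averaged per measurement
trials = 2000   # simulated measurements

single = [random.gauss(0.0, 1.0) for _ in range(trials)]
averaged = [statistics.mean(random.gauss(0.0, 1.0) for _ in range(n))
            for _ in range(trials)]

ratio = statistics.stdev(single) / statistics.stdev(averaged)

# Expect a ratio close to sqrt(25) = 5.
assert 4.0 < ratio < 6.0
```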

8.1. Approximations of the random error distribution laws.

Each series of observations has its own distribution law. But for certain groups of MIs, conditions of measurement performance, objects, etc., it is possible to distinguish and generalize similar distribution laws. They are called approximations, that is, approaches to the actual laws.

The approximations of the distribution laws most widespread in measurement practice are tabulated in Fig. 14. For each approximation the title, the graph and the factor g = a/σ are given, where a is the half-width of the confidence interval at the confidence level P = 0.9973 and σ is the experimental standard deviation (ESD).

 

 

Fig. 14. Approximations of the error distribution laws.

An approximation of the normal distribution law of random errors is the so-called truncated normal law, truncated at the confidence interval limits ±3σ. As reviewed above, 99.73 % of errors lie between these limits, so the remaining 0.27 % occurring outside them is simply neglected. This law is very widespread; it takes place when the measurement result is influenced by many mutually independent causes.

When the distribution law of the random errors of a series of observations is unknown, this approximation is most often accepted. The computed confidence interval will then be the largest compared with the other approximations (here the factor g = 3), so one may be sure that the actual errors in practice will not exceed the computed ones.

The uniform distribution law has interesting features. The errors of deflectional instruments are distributed exactly in accordance with it (they can be detected during MI verification). The permissible absolute errors, determined by the deflectional instrument's accuracy class, are the abscissas at which its graph drops to zero.

For example, for a voltmeter with a measurement range of 500 V and accuracy class 4.0, the permissible absolute error (see above) is Δmax = ±20 V (that is, 4 % of 500 V). These values (±20 V) are the abscissas at which the distribution law graph drops to zero. Thus, for this instrument any absolute error from minus 20 V to plus 20 V is possible with identical probability.
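The voltmeter arithmetic can be written out directly (the helper name is ours, introduced for illustration):

```python
# Permissible absolute error of a deflectional instrument: the accuracy
# class gives the permissible error as a percentage of the range.
def permissible_abs_error(measurement_range, accuracy_class):
    return measurement_range * accuracy_class / 100.0

# Voltmeter from the text: 500 V range, accuracy class 4.0.
delta_max = permissible_abs_error(500.0, 4.0)
assert delta_max == 20.0  # any error from -20 V to +20 V is permissible
```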

At first thought one doubts the reality, the natural origin, of this law. Is the zero error really as probable as errors up to ±20 V, while an error only a little larger, for example 21 V, is absolutely impossible (according to the graph, the probability of errors larger than 20 V is zero)?

Certainly, this is not a natural phenomenon but the will of the person who, legislatively and normatively, limited the permissible errors of the given accuracy class with exactly these hard limits. An instrument whose error detected at verification exceeds the permissible one should be rejected and put out of operation.

Even less natural, but peculiar to human activity, are the antimodal laws, in which a zero error is impossible.

The error admitted by a worker-tuner on a conveyor can be distributed in such a way. Products whose regulated parameter deviates from the necessary value upwards or downwards are given to him, and the problem is to "force" the controlled parameter inside the normalized tolerance checked by the inspection department. Suppose, for example, that the parameter must be equal to (100 ± 5) units. Having adjusted the parameter of a product with a smaller value up to 95, and of a product with a larger value down to 105, the worker stops (there is no time to complete the adjustment precisely to 100 units) and goes on to the next item. Therefore products for which the adjusted parameter value is exactly 100 units, that is, with zero error, almost never occur.

The Rayleigh law is quite peculiar. While all the laws considered before comply with the axiom of randomness (that is, positive and negative errors of the same size have identical probability), this law admits only positive errors of a series of observations, such as the run-out errors of a shaft during rotation caused by its eccentricity or camber.

In practice, in order to relate the random error distribution law to a definite approximation, it is necessary to execute many tens or hundreds of observations and, on the basis of their results, to construct a so-called histogram (Fig. 15).

Fig. 15. Histogram: Xmin and Xmax are the smallest and the largest results of a series of observations, respectively; n is the number of results of the series falling into the respective interval of measurand values.

For this purpose the random errors of the individual observations (or the results themselves) are plotted on the abscissa axis and divided into groups according to size. On the ordinate axis the number of errors (or results) of the series falling into the respective group is plotted. Then rectangles are built whose bases are the intervals from the smallest to the largest error (or result) in the group, and whose altitudes are the numbers of errors (or results) that fell into the group (Fig. 15). With a sufficient number of such rectangles it is possible to judge the error distribution law, at least at the level of the terms "convex" or "concave".
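The binning procedure just described can be sketched without any plotting library (the data and the number of intervals are illustrative):

```python
# Illustrative series of observation results.
observations = [9.97, 9.98, 9.99, 10.00, 10.00,
                10.01, 10.01, 10.02, 10.03, 10.00]

bins = 3
x_min, x_max = min(observations), max(observations)
width = (x_max - x_min) / bins

# counts[i] is the number of results falling into the i-th interval
# (the rectangle altitude); the largest result goes into the last one.
counts = [0] * bins
for x in observations:
    i = min(int((x - x_min) / width), bins - 1)
    counts[i] += 1

# Every result of the series lands in exactly one rectangle.
assert sum(counts) == len(observations)
```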

 










Last modification of this page: 2018-04-12; views: 240.
