Today's subject: errors. We will review the formulae used to handle them when working with experimental data.
Generally speaking, errors can be:
1st. Random. Due to imperfections of the measurements or to intrinsically random sources.
2nd. Systematic. Due to the procedures used to measure, or to an uncalibrated apparatus.
There is also a distinction between accuracy and precision:
1st. Accuracy is the closeness to the true value of a parameter or magnitude. With this definition, it is a measure of systematic bias or error. However, accuracy is sometimes defined (ISO definition) as the combination of systematic and random errors, i.e., accuracy would be the combination of the two observational errors above. High accuracy would require, in this case, both high trueness and high precision.
2nd. Precision. It is a measure of random errors. They can be reduced with further measurements, and they quantify statistical variability. Precision also requires repeatability and reproducibility.
1. Statistical estimators.
Average deviation or error:

$\overline{\delta x} = \dfrac{1}{N}\displaystyle{\sum_{i=1}^{N}}\vert x_i-\overline{x}\vert$
Variance or average quadratic error or mean squared error:

$s^2 = \dfrac{1}{N-1}\displaystyle{\sum_{i=1}^{N}}(x_i-\overline{x})^2$

This is the unbiased variance. When the sample is the total population, a shift must be done from $N-1$ to $N$ (the $N-1$ factor is the Bessel correction). The unbiased formula is correct as long as the data are a sample from a larger population.
Standard deviation (root mean squared deviation):

$s = \sqrt{\dfrac{1}{N-1}\displaystyle{\sum_{i=1}^{N}}(x_i-\overline{x})^2}$

This is the unbiased estimator of the mean quadratic error, or the standard deviation of the sample. The Bessel correction is assumed whenever our sample is smaller in size than the total population. For the total population, after shifting $N-1\to N$, the standard deviation reads:

$\sigma = \sqrt{\dfrac{1}{N}\displaystyle{\sum_{i=1}^{N}}(x_i-\overline{x})^2}$
Mean error or standard error of the mean:

$s_{\overline{x}} = \dfrac{s}{\sqrt{N}} = \sqrt{\dfrac{1}{N(N-1)}\displaystyle{\sum_{i=1}^{N}}(x_i-\overline{x})^2}$

If, instead of the unbiased mean quadratic error, we use the total population error, the corrected standard error reads

$\sigma_{\overline{x}} = \dfrac{\sigma}{\sqrt{N}} = \sqrt{\dfrac{1}{N^2}\displaystyle{\sum_{i=1}^{N}}(x_i-\overline{x})^2}$
Variance of the mean quadratic error (variance of the variance), assuming normally distributed data:

$\operatorname{Var}(s^2) = \dfrac{2\sigma^4}{N-1}$

Standard error of the mean quadratic error (error of the variance):

$\sigma_{s^2} = \sigma^2\sqrt{\dfrac{2}{N-1}}$
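The estimators above can be computed in a few lines. Below is a minimal Python sketch with illustrative (assumed) data, not measurements from any real experiment:

```python
# Sketch of the statistical estimators above, in pure Python.
import math

data = [9.8, 10.1, 10.0, 9.9, 10.2]  # hypothetical repeated measurements
N = len(data)

mean = sum(data) / N
# Average (absolute) deviation
avg_dev = sum(abs(x - mean) for x in data) / N
# Unbiased sample variance (Bessel correction: divide by N - 1)
var_unbiased = sum((x - mean) ** 2 for x in data) / (N - 1)
# Population variance (divide by N instead)
var_pop = sum((x - mean) ** 2 for x in data) / N
# Standard deviation and standard error of the mean
std = math.sqrt(var_unbiased)
sem = std / math.sqrt(N)

print(mean, avg_dev, std, sem)
```

For real work, `statistics.stdev` and `statistics.pstdev` in the Python standard library implement the $N-1$ and $N$ versions, respectively.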
2. Gaussian/normal distribution intervals for a given confidence level (interval width equal to a whole number of sigmas).
Here we provide the probability that a random variable $X$ following a normal distribution takes a value inside an interval of width $2k\sigma$ centered on the mean $\mu$, i.e., $P(\mu-k\sigma\leq X\leq \mu+k\sigma)$.
1 sigma amplitude ($P\approx 68.27\%$).
2 sigma amplitude ($P\approx 95.45\%$).
3 sigma amplitude ($P\approx 99.73\%$).
4 sigma amplitude ($P\approx 99.9937\%$).
5 sigma amplitude ($P\approx 99.99994\%$).
6 sigma amplitude ($P\approx 99.9999998\%$).
For a given confidence level $1-\alpha$ (generally quoted as a percentage), the interval will be $\mu\pm z_{1-\alpha/2}\,\sigma$, where $z_{1-\alpha/2}$ is the corresponding quantile of the standard normal distribution.
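The coverage probabilities listed above follow from the error function, since $P(\vert X-\mu\vert\leq k\sigma) = \operatorname{erf}(k/\sqrt{2})$. A short Python check:

```python
# Coverage probability of a +-k*sigma interval for a normal distribution,
# using the identity P(|X - mu| <= k*sigma) = erf(k / sqrt(2)).
import math

def sigma_coverage(k: float) -> float:
    """Probability that a normal variable lies within k sigmas of its mean."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    print(f"{k} sigma: {100 * sigma_coverage(k):.7f} %")
```

Running it reproduces the 68.27%, 95.45%, 99.73%, ... values quoted in the list above.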
3. Error propagation.
Usually, the error propagates through indirect (derived) measurements.
3A. Sum and subtraction.
Let us define $x\pm\delta x$ and $y\pm\delta y$. Furthermore, define the variable $q = x\pm y$. For independent errors, the error in $q$ would be:

$\delta q = \sqrt{(\delta x)^2+(\delta y)^2}$
Example. Let $M\pm\delta M$ be the measured mass of a container filled with liquid, and $m\pm\delta m$ the measured mass of the empty container. Then, we have:

$m_{\mathrm{liq}} = M-m$ as liquid mass.

$\delta m_{\mathrm{liq}} = \sqrt{(\delta M)^2+(\delta m)^2}$, as total liquid mass error.

$m_{\mathrm{liq}}\pm\delta m_{\mathrm{liq}}$ is the liquid mass and its error, together, usually quoted with 3 significant digits or figures.
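A minimal sketch of this quadrature rule in Python, with hypothetical numbers (assumed for illustration, not taken from the original example):

```python
# Error of a sum/difference of independent measurements, combined in quadrature.
import math

def sum_error(*errors: float) -> float:
    """delta(q) for q = x +- y +- ... with independent errors."""
    return math.sqrt(sum(e ** 2 for e in errors))

M, dM = 540.0, 0.5   # hypothetical: mass of filled container (g)
m, dm = 72.0, 0.5    # hypothetical: mass of empty container (g)

m_liq = M - m
dm_liq = sum_error(dM, dm)
print(f"m_liq = {m_liq:.1f} +- {dm_liq:.1f} g")
```

Note that the quadrature error (about 0.7 g here) is smaller than the maximum error $\delta M + \delta m = 1.0$ g, as expected for independent errors.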
3B. Products and quotients (errors).
then, with $q = xy$ and independent errors, you get

$\dfrac{\delta q}{\vert q\vert} = \sqrt{\left(\dfrac{\delta x}{x}\right)^2+\left(\dfrac{\delta y}{y}\right)^2}$

If $q = x/y$, you obtain essentially the same result:

$\dfrac{\delta q}{\vert q\vert} = \sqrt{\left(\dfrac{\delta x}{x}\right)^2+\left(\dfrac{\delta y}{y}\right)^2}$
3C. Error in powers.
With $q = x^n$, $n$ a fixed exponent, then you derive

$\dfrac{\delta q}{\vert q\vert} = \vert n\vert\,\dfrac{\delta x}{\vert x\vert}$

and if $q = cx$, with $c$ an exact constant (the error of $c$ being zero), you get

$\delta q = \vert c\vert\,\delta x$
In the case of a function of several variables, $q = f(x_1,\ldots,x_n)$, you apply a generalized Pythagorean theorem to get

$\delta q = \sqrt{\displaystyle{\sum_{i=1}^{n}}\left(\dfrac{\partial f}{\partial x_i}\right)^2(\delta x_i)^2}$

or, equivalently, the errors are combined in quadrature (via standard deviations):

$\sigma_q^2 = \displaystyle{\sum_{i=1}^{n}}\left(\dfrac{\partial f}{\partial x_i}\right)^2\sigma_{x_i}^2$

for independent random errors (no correlations). Some simple examples are provided:
1st. $q = ax+by$, with $a,b$ constants, implies $\delta q = \sqrt{a^2(\delta x)^2+b^2(\delta y)^2}$.
2nd. $q = x^n y^m$, with $x,y$ independent, implies $\dfrac{\delta q}{\vert q\vert} = \sqrt{n^2\left(\dfrac{\delta x}{x}\right)^2+m^2\left(\dfrac{\delta y}{y}\right)^2}$.
3rd. $q = \ln x$ would imply

$\delta q = \dfrac{\delta x}{\vert x\vert}$
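The general quadrature formula can be sketched numerically, estimating the partial derivatives by central finite differences. This is an illustrative implementation, not a substitute for a proper uncertainty library:

```python
# General quadrature propagation for q = f(x1, ..., xn), independent errors,
# with partial derivatives estimated by central finite differences.
import math

def propagate(f, values, errors, h=1e-6):
    """Return (q, delta_q) with delta_q^2 = sum_i (df/dx_i)^2 (delta x_i)^2."""
    q = f(*values)
    var = 0.0
    for i, (v, e) in enumerate(zip(values, errors)):
        up = list(values); up[i] = v + h
        lo = list(values); lo[i] = v - h
        dfdx = (f(*up) - f(*lo)) / (2 * h)  # central difference
        var += (dfdx * e) ** 2
    return q, math.sqrt(var)

# Consistency check against the product rule: q = x*y
q, dq = propagate(lambda x, y: x * y, [2.0, 3.0], [0.1, 0.2])
print(q, dq)
```

For the product, the numeric result agrees with the analytic rule $\delta q/\vert q\vert = \sqrt{(\delta x/x)^2+(\delta y/y)^2}$.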
When $n$ different experiments with measurements $x_i\pm\sigma_i$ are provided, the best estimator for the combined mean is a mean weighted with the inverse variance, i.e.,

$\overline{x} = \dfrac{\displaystyle{\sum_{i=1}^{n}}x_i/\sigma_i^2}{\displaystyle{\sum_{i=1}^{n}}1/\sigma_i^2}$

The best standard deviation from the different combined measurements would be:

$\dfrac{1}{\sigma^2} = \displaystyle{\sum_{i=1}^{n}}\dfrac{1}{\sigma_i^2}$

This is also the maximum likelihood estimator of the mean, assuming the measurements are independent AND normally distributed. Then, the standard error of the weighted mean would be

$\sigma_{\overline{x}} = \dfrac{1}{\sqrt{\displaystyle{\sum_{i=1}^{n}}1/\sigma_i^2}}$
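The inverse-variance weighted mean can be sketched as follows (the two measurements are assumed, purely illustrative):

```python
# Inverse-variance weighted mean of several independent measurements.
import math

def weighted_mean(values, sigmas):
    """Return (mean, standard error), weighting each x_i by 1/sigma_i^2."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Two hypothetical measurements of the same quantity:
xbar, err = weighted_mean([10.0, 10.4], [0.1, 0.2])
print(xbar, err)
```

Note how the combined value sits closer to the more precise measurement, and the combined error is smaller than either individual error.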
Least squares. A linear fit to a graph of $n$ points using the least squares procedure proceeds as follows. Let $(x_i,y_i)$, for $i=1,\ldots,n$, be sets of numbers from experimental data. Then, the linear function $y = a+bx$ that is the best fit to the data can be calculated with

$b = \dfrac{n\sum x_iy_i-\sum x_i\sum y_i}{n\sum x_i^2-\left(\sum x_i\right)^2},\qquad a = \overline{y}-b\,\overline{x}$

where $\overline{x},\overline{y}$ are the sample means of the $x_i$ and $y_i$.
We can also calculate the standard errors for the $a$ and $b$ fit parameters. Let the data be

$y_i = a+bx_i+\varepsilon_i$

We want to minimize the variance, i.e., the squared errors $\varepsilon_i = y_i-a-bx_i$, i.e., we need to minimize

$Q(a,b) = \displaystyle{\sum_{i=1}^{n}}(y_i-a-bx_i)^2$

Writing $s_x$, $s_y$, $s_{xy}$ and the correlation $r_{xy}$, the estimates are rewritten as

$\widehat{b} = \dfrac{s_{xy}}{s_x^2} = r_{xy}\,\dfrac{s_y}{s_x},\qquad \widehat{a} = \overline{y}-\widehat{b}\,\overline{x}$

where $s_x,s_y$ are the uncorrected standard deviations of the samples, and $s_x^2$, $s_{xy}$ are the sample variance and covariance. Moreover, the fit parameters have the standard errors

$\sigma_{\widehat{b}} = \sqrt{\dfrac{\frac{1}{n-2}\sum_{i=1}^{n}\widehat{\varepsilon}_i^{\,2}}{\sum_{i=1}^{n}(x_i-\overline{x})^2}},\qquad \sigma_{\widehat{a}} = \sigma_{\widehat{b}}\sqrt{\dfrac{1}{n}\displaystyle{\sum_{i=1}^{n}}x_i^2}$
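The fit and its standard errors can be sketched in pure Python. The data points below are assumed for illustration only:

```python
# Least-squares linear fit y = a + b*x, with standard errors on a and b.
import math

def linear_fit(xs, ys):
    N = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    b = (N * Sxy - Sx * Sy) / (N * Sxx - Sx ** 2)
    a = (Sy - b * Sx) / N
    # Residual variance with N - 2 degrees of freedom
    s2 = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys)) / (N - 2)
    sb = math.sqrt(s2 / (Sxx - Sx ** 2 / N))
    sa = sb * math.sqrt(Sxx / N)
    return a, b, sa, sb

# Hypothetical data roughly following y = 1 + 2x:
a, b, sa, sb = linear_fit([0.0, 1.0, 2.0, 3.0], [1.1, 2.9, 5.1, 7.2])
print(f"a = {a:.3f} +- {sa:.3f}, b = {b:.3f} +- {sb:.3f}")
```

The formulas coded here are exactly the $\sigma_{\widehat{a}}$, $\sigma_{\widehat{b}}$ expressions above, with the residual variance taken with $n-2$ degrees of freedom.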
Alternatively, all the above can also be written as follows. Define

$S_x = \sum_{i=1}^{n}x_i,\quad S_y = \sum_{i=1}^{n}y_i,\quad S_{xx} = \sum_{i=1}^{n}x_i^2,\quad S_{xy} = \sum_{i=1}^{n}x_iy_i,\quad S_{yy} = \sum_{i=1}^{n}y_i^2$

then, for a minimum square fit with $y = a+bx$, we find out that

$b = \dfrac{nS_{xy}-S_xS_y}{nS_{xx}-S_x^2},\qquad a = \dfrac{S_y-bS_x}{n}$

and where the correlation coefficient is

$r_{xy} = \dfrac{s_{xy}}{s_xs_y} = \dfrac{nS_{xy}-S_xS_y}{\sqrt{\left(nS_{xx}-S_x^2\right)\left(nS_{yy}-S_y^2\right)}}$
and where $s_x,s_y$ are the corrected sample standard deviations of $x,y$. To know what the sample covariance $s_{xy}$ is in a more general setting, we note that the sample mean vector $\overline{\mathbf{x}}$ is a column vector whose $j$-th element $\overline{x}_j$ is the average value of the $n$ observations of the $j$-th variable:

$\overline{x}_j = \dfrac{1}{n}\displaystyle{\sum_{i=1}^{n}}x_{ij},\qquad j = 1,\ldots,K$

and thus, the sample average or mean vector contains the average of every variable as a component:

$\overline{\mathbf{x}} = \dfrac{1}{n}\displaystyle{\sum_{i=1}^{n}}\mathbf{x}_i = \left(\overline{x}_1,\ldots,\overline{x}_K\right)^{\mathsf{T}}$
The sample covariance matrix is a $K\times K$ matrix $\mathbf{Q} = [q_{jk}]$ with entries

$q_{jk} = \dfrac{1}{n-1}\displaystyle{\sum_{i=1}^{n}}\left(x_{ij}-\overline{x}_j\right)\left(x_{ik}-\overline{x}_k\right)$

where $q_{jk}$ is an estimate of the covariance between the $j$-th variable and the $k$-th variable of the population underlying the data. In terms of the observation vectors, the sample covariance matrix is

$\mathbf{Q} = \dfrac{1}{n-1}\displaystyle{\sum_{i=1}^{n}}\left(\mathbf{x}_i-\overline{\mathbf{x}}\right)\left(\mathbf{x}_i-\overline{\mathbf{x}}\right)^{\mathsf{T}}$
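A small Python sketch of the sample covariance matrix, with the Bessel correction, on assumed illustrative data:

```python
# Sample mean vector and K x K sample covariance matrix from N observation
# vectors, with the Bessel correction (divide by N - 1).

def covariance_matrix(obs):
    """obs: list of N observation vectors, each of length K."""
    N = len(obs)
    K = len(obs[0])
    mean = [sum(row[j] for row in obs) / N for j in range(K)]
    Q = [[sum((row[j] - mean[j]) * (row[k] - mean[k]) for row in obs) / (N - 1)
          for k in range(K)] for j in range(K)]
    return mean, Q

# Three hypothetical 2-variable observations (perfectly correlated here):
mean, Q = covariance_matrix([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(mean, Q)
```

The diagonal entries are the sample variances of each variable; the off-diagonal entry is the covariance $s_{xy}$ used in the fit formulas above.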
Finally, you can also provide a calculation of the intervals where $a$ and $b$ lie, with a given confidence level. The t-value

$t = \dfrac{\widehat{b}-b}{\sigma_{\widehat{b}}}$

has a Student's t-distribution with $n-2$ degrees of freedom. Using it, we can construct a confidence interval for $b$:

$b\in\left[\widehat{b}-\sigma_{\widehat{b}}\,t^*_{n-2},\ \widehat{b}+\sigma_{\widehat{b}}\,t^*_{n-2}\right]$

at confidence level (C.L.) $1-\gamma$, where $t^*_{n-2}$ is the $\left(1-\frac{\gamma}{2}\right)$ quantile of the $t_{n-2}$ distribution. For example, if $\gamma = 0.05$, then the C.L. is $95\%$.
Similarly, the confidence interval for the intercept coefficient $a$ is given by

$a\in\left[\widehat{a}-\sigma_{\widehat{a}}\,t^*_{n-2},\ \widehat{a}+\sigma_{\widehat{a}}\,t^*_{n-2}\right]$

at confidence level (C.L.) $1-\gamma$, where $t^*_{n-2}$ is defined as before above.
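Given the estimates, their standard errors, and the tabulated quantile $t^*_{n-2}$, the interval itself is a one-liner. The numbers below are hypothetical; the quantile $t^* \approx 4.303$ is the standard tabulated value for 2 degrees of freedom at 95% C.L.:

```python
# Confidence interval estimate +- std_err * t*, with t* the (1 - gamma/2)
# quantile of Student's t with n - 2 degrees of freedom (from tables).

def confidence_interval(estimate, std_err, t_star):
    """Return (low, high) = estimate -+ std_err * t*."""
    return estimate - std_err * t_star, estimate + std_err * t_star

b_hat, sb = 2.05, 0.059       # hypothetical slope and its standard error (n = 4)
t_star = 4.303                # tabulated t quantile, 2 d.o.f., 95% C.L.
lo, hi = confidence_interval(b_hat, sb, t_star)
print(f"b in [{lo:.3f}, {hi:.3f}] at 95% C.L.")
```

With more data points the t quantile approaches the normal value $z\approx 1.96$, and the interval narrows accordingly.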
Remark: for non-homogeneous samples (e.g., samples contaminated by outliers), the best estimation of the average is not the arithmetic mean, but the median.
See you in another blog post!