
ASTR 3130 (Majewski) Lecture Notes, Spring 2015


ERROR ANALYSIS, PART II:
CENTRAL LIMIT THEOREM AND COMBINATION OF ERRORS

REFERENCE: Lyons Chapter 1.5-1.12.

We have discussed earlier the issue of experimental errors, including the concepts of random and systematic errors and their connection to precision and accuracy.

In this lecture I want to give some important practical aspects of dealing with errors, including the concepts of Gaussian distributions and the important Central Limit Theorem, and proper ways to both combine and propagate errors.

Much of this lecture is taken directly out of Lyons' book, A Practical Guide to Data Analysis for Physical Science Students, for which you have already been asked to read Chapter 1.


Gaussian Distribution

  • The Gaussian distribution is central to any discussion of the treatment of errors.

  • The general form of the Gaussian distribution in one variable x is:

      y(x) = [1/(σ√(2π))] exp[−(x − μ)²/(2σ²)]
    • The curve of y as a function of x is symmetric about the value of x = μ, at which point y has its maximum value.

    • The parameter σ characterizes the width of the distribution.

    • The factor 1/(σ√(2π)) ensures that the distribution is normalized to have unit area underneath the whole curve, i.e., the integral of y over all x (from −∞ to +∞) is 1.
  • The parameter μ is the mean of the distribution, while σ has the following properties:
    • The mean square deviation of the distribution from μ is σ², called the variance.

      (The reason that the curious factor of 2 appears within the exponent in the equation for y above is to make sure that σ is the RMS deviation.

      Otherwise the root mean square deviation from the mean would have been σ/√2, which is unaesthetic.)
    • σ is known as the standard deviation.

    • The height of the curve at x = μ ± σ is e^(−1/2) of the maximum value. Since e^(−1/2) ≈ 0.61 ≈ 1/2,

      σ is very roughly the half width at half height of the distribution.
    • In fact, one finds that the Full-Width-Half-Maximum is equal to

      FWHM = 2√(2 ln 2) σ = 2.354 σ,

      so that HWHM = 1.177 σ. (Prove this to yourself!)

    • The fractional area underneath the curve in the range μ − σ ≤ x ≤ μ + σ

      (i.e., within ±σ of the mean μ) is 0.68.

      This is a very important thing to keep in mind: When you repeat an experiment with random errors, about 2/3 of the results will be within 1σ (one standard deviation) of μ and 1/3 of your results will be beyond 1σ.

    • Note that 95% of the results will be within 2σ, and 99.7% are within 3σ. (See Table A6.1 for more of these values.)

      Image and caption from Wikipedia: Dark blue is less than one standard deviation from the mean. For the normal distribution, this accounts for 68.27% of the set, while two standard deviations from the mean (medium and dark blue) account for 95.45%, and three standard deviations (light, medium, and dark blue) account for 99.73%.
    • The height of the distribution at its maximum is 1/(σ√(2π)). As σ decreases the distribution becomes narrower, and hence, to maintain the normalization condition, also higher at the peak.
  • By a suitable change of variable to

      z = (x − μ)/σ,

    any normal distribution can be transformed into a standardized form

      y(z) = [1/√(2π)] exp(−z²/2),

    with mean zero and unit variance.
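
A quick numerical check of these properties (a minimal sketch in Python, using only the standard library; math.erf gives the area under the Gaussian, so no plotting packages are assumed):

    import math

    def fraction_within(k):
        # Fractional area of a Gaussian within +/- k standard deviations
        # of the mean; for a normal distribution this is erf(k / sqrt(2)).
        return math.erf(k / math.sqrt(2))

    for k in (1, 2, 3):
        print(f"within {k} sigma: {fraction_within(k):.4f}")
    # within 1 sigma: 0.6827
    # within 2 sigma: 0.9545
    # within 3 sigma: 0.9973

    # FWHM check: the height falls to half its maximum where
    # exp(-x**2 / (2 * sigma**2)) = 1/2, i.e., at x = sigma * sqrt(2 ln 2).
    print(2 * math.sqrt(2 * math.log(2)))  # FWHM / sigma = 2.3548...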

The Meaning of σ

  • It is customary to quote σ, the standard deviation, as the accuracy of a measurement.
  • Since σ is not the maximum possible error, we should not get too upset if our measurement is more than σ away from the expected value. Indeed, we should expect this to happen with about 1/3 of our experimental results.

    Since, however, the fractional areas beyond ± 2σ and beyond ± 3σ are only 4.6% and 0.3% respectively, we should expect such deviations to occur much less frequently.
  • To see this, consult the following figure, which is a graph showing the fractional area under the Gaussian curve with

      f > r,

    where

      f = |x − μ|/σ,

    i.e., it gives (on the right hand vertical scale) the area in the tails of the Gaussian beyond any value r of the parameter f, which is plotted on the horizontal axis.
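
The same standard-library approach gives these tail areas directly (a sketch; math.erfc is the complementary error function):

    import math

    def tail_fraction(r):
        # Two-sided tail area of a Gaussian beyond r standard deviations
        # from the mean: erfc(r / sqrt(2)).
        return math.erfc(r / math.sqrt(2))

    for r in (1.0, 2.0, 3.0):
        print(f"beyond {r} sigma: {tail_fraction(r):.4%}")
    # beyond 1.0 sigma: 31.7311%
    # beyond 2.0 sigma: 4.5500%
    # beyond 3.0 sigma: 0.2700%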

Central Limit Theorem

A feature that helps to make the Gaussian distribution of such widespread relevance is the Central Limit Theorem. One statement of this is as follows.
  • Consider a set of n independent variables x_i, taken at random from a population with mean μ and variance σ², and then calculate the mean (x̄) of these n values.
  • If we repeat this procedure many times, since the individual x_i are random, then the calculated means will have some distribution.
  • The surprising fact is that, for large n, the distribution of x̄ tends to a Gaussian (of mean μ and variance σ²/n). The actual SHAPE of the distribution of the x_i is irrelevant.
  • The only important feature is that the variance σ² should be finite.
  • Thus: If the x_i are already Gaussian distributed, then the distribution of x̄ is also Gaussian for all values of n from 1 upwards.

    But even if the x_i have some other distribution --- say, for example, a uniform distribution over a finite range --- then the distribution of the sum or average of a few x_i will already look Gaussian.
  • Thus, whatever the original distribution, a linear combination of a few representatives from the distribution almost always approximates to a Gaussian distribution.

    Regardless of the shape of the parent population, the distribution of the means calculated from samples quickly approaches the normal distribution,
    as shown below for four very different parent populations (left to right) and for averages of an increasing number of independent "draws" from the parent population.
  • Image from http://flylib.com/books/en/2.528.1.68/1/: the distribution of the means approaches the normal distribution as the number of averaged samples, n, increases, regardless of the shape of the parent population.

    Some practical rules of thumb (from http://flylib.com/books/en/2.528.1.68/1/):

    • If the parent population is normal, x̄ will always be normal for any sample size.
    • If the population is at least symmetric, sample sizes of n = 5 to 20 should be OK (Gaussian-like).
    • Worst-case scenario: Sample sizes of 30 should be sufficient to make x̄ approximately normal (i.e., Gaussian) no matter how far the parent population is from being normal.
    • If making a distribution of x̄, use a standard subgroup size (e.g., all subgroups averaging 5 observations, or all averaging 30 observations).

  • An important aspect of the Central Limit Theorem (what makes it important for empirical science) is that if we adopt the Gaussian distribution of the x̄ to represent the error in our knowledge of some mean measurement, x̄, and we adopt the width of that distribution as the 1-σ error of the derived mean value x̄, then we find that

    σ_x̄² = σ²/n

    σ_x̄ = σ/n^(1/2)

    We call σ_x̄ the standard deviation of the mean or the error in the mean.

    The above implies that the error in the mean, σ_x̄, is smaller than the error in each individual measure, σ, by the square root of n.

    The reason why the error in the mean (σ_x̄) is smaller than the error in individual measurements (σ) for n > 1 is that when we have more than one measure we have more than one estimate of the actual value μ. With a distribution of measures, you always have a better idea of where the mean might lie than with only one measure, and intuitively you know that the more measures you have (i.e., as n gets large) the better is your estimate of the mean value.

  • Be clear about the difference between σ and σ_x̄:

      σ is the spread in the distribution of x values, and represents the standard deviation of a single measure. It is always the same no matter how many times you repeat the experiment with the same measurement apparatus.

      σ_x̄ is the standard deviation of your estimates of x̄, and both x̄ and σ_x̄ can always be improved by increasing the number of measures n, with the latter improving as σ/n^(1/2).

    We will derive this latter equation in another way shortly.

  • Imagine the parent distribution being the number of people in a particular class having different heights:

    Image from http://scienceblogs.com/builtonfacts/2009/02/05/the-central-limit-theorem-made/.
    I can estimate the mean height of people in the class by drawing out subsamples of n students in the class and averaging the results. If I do this many times, the answers I get for the mean height will form a Gaussian distribution, and the error in my calculated mean for any particular subsample is just given by the intrinsic spread, σ, of the parent population, divided by the square root of the number of people I used to estimate the mean height. The short simulation below illustrates this behavior.
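
A minimal simulation of the Central Limit Theorem (a sketch in Python; the uniform parent population and the sample sizes are arbitrary choices for illustration):

    import random
    import statistics

    # Parent population: uniform on [0, 1], which is decidedly non-Gaussian.
    # Its standard deviation is 1/sqrt(12) ~ 0.2887.
    PARENT_SIGMA = (1 / 12) ** 0.5

    def sample_mean(n):
        # Average of n independent draws from the uniform parent population.
        return statistics.fmean(random.random() for _ in range(n))

    for n in (1, 5, 30):
        means = [sample_mean(n) for _ in range(20000)]
        spread = statistics.stdev(means)
        print(f"n={n:2d}: std of means = {spread:.4f}, "
              f"sigma/sqrt(n) = {PARENT_SIGMA / n**0.5:.4f}")

    # The measured spread of the means tracks sigma/sqrt(n), and a histogram
    # of the means looks increasingly Gaussian as n grows.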


Propagation/Combination of Errors

We are frequently confronted with a situation where the result of an experiment is given in terms of two (or more) measurements (of either the same or different types).

We want to know what is the error on the final answer in terms of the errors on the individual measurements.

We first consider in detail the case where the answer is a linear combination of the measurements. Then we go on to consider products and quotients and the general case of combining errors.
  • As a very simple illustration, consider a result a that is the difference of two measurements:

      a = b − c
  • Provided that the errors on b and c are uncorrelated, the rule is that we add the contributions of error in b and c in quadrature (because we have seen above that in dealing with errors it is most sensible to consider RMS deviations):

      σ_a² = σ_b² + σ_c²
  • The errors in two measurements are uncorrelated when the measurement of one variable has no bearing on the measurement of another -- they are independent.

  • Formal proof of simple quadrature summation of errors: For a = b − c, a particular pair of measurements deviates from the true values by amounts δb and δc, so that the deviation in a is δa = δb − δc. Squaring and averaging over many repeated measurements gives

      ⟨(δa)²⟩ = ⟨(δb)²⟩ + ⟨(δc)²⟩ − 2⟨δb δc⟩

    In the line just above, the first two terms are σ_b² and σ_c² respectively.

    The last term depends on whether the errors on b and c are correlated. In most situations that we shall be considering, such correlations are absent. In that case, whether b is measured as being above or below its average value is independent of whether c is larger than or less than its average value. The result of this is that the term ⟨δb δc⟩ will average to zero. Thus for uncorrelated errors of b and c, the formal proof equation reduces to:

      σ_a² = σ_b² + σ_c²
  • THOUGHT PROBLEM: What is the error in a for a combination that goes as a = b + c ?
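
A quick Monte Carlo check of the quadrature rule for the a = b − c case (a sketch; the true values and error sizes below are arbitrary examples):

    import random
    import statistics

    sigma_b, sigma_c = 0.30, 0.40  # assumed example errors

    # Simulate many repeated measurements of b and c scattering independently
    # about their true values, and form a = b - c each time.
    a_values = [random.gauss(10.0, sigma_b) - random.gauss(5.0, sigma_c)
                for _ in range(200000)]

    print(statistics.stdev(a_values))        # ~0.5 from the simulation
    print((sigma_b**2 + sigma_c**2) ** 0.5)  # 0.5 from quadrature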

The General Case

Lyons derives the general propagation of errors formula, but I will not do so in lecture. Please read the Lyons treatment of this.

  • Let

      f = f(x_1, x_2, ..., x_n)

    which defines our answer f in terms of measured quantities x_i, each with its own error σ_i. Again we assume the errors on the x_i are uncorrelated.
  • Then

      σ_f² = Σ_i (∂f/∂x_i)² σ_i²

    This gives us the total error σ_f in terms of the known measurement errors σ_i.
  • Special Case: Averaging results with equal errors σ_i = σ (derivation of the error in the mean): For f = x̄ = (1/n) Σ x_i, each partial derivative is ∂f/∂x_i = 1/n, so

      σ_x̄² = n × (1/n)² σ² = σ²/n, i.e., σ_x̄ = σ/n^(1/2)

  • This is the result already discussed as part of the Central Limit Theorem above.

  • The functions

      f = x y

    and

      f = x/y

    are so common that it is worth writing the combination of error formula for them explicitly:

      (σ_f/f)² = (σ_x/x)² + (σ_y/y)²

    i.e., for products and quotients the fractional errors add in quadrature.
  • THOUGHT PROBLEM: You should, however, make sure you can derive the above equation for the errors of products and divisions using the generalized equation for sigma given above.
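
A Monte Carlo check of the product rule (a sketch; the values and fractional errors below are arbitrary examples):

    import random
    import statistics

    x0, sigma_x = 4.0, 0.08   # 2% fractional error (assumed)
    y0, sigma_y = 2.5, 0.075  # 3% fractional error (assumed)

    # Simulate many measurements of f = x * y with independent Gaussian errors.
    f_values = [random.gauss(x0, sigma_x) * random.gauss(y0, sigma_y)
                for _ in range(200000)]

    frac_measured = statistics.stdev(f_values) / (x0 * y0)
    frac_predicted = ((sigma_x / x0)**2 + (sigma_y / y0)**2) ** 0.5
    print(frac_measured, frac_predicted)  # both ~0.036, i.e., 3.6%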


Combining Errors: More complicated example

  • A certain result z is determined in terms of independent measured quantities a, b, and c by some formula z = z(a, b, c).

  • To determine the error on z in terms of those on a, b, and c, we first differentiate partially with respect to each variable to obtain ∂z/∂a, ∂z/∂b, and ∂z/∂c.

    Then we use the equation from the general case,

      σ_z² = (∂z/∂a)² σ_a² + (∂z/∂b)² σ_b² + (∂z/∂c)² σ_c²

    to obtain the error σ_z in terms of the errors on a, b, and c.
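
This recipe can be automated symbolically. The sketch below uses a hypothetical formula z = a b²/c (an assumed example for illustration, not necessarily the one worked in lecture) and assumes the sympy package is available:

    import sympy as sp

    a, b, c, sa, sb, sc = sp.symbols('a b c sigma_a sigma_b sigma_c',
                                     positive=True)
    z = a * b**2 / c  # hypothetical example formula

    # General-case rule: sum the squared (partial derivative x error) terms.
    var_z = sum((sp.diff(z, v) * s)**2 for v, s in [(a, sa), (b, sb), (c, sc)])
    sigma_z = sp.sqrt(var_z)

    # In fractional form this reduces to
    # (sigma_z/z)^2 = (sigma_a/a)^2 + 4(sigma_b/b)^2 + (sigma_c/c)^2.
    print(sp.simplify((sigma_z / z)**2))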


Combining Results of Experiments Having Differing Errors

  • When several experiments measure the same physical quantity and give a set of answers a_i with different errors σ_i, then the best estimates of a and its accuracy σ are given by:

      a = [Σ (a_i/σ_i²)] / [Σ (1/σ_i²)]

    and

      1/σ² = Σ (1/σ_i²)

    Thus each experiment is to be weighted by a factor of 1/σ_i². In some sense, 1/σ_i² gives a measure of the information content of that particular experiment.
  • The weighting to get the mean a above makes intuitive sense: We want contributions from experiments with worse errors to be less than the contributions from experiments with better errors.

  • The simplest case is when all the errors σ_i are equal (σ_i = σ_0, say). Then the best combined value a from the above equations becomes the ordinary average of the individual measurements a_i, and the error σ on a is σ_0/N^(1/2), where N is the number of measurements.
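
A short helper implementing these formulas (a sketch; the two example measurements are invented to show that the more precise experiment dominates the combination):

    import math

    def weighted_mean(values, errors):
        # Inverse-variance weighted mean of measurements a_i with
        # one-sigma errors sigma_i, and the error on that mean.
        weights = [1.0 / s**2 for s in errors]
        total = sum(weights)
        mean = sum(w * a for w, a in zip(weights, values)) / total
        return mean, math.sqrt(1.0 / total)

    # Hypothetical example: two experiments measure the same quantity.
    print(weighted_mean([10.0, 10.6], [0.1, 0.3]))  # -> (10.06, 0.0949)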

Example Problems

Appendix 1 (Page 73) of Lyons summarizes the important results and equations described on this webpage. These equations are key guidelines that every empirical scientist should live by.

Test problems for this material can be found at the end of Chapter 1 of Lyons.

Click here for a set of practice problems on this material, including some specifically related to astronomical contexts. The file that comes up will be in pdf format.



All material from A Practical Guide to Data Analysis for Physical Science Students by Louis Lyons, Cambridge University Press: Cambridge, 1991. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 313 and Astronomy 3130 at the University of Virginia.