PAGE UNDER CONSTRUCTION: summary/supplementary information only for now.
References: Read Binney & Merrifield, Section 3.6.
Other useful references are Sandage in The Deep Universe and Trumpler & Weaver (1953), Statistical Astronomy.
Statistical studies depending on data that have inherent uncertainties are subject to a number of
biases that affect the interpretation of results.
These bias effects (not surprisingly) are most pernicious when the observational
uncertainties are significant and/or when limits of some kind are imposed on
the survey sample.
Three particular effects are commonly seen in Galactic/extragalactic studies:
Eddington bias: Describes the effects of observational uncertainties
on some measured trend (e.g., star/galaxy/radio source counts).
Malmquist bias: Describes the effects of an imposed apparent magnitude
survey limit on the mean absolute magnitudes of contributing objects in the
survey.
Lutz-Kelker bias: Describes the effects of parallax errors on the
calibration of absolute magnitudes.
Eddington Bias
Reference: Eddington (1913, MNRAS, 73, 359), Trumpler & Weaver, pp.123-126.
In astronomy it often happens that we make count distributions of objects having
successive values of a certain measured property, for example
Star/galaxy/radio source counts by apparent flux.
Number of stars having different values of proper motion.
Number of stars having different values of [Fe/H].
Photometric redshifts of galaxies.
In general, the observations have observational uncertainties. Let's assume:
The probable errors are at least approximately known.
The errors are in general small (e.g., compared to the actual values measured),
otherwise we would not have much trust in the data to begin with.
These uncertainties must have some effect on the data.
Imagine tabulating the measurements as a differential distribution in successive
bins of the measured quantity.
We hope that the observed distribution resembles the true
distribution, but due to observational errors some stars will find themselves
tabulated in the incorrect bin.
Clearly the smaller the random errors (e.g., relative to the bin width), the
less "bin-mixing" there will be.
But when there is bin mixing, it is clear that bins with a higher count
are more likely to scatter members into bins with lower counts, than vice versa.
Following Eddington, we will explore the situation using the specific case of starcounts,
but the same formalism holds for other measured quantities.
After working through the math (assuming the errors are Gaussian with dispersion σ), one finds, to first order:

A_{t}(m) ≈ A_{o}(m) - (σ²/2) d²A_{o}/dm²

Thus:
The correction from the observed to the true distribution
is most sensitive to the curvature (second derivative) of the
distribution function at any given point.
The errors flatten peaks and fill in minima of the observed distribution; the correction undoes this.
The size of the correction scales with the variance of the errors.
Figure: (Top) An observed distribution function A_{o} (in the starcount example
we have been doing, this is A_{o}(m)). (Middle) Its first derivative.
(Bottom) Its second derivative, which enters multiplicatively in the correction that converts
the observed function in the top panel to the true distribution function, A_{t}.
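The first-order Eddington correction can be checked numerically. The sketch below (a Python illustration; the peaked distribution, bin grid, and error size are all made-up values, not data from the notes) convolves a true distribution with a Gaussian error kernel to "observe" it, then applies the curvature correction to the observed counts:

```python
import numpy as np

# Hypothetical true differential distribution: a peak on a smooth baseline.
m = np.linspace(0.0, 10.0, 2001)          # bins of the measured quantity
dm = m[1] - m[0]
A_true = 100.0 + 80.0 * np.exp(-0.5 * ((m - 5.0) / 0.8) ** 2)

# "Observe" it: convolve with a Gaussian error kernel of dispersion sigma.
sigma = 0.3
kernel_x = np.arange(-5 * sigma, 5 * sigma + dm, dm)
kernel = np.exp(-0.5 * (kernel_x / sigma) ** 2)
kernel /= kernel.sum()
A_obs = np.convolve(A_true, kernel, mode="same")

# Eddington's first-order correction:
#   A_t(m) ~ A_o(m) - (sigma^2 / 2) * d^2 A_o / dm^2
d2A = np.gradient(np.gradient(A_obs, dm), dm)
A_corr = A_obs - 0.5 * sigma ** 2 * d2A

# The peak is flattened in the observed counts and restored by the correction.
core = (m > 2) & (m < 8)                  # avoid convolution edge effects
err_obs = np.abs(A_obs - A_true)[core].max()
err_corr = np.abs(A_corr - A_true)[core].max()
print(err_corr < err_obs)                 # correction moves us toward A_true
```

The residual after correction is of order σ⁴ times the fourth derivative, so the smaller the errors relative to the structure in the distribution, the better the first-order correction works.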
Malmquist Bias
Reference: Malmquist (1922, 1936); Binney & Merrifield, Section 3.6.1; Sandage in The Deep Universe.
One of the more troubling effects in surveys of stars and galaxies is the Malmquist bias.
Ideally, to properly determine density laws, D(r), or luminosity functions, Φ(M),
we would use volume-limited surveys -- i.e., sample every star/galaxy in the survey volume.
However, it is rare that we find ourselves in this situation.
Typically we are magnitude-limited to some limiting
magnitude, m_{lim}.
Because from the distance modulus formula
m = M + 5 log r - 5 + A(r)
it is clear that at a given distance r we can only see sources with
M < M_{max} = m_{lim} - 5 log r + 5 - A(r).
Intrinsically fainter sources drop out of the apparent magnitude-limited survey.
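The cut M_{max} imposed by the survey limit is just arithmetic on the distance modulus. A minimal sketch (Python; the values of m_{lim}, r, and the extinction are hypothetical):

```python
import math

# Faintest absolute magnitude still visible at distance r (in pc) in a survey
# limited at apparent magnitude m_lim, with extinction A_r along the way.
# (All input numbers below are made-up illustrations.)
def M_max(m_lim, r_pc, A_r=0.0):
    return m_lim - 5.0 * math.log10(r_pc) + 5.0 - A_r

# A survey to m_lim = 20: at 1 kpc it still reaches M = 10,
# but at 100 kpc only objects with M < 0 remain.
print(M_max(20.0, 1.0e3))    # 10.0
print(M_max(20.0, 1.0e5))    # 0.0
```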
The key to the Malmquist bias is that we can see intrinsically brighter sources
to larger distances.
This means that in an apparent magnitude-limited survey, the volume probed by
intrinsically brighter sources is larger than that probed by intrinsically fainter
sources.
Because different volumes are probed, an apparent magnitude-limited survey will
have an over-representation of intrinsically brighter compared to fainter
sources than in a volume-limited survey.
As a simple example, imagine a sample of objects with an even distribution of brightness,
Φ(M), from M_{bright} to M_{faint}, and ignore
reddening:
Then, at the limiting apparent magnitude, objects from a set range of distances
can contribute:
From 5 log (r) = m_{lim} - M_{bright} + 5
to 5 log (r) = m_{lim} - M_{faint} + 5.
Note that, of course, the intrinsically brighter sources can be seen to larger distances.
Now, impose an apparent magnitude limit to the survey:
What is the mean M at each distance?
The horizontal line in the plot below shows that when
5 log (r) > m_{lim} - M_{faint} + 5,
the mean M (shown by the dotted line) starts to become brighter than
(M_{bright} + M_{faint}) / 2 :
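The drift of the mean M with distance follows directly from truncating the flat Φ(M). A short sketch (Python; M_{bright}, M_{faint}, and m_{lim} below are made-up illustrative values):

```python
import math

# Hypothetical numbers: a flat Phi(M) between M_bright and M_faint,
# a survey limit m_lim, and no reddening.
M_bright, M_faint, m_lim = -1.0, 4.0, 10.0

def mean_M_visible(r_pc):
    """Mean absolute magnitude of stars still visible at distance r (pc)."""
    M_cut = m_lim - 5.0 * math.log10(r_pc) + 5.0   # faintest M visible
    if M_cut >= M_faint:                            # whole range visible
        return 0.5 * (M_bright + M_faint)
    if M_cut <= M_bright:                           # nothing visible
        return None
    return 0.5 * (M_bright + M_cut)                 # truncated flat distribution

# Nearby, the mean is the true (M_bright + M_faint)/2; beyond the distance
# where 5 log r = m_lim - M_faint + 5, it drifts brighter.
print(mean_M_visible(100.0))   # 1.5  (all M visible)
print(mean_M_visible(1000.0))  # -0.5 (only the brighter half survives)
```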
Note that the mean distance at each apparent magnitude is not affected
by the flux limitation.
Again, the net effect of the flux limit is that more luminous objects will be over-represented.
This effect plays an important role in almost every branch of astronomy.
For example, the calculation of a luminosity function from a magnitude-limited sample will
be seriously affected.
Or consider, for example, the mean [Fe/H] of a sample of RR Lyrae stars.
Since [Fe/H] is correlated with the absolute magnitude of an RR Lyrae, one can
get spurious results for a magnitude-limited sample (the more metal-poor
RR Lyraes, being intrinsically brighter, can be seen to greater distances).
We want to calculate the correction one needs to apply to go from an observed sample mean
absolute magnitude, <M>_{m},
to a true population mean absolute magnitude, M_{o}
if an m_{lim} is imposed.
This is most easily figured out in the case that the luminosity function of
what we are studying is Gaussian.
The mathematical nicety of the exponential in the Gaussian, particularly in its
derivative, delivers the sought-after correction,
<M>_{m} - M_{o}.
The derivation is not done here; see Binney & Merrifield!
One finds:

<M>_{m} - M_{o} = -σ² (d ln A / dm)

Now, d ln A / dm is almost always positive, so that objects in the
magnitude-limited survey at apparent magnitude m will, on average, be of a smaller
(brighter) absolute magnitude than the volume mean, M_{o}.
Note that, to make this correction, one needs to know what σ is.
The true σ can be obtained from the measured dispersion σ_{m}
through a relation involving d² ln A / dm² (see Binney & Merrifield, Section 3.6.1).
Now, d ln A / dm is often a constant (for example, in a
homogeneous, Euclidean situation, where A ∝ 10^{0.6m});
then d² ln A / dm² = 0 and σ ≈ σ_{m}.
As an example, for main sequence stars, σ ≈ σ_{m} ~ 0.5 mag;
thus, if we are in a homogeneous situation, where
d ln A / dm = 0.6 ln 10 ≈ 1.38, the correction is
<M>_{m} - M_{o} ≈ -1.38 σ² ≈ -0.35 mag.
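The homogeneous-case offset of about -1.38 σ² can be verified with a small Monte Carlo (Python; the population parameters M_o, σ, and the distances below are hypothetical, but the setup, a Gaussian luminosity function in a uniform Euclidean volume, matches the case above):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical homogeneous population: Gaussian LF with mean M_o and spread
# sigma, uniform space density out to r_max (pc), no reddening.
M_o, sigma, r_max, n = 5.0, 0.5, 1000.0, 2_000_000

r = r_max * rng.random(n) ** (1.0 / 3.0)       # uniform density: p(r) ~ r^2
M = rng.normal(M_o, sigma, n)                  # true absolute magnitudes
m = M + 5.0 * np.log10(r) - 5.0                # apparent magnitudes

# Pick a narrow apparent-magnitude bin far from both r = 0 and r = r_max,
# so the full Gaussian range of M is sampled at that m.
sel = np.abs(m - 12.0) < 0.1
mean_M = M[sel].mean()

# Classical Malmquist prediction for a homogeneous population:
#   <M>_m = M_o - 1.38 sigma^2  ~  5.0 - 0.35 = 4.65
print(round(mean_M, 2))                        # close to 4.65
```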
Case Study: Naked Eye Stars
A rather dramatic example of the Malmquist bias is one you should already be familiar with:
The absolute magnitude distribution of naked eye stars is dominated by
very bright, evolved stars, even though these are much less represented in the
true local luminosity function than faint stars.
At a dark site, we are limited to naked eye magnitudes near V = 6, whereas
in a bright site (like a city) we are limited to something like being able to see
V ~ 2-3 (or worse!).
The table below from Binney & Merrifield gives the normalized (to 100 stars)
luminosity function one would obtain if one were limited to V = 2.5 or V = 6
stellar samples, compared to a volume-limited sample.
The differences are obvious.
A way to correct luminosity functions for this severe Malmquist bias to obtain
the true luminosity function (like that shown above) is to
account for the effective volumes inhabited by stars of each luminosity
contributing to your magnitude-limited survey.
A simple way is to use the V / V_{max} method (see homework).
The homework (Binney & Merrifield Problem 3.6) gives another exercise in calculating
V_{eff}.
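A minimal sketch of the 1/V_{max} weighting (Python; a made-up flat luminosity function, survey depth, and magnitude limit, not the homework problem itself). Each object is weighted by the inverse of the largest volume in which it would still have passed the magnitude cut, which undoes the over-representation of luminous objects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical magnitude-limited sample: uniform density to 1000 pc,
# flat Phi(M) between M = 0 and 10, survey limit m_lim.
m_lim, n = 18.0, 200_000
r = 1000.0 * rng.random(n) ** (1.0 / 3.0)       # uniform density: p(r) ~ r^2
M = rng.uniform(0.0, 10.0, n)
m = M + 5.0 * np.log10(r) - 5.0
keep = m < m_lim                                 # the magnitude-limited survey

# For each surviving object: the maximum distance at which it would still
# have made the cut, and the corresponding maximum survey volume.
r_lim = 10.0 ** (0.2 * (m_lim - M[keep]) + 1.0)
r_lim = np.minimum(r_lim, 1000.0)                # survey boundary
V_max = (4.0 / 3.0) * np.pi * r_lim ** 3

# 1/V_max weighting recovers the flat luminosity function: each M bin gets
# roughly equal summed weight despite very unequal raw counts.
hist, _ = np.histogram(M[keep], bins=10, range=(0.0, 10.0),
                       weights=1.0 / V_max)
print(hist / hist.mean())                        # all entries near 1.0
```

Note how the raw counts per bin fall off steeply toward faint M (small V_max), which is exactly why the faint end of a 1/V_max-corrected LF is noisy.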
Obviously, one can say little about the luminosity function for stars that
are intrinsically faint and that have tiny effective volumes for a
given apparent magnitude limit.
As seen in the figure above, the faint end of the LF is not well established
for this reason.
Case Study: Gravitational Attractors
In our discussion of the local universe, we found that large mass concentrations can
cause flows of galaxies that appear to have peculiar velocities deviating from
the Hubble law.
Recall the Great Attractor, whose presence was inferred from non-Hubble flows, presumably
due to infall of material toward the large mass concentration:
Some controversy about the GA came about due to concern over the effects of Malmquist bias.
Can infall velocities be exaggerated by Malmquist bias?
Imagine a cluster of galaxies in an otherwise homogeneous distribution.
Imagine that we look at galaxies on the frontside of the cluster.
Here, because the true density of galaxies may be rising quickly,
d log A / dm > 0.6, and the estimated mean
absolute magnitudes will be brighter than they should be
(i.e., than in the homogeneous density case).
In this case, we think galaxies are actually farther than they are,
perhaps even to the point of attributing some of these galaxies to the
backside.
If these galaxies were generally following the Hubble flow, they will now seem
to be moving too slow for their projected distance.
Imagine now that we look at the galaxies on the backside of the
cluster.
Here, because the true density of galaxies may be dropping quickly,
the galaxy counts may even appear to level off, with
d log A / dm < 0.6.
By the Malmquist bias, the estimated mean absolute magnitudes here will be
fainter than they should be
(i.e., than in the homogeneous density case).
In this case, we may think the galaxies are closer than they really are,
perhaps even to the point of attributing some of these galaxies to the
frontside.
If these galaxies were generally following the Hubble flow, they will now seem
to be moving too fast for their projected distance.
The net effect may give us something that resembles peculiar galaxy motions
of the kind used to infer a large infall velocity into the "attractor".
Lutz-Kelker Bias
Reference: Lutz & Kelker (1973, PASP, 85, 573).
For a uniform space density of stars, the distribution of Z = π/π', the ratio of
true to observed parallax, given Gaussian parallax errors of dispersion σ, takes the form

G(Z) ∝ Z^{-4} exp[ -(Z - 1)² / 2(σ/π')² ].

This equation shows what the form of the correction function implies. Two factors:
a Gaussian distribution of width σ/π';
a factor Z^{-4} = (π/π')^{-4}.
This means that (see Lutz-Kelker Fig. 1 below):
When σ/π' is small (i.e., good parallaxes with small errors),
the distribution is mainly Gaussian, but gets pushed slightly to π/π' < 1
(Table 1 and Fig. 3 below).
As σ/π' grows, the shift gets larger.
When σ/π' gets large (σ/π' >~ 0.2),
the Z^{-4} term dominates and
we can't learn anything about the true π.
Note that what matters is NOT the absolute size of the error chosen
to limit a study, but rather σ/π'.
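The competition between the two factors can be seen by locating the most probable Z numerically (Python sketch; it assumes the standard Lutz-Kelker form of G(Z) for a uniform space density, with σ/π' values chosen purely for illustration):

```python
import numpy as np

# Lutz-Kelker frequency function for Z = pi/pi', the ratio of true to
# observed parallax (uniform space density assumed):
#   G(Z) ~ Z**-4 * exp(-(Z - 1)**2 / (2 * s**2)),   where s = sigma / pi'
def G(Z, s):
    return Z ** -4.0 * np.exp(-(Z - 1.0) ** 2 / (2.0 * s ** 2))

Z = np.linspace(0.3, 1.5, 12001)
for s in (0.05, 0.10, 0.20):
    mode = Z[np.argmax(G(Z, s))]
    print(f"sigma/pi' = {s:.2f}: most probable Z = {mode:.3f}")

# The peak sits at Z < 1 (true parallaxes smaller, hence distances larger,
# than measured) and moves further from 1 as sigma/pi' grows; for
# sigma/pi' > 0.25 the Z**-4 factor wins and the mode disappears entirely.
```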
Case Study: Hipparcos Satellite
σ ~ 0.001 arcsec for stars down to about V = 8.
If a limit of σ/π' ~ 0.15 is a reasonable one for being
able to make LK corrections, then we find:
σ/π' ~ 0.15 means π' >~ 0.007 arcsec, or a distance limit of ~150 pc.
If the nearest star is at 1.3 pc, then
M = m - 5 log r + 5 ~ m + 5 ~ 8 + 5 ~ 13 for Hipparcos.
Stars fainter than M ~ 13 are not represented in Hipparcos at all, even though these
are the most common types of stars.
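The arithmetic above, as a quick check (Python, using the numbers quoted for Hipparcos in these notes):

```python
import math

# Quoted numbers: sigma ~ 0.001 arcsec, and a working limit of
# sigma/pi' ~ 0.15 for applying Lutz-Kelker corrections.
sigma, ratio = 0.001, 0.15

pi_min = sigma / ratio                      # smallest usable parallax, arcsec
d_max = 1.0 / pi_min                        # corresponding distance, pc
print(round(pi_min, 4), round(d_max))       # ~0.0067 arcsec, ~150 pc

# Faintest absolute magnitude reachable at the bright-star limit (V ~ 8),
# even for the very nearest stars (r ~ 1.3 pc):
M_faint = 8.0 - 5.0 * math.log10(1.3) + 5.0
print(round(M_faint, 1))                    # ~12.4, i.e., M ~ 13
```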
--> This is another demonstration of why the nature
of the faint end of the luminosity function is still debated.