
ASTR 3130, Majewski [SPRING 2015]. Lecture Notes



REDUCTION OF CCD DATA

CCD Image Reduction: Purpose and General Philosophy

  • Goal: Remove systematic effects in data introduced by the detection process itself.
  • General Approach: Take a set of calibration frames that show the same systematic effects as the data you care about, and use these calibration data to gauge the level and character of the systematics in your data frames.

    • Ideal calibration frames would be those that isolate each of the effects separately, but in a way that exactly reproduces the effect.

    • In general, this is not possible, because to make the calibration frames we have to use the same detection processes that produce multiple systematic errors in the original data.

    • Also, some systematic effects are additive in nature (e.g., readnoise is contributed once per CCD frame), while others are multiplicative in nature (e.g., dark current scales with integration time).

    • Among the multiplicative systematic effects, some scale with the detected flux levels in a pixel while others scale with the integration time or even the sky level independent of the object flux.

  • Noise Considerations: In general, we hope that by introducing and applying calibration data to correct for systematic errors we do not significantly increase the random errors in the data.

    • At minimum the quality of our calibration data should not degrade the final reduced images in any way.
    • Therefore, we try to reduce the random errors in the calibration data while preserving the systematic errors!

  • Pixel Stacking and The "More Is Better" Principle: We achieve this by combining or stacking multiple images of the same type to reduce random errors.

    • For example, let's imagine that we have a signal S measured in a pixel, and that the detection has a noise given by σ.

      • We can reduce the relative noise if we measure that same signal N times and average the results.

        In this case the mean signal is still approximately S, but the noise is reduced to only σ/√N (the reason why will be explained later).

    • Now imagine each image, A, B, C, ... made up of pixels A(i,j), B(i,j), C(i,j), ... .

    • Image combining is a process that takes the (i,j) pixel out of each of the images A, B, C, ... in an ensemble of similar type images, and combines them to create a final pixel (i,j) in a new image Z made of Z(i,j) pixels.

      The processes most commonly used in combining are (not surprisingly):

      • Take the mode of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This only works with MANY frames in the ensemble.

        Generally robust to the influences of, and can be used to remove, cosmic rays.

      • Take the average of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This is useful for a small number of frames and especially when you need to conserve flux.

        BUT, the average is a problem when you have cosmic rays, because they are included in the averaging process (they get reduced in intensity, but not removed).

      • Take the median of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This is the easiest (though not always best) way to deal with cosmic rays, and is useful when you have more than a few images (a brief sketch comparing average and median combining follows this list).
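
    To make this concrete, below is a minimal, illustrative sketch of average versus median combining of a stack of frames. It assumes NumPy; the frame size, count levels, and cosmic-ray amplitude are invented for illustration, and the mode is omitted since it is only useful with many frames.

      import numpy as np

      rng = np.random.default_rng(0)

      # Ten fake "bias-like" frames: mean level 100 ADU, read noise 5 ADU (made-up values).
      frames = rng.normal(loc=100.0, scale=5.0, size=(10, 256, 256))

      # Add a fake cosmic-ray hit to one pixel of one frame.
      frames[3, 120, 120] += 5000.0

      combined_mean = frames.mean(axis=0)          # conserves flux, but the cosmic ray
                                                   # is only diluted (by ~500 ADU here)
      combined_median = np.median(frames, axis=0)  # robust: the cosmic ray is rejected

      print("mean-combined pixel:  ", round(combined_mean[120, 120], 1))
      print("median-combined pixel:", round(combined_median[120, 120], 1))
      # In either case the random noise of a typical pixel drops from ~5 ADU to
      # roughly 5/sqrt(10) ADU (slightly worse for a median than for a mean).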

  • Important example: The bias frames shown before can be combined to reduce two sources of random noise:

    • The cosmic rays can actually be completely removed.

    • The readnoise in the other pixels can be reduced by a factor of √N.

    If you look carefully (click on the images), the combined bias image below is much smoother (or less grainy) in general. (Indeed, the noise is so much lower in the combined image that you can see a slightly hot column much more clearly in the combined image that is hard to notice in the original images.)

    In addition, the cosmic rays are completely gone.

    This is a process you will do repeatedly this semester.

    Set of zero second exposures (bias frames) showing different cosmic ray patterns.


    This is a median combination of ten bias frames like those shown just above. Note that the cosmic rays disappear and the image pixels are of a more homogeneous ADU level.


    Here is another example of doing the same thing, this time with domeflats (images of an illuminated screen within the telescope dome --- described below). The images shown below are magnified subsections of the complete domeflat frames so you can see the noise patterns. Note how, while the systematic patterns are preserved in the combined image, the small scale, random noise is substantially reduced by median combining ten frames.

    Zoomed in views of a set of "identical" images of an evenly illuminated projection screen (domeflats). The various donut patterns are from pieces of dust on the filter and dewar window of the camera, while other large scale patterns are QE variations in the pixels. While these large scale patterns look the same in all three images, at the smallest scales the variations are due to shot noise and are different from frame to frame.


    This is a median combination of ten of the domeflat frames shown above. Note that the fine scale noise has been reduced while the large scale patterns are preserved. (You can even see regular horizontal lines as a result of the laying down of the pixels in the manufacturing of the CCD.)


  • Combining more similar frames is always better, because in general the noise is reduced by a factor of √N.

    • For example, averaging 25 bias frames decreases the noise by a factor of 5.
    • While obtaining more calibration frames is always better, it is good to be aware of what is reasonable and "good enough" -- one has to sleep!

      • The goal is to make the error introduced by the calibration frames a small fraction of the error budget in the final image you will analyze.

      • If the final image has a lot of sky background, it may be relatively easy to be in the regime where the Poisson noise in the sky alone is much larger than the combination of read noise in the object frame and the Poisson+read noise in the calibration frames.
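
      As a numerical illustration of this regime (the read noise, sky level, and number of calibration frames below are assumed values, not measurements of any particular camera):

        import numpy as np

        read_noise = 10.0    # e- per read in the object frame (assumed)
        sky_level = 2000.0   # e- of sky collected per pixel in the object frame (assumed)
        n_cal = 25           # number of bias/dark frames combined into the master

        sky_noise = np.sqrt(sky_level)            # Poisson noise of the sky alone (~44.7 e-)
        cal_noise = read_noise / np.sqrt(n_cal)   # read noise surviving in the master (~2 e-)

        total = np.sqrt(sky_noise**2 + read_noise**2 + cal_noise**2)
        print(f"sky alone: {sky_noise:.1f} e-;  after reduction: {total:.1f} e-")
        # The master calibration frame adds almost nothing to the final error budget.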

  • Note that when you combine frames of a different type, the net error increases.

    • For example, if you take an object frame of M15, and a corresponding dark frame and then subtract, the errors in the final image are the quadrature sum of the errors in the two images.

      • For example, say you are working in an RN-limited regime for both the object and dark frame. Then subtracting the dark from the object frame will increase the noise in the resulting image by ~√2 (see the worked sketch after this list).
      • For this reason, it is important to reduce the errors in your calibration frames to be minuscule, so that when you use them you don't substantially increase the random errors in the reduced frame.

    • Have to be cognizant of the desired S/N:
      • e.g., 1% accuracy photometry requires images that have brightnesses measured to ~ 0.01 mags.
      • Have to flatten out systematic errors to better than 1%.
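
    A short worked sketch of the quadrature addition just described (the read noise value is assumed for illustration): subtracting a single dark from a read-noise-limited object frame inflates the noise by √2, while subtracting a well-combined master dark barely changes it.

      import numpy as np

      read_noise = 10.0   # e-; assume both frames are read-noise limited

      # Subtracting ONE dark frame: the errors add in quadrature.
      noise_single = np.sqrt(read_noise**2 + read_noise**2)         # ~14.1 e-, i.e., sqrt(2) worse

      # Subtracting a master dark built from 25 frames (its noise is ~ read_noise/5).
      master_dark_noise = read_noise / np.sqrt(25.0)
      noise_master = np.sqrt(read_noise**2 + master_dark_noise**2)  # ~10.2 e-, barely worse

      print(f"single dark: {noise_single:.1f} e-;  master dark: {noise_master:.1f} e-")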

    Reduction of CCD data commonly uses a set of calibration frames:

    • BIAS or ZERO FRAMES: Flush chip of all previous photoelectrons, then 0 second integration (simple read right after flush).
      • Purpose: Gives measure of how amplifier will read each physical pixel with no photoelectrons in it (the "zero level"). I.e., the net voltage in each pixel from combined + and - carriers at start of exposure.
      • Use: Subtract from all frames (unless you use dark frame).

        Bias frames mainly used when dark current is minuscule (liquid nitrogen cooled CCDs).

    • DARK FRAME: Timed exposure with shutter closed.
      • Purpose: To measure rate of accumulation of thermal electrons in each pixel. Includes bias level (as do all exposures).
      • Use: Two ways:
        1. Take a dark frame of exactly the same integration time as each other exposure, and subtract these matched darks from the latter. Note that this removes both dark current and bias in one operation.

          This is the operation we will use with the ST-8, ST-1001E, or Apogee Alta camera.
        2. Take one (set of equal) long exposure(s). Subtract bias from long dark exposure. Result gives only dark current during the exposure. Divide by exposure time to get rate in e- / sec -- can scale this dark current image to any exposure time and subtract.

          This means doing separate bias and dark current subtractions for each image (a minimal code sketch of both approaches follows this item).
      • Note: Dark frames are often not used with cold, 77 K CCDs, where there is almost no dark current (though there can still be some hot pixels to take care of).
      • It is imperative that the dark frame be taken under identical conditions (temperature, electronics, binning) as the other images being corrected. Take it as near as possible in time to the exposures being corrected.
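
      Below is a minimal sketch of the two approaches (NumPy assumed; the array sizes, bias level, dark-current rate, read noise, and exposure times are all made up for illustration):

        import numpy as np

        rng = np.random.default_rng(1)
        ny, nx = 256, 256

        # Fake frames: bias level 100 ADU, dark current 0.5 ADU/s, read noise 5 ADU (assumed).
        def fake_frame(exptime, dark_rate=0.5, bias=100.0, read_noise=5.0):
            return bias + dark_rate * exptime + rng.normal(0.0, read_noise, (ny, nx))

        object_frame  = fake_frame(120.0)                               # a 120 s "object" exposure
        matched_darks = np.stack([fake_frame(120.0) for _ in range(10)])
        long_darks    = np.stack([fake_frame(600.0) for _ in range(10)])
        biases        = np.stack([fake_frame(0.0)   for _ in range(10)])

        # Method 1: subtract a median-combined dark of the SAME exposure time
        # (removes bias and dark current in one step).
        reduced_1 = object_frame - np.median(matched_darks, axis=0)

        # Method 2: build a dark-current *rate* image from long darks and scale it.
        master_bias   = np.median(biases, axis=0)
        dark_rate_img = (np.median(long_darks, axis=0) - master_bias) / 600.0   # ADU / s
        reduced_2     = object_frame - master_bias - dark_rate_img * 120.0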


    • DOME FLAT: Exposure through entire optical system pointed at evenly illuminated screen -- typically mounted on inside of telescope dome ("Great White Spot").
      • Purpose: Allows the measurement of pixel-to-pixel Q.E. variations across CCD.
        • The assumption of the domeflat is that every pixel is exposed to the same flux. Differences in the ADU levels of the output image therefore show the relative sensitivity of each pixel. We want to normalize each picture by this map of relative pixel sensitivities so that we are left with only the relative incoming flux levels of the region of the sky portrayed in the image.
      • Use: First, subtract the bias or dark from the dome flat. Then the dome flat can be divided into any image of the sky (which, of course, has already itself been bias- and/or dark-corrected) to correct Q.E. variations (a code sketch follows the examples below).
      • Examples:

        Domeflat processing of a single CCD image:

        Image of star field with only bias subtraction. Can see sky does not appear flat because of variations in QE across image.

        Domeflat (after bias subtraction) made of evenly illuminated projection screen shows the QE variation pattern only.

        Image of star field corrected by dividing by domeflat. Now the sky level appears more or less flat.

        The following images are from a 4x2 CCD array camera called MOSAIC at Kitt Peak National Observatory. Note how each of the four chips has its own distinctive "flatfield" pattern, but each pattern can be repaired by taking a domeflat (middle) and dividing it into the raw data frame.

        Image of sky with only bias subtraction.

        Domeflat made of illuminated projection screen.

        Image of sky corrected by domeflat.
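
      Here is a minimal sketch of this flat-fielding step (NumPy assumed; the QE map, count levels, and noise are synthetic stand-ins for real bias-subtracted domeflats and object frames):

        import numpy as np

        rng = np.random.default_rng(2)
        ny, nx = 256, 256

        # A made-up pixel-to-pixel QE map (a few percent variation) shared by all frames.
        qe_map = 1.0 + 0.03 * rng.normal(size=(ny, nx))

        # Synthetic bias-subtracted domeflats (screen level ~20000 ADU) and an object frame.
        domeflats = np.stack([qe_map * 20000.0 + rng.normal(0, 50, (ny, nx)) for _ in range(10)])
        object_frame = qe_map * 500.0 + rng.normal(0, 10, (ny, nx))   # already bias/dark-corrected

        master_flat = np.median(domeflats, axis=0)
        norm_flat   = master_flat / np.median(master_flat)   # normalize so the flat has median ~1

        flattened = object_frame / norm_flat   # QE pattern divided out; the sky now appears flat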


    • SKY FLAT: Exposures of "blank night sky".
      • Purpose: Same as dome flat, but in some ways better.

        • Dome flat screen may not really be evenly illuminated.

        • Dome flat bulb is not same color as sky, so pixels (which have unique QE responses as a function of wavelength) will respond differently to domeflat and real sky.

          Thus, domeflat can introduce color-dependent QE errors in sky pictures.

          Sky flats are better, because made from actual sky (correct color match and generally "evenly illuminated").

      • Use: Three possible ways:
        1. Simply use instead of dome flat (but expensive to get during night - whereas dome flats can be made in daytime hours).
        2. Make a dome flat*. Divide the domeflat by the sky flat* to correct the domeflat. Use the latter to correct the other frames.
          (*Must be bias/dark subtracted.)
        3. Use the bias-subtracted dome flat on all images, including the sky flat*, which leaves only color-errors in the latter. Then divide all image frames by the new sky flat to correct all frames for the residual color-error left by the dome flat.
      • Practical aside: It is impossible to avoid stars/galaxies in sky flats. Take many at different places on the sky, then median stack these unregistered images to remove the effects of stars/galaxies from the combined image.
        • Can find a relatively blank piece of sky with few stars and make small moves of telescope to place these stars in different pixels for each exposure.

        • The above process is very commonly done, even with target frames, and is called dithering.

      • Note: The color-error is generally a large-scale, low-frequency variation across the CCD image of the sky. One can simply fit a two-dimensional function (a manifold) to the sky, ignoring stars/galaxies, and use this function as a correction instead.
        • Good compromise, since dome flats "cheap" (lots of signal), sky flats "expensive" (low signal).
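
      A minimal sketch of fitting such a smooth, low-order surface to a (star-masked) sky image (NumPy assumed; the sky image, gradient, and star mask here are synthetic):

        import numpy as np

        rng = np.random.default_rng(3)
        ny, nx = 128, 128
        y, x = np.mgrid[0:ny, 0:nx]
        xn, yn = x / nx, y / ny                       # normalized pixel coordinates

        # Synthetic sky with a gentle large-scale gradient plus noise; mask out "stars".
        sky = 1000.0 * (1.0 + 0.05 * xn + 0.03 * xn * yn) + rng.normal(0, 5, (ny, nx))
        mask = rng.random((ny, nx)) > 0.01            # True where there is no star

        # Fit a 2nd-order 2-D polynomial by linear least squares to the unmasked pixels.
        terms = [np.ones_like(xn), xn, yn, xn * yn, xn**2, yn**2]
        A = np.column_stack([t[mask] for t in terms])
        coeffs, *_ = np.linalg.lstsq(A, sky[mask], rcond=None)

        surface = sum(c * t for c, t in zip(coeffs, terms))    # the fitted large-scale pattern
        corrected = sky / (surface / np.median(surface))       # apply as a normalized correction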

    • TWILIGHT FLATS: Short exposures of twilight sky -- like a "sky flat" illumination, but lots of signal like a dome flat.
      • Purpose: Same as sky flat.
      • Use: Same as sky flat.
      • Note: Twilight sky (dominated by scattered sunlight) is not the same color as the night sky (which, with no moon, is dominated by airglow), so twilight flats are not exactly matched to images of the dark sky --- but often good enough, and especially useful for getting good calibration frames in the blue and UV where the night sky tends to be rather dark.

    SUMMARY OF REDUCTION PROCESSING STEPS

    • Because of CCD sensitivity to cosmic ray events and desire to minimize random errors, always want to combine multiple bias, dark, dome flat, sky flat, twilight flat, and object image frames.
    • Can combine using median value of same pixels in each of the multiple images in set to remove cosmic rays (mean of values will not work).
    • Typical Reduction Procedure In Professional Astronomy:

      NOTE: For dark frames, an alternative for cold CCDs is to use a master bias frame (easier to collect many bias frames).




    Typical ASTR 3130 Reduction Procedure With ST-8, ST-1001E or Apogee Alta Camera:

      1. Collect multiple image frames of same exposure time, binning and detector temperature. Record all images in your observing log.

      2. Collect multiple dark frames of same exposure time, binning and detector temperature as image frames. Record all images in your observing log.

      3. Save all images in Raw directory, copy second set to Reduction directory. Work only on the latter set of images.

      4. Median combine dark frames.

      5. Subtract median-combined dark from each image frame.

      6. Register dark-subtracted image frames and median combine.
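
      Putting steps 4-6 together, here is a minimal sketch of the combining and subtraction operations (NumPy assumed; reading the FITS files, e.g. with astropy.io.fits, and measuring the registration shifts are left as placeholders to be filled in with your own data):

        import numpy as np

        # Assume image_frames and dark_frames are lists of 2-D NumPy arrays already read
        # from your Reduction directory, all taken with the same exposure time, binning,
        # and detector temperature, and that shifts is a list of measured (dy, dx) integer
        # offsets that register the image frames to a common pointing.

        def reduce_stack(image_frames, dark_frames, shifts):
            # Step 4: median combine the dark frames into a master dark.
            master_dark = np.median(np.stack(dark_frames), axis=0)

            # Step 5: subtract the master dark from each image frame.
            dark_subtracted = [frame - master_dark for frame in image_frames]

            # Step 6: register the frames (simple integer shifts here; np.roll wraps
            # pixels around the edges, which only matters near the borders) and
            # median combine the result.
            registered = [np.roll(frame, (dy, dx), axis=(0, 1))
                          for frame, (dy, dx) in zip(dark_subtracted, shifts)]
            return np.median(np.stack(registered), axis=0)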



    All material copyright © 2002,2006,2008,2012,2015 Steven R. Majewski. All rights reserved. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 313 and Astronomy 3130 at the University of Virginia.