
ASTR 5110, Majewski [FALL 2015]. Lecture Notes




  • Berry Chapters 1-3, 5-7.

  • Howell, Chapter 4 (especially Section 4.5).

  • Gullixon 1992, ASPC, 23, 130.

  • The CCD reduction write-up in the back of the ASTR313/511 Observatory Handbook.

  • The local website:

CCD Image Reduction: Purpose, General Philosophy, and Dangers

  • Goal: Remove systematic effects in data introduced by the detection process itself.
  • General Approach: Take a set of calibration frames that show the same systematic effects as the data you care about, and use these calibration data to gauge the level and character of the systematics in your data frames (and account for or remove them).

    • Ideal calibration frames would be those that isolate each of the effects separately, but in a way that exactly reproduces the effect.

    • In general, this is not possible, because to make the calibration frames we have to use the same detection processes that produce multiple systematic errors in the original data.

    • Also, some systematic effects are additive in nature (e.g., readnoise is contributed once per CCD frame), while others are multiplicative in nature (e.g., dark current scales with integration time).

    • Among the multiplicative systematic effects, some scale with the detected flux levels in a pixel while others scale with the integration time or even the sky level independent of the object flux.

  • Noise Considerations: In general, we hope that by introducing and applying calibration data to correct for systematic errors we do not significantly increase the random errors in the data.

    • At minimum the quality of our calibration data should not degrade the final reduced images in any way.
    • Therefore, in the creation of calibration data, we try to reduce the random errors in the calibration data while preserving the systematic errors!

  • Pixel Stacking and The "More Is Better" Principle: We achieve this by combining or stacking multiple images of the same type to reduce random errors.

    • For example, let's imagine that we have a signal S measured in a pixel, so that the detection has a noise given by √S.

      • We can reduce the relative noise if we measure that same signal N times and average the result.

        In this case the mean signal is still approximately S, but the noise is reduced to only √S/√N.
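A quick numerical check of this √N behavior (an illustrative Python sketch, not part of the notes; the values of S and N are made up):

```python
import numpy as np

# Illustrative sketch (numbers are made up): draw a Poisson signal S once
# versus averaging N repeated measurements, and compare the scatter.
rng = np.random.default_rng(42)

S = 10000    # "true" signal (photoelectrons) in the pixel
N = 25       # number of repeated measurements to average

single = rng.poisson(S, size=100_000)                      # one shot each
averaged = rng.poisson(S, size=(100_000, N)).mean(axis=1)  # average of N

sigma_single = single.std()      # ~ sqrt(S) = 100
sigma_averaged = averaged.std()  # ~ sqrt(S)/sqrt(N) = 20
```

With N = 25, the scatter drops by a factor of 5, as expected.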

    • Now imagine each image, A, B, C, ... made up of pixels A(i,j), B(i,j), C(i,j), ... .

    • Image combining is a process that takes the (i,j) pixel out of each of the images A, B, C, ... in an ensemble of similar type images, and combines them to create a final pixel (i,j) in a new image Z made of Z(i,j) pixels.

      The processes most commonly used in combining are (not surprisingly):

      • Take the mode of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This only works with MANY frames in the ensemble.

        Generally robust to the influences of, and can be used to remove, cosmic rays.

      • Take the average of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This is useful for a small number of frames and especially when you need to conserve flux.

        BUT, the average is a problem when you have cosmic rays, because they are included in the averaging process.

      • Take the median of the pixels A(i,j), B(i,j), C(i,j), ... to create pixel Z(i,j).

        This is the easiest (though not always best) way to deal with cosmic rays, and is useful when you have more than a few images.
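As a concrete sketch of these combining options (illustrative numpy with made-up 3x3 frames, not part of the notes), here the same pixel stack is combined with the average and with the median, with a cosmic ray planted in one frame:

```python
import numpy as np

# Illustrative sketch: five "identical" frames, one with a cosmic-ray hit.
frames = np.full((5, 3, 3), 100.0)   # five frames of a constant 100 ADU
frames[2, 1, 1] = 5000.0             # cosmic ray in frame 2 at pixel (1,1)

mean_img = frames.mean(axis=0)          # CR leaks in: (4*100 + 5000)/5 = 1080
median_img = np.median(frames, axis=0)  # CR rejected: 100
```

The average carries the cosmic ray into the output pixel, while the median rejects it cleanly.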

  • Important example: The bias frames shown before can be combined to reduce two sources of random noise:

    • The cosmic rays can actually be completely removed.

    • The readnoise in the other pixels can be reduced by √N.

    If you look carefully (click on the images), the combined bias image below is much smoother (or less grainy) in general. (Indeed, the noise is so much lower in the combined image that you can see a slightly hot column much more clearly in the combined image that is hard to notice in the original images.)

    In addition, the cosmic rays are completely gone.

    This is a process you will do repeatedly this semester.

    Set of zero second exposures (bias frames) showing different cosmic ray patterns.

    This is a median combination of ten bias frames like those shown just above. Note that the cosmic rays disappear.

    The histogram of pixel values for a bias containing cosmic rays (left) versus one cleaned of cosmic rays (right).

    Here is another example of doing the same thing, this time with domeflats (described below). The images are magnified so you can see the noise patterns. Note how, while the systematic patterns are preserved in the combined image, the small scale, random noise is substantially reduced by median combining ten frames.

    Zoomed in views of a set of "identical" images of an evenly illuminated projection screen (domeflats). The various donut patterns are from pieces of dust on the filter and dewar window of the camera, while other large scale patterns are QE variations in the pixels. While these large scale patterns look the same in all three images, at the smallest scales the variations are due to shot noise and are different from frame to frame.

    This is a median combination of ten of the domeflat frames shown above. Note that the fine scale noise has been reduced while the large scale patterns are preserved.

  • Combining more similar frames ("clones") is always better, because, in the case of averaging for example, in general the noise is reduced by √N.

    • For example, average 25 bias frames, random noise decreases by factor of 5.
    • While more calibration frames are always better, it is good to be aware of what is reasonable and "good enough" -- one has to sleep!

      • The goal is to make the error introduced by the calibration frames a small fraction of the error budget in the final image you will analyze.

      • If the final image has a lot of sky background, it may be relatively easy to be in the regime where the Poisson noise in the sky alone is much larger than the combination of read noise in the object frame and the Poisson+read noise in the calibration frames.


    • In a median combine, for the typical pixel with no cosmic ray hits or other problems you don't realize the reduction in random noise as quickly as the √N achieved by averaging, because the output pixel values can only be taken as one of the pixel values of the input images.

      Thus, especially when N is low, you will always reduce the random noise better by averaging, HOWEVER you will not achieve adequate rejection of cosmic rays, etc.
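This trade-off can be demonstrated numerically (an illustrative numpy sketch with made-up Gaussian noise, not part of the notes): for clean frames with no cosmic rays, the median combine's scatter comes out larger than the mean's.

```python
import numpy as np

# Illustrative sketch: combine 200,000 pixel stacks of N clean frames
# with the mean and the median, and compare the resulting scatter.
rng = np.random.default_rng(0)
N = 5
stacks = rng.normal(100.0, 10.0, size=(200_000, N))

sigma_mean = stacks.mean(axis=1).std()          # ~ 10/sqrt(5) ~ 4.5
sigma_median = np.median(stacks, axis=1).std()  # ~ 1.2x larger for N = 5
```

For N = 5 the median's noise is roughly 20% worse than the mean's (approaching ~25% for large N), which is the price paid for its robustness.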

    • If your images are not good clones of one another, you may gain no advantage from a median combine.

      • For an extreme example: Imagine you take three flats of a fading evening twilight with the intent of using them as flatfield frames (see below).

        If you use the same integration time for each, the typical flux levels will drop from frame to frame as the sky fades.

        If the typical ADU counts are something like 4000, 2000 and 1000, then a strict median combine of these input images will almost always select the middle frame's pixels for the output image.

        In this case, you gain no advantage from the combine, because your output image is just a single one of the input images.

      One solution, of course, is to scale the typical pixel values to be similar to one another (see the warning near the end of this webpage on how to do this scaling properly). This, however, means that each image has its noise (σ) scaled by a different amount, so the image with the highest S/N will be adopted more often (because its distribution of pixel values is more tightly clustered around the correct "mean" value).

      This means that the higher S/N frame will have a higher influence (weight) on the creation of your median stack (as it probably should).
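The fading-twilight example above can be sketched numerically (illustrative numpy; the ADU levels are the ones assumed in the text, and the scaling target is my own arbitrary choice):

```python
import numpy as np

# Illustrative sketch: three twilight flats at ~4000, 2000, 1000 ADU.
# A naive median always picks the middle frame; scaling each frame to a
# common mean level first lets every frame contribute.
rng = np.random.default_rng(1)
levels = [4000.0, 2000.0, 1000.0]
flats = np.array([rng.poisson(lvl, size=(4, 4)).astype(float)
                  for lvl in levels])

naive = np.median(flats, axis=0)      # every pixel comes from the 2000 ADU frame

target = flats.mean()                 # common level to scale to (arbitrary)
scaled = np.array([f * (target / f.mean()) for f in flats])
combined = np.median(scaled, axis=0)  # all three frames now compete fairly
```

After scaling, each frame has the same mean level, so the per-pixel median is no longer decided by the overall illumination level alone.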

    • If you are median combining images that contain objects in it (stars, galaxies) as you would in making a "super sky-flat" (see below), even if these objects are in non-correlated positions from frame to frame, the very presence of an object at one (i,j) position means that the "median" at that position may be unduly influenced to an improperly high level.

      Can happen in two ways:

      • By bad luck, more than half of the input images may have an object in the frame at the same pixel, so that the output image is a non-sky value.

      • Even if less than half of the images have a non-sky value at a pixel, these "non-sky" values "take up" slots in the rank order of pixel values, so that the selected "median" is not really the median.

      See "skyflats" below for more on this point.

  • So -- if median combining is not as good as averaging, but averaging doesn't do bad pixel/cosmic ray rejection, what do you do??!!

    Many combination routines depend on "clipping" rejection algorithms. Some examples (seen, e.g., in IRAF tasks), where one is doing either a mean or median, are:

    • SIGMA CLIPPED MEAN/MEDIAN: the mean/median of the values left after rejecting those outside of a given number of standard deviation of the initial mean/median. Usually, the process is repeated iteratively (in IRAF "combine", this is the "niter" keyword).

    • THRESHOLD CLIPPED MEAN/MEDIAN: the mean/median of the values after rejecting values above and below defined thresholds.

    • MINIMUM AND MAXIMUM EXCLUSION MEAN/MEDIAN: the mean/median after a specified number of values ("nlow" and "nhigh") OR percentiles at the minimum and maximum of the range of values are rejected.

    • AVSIGCLIP: Best used for small numbers of images, when one cannot develop a good sense of σ. In this case neighboring pixels are used to add to the statistics that define σ, under the assumption that neighboring pixels are like the pixel of interest in terms of pixel value.

    Here are the rejection options in IRAF's "images.imred.ccdred.combine" routine:

    Read about these options in more detail doing a "help combine" command in IRAF.
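As a sketch of the first of these options (my own minimal implementation, not IRAF's actual code): note that with only a handful of values, a single cosmic ray inflates σ so much that a 3σ clip may fail to reject it, which is exactly the problem AVSIGCLIP addresses; a 2σ clip is used here instead.

```python
import numpy as np

# Minimal sigma-clipping sketch (not IRAF's algorithm): iteratively reject
# values more than nsigma standard deviations from the mean, then average.
def sigma_clipped_mean(values, nsigma=3.0, niter=3):
    vals = np.asarray(values, dtype=float)
    for _ in range(niter):
        mean, std = vals.mean(), vals.std()
        keep = np.abs(vals - mean) <= nsigma * std
        if keep.all():
            break                 # nothing left to reject
        vals = vals[keep]
    return vals.mean()

pixels = [100.0, 101.0, 99.0, 100.0, 102.0, 5000.0]  # last value: cosmic ray
clean = sigma_clipped_mean(pixels, nsigma=2.0)       # = 100.4
```

The same logic applies per pixel across a stack of frames; a sigma-clipped median simply replaces the final mean with a median.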

  • Note that when you combine frames of a different type, the net error increases.

    • For example, if you take an object frame of M15, and a corresponding dark frame and then subtract, the errors in the final image are the quadrature sum of the errors in the two images.

      • For example, say you are working in a RN-limited regime for both the object and dark frame. Then subtracting the dark from the object frame will increase the noise in the resulting image by ~√2.
      • For this reason, it is important to reduce the errors in your calibration frames to be minuscule, so that when you use them you don't substantially increase the random errors in the reduced frame.
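The arithmetic of the quadrature sums can be sketched with made-up noise numbers (illustrative Python, not part of the notes):

```python
import math

# Illustrative numbers: read-noise-limited object and dark frames.
sigma_obj = 10.0    # read noise (e-) in the object frame
sigma_dark = 10.0   # read noise (e-) in a single dark frame

# Subtracting one frame from another adds errors in quadrature:
sigma_diff = math.hypot(sigma_obj, sigma_dark)       # = 10*sqrt(2) ~ 14.1

# Averaging N darks first shrinks their contribution by sqrt(N):
N = 25
sigma_master = sigma_dark / math.sqrt(N)             # = 2.0
sigma_diff_master = math.hypot(sigma_obj, sigma_master)  # ~ 10.2
```

With 25 combined darks, the calibration adds only ~2% to the final noise instead of ~41%.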

    • Have to be cognizant of desired S/N
      • e.g., 1% accuracy photometry requires images that have brightnesses measured to ~ 0.01 mags
      • Have to flatten out systematic errors to better than 1%.

Reduction of CCD DATA commonly uses a set of calibration frames:

  • BIAS or ZERO FRAMES: Flush chip of all previous dark and photoelectrons, then 0 second integration (simple read right after flush).
    • Purpose: Gives measure of how amplifier will read each physical pixel with no photoelectrons in it (the "zero level"). I.e., the net voltage in each pixel from combined + and - carriers at start of exposure.
    • Use: Subtract from all frames (unless you use dark frame).

      Bias frames mainly used when dark current is minuscule (liquid nitrogen cooled CCDs).
    • Note: Still important to do overscan correction as a separate process on ALL frames before all of this -- i.e. as first step to remove the usually unique drifts in the readout amplifier imprinted on each CCD image.
Close up of a single bias frame showing a cosmic ray (left) and a region clean of such effects (right).

  • DARK FRAME: Timed exposure with shutter closed.
    • Purpose: To measure rate of accumulation of thermal electrons in each pixel. Includes bias level (as do all exposures).
    • Use: Two ways:
      1. Take a dark frame of exactly the same integration time as each other exposure, and subtract the matched darks from the latter. Note that this removes both dark current and bias in one operation. This is the procedure we use with the ST-8 camera.
      2. Take one (set of equal) long exposure(s). Subtract bias from long dark exposure. Result gives only dark current during the exposure. Divide by exposure time to get rate in e- / sec. Can scale this dark current image to any exposure time and subtract. This means doing separate bias and dark current subtractions for each image.
      In both cases, combining MULTIPLE darks is an advantage to reduce noise.

      In both cases, it is preferable to take the darks in the daytime so as not to waste observing time, but this works ONLY if the camera can be kept at a very stable temperature.

      The second method can be more efficient because only one set of darks (albeit long ones) can serve for frames of all integration times.
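The second method can be sketched as follows (illustrative numpy; the bias level, dark rate, and exposure times are made up, and real data would of course be FITS images):

```python
import numpy as np

# Illustrative sketch of dark-correction method 2: measure a dark-current
# RATE from one long dark, then scale it to each science exposure time.
bias = np.full((4, 4), 500.0)              # combined bias frame (ADU)
rate_true = 0.5                            # dark-current rate (ADU/s/pixel)

long_dark = bias + rate_true * 600.0       # one 600 s dark exposure
dark_rate = (long_dark - bias) / 600.0     # measured rate (ADU/s/pixel)

t_exp = 120.0                              # a science exposure time (s)
science = bias + rate_true * t_exp + 2000.0      # bias + dark + object signal
calibrated = science - bias - dark_rate * t_exp  # object signal only
```

One long dark thus calibrates exposures of any length, at the cost of doing the bias and dark subtractions as separate steps.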

    • Note: We generally concern ourselves less with dark current in liquid nitrogen cooled CCDs because the dark rate is so low.
    • Imperative for thermoelectrically cooled CCDs (like the ST-8).

      In this case it is imperative that dark frame taken under identical conditions (temperature, electronics, binning) as other images being corrected.

      Because thermoelectric coolers tend not to be very stable, and they operate at temperatures where the dark current curve is steep, it is generally important to take the dark as near as possible in time to exposures being corrected.

      Unfortunately, this usually limits one to only a single dark at a time.

    Dark subtraction is a processing step used to remove bright dots resulting from excess dark current that builds up in certain pixels of a CCD chip. To perform a dark subtraction, take an image (called a dark frame) with the CCD detector shutter closed. Then, subtract that image from your original. (All images: Tony Hallas.)

  • DOME FLAT: Exposure through entire optical system pointed at evenly illuminated screen -- typically mounted on inside of telescope dome ("Great White Spot").
  • The White Spot at the Las Campanas 2.5-m DuPont Telescope illuminated by lamps fixed to the top end of the telescope. Photos by Guillermo Damke.

    • Purpose: Allows the measurement of pixel-to-pixel Q.E. variations across CCD.
      • Assumption of domeflat is that every pixel is exposed to same flux. Differences in ADU levels of output image show the relative sensitivity of each pixel. Want to normalize each picture by this map of relative sensitivities of pixels so that we are left with only the relative incoming flux levels of the region of the sky portrayed in the image.
    • Use: First, must subtract bias or dark from dome flat. Then can divide dome flat into any image of sky to correct Q.E. variation
    • Examples:

      Domeflat processing of a single CCD image:

      Image of star field with only bias subtraction. Can see sky does not appear flat because of variations in QE across image.

      Domeflat (after bias subtraction) made of evenly illuminated projection screen shows the QE variation pattern only.

      Image of star field corrected by dividing by domeflat.

      The following images are from a 4x2 CCD array camera called MOSAIC at Kitt Peak National Observatory. Note how each of the four chips has its own distinctive "flatfield" pattern, but each pattern can be repaired by taking a domeflat (middle) and dividing it into the raw data frame.

      Image of sky with only bias subtraction.

      Domeflat made of illuminated projection screen.

      Image of sky corrected by domeflat.

  • Normalization: It is typical to normalize the flatfields (domeflats and other types) so that the typical value is about 1. Then when you divide by the flatfields, the object frames will have similar flux values before and after.
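A sketch of this normalization and division step (illustrative numpy; the QE pattern and flux levels are made up):

```python
import numpy as np

# Illustrative sketch: normalize the combined flat to ~1, then divide it
# into a bias-subtracted object frame to remove the QE pattern.
rng = np.random.default_rng(7)

qe = rng.uniform(0.9, 1.1, size=(8, 8))   # pixel-to-pixel QE variations
flat = 20000.0 * qe                       # bias-subtracted domeflat (ADU)
obj = 3000.0 * qe                         # bias-subtracted frame of flat sky

flat_norm = flat / np.median(flat)        # typical value is now ~1
obj_flat = obj / flat_norm                # QE pattern divided out
```

After the division the (intrinsically flat) sky frame really is flat, and its flux level is essentially unchanged because the flat was normalized to ~1 first.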

  • Very Important: Particularly when doing absolute photometry, it is imperative that you use the same flatfield images and flatfielding processes for all frames (both for your target and your standard star calibration fields) in the same filter, to ensure that any renormalizations are done identically.

  • CAREFUL!: Telescopes where the dome is slaved to the telescope (e.g., the ARC 3.5-m) may not be able to have a great white spot they can point to. In these cases pseudo-domeflats might be available using the mirror covers, windscreen or other flat surface that can be put in front of the primary mirror.

  • SKY FLAT: Exposures of "blank night sky".
    • Purpose: Same as dome flat, but in some ways better.

      • Dome flat screen may not really be evenly illuminated.

      • Dome flat bulb does not emit same spectrum as sky, so pixels (which have unique QE responses as a function of wavelength) will respond differently to domeflat and real sky.

        Thus, domeflat can introduce color-dependent QE errors in sky pictures.

        Sky flats are better, because made from actual sky (correct color match and generally "evenly illuminated").

        But have to remove stars, galaxies, etc. from image.

    • Use: Three ways:
      1. Simply use instead of dome flat (but expensive to get during night - dome flats can be made in daytime hours).
      2. Make dome flat*. Divide domeflat by sky flat* to correct domeflat. Use latter to correct other frames
        (*Both must be bias/dark subtracted)
      3. Use bias-subtracted dome flat on all images, including sky flat*, which leaves only color-errors in latter. Then divide all image frames by the new sky flat to correct all frames for residual color-error left by dome flat.
    • Practical aside: Impossible to avoid stars/galaxies in sky flats. Take many at different places, then median stack frames unregistered by sky position but registered by pixel position to remove effects of stars/galaxies from combined image.
      • Can find a relatively blank piece of sky with few stars and make dithers of telescope to place these stars in different pixels for each exposure.

        A representation of making a sky frame from dithered images.
      • The above process is very commonly done, even with target frames, and is called dithering.

        Dithering is your friend! You should do this whenever possible for lots of reasons (including making good input to skyflat process).

      • A simple alternative to active dithering is to turn off the telescope tracking and let the sky drift past; but this leaves long star trails across the skyflat that take up lots of chip area and are more challenging to remove completely.

    • Recall the caveat above: when median combining images that contain objects (stars, galaxies), as in making a "super sky-flat", the very presence of objects can bias the selected "median" to an improperly high level -- either because more than half the frames have an object at a given pixel, or because the non-sky values "take up" slots in the rank order of pixel values.

      Statistics for making skyflats:

      • Let p be the typical fraction of pixels in an image that are NOT sky, and 1 - p be the typical fraction of sky pixels.

        One can use combinatorics to calculate how often a given pixel position among N disregistered images with these characteristics will have more than half of the N images containing non-sky values, and therefore will end up with a non-sky value in the combined image.

        The figure below gives, for different p values and N images the number of non-sky pixels that will be put into a "super-sky flat" with 1 million pixels (e.g., for a 1k X 1k CCD).

        From Crawford & Trueblood 2004, SASS, 23, 151.
        Of course, p will depend on how deeply you image. For shallow images of the sky in reasonable seeing away from the Galactic plane, p may be as low as 0.10, but for very deep galaxy imaging of random sky, one will still have many background galaxies and p can be very high, approaching 0.5.
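The combinatorics behind the figure can be sketched directly (my own implementation of the binomial sum, not Crawford & Trueblood's code): a pixel's median is corrupted outright when more than half of the N frames hold a non-sky value there.

```python
from math import comb

# Probability that, at a given pixel, MORE than half of N disregistered
# frames contain a non-sky value, assuming independent frames with a
# non-sky fraction p per frame.
def prob_bad_median(p, n):
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Expected corrupted pixels in a 10^6-pixel (1k x 1k) super-skyflat,
# for p = 0.10 and N = 9 frames:
n_bad = 1e6 * prob_bad_median(0.10, 9)
```

Increasing N drives this probability down rapidly for small p, which is why many dithered frames make for a much cleaner super-skyflat.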

  • Note: Color-error is generally a large scale, low frequency variation across CCD image of sky. Can simply fit two-dimensional function (a manifold) ignoring stars/galaxies and use function instead.
    • Good compromise to use both domeflats and sky flats because dome flats "cheap" (lots of signal -- good for pixel-to-pixel, high frequency information), sky flats "expensive" (low signal, but useful for getting low frequency variations across chip).

  • TWILIGHT FLATS: Short exposures of twilight sky -- like a "sky flat" illumination, but lots of signal like a dome flat.
    • Purpose: Same as sky flat
    • Use: Same as sky flat
    • Note: Twilight sky (dominated by scattered sunlight) is not the same color as dark sky (with no moon, dominated by airglow), so twilight flats are not exactly matched to dark sky -- but they are often good enough, and especially useful for getting good calibration frames in the blue and UV, where the night sky tends to be rather dark.


When you have your final, reduced images, the gradient or large scale variations in the sky background across each chip must be smaller (say, in percent) than the desired precision in your photometry/measurements of things you care about.

  • For example, if you want to achieve 1% photometry, you need to correct and flatfield BETTER than 1% across the image.


  • Because of CCD sensitivity to cosmic ray events and desire to minimize random errors, always want to combine multiple bias, dark, dome flat, sky flat, twilight flat, and object image frames.
  • Can combine using median value of same pixels in each of the multiple images in set to remove cosmic rays (mean of values will not work).
  • Typical Reduction Procedure In Professional Astronomy:

    NOTE: For dark frames, an alternative for cold CCDs is to use bias frame (easier to collect).

Typical Reduction Procedure For a Non-Cryogenic Camera (e.g., ST-8):

    1. Collect multiple image frames of same exposure time, binning and detector temperature. Record all images in your observing log.

    2. Collect multiple dark frames of same exposure time, binning and detector temperature as image frames. Record all images in your observing log.

    3. Save all images in Raw directory, copy second set to Reduction directory. Work only on the latter set of images.

    4. Median combine dark frames.

    5. Subtract median-combined dark from each image frame.

    6. Register dark-subtracted image frames and median combine.
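Steps 4-6 above can be sketched in-memory with numpy (illustrative only; the levels are made up, and in practice you would work on FITS files in IRAF or similar software):

```python
import numpy as np

# Illustrative sketch of the non-cryogenic reduction procedure.
rng = np.random.default_rng(3)
shape = (16, 16)

# Step 4: median combine the dark frames into a master dark.
darks = [rng.poisson(50.0, shape).astype(float) for _ in range(5)]
master_dark = np.median(darks, axis=0)

# Step 5: subtract the master dark from each image frame.
images = [rng.poisson(250.0, shape).astype(float) for _ in range(5)]
reduced = [img - master_dark for img in images]

# Step 6: median combine the (already registered) reduced frames.
stacked = np.median(reduced, axis=0)
```

The stacked result sits near the object-plus-sky level (~200 ADU here) with the dark pattern removed and cosmic rays rejected by the median.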



  • The CCD reduction write-up in the back of the ASTR313/511 Observatory Handbook.

  • The local website:

  • It is also helpful to do a "help flatfields" in IRAF, when in the CCDRED package.

The IRAF software contains a package, called ccdred, that is designed around the image processing steps discussed above required to fully reduce imaging data.

  • The heart of the package is the CCDPROC task.

  • The CCDPROC task is designed so that, in principle, if the files are set-up properly (with frames properly designated with certain keywords in their headers), one needs only say " ccdproc * " in IRAF and all of the above steps in the processing are completed automatically.

  • In practice it is dangerous to run the code this way, since any problem in the processing along the way may be very difficult to track down.

    Therefore, I recommend using the code "one step at a time", and, especially while you are unfamiliar with the code, check the results of every step by viewing the output images.

  • The information given here is just a basic introduction for the most direct reduction of Fan Mountain data, without making use of many of the fancier possibilities within the CCDRED package.

  1. First, load the CCDRED package, which is part of the IMRED package:

    cl>  imred
          argus.      crutil.     echelle.    iids.       kpnocoude.  specred.    
          bias.       ctioslit.   generic.    irred.      kpnoslit.   vtel.       
          ccdred.     dtoi.       hydra.      irs.        quadred.    
    im> ccdred
          badpiximage      ccdlist          combine          mkillumcor       setinstrument
          ccdgroups        ccdmask          darkcombine      mkillumflat      zerocombine
          ccdhedit         ccdproc          flatcombine      mkskycor         
          ccdinstrument    ccdtest          mkfringecor      mkskyflat        
  2. Next, here are some things to do to set things up for FMO UBVRI data which I will tell you without explanation (to learn more about this setup stuff and to set up for other data, read the help files for the tasks setinstrument and subsets):

    • First put a file in your directory called "fits.dat" which has the following in it:

      exptime         EXPTIME 
      darktime        itime
      imagetyp        data-typ
      subset          FILTER2
      biassec         biassec []
      datasec         datasec []
      fixpix          bp-flag 0
      overscan        bt-flag 0
      zerocor         bi-flag 0
      darkcor         dk-flag 0
      flatcor         ff-flag 0
      fringcor        fr-flag 0
      'object (  0 )'         object
      'dark (  1 )'           dark
      'proj flat (  2 )'      flat
      'sky flat (  3 )'       other
      'comp (  4 )'           other
      'bias (  5 )'           zero
      'dome flat (  6 )'      flat
    • Also create a file called "subsets" that contains the following:

      'U' U
      'B' B
      'V' V
      'R' R
      'I' I
    • Do an "epar setinstrument" so that the parameters are:

         instrument = "fits"          Instrument ID (type ? for a list)
              query = "direct"        Instrument ID (type q to quit)
              (site = "")             Site ID
         (directory = "")             Instrument directory
            (review = yes)            Review instrument parameters?
              (mode = "ql")           
      and then do

      cc> setinstrument
      Instrument ID (type ? for a list) (fits): 
    • Finally, do an "epar ccdred" and set the following:

      PACKAGE = imred
         TASK = ccdred
      (pixelty=            real real) Output and calculation pixel datatypes
      (verbose=                  yes) Print log information to the standard output?
      (logfile=              logfile) Text log file
      (plotfil=                     ) Log metacode plot file
      (backup =                     ) Backup directory or prefix
      (instrum=             fits.dat) CCD instrument file
      (ssfile =              subsets) Subset translation file
      (graphic=             stdgraph) Interactive graphics output device
      (cursor =                     ) Graphics cursor input
      Note that by setting logfile=logfile, you will create a file in your processing directory that will contain a record of all your processing steps.

  3. The CCDLIST task is a helpful way to not only learn useful information about the content of each image, but to keep track of the extent to which the frame has been processed.

    • You should do a

      	cc> help ccdlist
      to learn more about the command. Here I give only the most basic description.

      For example, when you start out with Fan Mtn. data, you might see something like this:

      cc> ccdlist *.fits
      ccd3004.fits[2088,2049][ushort][none][V]:V domeflat
      ccd3005.fits[2088,2049][ushort][none][V]:V domeflat
      ccd3006.fits[2088,2049][ushort][none][V]:V domeflat
      ccd3007.fits[2088,2049][ushort][none][I]:I domeflat
      ccd3008.fits[2088,2049][ushort][none][I]:I domeflat
      ccd3009.fits[2088,2049][ushort][none][I]:I domeflat
      ccd3010.fits[2088,2049][ushort][none][V]:V skyflat 
      ccd3011.fits[2088,2049][ushort][none][V]:V skyflat
      ccd3012.fits[2088,2049][ushort][none][V]:V skyflat
      ccd3013.fits[2088,2049][ushort][none][I]:I skyflat
      ccd3014.fits[2088,2049][ushort][none][I]:I skyflat
      ccd3015.fits[2088,2049][ushort][none][I]:I skyflat
      ccd3016.fits[2088,2049][ushort][none][V]: King 13
      ccd3017.fits[2088,2049][ushort][none][V]: King 13
      ccd3018.fits[2088,2049][ushort][none][I]: King 13
      ccd3020.fits[2088,2049][ushort][none][I]: King 13
      In the above description for each file we get:

      • The name of the file.

      • Its size ("[2088,2049]") in columns and rows. Note that the raw data contain an overscan, so the image is larger than the physical CCD size.

      • The pixel datatype. Here "ushort" means each pixel is stored as an unsigned 16-bit integer.

        Once you start doing calculations on the data, the images should be converted to real numbers.

      • If properly set-up, the next piece of information (with "none" here) tells you the type of image (e.g., zero, domeflat, object, etc.). However, our CCD software at FMO does not implant the proper keyword in the image headers to make this function work.

      • The next piece of information tells you the filter that was in position when the image was taken (note that a filter is listed even for frames, like biases, where this is meaningless).

        Note that this will only work if the setup stuff in part 2 above is done.

      • Finally, the title of the image used at the telescope is given.

        (In the 2005 FMO run for ASTR 511, it appears that this option was not used by the students and so this may come up as "None".)

      Later on, after you begin processing the data, an additional piece of information will be conveyed when you run "CCDLIST", which is the state of the processing of the data.

      • The following processing codes are used:

                    B - Bad pixel replacement
                    O - Overscan bias subtraction
                    T - Trimming
                    Z - Zero level subtraction
                    D - Dark count subtraction
                    F - Flat field calibration
                    I - Illumination correction
                    Q - Fringe correction
      • Here is an example of what you might see after the basic processing steps described above to the skyflat level:

        cc> ccdlist *.fits
        ccd3004.fits[2088,2049][ushort][none][V][OTZ]:V domeflat
        ccd3005.fits[2088,2049][ushort][none][V][OTZ]:V domeflat
        ccd3006.fits[2088,2049][ushort][none][V][OTZ]:V domeflat
        ccd3007.fits[2088,2049][ushort][none][I][OTZ]:I domeflat
        ccd3008.fits[2088,2049][ushort][none][I][OTZ]:I domeflat
        ccd3009.fits[2088,2049][ushort][none][I][OTZ]:I domeflat
        ccd3010.fits[2088,2049][ushort][none][V][OTZF]:V skyflat 
        ccd3011.fits[2088,2049][ushort][none][V][OTZF]:V skyflat
        ccd3012.fits[2088,2049][ushort][none][V][OTZF]:V skyflat
        ccd3013.fits[2088,2049][ushort][none][I][OTZF]:I skyflat
        ccd3014.fits[2088,2049][ushort][none][I][OTZF]:I skyflat
        ccd3015.fits[2088,2049][ushort][none][I][OTZF]:I skyflat
        ccd3016.fits[2088,2049][ushort][none][V][OTZFI]: King 13
        ccd3017.fits[2088,2049][ushort][none][V][OTZFI]: King 13
        ccd3018.fits[2088,2049][ushort][none][I][OTZFI]: King 13
        ccd3020.fits[2088,2049][ushort][none][I][OTZFI]: King 13

  4. Now, let's begin to process the data with CCDPROC.

    Leave a copy of the raw data untouched and copy the frames you intend to reduce to a separate directory.

  5. We should begin by overscan correcting all of the images and trimming off the excess parts of the image we no longer need after this step (this gets us to the "[OT]" level for all images).

    • First, decide what part of the image you want to use for calculating the overscan.

      Note that though there are 40 extra "reads" per row (giving the 2088 column dimension) in the FMO data, it is not always good to use them all:

      • One often finds weird frame "edge" effects that are best ignored.

      • The transition from real data to overscan data is often not perfect, especially for images (like sky and domeflats) where the mean level is high. Thus it is best to avoid this part of the overscan strip as well.

      Only by inspection of many different kinds of images in your data set can you really decide the best "clean" part of the overscan strip.

      You might use the IRAF command IMPLOT to look at rows and columns in the image to decide the best overscan (and trim) sections.

      Keep in mind that you should use the same overscan region for all images, so check a variety of image types.

      Let's say that of the 40 columns of overscan you decide to use only 25 near the center. Then in this case the overscan region (called "BIASSEC" in the CCDPROC command) will be given by the image section [2056:2080,*] (this is only an EXAMPLE; determine the best region from your data).

    • Next decide what parts of the image you wish to "save".

      • The overscan should be removed.

      • You'll note that FMO adds an extra row in the image (giving the "2049" dimension above) at line 1. Remove this.

      • One might also want to remove the edge pixels of the true image because of weird edge effects that are often seen.

        Use IMPLOT to find these strange rows and columns.

    • Let's say that after all of the above considerations you decide that the good part of the image is rows 4 to 2041 and columns 2 to 2047.

      Then the TRIMSEC is [2:2047,4:2041].
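      The overscan-plus-trim operation that CCDPROC will perform can be sketched in Python with numpy (a simplified stand-in for IRAF, using the example BIASSEC and TRIMSEC above on simulated data; all names here are mine, not IRAF's):

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(500.0, 5.0, size=(2049, 2088))      # bias level + read noise
raw[:, :2048] += 1000.0                              # "sky" in the data columns

# BIASSEC [2056:2080,*] (1-indexed) -> 0-indexed, end-exclusive slice 2055:2080
overscan = raw[:, 2055:2080].mean(axis=1)            # one overscan value per row

# Fit a low-order Chebyshev to the overscan as a function of row number
rows = np.arange(raw.shape[0])
coef = np.polynomial.chebyshev.chebfit(rows, overscan, 2)
fit = np.polynomial.chebyshev.chebval(rows, coef)

# Subtract the fitted overscan row by row, then trim to TRIMSEC [2:2047,4:2041]
corrected = raw - fit[:, None]
trimmed = corrected[3:2041, 1:2047]                  # rows 4-2041, cols 2-2047
```

      The trimmed frame keeps only the clean data region, with the bias pedestal removed.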

    • Now do the overscan and trim.

      Do an "epar ccdproc" and set up the parameter file as follows, inserting the BIASSEC and TRIMSEC you selected (the above examples are used here):

      PACKAGE = ccdred
         TASK = ccdproc
      images  =                       List of CCD images to correct
      (output =                     ) List of output CCD images
      (ccdtype=                     ) CCD image type to correct
      (max_cac=                    0) Maximum image caching memory (in Mbytes)
      (noproc =                   no) List processing steps only?
      (fixpix =                   no) Fix bad CCD lines and columns?
      (oversca=                  yes) Apply overscan strip correction?
      (trim   =                  yes) Trim the image?
      (zerocor=                   no) Apply zero level correction?
      (darkcor=                   no) Apply dark count correction?
      (flatcor=                   no) Apply flat field correction?
      (illumco=                   no) Apply illumination correction?
      (fringec=                   no) Apply fringe correction?
      (readcor=                   no) Convert zero level image to readout correction?
      (scancor=                   no) Convert flat field image to scan correction?
      (readaxi=                 line) Read out axis (column|line)
      (fixfile=                     ) File describing the bad lines and columns
      (biassec=        [2056:2080,*]) Overscan strip image section
      (trimsec=      [2:2047,4:2041]) Trim data section
      (zero   =                     ) Zero level calibration image
      (dark   =                     ) Dark count calibration image
      (flat   =                     ) Flat field images
      (illum  =                     ) Illumination correction images
      (fringe =                     ) Fringe correction images
      (minrepl=                   1.) Minimum flat field value
      (scantyp=            shortscan) Scan type (shortscan|longscan)
      (nscan  =                    1) Number of short scan lines
      (interac=                  yes) Fit overscan interactively?
      (functio=            chebyshev) Fitting function
      (order  =                    3) Number of polynomial terms or spline pieces
      (sample =                    *) Sample points to fit
      (naverag=                    1) Number of sample points to combine
      (niterat=                   10) Number of rejection iterations
      (low_rej=                   3.) Low sigma rejection factor
      (high_re=                   3.) High sigma rejection factor
      (grow   =                   0.) Rejection growing radius
      (mode   =                   ql)
    • The last set of parameters starting with "interac" pertains to the fitting program used to do the overscan correction.

      • It is good to run the fitting program interactively, as shown.

        One can interactively set the order of the polynomial (for example, ":order 5" in the graphics window changes the fit to fifth order).

        Any of the other fitting parameters can be set interactively in the same way.

        Type "q" when you are finished; the fitted overscan will be subtracted and the trimming of the image will commence.

      • The above set-up starts the fit with a 3rd-order Chebyshev polynomial, but you should determine the order you need to fit the low-frequency variations of the overscan.

      • The parameters "low_rej" and "high_rej" are used to reject extreme points (e.g., cosmic rays) from the fit (in this case a 3σ rejection is performed).

        Rejection of extreme points is done iteratively, in the above example 10 iterations of rejection and refitting will be done.

      • Under the assumption that a cosmic ray (or other bad pixel) can affect nearby pixels, one can set "grow" to the number of neighboring pixels one would like to reject.
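      The iterative rejection-and-refit scheme described above can be sketched in Python (a simplified stand-in for IRAF's fitting routine; the function and parameter names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(500)
y = 500.0 + 0.01 * x + rng.normal(0.0, 2.0, x.size)  # drifting overscan level
y[100] += 300.0                                      # a "cosmic ray" hit

def clipped_fit(x, y, order=3, low=3.0, high=3.0, niter=10):
    # Fit, reject points beyond low/high sigma of the residuals, and refit,
    # iterating until the rejected set stops changing (or niter is reached).
    keep = np.ones(x.size, dtype=bool)
    for _ in range(niter):
        coef = np.polynomial.chebyshev.chebfit(x[keep], y[keep], order)
        resid = y - np.polynomial.chebyshev.chebval(x, coef)
        sigma = resid[keep].std()
        new_keep = (resid > -low * sigma) & (resid < high * sigma)
        if (new_keep == keep).all():
            break
        keep = new_keep
    return coef, keep

coef, keep = clipped_fit(x, y)   # keep[100] is False: the hit was rejected
```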

    • When you run CCDPROC on the images (in this case, all of the images should be run through the above step), they should all have [OT] flags set when you do CCDLIST.

  6. Next do a bias subtraction.

    • First check that the biases you have collected all look similar.

      • Make a file "bias.list" that names the list of bias frames you have, one per line.

      • If your images contain bad columns, the following process may be dangerous. To be safe, do imstat only on sections of images that are "safe" and clear of these problems.

        For example, your bias.list file might look like this (the frame names and section are only illustrative):

        ccd3101.fits[500:1500,200:2200]
        ccd3102.fits[500:1500,200:2200]
        ccd3103.fits[500:1500,200:2200]

        where the image section shown (it should be the same for all biases) is the "safe" section.

      • Use the iraf imstat command, set as follows:

        PACKAGE = imutil
           TASK = imstatistics
        images  =           @bias.list   List of input images
        (fields = image,npix,mean,median,mode,stddev,min,max) Fields to be printed
        (lower  =                INDEF) Lower limit for pixel values
        (upper  =                INDEF) Upper limit for pixel values
        (nclip  =                    0) Number of clipping iterations
        (lsigma =                   3.) Lower side clipping factor in sigma
        (usigma =                   3.) Upper side clipping factor in sigma
        (binwidt=                  0.1) Bin width of histogram in sigma
        (format =                  yes) Format output and print column labels ?
        (cache  =                   no) Cache image in memory ?
        (mode   =                   ql)
        The task will give you statistics of these images. Check that the medians and modes are the same (within the errors, of course).

        If they are not, check for frames that are weird.

    • If you are happy with the biases being similar, then combine them using zerocombine (do a "help combine" to familiarize yourself with this very useful command and its variants):

      PACKAGE = ccdred
         TASK = zerocombine
      input   =           @bias.list  List of zero level images to combine
      (output =          bias_n1_ave) Output zero level name
      (combine=              average) Type of combine operation
      (reject =            avsigclip) Type of rejection
      (ccdtype=                 zero) CCD image type to combine
      (process=                   no) Process images before combining?
      (delete =                   no) Delete input images after combining?
      (clobber=                   no) Clobber existing output image?
      (scale  =                 none) Image scaling
      (statsec=                     ) Image section for computing statistics
      (nlow   =                    0) minmax: Number of low pixels to reject
      (nhigh  =                    1) minmax: Number of high pixels to reject
      (nkeep  =                    1) Minimum to keep (pos) or maximum to reject (neg)
      (mclip  =                  yes) Use median in sigma clipping algorithms?
      (lsigma =                   3.) Lower sigma clipping factor
      (hsigma =                   3.) Upper sigma clipping factor
      (rdnoise=              rdnoise) ccdclip: CCD readout noise (electrons)
      (gain   =                 gain) ccdclip: CCD gain (electrons/DN)
      (snoise =                   0.) ccdclip: Sensitivity noise (fraction)
      (pclip  =                 -0.5) pclip: Percentile clipping parameter
      (blank  =                   0.) Value if there are no pixels
      (mode   =                   ql)
      In the above set of parameters, you will be doing a simple average, but with "avsigclip" rejection, which rejects any pixel in one of the bias images that deviates from the others by more than "lsigma" or "hsigma".

      • Note that in the above example I have put the input as "bias.list" -- BUT BEWARE that you should use the FULL images here, so remove the image sections from the "bias.list" file listing.

      • The final averaged bias is called "bias_n1_ave" in the example, for the bias from night 1.

      • Note that you should check the biases in multinight data to see if they are similar. It is usually best to use the bias from that night to correct that night's data.
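      The idea behind sigma-clipped combining can be sketched in Python (a simplified, MAD-based stand-in for IRAF's avsigclip algorithm, run on simulated bias frames; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
stack = rng.normal(500.0, 5.0, size=(9, 64, 64))   # nine fake bias frames
stack[3, 10, 10] = 5000.0                          # a cosmic-ray hit in one frame

# Robust center and spread per pixel (median + MAD-based sigma)
center = np.median(stack, axis=0)
sigma = 1.4826 * np.median(np.abs(stack - center), axis=0)

# Reject pixels deviating by more than 3 sigma, then average the survivors
good = np.abs(stack - center) < 3.0 * sigma
bias_ave = np.where(good, stack, 0.0).sum(axis=0) / good.sum(axis=0)
```

      The hit pixel is excluded, so the averaged bias stays near the true level there while the random noise is beaten down by the averaging.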

    • Now bias subtract all of the images in your dataset.

      • If you have different biases for different nights, make sure to keep track of which bias_ave frame to use with each frame.

        You could make a separate file listing (e.g., "n1_frames.list") for each night of data to keep things straight.

      • Do an "epar ccdproc" and now set the following:

        images  =      @n1_frames.list  List of CCD images to correct
        (zerocor=                  yes) Apply zero level correction?
        (zero   =          bias_n1_ave) Zero level calibration image
      • Run ccdproc on the @n1_frames.list file. (Note, it doesn't matter if you have left "overscan=yes" and "trim=yes" in ccdproc --- ccdproc knows it has already run this step once and won't run it again.)

      • Repeat this for each night's data, with the appropriate bias frame.

      • All of the bias subtracted frames will have the [OTZ] flag set with a ccdlist command.

  7. Now create and use a domeflat for each filter you have used.

    The procedure is similar to that used above for the biases, but now you must separate images by filter.

    • Make list files for the domeflats in each filter (e.g., "Vdomeflat.list").

      Since we will first do an imstat to check the images, you should have your list include image sections of the "safe" parts of the images.

    • Check that your domeflats are truly clones of one another (varying only by random noise) by using imstat (e.g., run "imstat @Vdomeflat.list").

      If any image is weird, make sure to find out why and perhaps eliminate it from consideration.

    • Combine your domeflats using flatcombine.

      (Edit your "Vdomeflat.list" and other files to remove the image sections, or make new file lists without them.)

      PACKAGE = ccdred
         TASK = flatcombine
      input   =      @Vdomeflat.list  List of flat field images to combine
      (output =        Vdomeflat_ave) Output flat field root name
      (combine=              average) Type of combine operation
      (reject =            avsigclip) Type of rejection
      (ccdtype=                     ) CCD image type to combine
      (process=                   no) Process images before combining?
      (subsets=                   no) Combine images by subset parameter?
      (delete =                   no) Delete input images after combining?
      (clobber=                   no) Clobber existing output image?
      (scale  =                 mode) Image scaling
      (statsec=  [500:1500,200:2200]) Image section for computing statistics
      (nlow   =                    1) minmax: Number of low pixels to reject
      (nhigh  =                    1) minmax: Number of high pixels to reject
      (nkeep  =                    1) Minimum to keep (pos) or maximum to reject (neg)
      (mclip  =                  yes) Use median in sigma clipping algorithms?
      (lsigma =                   3.) Lower sigma clipping factor
      (hsigma =                   3.) Upper sigma clipping factor
      (rdnoise=              rdnoise) ccdclip: CCD readout noise (electrons)
      (gain   =                 gain) ccdclip: CCD gain (electrons/DN)
      (snoise =                   0.) ccdclip: Sensitivity noise (fraction)
      (pclip  =                 -0.5) pclip: Percentile clipping parameter
      (blank  =                   1.) Value if there are no pixels
      (mode   =                   ql)
      • In the above example, we are going to again use avsigclip rejection of spurious pixels in the averaging process.

      • Also, to account for the possibility that the intensity of the domeflat lamp may have wavered during the taking of the domeflats, the above parameter set is designed to scale the pixel ADU levels by the mode of the pixels determined in the "statsec".

        The "statsec" should be set to a part of the image free of bad columns (and might be the same "safe" section you used above).

      • If the domeflat lamp was stable (as determined by the imstat command), you could set "scale=none" for no scaling.

    • Run flatcombine for each filter.
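      The effect of mode scaling can be sketched in Python (a toy stand-in for flatcombine; the crude histogram-mode estimator here is my own, not IRAF's):

```python
import numpy as np

rng = np.random.default_rng(3)
true_flat = rng.normal(1.0, 0.02, size=(64, 64))     # pixel-to-pixel response
lamp = [10000.0, 9000.0, 11000.0]                    # drifting lamp levels (ADU)
flats = [rng.poisson(l * true_flat).astype(float) for l in lamp]

def mode_level(img, binwidth=10.0):
    # Crude histogram mode, standing in for IRAF's mode estimate
    edges = np.arange(img.min(), img.max() + binwidth, binwidth)
    hist, edges = np.histogram(img, bins=edges)
    i = hist.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

# Dividing each flat by its mode removes the lamp drift before averaging
scaled = np.array([f / mode_level(f) for f in flats])
flat_ave = scaled.mean(axis=0)
flat_ave /= flat_ave.mean()                          # normalize the final flat
```

      The combined flat then tracks the pixel-to-pixel response even though the three exposures had different lamp levels.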

    • Make lists of the frames you want to domeflat (e.g., "Vframe.list"). This should include all object frames and all twilight or sky frames.

    • Edit the ccdproc parameter file:

      images  =         @Vframe.list  List of CCD images to correct
      (flatcor=                  yes) Apply flat field correction?
      (flat   =        Vdomeflat_ave) Flat field images
    • Run ccdproc on all relevant frames with the relevant domeflats.

    • The domeflatted frames should all be flagged as [OTZF].

    • Check the object frames at this point.

      • The object and sky/twilight frames by this point should look much improved over the raw versions.

      • Check for the uniformity of the sky level in the object and sky frame, using implot or imexamine (e.g., the "m" key) or moving the cursor around the image in ds9 (and playing with the contrast).

        • The sky should be "flat". The fractional variation in the sky level across the image tells you the fractional photometric systematic error we can expect across the image.

          If we want 1-2% quality photometry, then the sky should not vary by more than 1-2% across your images.

          If it does, then we need to skyflat.
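      The flatness check can be sketched in Python (a toy example with a simulated 3% residual sky gradient; the block-averaging scheme is my own, not an IRAF task):

```python
import numpy as np

rng = np.random.default_rng(4)
ny, nx = 256, 256
_, xx = np.mgrid[0:ny, 0:nx]
gradient = 1.0 + 0.03 * xx / nx                      # 3% residual sky slope
sky = rng.poisson(2000.0 * gradient).astype(float)

# Block-average to suppress pixel noise and expose the low-frequency shape
blocks = sky.reshape(ny // 32, 32, nx // 32, 32).mean(axis=(1, 3))
variation = (blocks.max() - blocks.min()) / blocks.mean()
print(f"peak-to-peak sky variation: {variation:.1%}")
```

      Here the measured variation comes out near 3%: too large for 1-2% photometry, so a skyflat correction would be needed.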

  8. Make a skyflat correction.

    What we want is a single image (in each filter) that describes the general, low frequency shape of the residual non-flatness in our sky levels.

    • For each filter, make a file listing of all frames that will be used to construct a sky correction image (e.g., "Vskycor.list").

      • Consider using each frame you have that contains significant sky flux, including twilights and long exposures of your targets.

      • You should inspect each frame to make sure that there are no large sections of the image that are not predominantly sky (e.g., an image dominated by a large galaxy or a very bright star will not be good).

    • Now we need to combine the images in such a way as to account for variations in sky level, and to remove stars, cosmic rays, etc.

      This could take a little fooling around. Try using the combine command as follows:

      PACKAGE = ccdred
         TASK = combine
      input   =        @Vskycor.list  List of images to combine
      output  =              Vskycor  List of output images
      (plfile =                     ) List of output pixel list files (optional)
      (sigma  =                     ) List of sigma images (optional)
      (ccdtype=                     ) CCD image type to combine (optional)
      (subsets=                   no) Combine images by subset parameter?
      (delete =                   no) Delete input images after combining?
      (clobber=                   no) Clobber existing output image?
      (combine=               median) Type of combine operation
      (reject =            avsigclip) Type of rejection
      (project=                   no) Project highest dimension of input images?
      (outtype=                 real) Output image pixel datatype
      (offsets=                 none) Input image offsets
      (masktyp=                 none) Mask type
      (maskval=                   0.) Mask value
      (blank  =                   0.) Value if there are no pixels
      (scale  =                 mode) Image scaling
      (zero   =                 none) Image zero point offset
      (weight =                 none) Image weights
      (statsec=                     ) Image section for computing statistics
      (lthresh=                INDEF) Lower threshold
      (hthresh=                INDEF) Upper threshold
      (nlow   =                    1) minmax: Number of low pixels to reject
      (nhigh  =                    1) minmax: Number of high pixels to reject
      (nkeep  =                    1) Minimum to keep (pos) or maximum to reject (neg)
      (mclip  =                  yes) Use median in sigma clipping algorithms?
      (lsigma =                   3.) Lower sigma clipping factor
      (hsigma =                   3.) Upper sigma clipping factor
      (rdnoise=              rdnoise) ccdclip: CCD readout noise (electrons)
      (gain   =                 gain) ccdclip: CCD gain (electrons/DN)
      (snoise =                   0.) ccdclip: Sensitivity noise (fraction)
      (sigscal=                  0.1) Tolerance for sigma clipping scaling corrections
      (pclip  =                 -0.5) pclip: Percentile clipping parameter
      (grow   =                    0) Radius (pixels) for 1D neighbor rejection
      (mode   =                   ql)
    • Note again that scaling by the mode is essential in this case.

      (Figure from Mike Bolte's lecture pages.)

    • The output of this combining process serves as a map of the residual "sky shape".

      There are two ways to turn it into a smooth correction image.

      • One can use the task IMSURFIT to fit a smooth two-dimensional surface to the image Vskycor.

      • One can use the mkskycor task, which basically just heavily smooths the Vskycor image to remove the high frequency variations.
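      The smoothing idea behind mkskycor can be sketched in Python (a simple boxcar mean filter standing in for IRAF's smoothing, run on a simulated sky image; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(5)
ny, nx = 128, 128
shape = 1.0 + 0.04 * np.arange(nx) / nx              # true low-frequency sky shape
skycor = rng.normal(shape, 0.02, size=(ny, nx))      # shape + pixel-scale noise

def boxcar(img, half=15):
    # Mean filter over a (2*half+1)-pixel box, clipping the window at the edges
    out = np.empty_like(img)
    for j in range(img.shape[0]):
        for i in range(img.shape[1]):
            out[j, i] = img[max(0, j - half):j + half + 1,
                            max(0, i - half):i + half + 1].mean()
    return out

vskyflat = boxcar(skycor)
vskyflat /= vskyflat.mean()      # normalize; dividing frames by this flattens the sky
```

      The smoothed image keeps the large-scale gradient while the high-frequency pixel noise is averaged away.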

    • The output image from one of the two processes (let's call it "Vskyflat") will be used as a final correction to all of your images (in the relevant filter).

      The ccdproc command may give you an error if the "Vskyflat" image does not contain the "MKILLUM" keyword in its header. If so, add the keyword to the "Vskyflat" header with HEDIT:

      PACKAGE = imutil
         TASK = hedit
      images  =             Vskyflat  images to be edited
      fields  =              MKILLUM  fields to be edited
      value   = (insert something here, e.g., the date)  value expression
      (add    =                  yes) add rather than edit fields
      (addonly=                   no) add only if field does not exist
      (delete =                   no) delete rather than edit fields
      (verify =                  yes) verify each edit operation
      (show   =                  yes) print record of each edit operation
      (update =                  yes) enable updating of the image header
      (mode   =                   ql)
    • Then run ccdproc for this one last step on all of the object frames:

      (illumco=                  yes) Apply illumination correction?
      (illum  =             Vskyflat) Illumination correction images
    • The fully processed object frames should have an [OTZFI] flag.

  9. If one wants to coadd images to make one deeper image (presumably for each filter), one needs to:

    • Calculate the frame shifts in X and Y relative to one another, by centroiding a set of stars on each frame and deriving the δX and δY shifts needed to bring the images into alignment.

      You can do this with IMCENTROID or IMEXAMINE's r key.

    • Apply these shifts using IMSHIFT.

    • The above steps can also be done using the iraf task IMALIGN.

    • Add the frames together using COMBINE.

      • If the frames are "clones" -- i.e., same integration, weather conditions, etc. -- you can use a median stack, but this is rare. At the very least, a median stack must be applied carefully, considering how the noise of each image will enter into the result.

      • COMBINE's "sum" might be the preferred option, though it will then include all of the cosmic rays from all images in the result. This is not a problem if we do photometry with DAOPHOT.
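    The align-and-stack sequence can be sketched in Python (integer np.roll shifts standing in for IMSHIFT, and a median stack for COMBINE, on simulated frames; the shifts and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
base = rng.normal(100.0, 5.0, size=(64, 64))         # "sky" background
base[30, 30] = 3000.0                                # a "star"
shifts = [(0, 0), (2, -1), (-3, 4)]                  # (dy, dx) pointing offsets
frames = [np.roll(base, s, axis=(0, 1)) + rng.normal(0.0, 5.0, base.shape)
          for s in shifts]

# Shift each frame back into alignment, then median-stack the aligned frames
aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)]
stack = np.median(aligned, axis=0)
```

    After the shifts are undone, the star lands on the same pixel in every frame, so the median stack preserves it while suppressing single-frame artifacts.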

Previous Topic: CCDs: Quantum Efficiency Lecture Index Next Topic: Photometry: Magnitudes

All material copyright © 2002, 2005, 2007, 2009, 2011, 2015 Steven R. Majewski. All rights reserved. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 5110 at the University of Virginia.