
ASTR 3130, Majewski [SPRING 2015]. Lecture Notes




A linear CCD array.
    Only a single linear array of CCD detectors is needed to create a two-dimensional image, provided there is relative motion between the camera and the scene.

    There are two ways that this can be done, and you are almost certainly familiar with mechanisms that make use of linear CCD arrays.

    • Fixed CCD camera but moving target:

      Linear CCD camera used to inspect items on a moving conveyor belt.
    • Fixed target but moving CCD camera:

      Left: Manual barcode reader. Right: Scanning satellite imaging with different perspectives (e.g., forward, nadir and backward looking), which allow stereo/3-D perspective.

    • QUESTION: Can you think of two other devices you commonly use that are of the second mode of operation?

    • In either case, the CCD row must be read out faster than the time it takes to step one pixel in the perpendicular direction.

  • Simplest way to read out a CCD array with two dimensions is called line address readout.
    • Arrange columns of CCD-linked pixels parallel to one another.

    • At the end of the columns is a set of pixels arranged and charge-coupled in a perpendicular row, called a serial register or Multiplexer or MUX (because all columns of data are read through the same set of electronics at the end of this MUX row).
    • Readout CCD by:

      1. shifting all columns by one pixel into multiplexer.

      2. Then readout the full MUX pixels in order by shifting charges along MUX to amplifier.

      3. When MUX completely empty after transfer of entire row of charge, repeat at 1.

    • The actual image that is assembled from this process is put together row by row. Note the difference/correspondence between the physical MOS pixels and the "picture element" pixels:

      • The physical columns of charge-coupled CCD MOS pixels correspond to the image columns of "picture element" pixels.

      • However, on the physical CCD device, the physical MOS pixels are not normally coupled by row -- this coupling takes place in the MUX and results in adjacent picture elements by row.
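The line-address readout steps above can be sketched in software. This is only a toy simulation (the array sizes and variable names are illustrative, not real controller code): parallel-shift every column one pixel toward the serial register (MUX), then serially shift the MUX contents through the amplifier, and repeat row by row.

```python
import numpy as np

# Toy sketch of line-address readout (illustrative only):
# 1. parallel shift: all columns step one pixel into the MUX
# 2. serial shift: the MUX is clocked pixel by pixel to the amplifier
# 3. repeat until the whole array has been read
rng = np.random.default_rng(0)
ccd = rng.poisson(100, size=(4, 5)).astype(float)  # 4 rows x 5 columns of charge
total = ccd.sum()                                  # total collected charge

image_rows = []
for _ in range(ccd.shape[0]):
    mux = ccd[-1, :].copy()       # parallel shift: bottom row enters the MUX
    ccd[1:, :] = ccd[:-1, :]      # every column steps down one pixel
    ccd[0, :] = 0.0               # vacated pixels are now empty
    row = [mux[i] for i in range(mux.size)]  # serial shift to the amplifier
    image_rows.append(row)

image = np.array(image_rows)      # image assembled row by row
print(image.shape)                # (4, 5)
```

Note how the row-by-row assembly mirrors the text: column coupling is physical, while row adjacency is created by the MUX readout order.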

  • Potential problem - still collecting photons while cycling through above process for entire array.

    • Unless care is taken, this would smear the image. Two common solutions:

      1. Read out very fast - this is bad because the amplifier can't measure charge packets accurately.
      2. Use a shutter to cover the CCD during readout.

    • For broadcast industry, use either:
      1. Interline transfer - light sensitive columns interleaved with light-shielded columns. Charges shift laterally into shielded columns, which are then read out as above.
      2. Frame transfer - charge quickly transferred to an entirely separate section of CCD that is protected, then readout slowly as needed.
    • For example: Original RCA CCDs were 320 x 512 pixel frame transfer devices, because TV image is 320 x 256 pixels.

      (Astronomers who wanted to use these devices requested that the shielding not be applied and made use of twice the area, but with shuttered readout).

    As a CCD clocks out charges through the MUX, an amplifier at the end of the MUX converts the electron packet net charge into a digital signal:

      N(ADU) = (Ne- + εRN) / Gain

  • The εRN is an imposed noise in the readout process -- ignore for now (we will discuss this below).
  • The Gain is the number of electrons combined to make one "count" in the output picture.
  • These converted "counts" are also called "Analog-to-Digital Units" or "ADUs". All three of these expressions are commonly used to describe the digital levels recorded in each image pixel.

  • Typically the gains in CCDs are set to several e-/ADU.

    • For ST-8 CCD, G = 2.3 e-/ADU.
    • For ST-1001E CCD, G = 2.2 e-/ADU.
  • Note that normally we think of amplifiers as making a signal larger (rather than smaller, as here) and in the numerator. Thus, G is more accurately called the Inverse Gain - but you more often see people call G simply the "Gain".
  • The dynamic range of the image output is generally limited by the Analog to Digital Converter (ADC) which is capable of converting to a certain number of distinct digital "bits". Typical limits are:

    12 bit = 2^12 = 4096 distinct values
    15 bit = 2^15 = 32768 distinct values
    16 bit = 2^16 = 65536 distinct values --- ST-8 and ST-1001E (the highest recordable number)

    Note the ADC's effect on dynamic range.
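The gain arithmetic can be illustrated with a few lines, using the ST-1001E inverse gain and the 16-bit ADC quoted above (the variable names are mine, for illustration):

```python
# Sketch of ADU <-> electron conversion for an assumed inverse gain G
# (e-/ADU) and a 16-bit ADC, using the ST-1001E values quoted above.
G = 2.2                      # e-/ADU (ST-1001E)
adc_bits = 16
adu_max = 2**adc_bits - 1    # 65535: largest count the ADC can record

electrons = 50000
adu = electrons / G          # charge packet expressed in counts
print(round(adu))            # 22727 ADU, well within the ADC range

# Largest charge packet the ADC can represent before digital saturation:
max_recordable_e = adu_max * G
print(round(max_recordable_e))   # 144177 e-
```

This shows the ADC's effect on dynamic range directly: with G = 2.2 e-/ADU, charge packets above roughly 144,000 electrons saturate the 16-bit converter even if the pixel itself could hold more.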



  • All sources of noise are an annoyance and limit the accuracy and/or precision of the experimental results.

  • We discuss the quality of a measurement by giving the Signal-to-Noise (S/N) of that measurement, which is to say that we take the ratio of the measure to the error in the measure.

      The higher the S/N, the more reliable the measure.

  • SHOT NOISE: One source of noise that we can never remove from our experiments is the statistical noise from Nature itself.

    • It can be shown that this statistical "shot noise", also called "Poissonian noise", is given by a square root rule:

      This is a fundamental law of Nature: the standard deviation of the number of randomly occurring events N is given by the square root of the number of events seen, N^(1/2).

        That is to say, if one repeatedly counts the number of photoelectrons Ne- collected from a source in the same integration time, the standard deviation one will get is given by

        σ = (Ne-)^(1/2)

    • If shot noise is the only source of noise in the experiment, then we have that the Signal-to-Noise is given by:

      S/N = S / S^(1/2) = S^(1/2)

      • QUESTION: What is the S/N in a pixel that accumulated 225 electrons?

      • QUESTION: How many electrons would one have to accumulate to ensure only a 10% error in the measure?

      • QUESTION: How many electrons would one have to accumulate to ensure only a 1% error in the measure?

      • QUESTION: While we can never remove statistical noise entirely from our experiment, we can try to minimize the relative error it introduces. By the above examples, how is this done?

    • Since we always have shot noise in our experiments, and can do nothing to eliminate it, an experiment with only shot noise represents the ideal in terms of S/N.

    • Most empirical scientists think about things in terms of signal-to-noise, and, in particular, in terms of the above ideal limit to the signal-to-noise in an experiment. YOU SHOULD TOO!

    • Be careful in how you apply the square root rule:

      Nature produces Poisson noise in the photon stream from a source, which in the case of CCDs is reflected in the photoelectron count, NOT in the translation to ADU counts.
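The shot-noise questions above can be checked numerically. A minimal sketch of the square-root rule (the function names are mine, for illustration):

```python
import math

# Shot-noise-only S/N: S/N = sqrt(N) for N detected photoelectrons.
def snr_shot(n_electrons):
    return math.sqrt(n_electrons)

print(snr_shot(225))   # 15.0: a pixel with 225 e- has S/N = 15

# Fractional error e = 1/sqrt(N), so N = 1/e^2 electrons are needed.
def electrons_for_error(frac_error):
    return 1.0 / frac_error**2

print(round(electrons_for_error(0.10)))  # 100 e- for a 10% error
print(round(electrons_for_error(0.01)))  # 10000 e- for a 1% error
```

The pattern answers the last question too: the relative error shrinks only as 1/sqrt(N), so minimizing it means accumulating as many photoelectrons as possible.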

  • READ NOISE: Another source of error related specifically to CCDs, but also to many other detectors, relates to the accuracy of the amplification process that converts a certain electron packet size (i.e., measured pixel voltage) into a digital number. This is intrinsically limited by the electronics of the associated circuitry.

    • This error in the output amplifier conversion of the electron voltage to an output signal is called the Readout Noise or, more simply, the Read Noise.
    • The εRN is an imposed noise in the readout process.
    • Its units are electrons.

    • Because of how the amplifier reads out, the process is more accurate if allowed more time to respond/gauge the size of the charge packet.

      Thus, a slower readout process yields a lower readout noise. See example below:

      The dependence of camera noise on readout time is shown in this figure. Panels (a) and (b) compare the noise from readout alone (unilluminated images). The remaining frames show illuminated images, but (from left to right) with decreasing amounts of signal. Note how, as the signal decreases (i.e., lower light levels, rightmost images), the effect of the read noise becomes more prevalent when it is large (top row) but has less effect when the read noise is low (bottom row). The images show human cervical carcinoma cells.
    • All sources of noise in an experiment add together in quadrature; that is to say:

      σTOT = (σPOISSON^2 + σRN^2 + ... )^(1/2)

    • For a given device and circuitry, the readout noise is of a constant size (in electrons, or, equivalently, in the converted ADUs), in each pixel.

      For example, the readnoise for the ST-1001E is 15 e- (RMS).

      Unlike shot noise, the read noise is independent of the signal, S.

      • Thus, at low light levels, the readout noise can dominate other sources of noise:
      • (see upper righthand image just above)

        A CCD image taken under these conditions we say to be readnoise-limited.

        • In this case, no matter how low the light level, the absolute error in our ability to measure it remains the same, and:

          Total noise ~ σRN, so S/N ~ S / σRN

        • If we were in a totally readnoise-limited situation, the S/N would rise in direct proportion to the signal (i.e. integration time).

          E.g., in the absence of other noise sources, to double the S/N we simply double the exposure (i.e. signal).

        • Of course, as soon as the signal starts becoming comparable or so to the readnoise level, shot noise starts to contribute more noticeably.

      • At high light levels, the shot noise can be made to be much larger than the read noise

        so that in this case we can approach the ideal experiment:

        • In this case the S/N will be the square root of the signal.

        • Unlike the linear, read-noise-limited regime, here the S/N only improves as the square root of the integration time.

          E.g., to double the S/N requires FOUR TIMES the integration time for the same source.
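The two regimes can be illustrated with a toy S/N calculation. The flux rates below are invented for illustration; the read noise is the ST-1001E value quoted on this page:

```python
import math

# How S/N grows with exposure time t in the two regimes discussed above.
# Assumes a constant source flux rate (e-/s) and fixed read noise;
# the rates are illustrative, not measurements from any real camera.
def snr(rate, t, read_noise):
    signal = rate * t
    noise = math.sqrt(signal + read_noise**2)  # shot + read noise in quadrature
    return signal / noise

rn = 15.0          # e- RMS (ST-1001E value quoted above)
faint = 1.0        # e-/s: read-noise-limited source
bright = 10000.0   # e-/s: shot-noise-limited source

# Read-noise-limited: doubling t roughly doubles S/N
print(snr(faint, 10, rn) / snr(faint, 5, rn))    # close to 2
# Shot-noise-limited: doubling t improves S/N only by ~sqrt(2)
print(snr(bright, 10, rn) / snr(bright, 5, rn))  # close to 1.41
```

The single quadrature formula reproduces both limits: linear growth when the read noise dominates, square-root growth when shot noise dominates.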

  • READ NOISE VERSUS SHOT NOISE: In astronomy we prefer to be in the ideal regime, so we try to take CCD exposures in such a way that the readout noise does not dominate other sources of noise in all pixels.
    • This is not always as hard as it seems, because even if our celestial source is limited to only a small number of pixels, the sky itself contributes flux to every pixel!

        We will get to this later in the semester, but the sources of "sky" flux are:

      • scattered moonlight

      • unresolved starlight

      • reflected/scattered sunlight

      • auroral emission from molecules in the earth's atmosphere

      • light pollution

    • The flux from the sky is also Poissonian in nature, and there is nothing we can do to eliminate it either.

    • Thus we aim to take pictures so all sources of Poisson noise, including the source and sky, together dominate the read noise.

      Total noise ~ σPOISSON

      We call such an image sky-limited.

    • Using the equations we have introduced on this page, you should be able to show how to achieve a sky-limited image:

        For a CCD with gain G and read noise σRN(e-) given in electrons, you can reach the sky-limited regime if the sky level yields a number of counts N(ADU) per sky pixel (given in ADUs) with

        G × N(ADU) >> [σRN(e-)]^2

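One way to express the sky-limited criterion in code, assuming "sky-limited" means the sky's shot noise exceeds the read noise (the function name and margin parameter are mine, for illustration; the gain and read noise are the ST-1001E values quoted on this page):

```python
import math

# Sketch of the sky-limited criterion: the sky's shot noise (in e-)
# should dominate the read noise. Sky shot noise = sqrt(G * N_adu),
# so requiring it to exceed sigma_RN gives N_adu > sigma_RN**2 / G.
def is_sky_limited(n_sky_adu, gain, read_noise_e, margin=1.0):
    sky_noise_e = math.sqrt(gain * n_sky_adu)
    return sky_noise_e > margin * read_noise_e

G = 2.2          # e-/ADU (ST-1001E)
rn = 15.0        # e- RMS (ST-1001E)
print(round(rn**2 / G))              # ~102 ADU: minimum sky level needed
print(is_sky_limited(50, G, rn))     # False: sky too faint, read-noise-limited
print(is_sky_limited(500, G, rn))    # True: sky shot noise dominates
```

In practice one would want the sky noise to exceed the read noise by a comfortable margin (say a factor of a few), which is what the `margin` parameter is for.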
  • An effective way to reduce the effects of readout noise is through Pixel Binning.
  • Binning means you combine signals from adjacent pixels before they get to the readout amplifier.
    • In effect, you are combining sets of electron packets from more than one adjacent physical pixel to create one image pixel.

    • One result is to make the dimensions of the final image smaller.

      But the actual area of the sky imaged remains the same, you have simply sacrificed resolution (actually, pixel scale).

      That is, CCD field of view does not change but each image pixel represents more sky area and we have overall coarser resolution.

  • Common binning modes for ST-1001E CCD used in ASTR 3130:
  • Binning   Final image size   Notes
    1x1       1024 x 1024        No binning
    2x2       512 x 512
    3x3       341 x 341
    2x1       Not available      Mode commonly used for spectroscopy
    3x1       Not available      Mode commonly used for spectroscopy

    Equal-sized binning in each dimension is what we most often do when we are using CCDs to take pictures of the sky (this is the only mode allowed with our current camera).

    But on some CCDs other, non-square combinations are possible, and this is sometimes useful for spectroscopic or other applications.
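The effect of binning on image dimensions can be sketched with NumPy. Note this software rebinning happens after readout; the on-chip binning described on this page sums charge before the amplifier, which is what actually reduces read noise:

```python
import numpy as np

# Sketch of 2x2 binning: sum each 2x2 block of physical pixels into one
# image pixel. (On a real CCD this summing happens on-chip, before the
# readout amplifier; here it is only simulated in software.)
def bin_image(img, by, bx):
    ny, nx = img.shape
    trimmed = img[: ny - ny % by, : nx - nx % bx]   # drop any ragged edge
    return trimmed.reshape(ny // by, by, nx // bx, bx).sum(axis=(1, 3))

frame = np.ones((1024, 1024))        # uniform 1 e- per physical pixel
binned = bin_image(frame, 2, 2)
print(binned.shape)   # (512, 512): same sky area, fewer image pixels
print(binned[0, 0])   # 4.0: each image pixel holds 4 pixels' charge
```

The field of view is unchanged; only the pixel scale coarsens, exactly as the text describes.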

  • How, physically, do we do the binning? Charge from adjacent pixels is summed on-chip (in the serial register and at the output node) before the amplifier performs a single read.

  • Well, what's the point? How does binning help with reducing noise?
      1. Fewer actual amplifier readouts for the same picture area.

      2. The final binned pixels that are read out have more total counts (collected together from individual pixels).

    • The net effect, then, is to increase the signal compared to the readout noise.

      • E.g., 4 pixels with 1x1 (i.e., no) binning: four separate amplifier reads, so the read noise adds in quadrature to (4 σRN^2)^(1/2) = 2 σRN.

        The same 4 pixels with 2x2 binning: one read of the combined charge packet, so the read noise is just σRN.

        The effect of 2x2 binning is thus 2x less read noise per area of the sky imaged.
      • Thus, binning effectively increases sensitivity at faint levels (less integration time needed to detect a given object).
      • Where has this "binning" concept already been discussed in this class??
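The 4-pixel example above can be made explicit: read noise from independent reads adds in quadrature, while on-chip binning pays the read-noise penalty only once.

```python
import math

# Read noise per patch of sky for 4 physical pixels, using the
# ST-1001E read noise quoted above.
rn = 15.0                             # e- RMS per amplifier read

noise_unbinned = math.sqrt(4) * rn    # four reads in quadrature: 2 * rn
noise_binned = rn                     # one read of the summed packet
print(noise_unbinned / noise_binned)  # 2.0: 2x less read noise per sky area
```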


    • Because the readout amplifier has elements that work capacitively, there is a finite response time for it to work well, and this tends to dominate the time it takes to read out the CCD.
      • One can always try to speed up the amplifier readout (shorten the duration of the sensing phase), but always at the risk of a higher readout noise per pixel.
    • Binning results in a faster total chip readout because fewer amplifier reads are needed (less of the time limiting process).
      • e.g. 2x2 binning is 4x faster than 1x1 binning
      • ST-8 read (digitization) rate = 30 kHz / pixel

        Binning   # Readouts    Read Time*
        1x1       1.56 x 10^6   52 sec
        2x2       3.90 x 10^5   13 sec
        3x3       1.73 x 10^5   6 sec
      • *Note there is additional time needed to write image to computer disk, display image, etc.


    Note, the largest CCDs are now 4096 x 4096. At 30 kHz, such a chip would take 560 sec ≈ 9 minutes to read!
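The read times quoted above follow directly from the pixel count and the digitization rate; a quick check (the function name is mine, for illustration):

```python
# Read time = number of pixels / digitization rate; 30 kHz/pixel is the
# ST-8 figure quoted above.
def read_time_s(n_rows, n_cols, rate_hz=30e3):
    return n_rows * n_cols / rate_hz

print(round(read_time_s(1530, 1024)))        # 52 s: ST-8 unbinned (1x1)
print(round(read_time_s(765, 512)))          # 13 s: 2x2 binning (4x fewer reads)
print(round(read_time_s(4096, 4096) / 60))   # 9 min: a 4096 x 4096 CCD
```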
  • Here are some new developments to deal with this problem, apart from binning:

    • Can build CCDs with multiple amplifiers to speed up readout with parallel processes (e.g., Fan Mtn. CCD):
    • Quad-Amp Readout 4x Faster

    • The latest electronics approach read rates of 100 kHz or more.
    • Note: Future circuitry may allow non-destructive readout:

        Send the same charge packet to the amplifier M times and average the reads - the readout noise is reduced to σRN / M^(1/2).

  • When bin?
  • Let's review when it makes sense to consider binning.

    • Faint or low surface brightness objects where you are starved for photons (higher S/N).

    • When you are taking short integrations and the sky flux will contribute very little to the "blank sky" pixels, driving you from the sky-limited regime for these pixels (higher S/N).
    • When faster CCD readout speed is desired.

    • If smaller images (in Megabytes, not sky area!) are desired.

    • When loss of resolution is not important

      (resolution is the potential trade off for higher S/N, faster readout, smaller images).
      • e.g. ST-8 with 9 micron pixels on the 26" refractor:

        1x1 pixels about 0.19" x 0.19"

        2x2 pixels about 0.38" x 0.38"

        3x3 pixels about 0.57" x 0.57"

        Since the seeing is typically > 1.5" at McCormick - there is no real loss of useful resolution by binning this CCD camera.
      • We always want to Nyquist sample -- meaning you need to sample at twice the frequency of the information you want to see. So, for a star with a seeing width of 1.5" you need pixels smaller than 0.75".


      • But the ST-1001E CCD has much larger, 24 micron pixels. On the 26" refractor we then have:

        1x1 pixels about 0.5" x 0.5"

        2x2 pixels about 1.0" x 1.0"

        3x3 pixels about 1.5" x 1.5"

        Only if the seeing is fairly poor -- i.e. worse than 2 arcsec -- does it make sense to bin the already large pixels of the ST-1001E CCD camera.

      • The APOGEE Alta camera has 12 micron pixels. I leave it to you to figure out the pixel scale in this case.

      • Note also that the ST-8 camera, which has a 1530 x 1024 format, has more pixels than the ST-1001E, but because they are physically smaller (9 microns compared to 24 microns in the ST-1001E) the ST-8 covers a significantly smaller area on the sky.

        The 0.19"/pixel scale of the ST-8 is a poor match to the 26-inch telescope plate scale, whereas the 0.5"/pixel scale of the ST-1001E is a very nice match for Nyquist sampling the typical seeing profile.
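Pixel scales like those quoted can be computed from the standard plate-scale relation, scale = 206265 × (pixel size) / (focal length). The ~9.9 m focal length used below for the 26-inch refractor is an assumption, chosen because it reproduces the 0.19" and 0.5" per-pixel scales quoted above:

```python
# Pixel scale in arcsec/pixel from the plate-scale relation.
# The 9.9 m focal length is an assumed value for the 26-inch refractor,
# consistent with the pixel scales quoted in the text.
def pixel_scale_arcsec(pixel_um, focal_length_m=9.9):
    return 206265.0 * (pixel_um * 1e-6) / focal_length_m

print(round(pixel_scale_arcsec(9), 2))    # 0.19": ST-8 (9 micron pixels)
print(round(pixel_scale_arcsec(24), 2))   # 0.5": ST-1001E (24 micron pixels)
print(round(pixel_scale_arcsec(12), 2))   # 0.25": APOGEE Alta (12 micron pixels)
```

Note the 12-micron result answers the exercise posed above: at ~0.25"/pixel the Alta comfortably Nyquist-samples 1.5" seeing even with 2x2 binning.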


Another CCD readout trick is Time Delay Integration (TDI) or Drift Scanning.

  • This is something like the sweeping detector concept we saw above with the linear CCD array, except we now do the same thing with a two-dimensional CCD array.

  • In TDI we read the chip along columns at exactly the rate the CCD camera sweeps past a fixed scene to build a long strip image.

    • In this case each physical CCD pixel creates multiple image pixels.
    • Indeed, every physical pixel in a CCD column contributes to making each image pixel for the corresponding column in the image.

  • In astronomy, the most common technique is to turn off the clock drive on the telescope and let the CCD/telescope move with the Earth past the stellar scene, clocking the CCD at the sidereal rate.
    • Total integration time per picture pixel is time to cross CCD array.
    • Note each image pixel in a column is created from equal contributions of every CCD pixel in that column. The time each image pixel spends in each physical pixel = total transit time across the CCD / number of pixels in the column.
    • The result is a single image covering a much greater extent in the scan direction.

      Example of a drift-scanned image of the Pleiades star cluster (Messier 45) made with an SBIG ST-7 camera.
    • A great advantage of this kind of image is that the final picture is "smoother" than a normal CCD image, because every image pixel has been created from the average Q.E. response of the many individual CCD pixels that contributed to it.
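The TDI integration time per image pixel can be estimated from the sidereal drift rate. A sketch (the function name is mine; the 0.5"/pixel scale is the ST-1001E value quoted above, and the cos(dec) scaling assumes drift scanning along constant declination):

```python
import math

# Drift-scan (TDI) integration time per image pixel: the sky drifts at
# ~15.04 arcsec/s of sidereal motion at the equator, scaled by cos(dec).
# Total integration = time for a star to cross the whole CCD column.
def tdi_integration_s(n_rows, pixel_scale_arcsec, dec_deg=0.0):
    drift_rate = 15.041 * math.cos(math.radians(dec_deg))  # arcsec/s on sky
    return n_rows * pixel_scale_arcsec / drift_rate

# e.g., a 1024-row CCD at an assumed 0.5"/pixel scale, on the equator:
print(round(tdi_integration_s(1024, 0.5)))   # 34 s of integration per pixel
```

Each image pixel then spends 34 s / 1024 rows ≈ 33 ms in each physical pixel, illustrating the "equal contributions" point above.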
  • EXAMPLE: Sloan Digital Sky Survey to map large areas of the sky.

    Sloan camera - 54 CCDs!
    • 30 "photometry" chips (2048 x 2048)
    • 22 "astrometry" chips (2048 x 400)
      • The reduced star transit time means less net integration, so brighter stars can be observed without saturation.

    • 2 "focus" chips (2048 x 400)
    Various images showing the arrangement of CCDs in the Sloan Digital Sky Survey camera (now at the Smithsonian Museum).

    How "filled" images of the sky are made by interleaving SDSS images.

    Of course, ever larger cameras made of numerous CCDs that can be read out simultaneously are a modern focus of astronomy.

    • Allows one to cover more area at the same time -- equal to largest photographic plates, but with much better QE.

    • Currently largest is the Dark Energy Camera (DECam), a 570-megapixel camera to be used for studying the distributions of galaxies as a means to understand dark energy, built at Fermilab in Illinois and mounted on the Blanco 4-m telescope in Chile.

      Each DECam image is 1 Gbyte in file size and covers a FOV of 2.2 degrees across (an area equal to 20 times that of the moon as seen from Earth).

      The camera contains 62 CCDs with 2048 x 4096 pixels each and 12 CCDs of 2048 x 2048 for guiding, alignment and focus.

    Image of the DECam CCD array.

    One of the first images taken with the DECam CCD array on September 12, 2012. The image is of the globular cluster 47 Tucanae.


The linear CCD, conveyor-belt, satellite imaging, line-address readout, and barcode reader images were taken from external sources. All other material copyright © 2002, 2006, 2008, 2012, 2015 Steven R. Majewski. All rights reserved. These notes are intended for the private, noncommercial use of students enrolled in Astronomy 313 and Astronomy 3130 at the University of Virginia.