ASTR 5110, Majewski [FALL 2017]. Lecture Notes

## DETECTORS: READOUT ARCHITECTURES, AMPLIFIERS AND NOISE

REFERENCES:

Berry's CCD Astronomy, Chapter 1.

Rieke's The Detection of Light, Section 6.3.

Howell's Handbook of CCD Astronomy, Chapters 2 and 3.

##### A linear CCD array.
One can use a single linear array of CCD detectors to create a two-dimensional image.

There are two ways this can be done, and you are almost certainly familiar with devices that make use of linear CCD arrays.

• Fixed CCD camera but moving target:

##### Linear CCD camera used to inspect items on a moving conveyor belt.
• Fixed target but moving CCD camera:

##### Left: Manual barcode reader. Right: Scanning satellite imaging with different perspectives (e.g., forward and backward looking) to give stereo imaging.

• QUESTION: Can you think of two other devices you commonly use that are of the second mode of operation?

• In either case, the CCD is read out more quickly along the row than it takes to step along in the perpendicular direction.

• The simplest way to read out a two-dimensional CCD array is called line address readout.
• Arrange columns of CCD-linked pixels parallel to one another.

• At the end of the columns is a set of pixels arranged and charge-coupled in a perpendicular row, called a serial register or multiplexer (MUX), because all columns of data are read through the same set of electronics at the end of this row.

1. Shift all columns by one pixel into the multiplexer.

2. Then read out the full MUX pixel by pixel, shifting the charges along the MUX to the amplifier.

3. When the MUX is completely empty after the transfer of the entire row of charge, repeat from step 1.

• The actual image that is assembled from this process is put together row by row. Note the difference/correspondence between the physical MOS pixels and the "picture element" pixels:

• The physical columns of charge-coupled CCD MOS pixels correspond to the image columns of "picture element" pixels.

• However, on the physical CCD device, the physical MOS pixels are not normally coupled by row -- this coupling takes place in the MUX and results in adjacent picture elements by row.
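The three-step line-address cycle above can be sketched as a toy simulation (pure Python; the array contents and the function name are ours, purely illustrative):

```python
# Toy simulation of line-address readout. Rows of charge are shifted one
# step toward the serial register (MUX); the MUX is then clocked
# pixel-by-pixel through a single amplifier, and the image is assembled
# row by row.

def line_address_readout(array):
    """Read a 2-D charge array row by row through a serial register."""
    image = []
    rows = [row[:] for row in array]      # copy: the charge packets in place
    while rows:
        mux = rows.pop(0)                 # step 1: shift one row into the MUX
        out_row = []
        while mux:                        # step 2: clock the MUX to the amplifier
            out_row.append(mux.pop(0))    # one amplifier read per pixel
        image.append(out_row)             # step 3: MUX empty -> repeat
    return image

charge = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
print(line_address_readout(charge))   # image matches the charge pattern, row by row
```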

• Problem - still collecting photons while cycling through above process for entire array.

• Results in a smearing of the image unless careful.

Either:

1. Read out very fast - this is bad because the amplifier can't measure charge packets accurately.
2. Use a shutter to cover CCD during readout.

• For broadcast industry, use either:
1. Interline transfer - light sensitive columns interleaved with light-shielded columns. Charges shift laterally into shielded columns, which are then read out as above.
2. Frame transfer - charge quickly transferred to an entirely separate section of CCD that is protected, then readout slowly as needed.
• For example: Original RCA CCDs were 320 x 512 pixel frame transfer devices, because pre-HD TV image was 320 x 256 pixels. Astronomers requested that the shielding not be applied and made use of twice the area.

As a CCD clocks out charges through the MUX, an amplifier at the end of the MUX converts the electron packet net charge into a digital signal:

• The Gain is the number of electrons combined to make one picture "count".
• These converted "counts" are also called "Analog-to-Digital Units" or "ADUs". All three of these expressions are commonly used to describe the digital levels recorded in each image pixel.

• Typically the gains in CCDs are set to several e-/ADU.

• For ST-8 CCD, G = 2.3 e-/ADU.
• For FMO Gen I CCD, two gains:

• High Gain = 2.06 e-/ADU.

• Low Gain = 3.84 e-/ADU.

• Note that normally we think of an amplifier gain as making a signal larger (appearing in the numerator), rather than smaller, as here. Thus, G is more accurately called the Inverse Gain - but you more often see people call G simply the "Gain".
• The dynamic range of the image output is limited by the Analog to Digital Converter (ADC) which is capable of converting to a certain number of distinct digital "bits". Typical limits are:

12 bit = 2^12 = 4096 distinct values
15 bit = 2^15 = 32768 distinct values
16 bit = 2^16 = 65536 distinct values --- ST-8 (highest number)

Note the ADC's effect on dynamic range.
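A small illustrative calculation of how the gain and the ADC bit depth set the recorded counts (the function name is ours; the ST-8 numbers are from the notes):

```python
# Convert a collected electron packet to ADUs for a given (inverse) gain G,
# clipping at the ADC's digital limit.

def electrons_to_adu(n_electrons, gain_e_per_adu, adc_bits):
    adu = int(n_electrons / gain_e_per_adu)   # G electrons per recorded count
    max_adu = 2**adc_bits - 1                 # highest level the ADC can represent
    return min(adu, max_adu)                  # saturation at the digital limit

# ST-8: G = 2.3 e-/ADU, 16-bit ADC (65536 levels)
print(electrons_to_adu(23000, 2.3, 16))   # 10000 ADU
print(electrons_to_adu(10**7, 2.3, 16))   # clipped at 65535 ADU
```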

SIGNAL-TO-NOISE, SHOT NOISE AND READOUT NOISE

PLEASE REVIEW THIS SECTION CAREFULLY -- THE CONCEPTS ARE IMPORTANT BUT CAN BE CONFUSING.

• All sources of noise are an annoyance and limit the accuracy and/or precision of the experimental results.

• We discuss the quality of a measurement by giving the Signal-to-Noise (S/N) of that measurement, which is to say that we take the ratio of the measure to the error in the measure.

The higher the S/N, the more reliable the measure.

• SHOT NOISE: One source of noise that we can never remove from our experiments is the statistical noise from Nature itself.

• We have not discussed this yet, but it can be shown that this statistical "shot noise", also called "Poissonian noise", is given by a square root rule:

This is a fundamental law of Nature: the standard deviation of the number of randomly occurring events N is given by the square root of the number of those events seen, N^(1/2).

That is to say, if one repeatedly counts the number of photoelectrons, N_e-, collected from a source in the same integration time, the standard deviation one will get is given by

σ = (N_e-)^(1/2)

• If shot noise is the only source of noise in the experiment, then we have that the Signal-to-Noise is given by:

S/N = S / S^(1/2) = S^(1/2)

• QUESTION: What is the S/N in a pixel that accumulated 225 counts?

• QUESTION: How many counts would one have to accumulate to ensure only a 10% error in the measure?

• QUESTION: How many counts would one have to accumulate to ensure only a 1% error in the measure?

• QUESTION: While we can never remove statistical noise entirely from our experiment, we can try to minimize the relative error it introduces. By the above examples, how is this done?

• Since we always have shot noise in our experiments, and can do nothing to eliminate it, an experiment with only shot noise represents the ideal in terms of S/N.

• Most empirical scientists think about things in terms of signal-to-noise, and, in particular, in terms of the above ideal limit to the signal-to-noise in an experiment. YOU SHOULD TOO!
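A quick way to check the shot-noise questions above, assuming shot noise is the only noise source (the helper names are ours):

```python
import math

# Shot-noise-only statistics: S/N = sqrt(S), so a fractional error f
# requires S = 1/f**2 counts.

def shot_noise_sn(counts):
    return math.sqrt(counts)

def counts_for_fractional_error(frac):
    # S/N = 1/frac and S/N = sqrt(S)  =>  S = 1/frac**2
    return 1.0 / frac**2

print(shot_noise_sn(225))                 # 15.0 (S/N for 225 counts)
print(counts_for_fractional_error(0.10))  # ~100 counts for a 10% error
print(counts_for_fractional_error(0.01))  # ~10000 counts for a 1% error
```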

• READ NOISE: Another source of error related specifically to CCDs, but also to many other detectors, relates to the accuracy of the amplification process that converts a certain electron packet size (i.e., measured pixel voltage) into a digital number. This is intrinsically limited by the electronics of the associated circuitry.

• This error in the output amplifier conversion of the electron voltage to an output signal is called the Readout Noise or, more simply, the Read Noise.
• The read noise is typically given as an error in units of electrons.

• All sources of noise in an experiment are typically uncorrelated and therefore add together in quadrature; that is to say:

Total noise = (σ₁² + σ₂² + σ₃² + ...)^(1/2) -- e.g., for shot noise plus read noise, Noise = (S + RN²)^(1/2)

• For a given device and circuitry, the readout noise is characterized by a (Gaussian) distribution of a constant width (in electrons, or, equivalently, in the converted ADUs).

That is to say, unlike shot noise, the read noise is independent of the signal, S.

• Thus, at low light levels, the readout noise can dominate other sources of noise:
• A CCD image taken under these conditions we say to be readout-limited.

In this case, no matter how low the light level, the absolute error in our ability to measure it remains the same, and:

S/N ≈ S / RN

• At high light levels, the shot noise can be made to be much larger than the read noise, S^(1/2) >> RN,

so that in this case we can approach the ideal experiment:

S/N ≈ S / S^(1/2) = S^(1/2)

• For FMO Gen I CCD, read out noise depends on gain setting:

• High Gain = 2.06 e-/ADU, RN = 16.9 e-.

• Low Gain = 3.84 e-/ADU, RN = 8.9 e-.
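The read-noise-limited and shot-noise-limited regimes just described can be sketched numerically (the RN = 10 e- value and function name are illustrative choices of ours, with S in electrons):

```python
import math

# S/N including read noise, added in quadrature with shot noise.

def signal_to_noise(S, RN):
    return S / math.sqrt(S + RN**2)

RN = 10.0
# Read-noise-limited: S << RN**2, so S/N ~ S/RN
print(round(signal_to_noise(5, RN), 2))       # ~0.49, close to 5/10 = 0.5
# Shot-noise-limited: S >> RN**2, so S/N -> sqrt(S)
print(round(signal_to_noise(10000, RN), 1))   # ~99.5, close to sqrt(10000) = 100
```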

• READ NOISE VERSUS SHOT NOISE: In astronomy we prefer to be in the ideal regime, so we try to take CCD exposures in such a way that the readout noise does not dominate other sources of noise in all pixels.
• This is not always as hard as it seems, because even if our celestial source is limited to only a small number of pixels, there is flux from the sky itself that contributes to every pixel!

We will get to this later in the semester, but the sources of "sky" flux are:

• scattered moonlight

• unresolved starlight

• reflected/scattered sunlight

• auroral emission from molecules in the earth's atmosphere

• light pollution

• The flux from the sky is Poissonian in nature, and there is nothing we can do to eliminate it either.

• So a common goal is to take pictures wherein all sources of Poisson noise, including the source and sky, together dominate the read noise.

Total noise ≈ Poisson (shot) noise

We call such an image sky-limited.

• Using the equations we have introduced on this page, you should be able to show how to achieve a sky-limited image:

For a CCD with gain G and read noise RN(e-) given in electrons, you can reach the sky-limited regime if the sky level yields a number of counts N(ADU) per sky pixel (given in ADUs) such that the sky's shot noise exceeds the read noise:

(G × N(ADU))^(1/2) > RN(e-),  i.e.,  N(ADU) > RN(e-)² / G
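As an illustrative check, assuming the sky-limited condition takes the form sqrt(G × N) > RN: here it is coded up with the FMO Gen I high-gain values quoted earlier (the function name and sky levels are ours):

```python
# Sky-limited check: sky shot noise in electrons is sqrt(G * N_adu);
# the image is sky-limited once that exceeds the read noise,
# i.e. roughly N_adu > RN**2 / G.

def sky_limited(n_adu, gain, read_noise):
    return (gain * n_adu) ** 0.5 > read_noise

G, RN = 2.06, 16.9                 # FMO Gen I high-gain numbers
print(RN**2 / G)                   # ~139 ADU of sky needed per pixel
print(sky_limited(200, G, RN))     # True
print(sky_limited(50, G, RN))      # False
```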

HOW TO MEASURE RN AND GAIN FOR A CCD (see also Howell, pp. 52-53):
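Since the notes defer the procedure to Howell, here is a hedged sketch of the standard photon-transfer method he describes (two bias frames and two comparable flat fields); the formulas follow that treatment, but the frame simulation, sample sizes, and function names are ours:

```python
import random
import statistics

# Photon-transfer estimates:
#   G  = [(mean(F1)+mean(F2)) - (mean(B1)+mean(B2))]
#        / [var(F1-F2) - var(B1-B2)]
#   RN = G * stdev(B1-B2) / sqrt(2)
# Frames here are flat lists of pixel values in ADU.

def gain_and_read_noise(F1, F2, B1, B2):
    diff_flat = [a - b for a, b in zip(F1, F2)]
    diff_bias = [a - b for a, b in zip(B1, B2)]
    gain = (((statistics.mean(F1) + statistics.mean(F2))
             - (statistics.mean(B1) + statistics.mean(B2)))
            / (statistics.variance(diff_flat) - statistics.variance(diff_bias)))
    read_noise = gain * statistics.stdev(diff_bias) / 2 ** 0.5
    return gain, read_noise

# Fake data with a known gain and read noise, to show the recovery:
random.seed(1)
G_TRUE, RN_TRUE = 2.0, 10.0        # e-/ADU and e-

def fake_frame(level_e, n=20000):
    sigma = level_e ** 0.5         # Poisson noise approximated as Gaussian
    return [(random.gauss(level_e, sigma) + random.gauss(0, RN_TRUE)) / G_TRUE
            for _ in range(n)]

F1, F2 = fake_frame(20000.0), fake_frame(20000.0)
B1, B2 = fake_frame(0.0), fake_frame(0.0)
g, rn = gain_and_read_noise(F1, F2, B1, B2)
print(round(g, 2), round(rn, 1))   # recovers roughly G ~ 2.0, RN ~ 10
```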

BINNING

• An effective way to reduce the effects of readout noise is through Pixel Binning.
• Binning means you combine signals from adjacent pixels before they get to the readout amplifier.
• ##### From Rieke, The Detection of Light.
• In effect, you are combining sets of electron packets from more than one adjacent physical pixel to create one image pixel.

• One result is to make the dimensions of the final image smaller.

But the actual area of the sky imaged remains the same; you have simply sacrificed resolution. That is, the CCD field of view does not change, but each image pixel represents more sky area and we have overall coarser resolution.

• For example, common binning modes for ST-8 CCD used in ASTR 3130:
| Binning | Final image size | Notes |
|---------|------------------|-------|
| 1 x 1 | 1530 x 1020 | No binning |
| 2 x 2* | 765 x 510 | |
| 3 x 3* | 510 x 340 | |
| 2 x 1 | Not available | Mode commonly used for spectroscopy |
| 3 x 1 | Not available | Mode commonly used for spectroscopy |

*Note that while other combinations are possible, equal-sized binning in each dimension is what we most often do when we are using CCDs to take pictures of the sky.

• How physically do we do the binning?

• Well, what's the point? How does binning help with reducing noise?
1. Fewer actual amplifier readouts for the same picture area.

2. The final binned pixels that are readout have more total counts (collected together from individual pixels).

• The net effect, then, is to increase the signal compared to the readout noise.

• E.g. 4 pixels with 1x1 (i.e., no) binning require four separate amplifier reads, whose read noise adds in quadrature over the patch:

Total read noise = (4 RN²)^(1/2) = 2 RN

The same 4 pixels with 2x2 binning require a single amplifier read:

Total read noise = RN

Effect of 2x2 binning is 2x less read noise per area of the sky imaged.
• Thus, binning effectively increases sensitivity at faint levels (less integration time needed to detect given object).
• Where has this concept already been discussed in this class??
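The 4-pixel example above can be checked numerically (the signal and read-noise values are illustrative; S is the total electrons over the patch):

```python
import math

# S/N over the same patch of sky (4 physical pixels), unbinned vs 2x2
# binned: n_reads amplifier reads, each adding RN in quadrature.

def patch_sn(S, RN, n_reads):
    return S / math.sqrt(S + n_reads * RN**2)

S, RN = 400.0, 10.0
print(round(patch_sn(S, RN, 4), 2))  # 1x1 binning: 4 reads
print(round(patch_sn(S, RN, 1), 2))  # 2x2 binning: 1 read, higher S/N
```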

• ANCILLARY BENEFITS TO BINNING:

• Because the readout amplifier has elements that work capacitively, there is a finite response time for it to work well, and it tends to dominate the time it takes to read out the CCD.
• One can always try to speed up the amplifier readout, but always at the risk of a higher readout noise per pixel.
• Binning results in a faster total chip readout because fewer amplifier reads are needed (fewer of the time-limiting operations).
• e.g. 2x2 binning is 4x faster than 1x1 binning
• ST-8 read (digitization) rate = 30 kHz / pixel

Thus:

| Binning | Pixels read | Readout time |
|---------|-------------|--------------|
| 1x1 | 1.56 × 10^6 | 52 sec |
| 2x2 | 3.90 × 10^5 | 13 sec |
| 3x3 | 1.73 × 10^5 | 6 sec |
• *Note there is additional time needed to write image to computer disk, display image, etc.
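The readout times above follow directly from the 30 kHz digitization rate (the function name is ours):

```python
# Readout time at the ST-8 digitization rate of 30 kHz (30,000 pixels/s).
# Binning reduces the number of amplifier reads by bin_factor**2.

def readout_time(n_cols, n_rows, bin_factor, rate_hz=30000):
    pixels = (n_cols // bin_factor) * (n_rows // bin_factor)
    return pixels / rate_hz          # seconds

for b in (1, 2, 3):
    print(b, round(readout_time(1530, 1020, b)))   # ~52, 13, 6 seconds
```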

• OTHER SPEED CONSIDERATIONS:

Note, largest CCDs now 4096x4096. At 30 kHz, it would take 560 sec = 9 minutes to read!
• Here are some new developments to deal with this problem, apart from binning:

• Can build CCDs with multiple amplifiers to speed up readout with parallel processes (e.g., new Fan Mtn. CCD):

• Latest electronics approach rates of 100 kHz or more.
• Note: future circuitry will allow non-destructive readout.

• Send the same charge packet to the amplifier M times - the effective readout noise is reduced by a factor of M^(1/2) (i.e., to RN / M^(1/2)).

• However, this does mean taking longer to read out, depending on how many reads you do.

• To reduce the amount of time that a series of reads would take at the end of an exposure, these arrays can be used in several modes that effectively reduce the readout noise while the integration is still happening, such as "Sampling-Up-The-Ramp" and "Fowler Sampling".

##### From Finger et al. (http://www.eso.org/~gfinger/muc2000/muc2000.html).

Both methods rely on calculating the rate at which flux accumulates during the integration, which you then use to calculate the total flux collected over the integration.
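A toy version of fitting the ramp of non-destructive reads (all numbers are illustrative; for simplicity this ignores Poisson noise and read-to-read correlations):

```python
import random

# "Sampling-up-the-ramp": read the accumulating charge non-destructively
# M times during the integration, fit a straight line (least squares) to
# charge vs. time, and use the slope to infer the total collected flux.

random.seed(2)
RATE, T_INT, M, RN = 50.0, 10.0, 16, 12.0   # e-/s, seconds, reads, e-/read

times = [T_INT * (i + 1) / M for i in range(M)]
reads = [RATE * t + random.gauss(0, RN) for t in times]  # noisy ramp samples

# Least-squares slope through the M samples
mean_t = sum(times) / M
mean_y = sum(reads) / M
slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, reads))
         / sum((t - mean_t) ** 2 for t in times))

print(round(slope * T_INT))   # inferred total charge, within noise of 500 e-
```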

• When bin?
• Let's review when it makes sense to consider binning.

• Faint or low surface brightness objects where you are starved for photons.

• When you are taking short integrations and the sky flux will contribute very little to the "blank sky" pixels.
• When high CCD readout speed is needed.

• When loss of resolution is not important:
• e.g., ST-8 on 26" refractor

1x1 pixels about 0.2" x 0.2"

2x2 pixels about 0.4" x 0.4"

3x3 pixels about 0.6" x 0.6"

• But seeing typically > 1.5" - so loss of resolution not important.
• Always want to Nyquist sample -- means you need to sample at twice the frequency of information you want to see.

For a given seeing FWHM and pixel size, p, one typically wants r = FWHM / p > ~2.

So, for a star with seeing width 1.5" need pixels smaller than 0.75".


• For r less than about 1.5, the data are considered undersampled (see Howell Section 5.6).

(This is not 2.0 as one might expect because the actual critical sampling depends on the standard deviation width of the PSF, and for a Gaussian PSF the FWHM = 2.355σ.)
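The sampling ratios for the ST-8 binning example above can be tabulated quickly (the function name is ours):

```python
# Sampling ratio r = FWHM / pixel size for the ST-8 binning modes on the
# 26-inch refractor, with 1.5" seeing; r >= ~2 keeps the PSF
# Nyquist-sampled, while r < ~1.5 counts as undersampled.

def sampling_ratio(fwhm_arcsec, pixel_arcsec):
    return fwhm_arcsec / pixel_arcsec

seeing = 1.5
for binning, pix in [(1, 0.2), (2, 0.4), (3, 0.6)]:
    r = sampling_ratio(seeing, pix)
    print(binning, round(r, 2), "undersampled" if r < 1.5 else "ok")
```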

• As data become increasingly undersampled, standard software techniques for either centroiding (astrometry) or measuring the flux (photometry) of sources that depend on fitting the PSF give increasingly large errors.

To see why, look at the following figures and note how poorly an undersampled image is approximated by a Gaussian:

##### The top figure shows the distribution of pixel levels for a star in a well sampled image. The bottom two panels show severely undersampled stellar images, one where the center of the star lands in the center of a pixel and one where the center of the star lands at the intersection of four pixels. From Howell, Handbook of CCD Astronomy.

TIME DELAY INTEGRATION

Another CCD readout trick is Time Delay Integration (TDI) or Drift Scanning.

• This is something like the sweeping detector concept we saw above with the linear CCD array, except we do the same thing now with a two-dimensional CCD array.

• In TDI we read the chip along columns at exactly the rate the CCD camera sweeps past a fixed scene to build a long strip image.

• In this case each physical CCD pixel creates multiple image pixels.
• Indeed, every physical pixel in a CCD column contributes to making each image pixel for the corresponding column in the image.

• In astronomy, most common technique is to turn off clock drive on telescope and have CCD / scope move with Earth past stellar scene; clock CCD at sidereal rate.
• Total integration time per picture pixel is time to cross CCD array.
• Note each image pixel in a column is created from equal contributions of each CCD pixel in the column. The time each image pixel spends in each physical pixel equals the total transit time across the CCD divided by the number of pixels in the column.
• Final picture is "smoother", since each image pixel averages over many individual CCD pixel Q.E.'s.
• EXAMPLE: Sloan Digital Sky Survey to map large areas of the sky.

Sloan camera - 54 CCDs!
• 30 "photometry" chips (2048 x 2048)
• 22 "astrometry" chips (2048 x 400)
• Reduced star transit time, less net integration, brighter stars can be observed without saturation

• 2 "focus" chips (2048 x 400)
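The TDI integration time per object discussed above is just the time to drift across the array at the sidereal rate (about 15 cos(dec) arcsec/s); the 2048-row chip and 0.4"/pixel scale below are illustrative values, not the exact SDSS numbers:

```python
import math

# Drift-scan (TDI) integration time: the time for a star to cross the
# full column of the CCD at the sidereal drift rate.

def tdi_integration_time(n_rows, pixel_arcsec, dec_deg=0.0):
    drift_rate = 15.0 * math.cos(math.radians(dec_deg))  # arcsec/s on the sky
    return n_rows * pixel_arcsec / drift_rate            # seconds

# e.g. a 2048-row chip with 0.4"/pixel on the celestial equator:
print(round(tdi_integration_time(2048, 0.4), 1))   # ~54.6 s per star
```

Note that at higher declination the drift rate drops, so the transit (integration) time grows.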

ORTHOGONAL TRANSFER ARRAYS

Traditional CCDs are designed to move charge in one dimension (from row to row along columns to the MUX).

However, a new CCD structure has been designed that allows motion of charge packets in TWO dimensions -- the Orthogonal Transfer Array (OTA).

• Invented by U. Hawaii group (Burke et al. 1994, Tonry & Burke 1998).

• Basis is a 4-phase charge transfer mechanism.

• Two of the gates in this system are triangular.

##### From Rieke, The Detection of Light.
• To move charge left-right, electrode 3 is held negatively biased (to act as a repelling channel stop) and electrodes 1,2,4 are operated like a normal 3-phase CCD.

• To move charge up-down, electrode 4 is held negative, and electrodes 1,2,3 are used as a three-phase CCD.

• WHY DO THIS??
On-chip tip-tilt compensation!

Localized tip-tilt compensation!

Use some of the chips or fractions thereof for fast readout monitoring of bright stars for local centroiding.

• Being used for billion pixel (gigapixel), 40 cm X 40 cm cameras for the Pan-STARRS experiment.

Each camera (on each of four identical telescopes) has an 8 X 8 array of OTA chips.

##### From http://pan-starrs.ifa.hawaii.edu/public/design/cameras.html.
• NOAO has been working on a series of OTA cameras for use on the WIYN 3.5-m telescope (which has excellent natural seeing):

• OPTIC: Two 2K X 4K OTA CCDs.

• QUOTA (Quad Orthogonal Transfer Array): Four 4K X 4K CCDs.

• ODI (One Degree Imager): 64 OTAs --> 32K X 32K.

##### From http://pan-starrs.ifa.hawaii.edu/public/design/cameras.html.

A FINAL WORD ABOUT LARGE IMAGING CAMERAS AND THEIR COMPARISON

We have talked now about a number of large imaging sky surveys with some very impressive hardware characteristics, e.g., SDSS, Pan-STARRS, LSST, ODI.

How would one compare the relative performance of a system like Pan-STARRS, which has enormous imagers on multiple small telescopes, with something like LSST, which has an enormous telescope aperture?

• A standard metric in common use these days is the throughput or etendue, A*ω, where A is the aperture area, and ω is the area of the sky that can be imaged simultaneously.

• For example:

| Telescope/Imager | Aperture (m) | CCD field (deg²) | Aω (m²deg²) |
|------------------|--------------|------------------|-------------|
| FMO 1-m/Gen I | 1.0 | 0.04 | 0.03 |
| NOAO 4-m/Mosaic | 4.0 | 0.36 | 4.5 |
| MMT/Megacam | 6.5 | 0.16 | 5.3 |
| Sloan | 2.5 | 1.5 | 7.5 |
| WIYN/ODI | 3.5 | 1.0 | 9.6 |
| Pan-STARRS | Four 1.8 | 3.0 | 30.5 |
| CTIO-4m + Dark Energy Cam | 4.0 | 4.84 | 60 |
| LSST/DMT (>~2012) | 6.9 | 9.00 | 97.5 |

• One can modify the metric to include things like the relative seeing FWHM, θ, and an efficiency (fraction of time spent integrating on sky, e.g., accounting for readout times, etc.), ε.

For example, a metric like:
M = A × ω × ε / θ²
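The Aω entries in the table can be reproduced from the aperture diameter and field of view, taking A as the unobstructed aperture area (the function name is ours):

```python
import math

# Etendue A*omega from aperture diameter (m) and field of view (deg^2);
# n_telescopes handles multi-telescope systems like Pan-STARRS.

def etendue(diameter_m, field_deg2, n_telescopes=1):
    A = math.pi * (diameter_m / 2) ** 2   # aperture area in m^2
    return n_telescopes * A * field_deg2  # m^2 deg^2

print(round(etendue(1.0, 0.04), 2))    # FMO 1-m/Gen I: ~0.03
print(round(etendue(1.8, 3.0, 4), 1))  # Pan-STARRS (four 1.8-m): ~30.5
```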

##### A look inside the Dark Energy Camera shows the 74 blue-tinged sensors, totaling 570 Megapixels, that detect light. The camera will survey distant, faint galaxies to learn more about dark energy. From: http://www.npr.org/2011/08/22/139849705/giant-camera-will-hunt-for-signs-of-dark-energy.

THE NOD AND SHUFFLE

A recent technique, called nod and shuffle, has been used to increase the quality of spectroscopic observations, particularly those of very faint objects.

The goal of nod and shuffle is improved subtraction of sky background from the target spectra, which can be a headache -- especially with changing QE patterns from pixel to pixel.

• In typical slit spectroscopy, one simultaneously collects a sampling of sky spectrum in those parts of the slit not containing the target, but the problem is that these sky spectra go through different parts of the slit and are detected with different pixels with different QE properties.

The degree of success in subtracting the sky background can be severely affected by systematic errors, particularly if the sky is brighter than the source.

The method actually combines the "chopping" or "beam-switching" technique commonly used in infrared astronomy with clever charge packet manipulation (which is why it is worth bringing up here).

The following picture demonstrates the basic concept in the case of a single slit spectroscopic observation of a single galaxy.
• Requires a CCD with three times the area taken up by the image of the spectrum (in this example), with only the central 1/3 of the chip uncovered to collect photoelectrons.

• Imagine putting the galaxy on one side of the slit and collecting a spectrum for a set time period, τ (left panel).

• Close the shutter, and frame transfer the charge packets 1/3 the chip size, transferring the just collected charge packets to an unilluminated part of the CCD (middle panel).

Simultaneously, move the pointing of the telescope, so that now the image of the galaxy is on the opposite side of the slit.

• Collect another image for a time τ.

• Next, shuffle the newly collected charge packets as well as the previously stored charge packets in the opposite direction, to a second hidden part of the CCD.

Simultaneously nod the telescope back to its original pointing, so the galaxy now lands in the original position.

• Repeat the process over and over, collecting a pair of images, which have the galaxy on opposite parts of the slit.

• In the end, you have two images of the spectrum (on the same readout CCD frame) but with the galaxy in two different places.

• Each image has a spectrum of the sky at the position where the galaxy is in the other image, where the sky spectrum has been taken through the exact same part of the slit and with the exact same physical CCD pixels.

Though these sky and target spectra are stored in different regions of the CCD detector, they were imaged with exactly the same pixels through identical optical paths.

The effects of pixel response (flat-field), fringing, irregularities in the slit, and temporal variations in the sky background cancel out when one subtracts the sky spectrum from the object spectrum.

• Moreover, if you take the two halves of the CCD frame and subtract them from one another, you will have TWO independent spectra with the sky very precisely subtracted from the target (in this case, galaxy) spectra.

You will be left with one cleanly sky-subtracted positive galaxy spectrum and one cleanly sky-subtracted negative galaxy spectrum, which can be combined together.

• For long exposures, one can realize a factor of ~10 improvement in the systematic uncertainties associated with subtraction of bright sky lines, especially in the red (600-1000 nm) where such errors typically dominate over photon or read noise.

• The read-noise contribution will be increased because you have twice the readouts involved.

• Because part of the detector is used for charge storage and therefore can not be illuminated, one necessarily loses between 50 and 66% of the CCD field of view.

• There are overheads involved with closing shutter and moving charge.


Here is a version of the technique that makes use of multi-slit spectra:

##### From Glazebrook & Bland-Hawthorne 2001, PASP, 113, 197.
• Because with Nod & Shuffle one no longer derives the sky from regions adjacent to the object, one can use significantly shorter slits than in classical multi-slit spectroscopy.

In the limit where the slits have the same length as the object size (or the seeing disk), one can have a density of these "micro-slits" which is 5-10 times higher than in classic multi-slit mode.

This can be particularly advantageous when attempting spectroscopy of many objects in crowded fields.

If you want to know more about the nod and shuffle technique, which is now in use on the NOAO Gemini, the Magellan and other telescopes you have access to, see:

THE COMPLEMENTARY METAL OXIDE SEMICONDUCTOR (CMOS) ARRAY

As demonstrated above, an important limitation of the CCD is the lengthy amount of time its limited number of amplifiers take to read out the full array.

A year before the CCD was invented (1969), another silicon substrate device with MOS pixels but an alternative readout structure had been invented -- the Complementary Metal Oxide Semiconductor (CMOS) array.

• The CMOS array is an example of a device that has active-pixel sensors --- that is, the essence of the CMOS array is that each pixel has its own readout amplifier and circuitry (unlike in the case of the CCD, where the pixels are passive-pixel sensors).

• In addition, in a CMOS array, each and any pixel output can be addressed directly by its x-y position, rather than having to be accessed after a sequence of bucket brigade transfers of charge.

##### Cartoon showing basic format of a CMOS array. From http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm .
The CMOS architecture has a number of advantages over the CCD architecture:

• Manufacturing: It turns out that CMOS arrays are easier to manufacture, because they can be made using the same processes used to create computer processors, memories and other commonly made integrated circuit components.

In contrast, because CCDs require multiple clocking circuits and inputs, they require special processes to manufacture.

• Power Consumption: CCDs use a great deal more power because the clocking circuits are constantly charging and discharging large capacitors in their operation.

"In contrast, CMOS arrays use only a single voltage input and clock, meaning they consume much less power than CCDs..." (http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm).

CMOS sensors can be as much as 100 times less power hungry than CCDs. This makes them particularly attractive options for battery-powered devices.

• Addressable Pixels: In a CCD, one cannot read a single particular pixel, but has to go through the entire shift-and-read process along columns, then rows.

In contrast, the pixels in CMOS imagers can be addressed directly by their x-y position.

This makes it easier to read out sub-arrays and do other imaging techniques.

• Blooming: In a CCD the pixels are connected to one another along columns. Large, oversaturated charge packets can "bleed" into adjacent charge packets, creating "blooming" along the columns.

In CMOS arrays, the pixels are disconnected from one another, which basically eliminates blooming, except in the most extreme situations.

• Speed: Perhaps the most important advantage of the CMOS array is that all pixels can be read out in parallel, and then all that is passed along the cross-array circuitry are digital signals.

Compared to CCDs, the massively parallel CMOS technology allows a much faster array readout.

The speed, pixel addressability, and low power consumption of CMOS arrays make them preferable for commercial applications like digital cameras.

However, CCD arrays at first won out and dominated over CMOS arrays, especially for low-light applications like astronomy, because CMOS arrays had certain disadvantages compared to CCDs:

• Reduced Pixel Fill Factors: Because of the associated circuitry in each pixel, some amount of each pixel is "dead" -- i.e., not sensitive to light.

Note that originally fill factors were as low as 40%, but as circuit miniaturization improves, fill factors as high as 80% can be reached.

In contrast, CCD arrays have nearly 100% fill factors, and therefore have higher net quantum efficiency.

##### Cartoon showing basic format of a CMOS pixel -- showing the reduced fill factors that result in lost light. From http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm .
• Time Delay Integration: Obviously possible with CCDs, TDI is currently not feasible with CMOS arrays.

• Fixed Pattern Noise: Because each pixel has its own amplifier, there can be large variations in bias levels, gains and read noise in CMOS arrays.

In contrast, with a common amplifier for all pixels, the uniformity of CCD arrays is much greater.

Until recently, CCDs produced far superior image quality.

For some time, semiconductor lithography simply did not make it possible for CMOS arrays to be made with the uniformity to compete with CCDs.

Because CCDs matured faster, for some time they were of superior quality, had more pixels, and had greater sensitivity than CMOS arrays.

However,

• recently "Renewed interest in CMOS was based on expectations of lowered power consumption, camera-on-a-chip integration, and lowered fabrication costs from the reuse of mainstream logic and memory device fabrication. Achieving these benefits in practice while simultaneously delivering high image quality has taken far more time, money, and process adaptation than original projections suggested, but CMOS imagers have [now] joined CCDs as mainstream, mature technology."

• "With the promise of lower power consumption and higher integration for smaller components, CMOS designers focused efforts on imagers for mobile phones, the highest volume image sensor application in the world. An enormous amount of investment was made to develop and fine tune CMOS imagers and the fabrication processes that manufacture them. As a result of this investment, we witnessed great improvements in image quality, even as pixel sizes shrank. Therefore, in the case of high volume consumer area and line scan imagers, based on almost every performance parameter imaginable, CMOS imagers outperform CCDs..." (http://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/)

• Money talks, and according to CMOS inventor Eric Fossum, "The force of marketing is greater than the force of engineering..."

• "CMOS chips can be fabricated on just about any standard silicon production line, so they tend to be extremely inexpensive compared to CCD sensors." (https://electronics.howstuffworks.com/cameras-photography/digital/question362.htm)

Note that whether the array is a CCD or a CMOS architecture, the modern digital camera industry uses the same method to make color images, through use of a Bayer filter to combine/average each 2x2 array of pixels into a single colored pixel. (Obviously this is not done for professional astronomical imaging, where the color separation is done externally by large single filters over all pixels.)

##### (Left) A Bayer filter for RGB color imaging. Note that green has double the net detector area, which is a reflection of this being the peak sensitivity of the human eye. (Right) Configuration of a color imaging system using an interline transfer CCD. Both images from http://www.camerarepair.org/2012/05/ccd-vs-cmos-the-sensor-breakdown/ .

THE ELECTRON-MULTIPLYING CCD (EMCCD)

A newer technology combines the architecture of a CCD with the concept of a photomultiplier system.

In the electron-multiplying CCD, the S/N of the output signal is increased by adding to the serial register a sequence of multiplication registers that increase the size of the electron charge packets, making them better able to dominate the amplifier readout noise.

##### From http://www.nuvucameras.com/emccd-tutorial/ .

• The multiplication register is an increasingly higher voltage sequence designed to create an electron cascade before the output amplifier.

This process is called impact ionization, and is much like the photomultiplier dynode chain.

• The gain at each step of the multiplication sequence is small (a few percent), but there are a large number of amplification steps (>500).

• Although there are more phases in the readout process, that process can be sped up considerably, because the packet is so much larger by the end -- one doesn't have to be as careful to slow down the read of the charge packet for accuracy.
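A back-of-the-envelope sketch of the multiplication register: the per-step gain p and the number of steps N below are illustrative values within the ranges quoted above, not the parameters of any particular device.

```python
# EMCCD multiplication gain: each of N transfer steps multiplies the
# charge packet by (1 + p), where p is the small probability of impact
# ionization per step.

def em_gain(p_per_step, n_steps):
    return (1.0 + p_per_step) ** n_steps

g = em_gain(0.015, 500)      # p = 1.5% per step, 500 steps
print(round(g))              # total gain of order a thousand
print(round(16.9 / g, 3))    # a 16.9 e- amplifier read noise, referred back
                             # to the input, becomes a small fraction of an e-
```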