An Introduction to Electronic Image Acquisition during Light Microscopic Observation of Biological Specimens

I. INTRODUCTION
This article introduces the reader to the choices and considerations that need to be made when designing an image acquisition system. It focuses on the use of electronic cameras to record images of biological specimens generated with light microscopy techniques. There is no one image acquisition system that will work for every light microscopy application. To make the best choices for your application, you should understand your needs, as well as the strengths and limitations of the equipment available.

II. UNDERSTANDING YOUR NEEDS
The ideal image acquisition system would be sensitive enough to acquire beautiful images of very dim fluorescence specimens, fast enough to record the most dynamic processes, have high resolution to capture the finest detail, and have enough useful dynamic range to accurately measure minute differences in intensity. However, optimizing any one of these criteria can only be accomplished by limiting one or more of the others (Shotton, 1993; Spring, 2000). It is therefore impossible to design one image acquisition system that will be ideal for the wide range of light microscopy applications in cell biology. Instead, one should determine how best to meet the needs of the majority of system users, with an understanding that some compromises will probably need to be made.

To begin, it is useful to recognize the demands that the mode of microscopy used to generate the images imposes on the image acquisition system. In their research, cell biologists most commonly use fluorescence microscopy and various forms of transmitted light microscopy. Fluorescence microscopy techniques include standard wide-field epifluorescence (i.e., where the fluorescence excitation light reaches the specimen through the same objective lens that is used to image the fluorescence emission) (Herman, 1998), laser-scanning confocal (Inoué, 1995), and Nipkow spinning disk confocal (see article by Waterman-Storer and Gupton; Maddox, 2003). Laser-scanning confocal systems use photomultiplier tubes, not electronic cameras, to collect photons sequentially from the pixels that are used to generate images. Transmitted light techniques include standard bright-field microscopy (Murphy, 2001) and contrast generation methods such as phase contrast and differential interference contrast (DIC, Nomarski) (Inoué and Spring, 1997; Murphy, 2001).

A. Fluorescence Microscopy
In fluorescence microscopy, cellular components are labeled with molecules that emit photons when illuminated with excitatory light of the appropriate wavelength (Berland, 2001; Herman, 1998). Its high level of molecular specificity and relative ease of use have made fluorescence microscopy the most commonly used mode of light microscopy in cell biology research today. The use of fluorescence microscopy to visualize the dynamics of specific molecules in live cells over time has increased greatly since the discovery and cloning of green fluorescent protein (GFP) (Tsien, 1998).

Acquiring fluorescence images presents a special challenge because of the relatively low number of photons emitted by the specimen that must be collected by the microscope optics and detected by the camera (Herman, 1998; Spring, 2001). The number of photons emitted by the specimen is limited by the concentration of fluorophore and the efficiency with which the fluorophore emits photons when exposed to excitation wavelengths of light. Fluorescence image acquisition is further complicated by the gradual decrease in fluorescence known as photobleaching. When fluorophores are exposed to excitation photons, they can and do undergo chemical modifications that irreversibly inhibit further emission of light (Herman, 1998). Therefore, when choosing microscope optics and a camera for a fluorescence microscopy image acquisition system, attention should be paid to light efficiency: achieving the highest image quality with the smallest number of excitation photons (Salmon and Waters, 1996). Light efficiency is particularly critical when the acquisition system will be used for imaging fluorescence in live cells over time, where the rate of photobleaching imposes limits on the length of an experiment, and where overexposure to excitation light can also induce phototoxicity in the illuminated cells.

B. Bright-Field-Transmitted Light Microscopy
There are several different types of transmitted light microscopy that are routinely used by cell biologists (Murphy, 2001). Most of the thin specimens that cell biologists examine with transmitted light microscopy absorb very little light and are therefore essentially transparent. Standard bright-field microscopy is consequently used primarily in conjunction with histology stains that bind with some specificity to cellular components and absorb particular wavelengths of light. When illuminated with white light, these stains produce a bright, high-contrast color image. A camera capable of recording color images (Spring, 2000) is usually preferred when working with such histology-stained slides in bright-field mode.

Even though thin biological specimens usually absorb little light, they do cause changes in the optical path of the light rays that pass through them because of refractive index changes; these changes, however, are invisible to our eyes and to electronic cameras. A variety of optical contrast generation methods use additional components in the transmitted light path that translate changes in refractive index into contrast that our eyes and electronic cameras can detect (Murphy, 2001; Inoué and Spring, 1997). The contrast generation methods used most commonly by cell biologists are phase contrast and differential interference contrast. These methods have the benefit of generating contrast in transparent specimens in a noninvasive manner that allows them to be used with living cells and tissues without the need for chemical treatments or the application of stains.

The light efficiency of the image acquisition system that is of such concern to the fluorescence microscopist matters less for the various transmitted light-imaging techniques, which are usually not photon limited. In transmitted bright-field microscopy, briefly increasing the intensity of the illumination light will increase the brightness of the image without inducing specimen damage, allowing the user to meet the requirements of a less sensitive and less expensive camera. It should be noted, however, that prolonged exposure to intense light below 500nm in wavelength (i.e., blue and ultraviolet) is phototoxic to cells. Cameras optimized for fast acquisition can be used to record transmitted light images of dynamic processes, such as cell motility or changes in cell or organelle morphology, with high temporal resolution (Shotton, 1993; see article by Weiss).

III. THE IMAGE ACQUISITION SYSTEM
An image acquisition system minimally consists of a microscope and a camera, but may also include a computer, imaging software, and a variety of motorized components for automated acquisition (Salmon and Waters, 1996). Selection of the appropriate equipment for your application requires a basic knowledge of the parameters used to evaluate the equipment and how they affect performance.

A. Electronic Cameras
There are many different electronic cameras available for acquiring images from light microscopes. The basics of choosing the appropriate camera for your application will be introduced here, while subsequent articles will discuss the details of video-enhanced light microscopy (see article by Weiss) and cooled charge-coupled device (CCD) camera technology.

1. Analog Video Cameras versus Digital CCD Cameras
The majority of video and digital cameras produced today use a charge-coupled device, a two-dimensional array of on-chip single-pixel photodiodes that sense incident photons (Spring, 2000; Hiraoka et al., 1987). However, video and digital cameras differ in their final signal output: a video camera has an analog output corresponding to one of the video standards (PAL, SECAM, or NTSC), while a digital camera has digital output. The analog signal from a video camera can be viewed in real time on a conventional video monitor and is usually recorded onto analog videotape, which is a very inexpensive medium. A CCD camera is referred to as a digital camera when the analog signal is digitized prior to output from the camera or camera controller. In a 12-bit camera, for example, the analog signal is converted into gray scale values ranging from 0 (equivalent to black) to 4095 (equivalent to white), giving a total of 2¹² gray levels. The digital image information can be recorded on digital videotape in DV format or can be output to a computer's RAM or hard drive and archived onto other digital media, such as the hard drives of network servers, CD-ROMs, and DVDs.
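
As a quick check of this arithmetic, the following Python sketch converts bit depth into the number of available gray levels (the bit depths listed are simply common examples, not a statement about any particular camera).

    # Gray levels encoded by an analog-to-digital converter of a given bit depth.
    def gray_levels(bit_depth):
        """Number of discrete intensity values available at this bit depth."""
        return 2 ** bit_depth

    for bits in (8, 10, 12, 16):
        top = gray_levels(bits) - 1
        print(f"{bits}-bit camera: gray values 0 (black) to {top} (white), "
              f"{gray_levels(bits)} levels in total")
    # A 12-bit camera therefore encodes 4096 levels, 0 to 4095, as described above.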

Digital imaging has now become the norm in many research laboratories. Digital images can be viewed and inspected immediately, processed and analyzed using specialized software packages, and inserted easily into digital documents or shared via the Internet. Significant advances in camera technology, software capabilities, computer speed, and storage capacity have made digital imaging favorable and affordable for a wide range of light microscopy applications (Spring, 2001; Salmon and Waters, 1996; Mason, 1999).

2. Video-Enhanced Microscopy
Analog video microscopy has some unique advantages over digital imaging that make it preferable for select applications, particularly video-enhanced contrast microscopy, where electronic adjustment of the contrast and background "black level" is required during real-time recording (Inoué and Spring, 1997; Murphy, 2001; Shotton, 1993; see article by Weiss). Video cameras are also useful and economical for bright-field applications that require continuous monitoring over long periods of time, without worrying about the memory limitations of computer RAM and hard disk space. Video cameras have high temporal resolution, with their images being recorded onto videotape at 25 frames per second (PAL and SECAM) or 30 frames per second (NTSC). Alternatively, time-lapse VCRs can be used to slow the rate at which the video signal is sampled and recorded. However, videotape fails to capture the full resolution of the video signal. Full-resolution digital images can be recorded directly from a video camera to disk using a specialized frame-grabber board, containing an analog-to-digital converter, installed in a computer (Inoué and Spring, 1997; Shotton, 1993).

Video cameras are commonly used to record movement and changes in specimen shape, including processes such as cell motility, chromosome movement during mitosis, cytokinesis, early embryonic development, chemotaxis, and wound healing. In the early 1980s, Shinya Inoué, Robert Allen, and others demonstrated that using video camera controls to manipulate the analog signal electronically can increase the dynamic range and contrast of very low contrast images that would otherwise be invisible to the eye (Inoué and Spring, 1997). Digital image processors can be used in conjunction with video camera controls to reduce noise and further improve image contrast by real-time digital subtraction of a background image. For example, high-resolution video-enhanced DIC microscopy can be used to image the dynamics of individual microtubules in real time: individual microtubules, which at a diameter of 25nm are about 10 times below the resolution limit of the light microscope, can be visualized clearly (Inoué and Spring, 1997; see article by Weiss).

3. Digital-Cooled CCD Cameras
Slow-scan cooled CCD cameras record full-size images at a slower rate (1-10 frames per second) than video cameras, but have superior light sensitivity, low noise, large dynamic range, linear response, and high spatial resolution (Spring, 2000; Inoué and Spring, 1997; Murphy, 2001). These properties make cooled CCDs the detector of choice for the acquisition of fluorescence microscopy images (Spring, 2001; Hiraoka et al., 1987); they are used routinely to collect images of fixed specimens labeled with fluorescent antibodies and dyes. Cooled CCDs are also used to record the dynamics of live fluorescent cells, including cells loaded with calcium indicators (Mason, 1999) and cells expressing GFP-tagged proteins (Tsien, 1998), and to collect images from Nipkow spinning disk confocals. The Yokogawa CSU-10 dual Nipkow spinning disk confocal (Maddox et al., 2003; Mason, 1999) can be used to collect high-resolution confocal images of weakly fluorescent specimens with greater speed and signal-to-noise ratio than traditional laser-scanning confocals. This is probably due in part to the use of a low-noise cooled CCD as the detector instead of the photomultiplier tube used by laser-scanning confocals. There are many different cooled CCD cameras available that can be used for these types of fluorescence applications, with a wide range in price and performance. Comparison shopping for a cooled CCD camera requires a basic understanding of the properties used to describe performance (Spring, 2000).

Quantum efficiency is a measure of the percentage of incident photons that are recorded by a CCD camera (Spring, 2000; Inoué and Spring, 1997; Murphy, 2001). The higher the quantum efficiency, the more sensitive the camera will be to photons emitted from the specimen, which translates into shorter exposure times and better temporal resolution. The quantum efficiency of a CCD chip varies with the wavelength of light. Back-thinned CCD chips can have a maximum quantum efficiency as high as 90%, but are relatively delicate and expensive detectors. The more commonly used CCD chips have a maximum quantum efficiency in the range of 50-70%. Camera manufacturers provide graphs of the quantum efficiency of the CCD chips used in their cameras at different wavelengths of light. For very low-light applications, image intensifiers can be used in conjunction with the CCD chip to increase the intensity of the signal (Inoué and Spring, 1997), although they limit spatial resolution and dynamic range. Dynamic range is a measure of the range of intensities that can be detected by the CCD chip and is defined as the full well capacity of a single pixel on-chip photodiode divided by the mean camera pixel noise (Spring, 2000; Inoué and Spring, 1997; Murphy, 2001). A newer class of back-thinned CCD chip, the electron multiplying charge-coupled device (EMCCD), amplifies the signal from each pixel on the chip before readout; it offers a sensitivity that exceeds even that of avalanche photodiodes while retaining the benefits of fast CCD array readout, and promises to bring additional benefits to low-light imaging applications.
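
The dynamic range definition above can be turned into numbers directly. The following sketch assumes an illustrative full well capacity of 18,000 electrons and a camera noise of 6 electrons; these values are placeholders, not the specifications of any particular camera.

    import math

    # Dynamic range = full well capacity of one photodiode / mean camera noise.
    full_well_electrons = 18000     # assumed full well capacity (electrons)
    camera_noise_electrons = 6      # assumed mean camera noise (electrons, rms)

    dynamic_range = full_well_electrons / camera_noise_electrons
    equivalent_bits = math.log2(dynamic_range)

    print(f"dynamic range: {dynamic_range:.0f}:1, "
          f"about {equivalent_bits:.1f} bits, so a 12-bit digitizer is a sensible match")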

All CCD cameras contribute some amount of noise to the images they create (Inoué and Spring, 1997; Spring, 2000). Excessive noise can drown out the signal coming from a specimen, effectively decreasing the sensitivity and dynamic range of the camera. For fluorescence microscopy, in which the level of signal is inherently low, it is important to choose a camera that has been designed to minimize noise. The two main types of CCD camera noise are thermal noise and readout noise (Spring, 2000). Thermal noise (also known as dark noise) is reduced greatly by cooling the CCD chip below ambient temperature, making cooled CCD cameras optimal for fluorescence applications. Readout noise is generated as the signal is read from the CCD chip and is proportional to the readout speed: the faster the CCD chip is read, the greater the probability of error in the analog-to-digital converter and thus the higher the readout noise. Slow-scan cooled CCD cameras are therefore preferable for low-light level fluorescence microscopy applications (Spring, 2001), provided that acquisition speed can be sacrificed.
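
To see how these noise sources combine, the sketch below estimates a per-pixel signal-to-noise ratio by adding shot (photon) noise, thermal (dark) noise, and readout noise in quadrature. The quantum efficiency, dark current, and read noise values are assumptions chosen only to illustrate why cooling and slow readout help with dim specimens.

    import math

    def snr(photons, qe=0.6, dark_electrons_per_s=0.1, exposure_s=1.0, read_noise_e=6.0):
        """Per-pixel signal-to-noise ratio for a CCD exposure (illustrative values)."""
        signal_e = photons * qe                              # detected photoelectrons
        shot_variance = signal_e                             # Poisson (shot) noise
        dark_variance = dark_electrons_per_s * exposure_s    # thermal (dark) noise
        read_variance = read_noise_e ** 2                    # readout noise
        return signal_e / math.sqrt(shot_variance + dark_variance + read_variance)

    print(f"dim pixel (200 incident photons):     SNR ~ {snr(200):.0f}")
    print(f"bright pixel (5000 incident photons): SNR ~ {snr(5000):.0f}")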

The spatial resolution of a CCD camera is determined by the size of the photon-sensitive photodiodes that make up the CCD chip relative to the optical magnification of the microscopic image focused upon it (Shotton, 1993, 1997; Inoué and Spring, 1997; Spring, 2000). Cooled CCD cameras usually have at least 1024 x 1024 light-sensing photodiodes (i.e., pixels), which, depending on the manufacturer, range in size from 5 x 5 µm to 25 x 25 µm each. (The pixels on some CCDs, particularly those on three-color CCD chips, are oblong rather than square, but these should be avoided for scientific imaging applications because of the potential problems caused in subsequent image processing.) The optical resolution of the microscope will be adequately preserved by the camera only if each resolvable point in the magnified image is sampled by at least two photodiodes (the Nyquist criterion) (Shotton, 1993; Inoué and Spring, 1997; Shotton, 1997; Stelzer, 1998). For example, the Abbe diffraction resolution limit (Inoué, 1995; Inoué and Spring, 1997) of an objective lens with a numerical aperture of 1.4 at 510nm (the peak emission wavelength of GFP) is 0.22µm. With 60x magnification by the objective lens and no secondary magnification between this lens and the CCD camera, this distance would be projected to 13.2µm at the CCD chip. Therefore, a photodiode size of 6.5 µm or smaller would be necessary to match the resolution of the objective lens and prevent loss of specimen detail by digital undersampling of the optical image.
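
The worked example above can be written out as a short calculation; the numbers below are the ones quoted in the text (0.22 µm resolution limit, 60x magnification, no intermediate magnification).

    # Nyquist check: the resolvable distance projected onto the CCD must span
    # at least two photodiodes.
    resolution_limit_um = 0.22     # diffraction resolution limit quoted above
    magnification = 60             # objective magnification, no extra relay lens

    projected_um = resolution_limit_um * magnification    # distance at the CCD chip
    max_photodiode_um = projected_um / 2                  # Nyquist criterion

    print(f"resolvable distance at the CCD: {projected_um:.1f} um")   # 13.2 um
    print(f"maximum photodiode size: {max_photodiode_um:.1f} um")     # ~6.6 um (rounded to 6.5 in the text)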

For live cell fluorescence applications, in which greater temporal resolution is needed, some cooled CCD cameras allow the user to increase the acquisition rate by binning adjacent photodiodes together and/or reading only a subarray of the chip (Inoué and Spring, 1997; Shotton, 1997; Spring, 2000; Salmon and Waters, 1996). Binning photodiodes 2 x 2, for example, results in a fourfold increase in image intensity, allowing for shorter exposure times. Binning also decreases the number of pixels in the output image, using less computer memory and allowing for faster image transfer. The increase in image intensity and acquisition speed does come at a price: such 2 x 2 binning of adjacent pixels results in a twofold loss in spatial resolution. For many fluorescence applications, however, where the critical parameter is usually the ability to detect faint fluorescent objects rather than to resolve them spatially, the increase in image intensity and speed of acquisition is well worth the decrease in spatial resolution (Salmon and Waters, 1996). Some cameras allow the user to decrease the image transfer time by reading only a subarray, or region of interest, of the full CCD chip. When the interesting area of the specimen does not fill the entire camera field of view, this is an easy way to increase image acquisition speed and decrease the image file size.
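
The intensity versus sampling trade-off of 2 x 2 binning can be illustrated with a small NumPy sketch on a simulated 1024 x 1024 photon-count image. Note that this software sum only mimics the effect on signal and pixel count; true on-chip binning combines charge before readout, so readout noise is incurred only once per binned pixel.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.poisson(lam=50, size=(1024, 1024)).astype(float)   # simulated photon counts

    # Sum each 2 x 2 block of photodiodes into a single output pixel.
    binned = image.reshape(512, 2, 512, 2).sum(axis=(1, 3))

    print(f"mean signal per pixel: {image.mean():.0f} -> {binned.mean():.0f}")   # ~4x brighter
    print(f"image dimensions: {image.shape} -> {binned.shape}")                  # half the sampling per axis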

4. Color Cameras
Because CCD chips cannot differentiate between different wavelengths of light, color CCD cameras must use wavelength selection components within the camera itself to produce red, green, and blue images of the specimen from subarrays of pixels, which are then combined into the resulting color image (Spring, 2000). Color CCD cameras are significantly less sensitive and have lower spatial resolution than monochromatic cameras because of the additional filters and beam splitters used for wavelength selection and because of the division of the pixels to image the three primary colors. Color CCD cameras are useful for recording bright-field colored specimens and may be necessary for fluorescence imaging in the rare case that the color of light emitted by the fluorophore is diagnostic. However, for the majority of fluorescence microscopists, the best solution is a monochromatic CCD camera used in conjunction with appropriate wavelength-specific filters within the microscope and with image processing software to pseudo-color and merge the wavelength-specific monochrome images subsequent to acquisition.

B. Computers and Software for Digital Imaging

Digital cameras output the signal in a format that can be interfaced directly to the computer (IEEE-1394 "FireWire," RS-422, and SCSI interfaces are commonly used; Inoué and Spring, 1997). The necessary computer boards are usually purchased from the manufacturer of the camera or the image acquisition software. The software manufacturer may recommend purchasing the computer as well as necessary boards for image acquisition directly from them; this is usually preferable because the software manufacturer will install the boards into the computer and make sure that all of the components are compatible before they arrive in your laboratory. A computer dedicated to image acquisition will need a fast processor and a significant amount of RAM and hard drive space, as a single full frame image from a high-resolution 12-bit monochrome CCD camera can easily exceed 2MB. At least 512 MB of computer RAM will be needed for most live cell applications. A CD-ROM or DVD writer is also useful.
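
The storage figures quoted above follow from simple arithmetic. The sketch below assumes a 1024 x 1024 chip with 12-bit values stored in 16-bit (2-byte) words; both are assumptions for illustration.

    # Back-of-the-envelope image and time-lapse storage estimate.
    width = height = 1024          # assumed chip dimensions (pixels)
    bytes_per_pixel = 2            # 12-bit data are typically stored in 16-bit words

    frame_mb = width * height * bytes_per_pixel / 2**20
    print(f"one full frame: {frame_mb:.0f} MB")

    frames = 600                   # e.g., a 10-min time lapse at one frame per second
    print(f"{frames}-frame series: {frames * frame_mb:.0f} MB")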

Image acquisition software is used to manipulate camera settings such as gain, exposure time, and binning. An image acquisition and processing software package should minimally provide such useful features as pseudo-coloring, image merging/overlay, and manipulation of image brightness, contrast, and gamma for optimal display. Advanced software packages will also drive additional hardware, such as shutters and filter wheels, necessary for automated imaging (see Section III,D), and will provide tools for postacquisition image analysis. For a live cell image acquisition system, a software package that allows full customization will provide the most flexibility in designing experiments. For example, the software package MetaMorph (Universal Imaging Corp., a subsidiary of Molecular Devices) allows customized control of the parameters, sequence, and timing of image acquisition via "journals," sequences of instructions to the software that are easily generated by recording selections from the program menus (Salmon and Waters, 1996). When choosing a software package, it is important to consider the long-term imaging and analysis goals of the user group. A software package that can meet the growing needs of the users may cost more up front, but will be well worth the investment in the long run. Finally, do not underestimate the long-term benefits of a digital asset management system for organizing all of the image data within a laboratory; the freely available Open Microscopy Environment (www.openmicroscopy.org), for example, permits descriptive metadata and image analysis results to be recorded in a database together with raw and processed image files (Swedlow et al., 2003).

C. Microscope and Optics
To prevent vibration from degrading images, a stable, high-quality microscope stand is needed, preferably mounted on a vibration isolation table. Microscope optics should be kept clean, and every effort should be made to keep the environment dust free (for instructions on how to clean optics, see Inoué and Spring, 1997). The microscope illumination pathway should be aligned using the principles of Koehler illumination (Murphy, 2001; Inoué and Spring, 1997; http://www.microscopyu.com). All of the modern advances in image processing cannot compensate for dirty, misaligned optics.

Electronic cameras can be mounted on either an upright or an inverted microscope stand. An inverted microscope is usually preferable for live cell work, as the design allows easy access to the specimen during image acquisition for microinjection or perfusion and accommodates most heated incubation chambers. Inverted stands also have the added benefit of being particularly stable and resistant to focus drift.

Electronic cameras are usually mounted onto light microscopes via a camera port. An additional adapter, available from the microscope manufacturer, is used to couple the camera to the port. Research grade light microscopes can be equipped with multiple camera ports for attaching more than one camera. This is particularly useful on microscopes that are used for different modes of light microscopy, as cameras optimally suited for each mode can be mounted on the microscope simultaneously. For example, a microscope used both for bright-field imaging of histology-stained slides and for fluorescence microscopy would be best outfitted with a color CCD camera as well as a cooled CCD camera. While cameras can be taken on and off of microscopes with relative ease, it is preferable to leave cameras mounted on the microscope to prevent dust from entering the body of the microscope and from adhering to the camera faceplate. Microscopes with camera ports come with a set of mirrors and/or prisms that are used to reflect image-forming light to the camera that would otherwise go to the eyepieces. Any light sent to the eyepieces during image acquisition comes at the expense of light sent to the camera. Therefore, it is preferable to use a 100% reflecting mirror to send all of the light to the camera when acquiring photon-limited fluorescence microscopy images.

Provided that the optical image is adequately sampled by the camera, the spatial resolution of an image acquisition system is determined primarily by the microscope optics (Inoué and Spring, 1997; Shotton, 1997; Murphy, 2001). For transmitted light microscopy techniques, the spatial resolution limit is defined as the minimum distance by which two objects must be separated in order for them to be distinguished as separate and, according to Rayleigh's criterion, is equal to 1.22λ divided by the sum of the condenser and objective numerical apertures (NA), where λ is the wavelength of the light source (Inoué, 1995). For fluorescence microscopy, the optical resolution limit is defined as the radius of the image of an infinitely small point source of light and is equal to 0.61λ divided by the NA of the objective lens, where λ is the fluorescence emission wavelength (Inoué, 1995). For two-photon confocal microscopy, however, the resolution is limited by the longer infrared excitation wavelength. To ensure that spatial resolution is limited by the optics and not by the pixel size of the CCD chip, the minimum total magnification used from specimen to image plane should be equal to twice the photodiode width divided by the optical resolution limit (Shotton, 1997; Spring, 2000, 2001), as explained earlier. For low-light (and therefore low-contrast) specimens, it is ideal to oversample the image by using a magnification equal to three times the photodiode width divided by the optical resolution limit (Maddox et al., 2003; Stelzer, 1998), although, as explained earlier, it may alternatively be necessary to sacrifice spatial resolution in order to capture enough photons per pixel to render the objects under study visible at all. Microscopes can be equipped with relay lenses that increase the magnification of the image to the camera in order to meet these requirements.
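
The two resolution formulas and the minimum-magnification rule translate directly into a short calculation. The 6.45 µm photodiode width used below is simply an assumed, commonly encountered pixel size, not a recommendation for any particular camera.

    def rayleigh_fluorescence(wavelength_um, na_objective):
        """Fluorescence resolution limit: 0.61 * lambda / NA of the objective."""
        return 0.61 * wavelength_um / na_objective

    def rayleigh_transmitted(wavelength_um, na_condenser, na_objective):
        """Transmitted light resolution limit: 1.22 * lambda / (condenser NA + objective NA)."""
        return 1.22 * wavelength_um / (na_condenser + na_objective)

    def minimum_magnification(photodiode_um, resolution_um, samples_per_unit=2):
        """Total magnification so one resolvable distance covers >= samples_per_unit photodiodes."""
        return samples_per_unit * photodiode_um / resolution_um

    r = rayleigh_fluorescence(0.510, 1.4)          # GFP emission, 1.4 NA objective: ~0.22 um
    print(f"fluorescence resolution limit: {r:.2f} um")
    print(f"transmitted light limit (550 nm, 1.4 NA condenser and objective): "
          f"{rayleigh_transmitted(0.550, 1.4, 1.4):.2f} um")
    print(f"Nyquist magnification for 6.45 um photodiodes: {minimum_magnification(6.45, r):.0f}x")
    print(f"3x oversampling for low-light specimens:       {minimum_magnification(6.45, r, 3):.0f}x")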

For some applications, spatial resolution is less important than capturing a large field of view. The rectangular CCD chip in a camera does not capture the entire round field of view seen through the microscope eyepiece: usually only 50-70% of the field of view is collected by the camera. If a larger field of view is required, magnification-reducing lenses can be placed in front of the camera. A 0.6x relay lens will usually come close to matching the camera field of view with the eyepiece field of view, while sacrificing image resolution. Alternatively, to maintain high resolution, multiple images of adjacent fields of view can be collected at high magnification, ideally using a scanning specimen stage on the microscope, and then "stitched" together into a larger image using image processing software.

For fluorescence microscopy in particular, it is important that the microscope is optimized for maximum collection and transmission of light. The single most important parameter in determining the amount of image-forming light that reaches the camera is the numerical aperture of the objective lens (Murphy, 2001; Abramowitz et al., 2002). The brightness of an image formed by an objective lens can be defined by B = 10⁴NA⁴/M², where NA and M refer to the numerical aperture and the magnification of the objective lens, respectively (Abramowitz et al., 2002). The numerical aperture, which is usually marked on the barrel of an objective lens just to the right of the magnification, is a measure of the half angle of the cone of light accepted by the objective lens times the refractive index of the medium between the lens and the specimen. The higher the numerical aperture, the more light the lens is capable of collecting. Conversely, the brightness of an image is inversely proportional to the square of the magnification. Therefore, a 60x/1.4 NA objective lens will create a brighter image than a 100x/1.4 NA objective, while having the same optical resolution. The actual light transmission of a lens is also affected by the extent to which the lens is corrected for aberrations: a highly corrected plan apochromat objective with many internal lens elements transmits less light than a simpler fluorite lens (Abramowitz et al., 2002). Phase-contrast objective lenses are not optimal for low-light fluorescence imaging, as these objectives have a light-attenuating phase annulus in the back focal plane that decreases the intensity of the signal by 10-15%.
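
Applying the brightness relation B = 10⁴NA⁴/M² to the two objectives just mentioned shows how strongly magnification costs light:

    # Relative image brightness, B = 10^4 * NA^4 / M^2 (Abramowitz et al., 2002).
    def brightness(na, magnification):
        return 1e4 * na**4 / magnification**2

    b_60x = brightness(1.4, 60)
    b_100x = brightness(1.4, 100)

    print(f"60x/1.4 NA:  B = {b_60x:.1f}")
    print(f"100x/1.4 NA: B = {b_100x:.1f}")
    print(f"the 60x image is about {b_60x / b_100x:.1f} times brighter")   # (100/60)^2 ~ 2.8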

To perform fluorescence microscopy effectively, the optimal fluorophore, filters, and illuminator should be chosen for a given application (Kinoshita, 2002; Reichman, 2000). It is also important to minimize the number of filters and prisms in the light path that absorb the excitation or emission light and decrease the intensity of the signal. For example, on a microscope with both fluorescence and DIC optics, the polarization analyzer (Murphy, 2001) is usually situated in the shared light path just behind the objective lens and should be removed when using the microscope for fluorescence to prevent attenuation of the signal.

D. Automated Live Cell Image Acquisition
Motorized microscope components allow the user to automate image acquisition and are particularly useful for live cell time-lapse studies (Salmon and Waters, 1996; Mason, 1999). Components such as electronic shutters, filter wheels, motorized stages, and focus motors can be attached to a research grade microscope and controlled through a computer using image acquisition software. Electronic shutters, which are used to block the light source from illuminating the specimen between camera exposures, are particularly important for fluorescence specimens as a means of minimizing photobleaching and phototoxicity, thereby increasing image quality and cell viability over long periods of time-lapse recording. For live cell studies in which the localization of more than one fluorophore is to be recorded over time, automated switching between fluorescence filter sets is necessary. Microscopes are now available that come equipped with a motorized fluorescence filter set slider or turret; however, these tend to be too slow for live cell applications in which the time between acquisition of the different fluorophores needs to be minimized. A faster solution is the addition of motorized filter wheels to the fluorescence light path (Salmon and Waters, 1996; Mason, 1999). In this setup, the standard fluorescence filter set (Herman, 1998) is replaced with a multiple band-pass dichroic mirror (Reichman, 2000). Excitation illumination is then controlled with a motorized wheel filled with single band-pass excitation filters placed in front of the light source. Either a single multiple band-pass emission filter or a set of single band-pass emission filters in a second motorized wheel (placed just before the camera) is then used to select emission wavelengths. Filter wheels are available that can move between adjacent positions in 30-50ms, allowing rapid switching between fluorophores.
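
The acquisition cycle described above can be sketched as follows. This is a hypothetical outline only: the device classes are stand-ins for whatever objects your acquisition software actually exposes, and the channel names, filter positions, and exposure times are made-up examples.

    class StubDevice:
        """Placeholder that merely reports what a real device driver would be asked to do."""
        def __init__(self, name):
            self.name = name
        def move_to(self, position):
            print(f"{self.name} -> position {position}")
        def open(self):
            print(f"{self.name} open")
        def close(self):
            print(f"{self.name} closed")
        def snap(self, exposure_s):
            print(f"{self.name} exposing for {exposure_s:.2f} s")
            return None   # a real camera would return an image array

    CHANNELS = [   # hypothetical example channels
        {"name": "GFP",     "ex_pos": 1, "em_pos": 1, "exposure_s": 0.2},
        {"name": "mCherry", "ex_pos": 2, "em_pos": 2, "exposure_s": 0.3},
    ]

    def acquire_timepoint(ex_wheel, em_wheel, shutter, camera):
        """Acquire one image per channel, opening the shutter only during exposures."""
        images = {}
        for ch in CHANNELS:
            ex_wheel.move_to(ch["ex_pos"])     # select the excitation band (30-50 ms move)
            em_wheel.move_to(ch["em_pos"])     # select the matching emission band
            shutter.open()                     # illuminate only while the camera integrates
            images[ch["name"]] = camera.snap(ch["exposure_s"])
            shutter.close()                    # minimize photobleaching and phototoxicity
        return images

    devices = (StubDevice("excitation wheel"), StubDevice("emission wheel"),
               StubDevice("shutter"), StubDevice("camera"))
    acquire_timepoint(*devices)                # one time point; repeat on a timer for a time lapse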

Focus motors attach to the fine focus mechanism of the microscope and allow automated focus control through image acquisition software. Focus motors can be used in conjunction with autofocus functions in the software or to collect a stack of optical sections for subsequent deconvolution and/or 3D reconstruction. Motorized stages can be used with image acquisition software to automate movement of the stage between more than one field of view or between wells in a culture dish. An image acquisition system equipped with a focus motor, a motorized stage, and advanced image acquisition software can be used for "4D" (x, y, z and time), "5D" (x, y, z, time, and wavelength), and "6D" (multiple x, y areas, z, time, and wavelength) live cell imaging.

It is often useful to collect both fluorescence and bright-field transmitted light images of the same specimen (Salmon and Waters, 1996). For many experiments, it is sufficient to acquire a phase or DIC image at the beginning and end of the fluorescence time lapse. When automated switching between fluorescence and bright field is required, both illumination pathways must have an electronic shutter in front of the light source. For DIC and fluorescence microscopy, a motorized polarization analyzer is also desirable so that this light-attenuating filter can be automatically removed from the light path while the fluorescence image is acquired. This DIC analyzer can alternatively be placed into a motorized wheel in front of the camera, alongside fluorescence emission filters.

IV. PREPARING YOUR SPECIMEN FOR IMAGING
An important part of generating excellent images of fluorescence specimens is learning to evaluate critically the quality of the fluorescence signal relative to the background fluorescence. It is difficult, and in many cases impossible, to use subsequent image processing to remove background caused by nonspecific fluorescence staining from the collected image. Fluorophores with high quantum efficiency and low photobleaching should be chosen whenever possible (Herman, 1998). It is well worth the time and effort to optimize your labeling protocol to maximize signal while minimizing background fluorescence (Harper, 2001).

For high-resolution fluorescence work it is also important that specimens are mounted properly for viewing. The majority of high numerical aperture objectives are marked "0.17" on the objective barrel, indicating that they are designed to image through a 0.17-mm-thick coverslip (Murphy, 2001; Abramowitz et al., 2002). This means that there should be nothing between the specimen and the front of the objective lens except a 0.17-mm-thick glass coverslip and the appropriate immersion medium (usually immersion oil). It also means that the specimen to be imaged must be located adjacent to the coverslip, not at a great distance into its aqueous support medium. If the latter situation is unavoidable, a suitable water-immersion objective designed for confocal imaging deep within the aqueous specimen should be used. High-resolution light microscopy cannot be performed through a plastic petri dish or 16-well plate. Live cells should instead be grown in petri dishes or welled chambers that have a glass coverslip cemented into the bottom. There are also commercially available viewing chambers for live cell work that are designed to hold a coverslip and a volume of media. Similar chambers can be built with the help of a clever machine shop (for one such design, see Rieder and Cole, 1998). Fixed fluorescence specimens should be mounted in a glycerol-based medium containing an antiphotobleaching reagent. There are many commercially available mounting media, but it is also easy and inexpensive to make your own (one reliable recipe is a 1:1 mixture of double-strength phosphate-buffered saline and glycerol containing 0.5% n-propyl gallate). For live cell work, reducing the concentration of oxygen in the medium will also help prevent photobleaching. The product Oxyrase (http://www.oxyrase.com) works very well when used with a closed viewing chamber. However, it should be realized that lack of oxygen may compromise the long-term survival of the cells.

Focus drift while filming live cells can be disruptive and frustrating. A small amount of focus drift (1-2µm per hour) is usual and must be tolerated, but steps can be taken to minimize the amount of focus drift in your image acquisition system. It is important to be sure that coverslips are mounted securely to the slide or viewing chamber and that the slide or viewing chamber itself is clamped tightly onto the microscope stage. With oil objectives, some amount of focus drift will occur within the first few minutes of viewing as the oil settles across the objective lens. It is thus often preferable to wait several minutes after placing a specimen on the microscope before beginning filming. Heated stage chambers can also lead to focus drift by causing fluctuations in temperature that make microscope components expand. Heated incubation chambers that totally enclose and heat the entire body of the microscope as well as the specimen are often the best way to maintain temperature and focus during image acquisition, especially during long-term experiments. Use of a fixed, permanently mounted specimen and monitoring focus over time will help determine whether focus drift is coming from your equipment or your specimen.

References
Abramowitz, M., Spring, K. R., Keller, H. E., and Davidson, M. W. (2002). Basic principles of microscope objectives. BioTechniques 33, 772-781.

Berland, K. (2001). Basics of fluorescence. In "Methods in Cellular Imaging" (A. Periasamy, ed.). Oxford Univ. Press, New York.

Harper, I. (2001). Fluorophores and their labeling procedures for monitoring various biological signals. In "Methods in Cellular Imaging" (A. Periasamy, ed.). Oxford Univ. Press, New York.

Herman, B. (1998). "Fluorescence Microscopy," 2nd Ed. Springer-Verlag, Heidelberg.

Hiraoka, Y., Sedat, J. W., and Agard, D. A. (1987). The use of a charge-coupled device for quantitative optical microscopy of biological structures. Science 238, 36-41.

Inoué, S. (1995). Foundations of confocal scanned imaging in light microscopy. In "Handbook of Biological Confocal Microscopy" (J. Pawley, ed.), 2nd Ed. Plenum Press, New York.

Inoué, S., and Spring, K. R. (1997). "Video Microscopy," 2nd Ed. Plenum Press, New York.

Kinoshita, R. (2002). Optimize your system with the right filter set. Biophoton. Int. 9(9), 46-50.

Maddox, P. S., Moree, B., Canman, J. C., and Salmon, E. D. (2003). A spinning disk confocal microscope system for rapid high resolution, multimode, fluorescence speckle microscopy and GFP imaging in living cells. Methods Enzymol. 360.

Mason, W. T., Dempster, J., Hoyland, J., McCann, T. J., Somasundaram, B., and O'Brien, W. (1999). Quantitative digital imaging of biological activity in living cells with ion-sensitive fluorescent probes. In "Fluorescent and Luminescent Probes" (W. T. Mason, ed.), 2nd Ed. Academic Press, London.

Murphy, D. B. (2001). "Fundamentals of Light Microscopy and Digital Imaging." Wiley-Liss, New York.

Reichman, J. (2000). "Handbook of Optical Filters for Fluorescence Microscopy." Download from http://www.chroma.com.

Rieder, C. L., and Cole, R. (1998). Perfusion chambers for high-resolution video light microscopic studies of vertebrate cell monolayers: Some considerations and a design. Methods Cell Biol. 56, 253-275.

Salmon, E. D., and Waters, J. C. (1996). A high resolution multimode digital imaging system for fluorescence studies of mitosis. In "Analytical Use of Fluorescent Probes in Oncology" (Kohen and Hirschberg, eds.), pp. 349-356. Plenum Press, New York.

Shotton, D. M. (1993). An introduction to the electronic acquisition of light microscope images. In "Electronic Light Microscopy" (D. M. Shotton, ed.), pp. 2-35. Wiley-Liss, New York.

Shotton, D. M. (1997). Image resolution and digital image processing in electronic light microscopy. In "Cell Biology: A Laboratory Handbook" (J. E. Celis, ed.), 2nd Ed., Vol. 3, pp. 85-98. Academic Press, San Diego.

Spring, K. (2000). Scientific imaging with digital cameras. BioTechniques 29, 70-76.

Spring, K. (2001). Detectors for fluorescence microscopy. In "Methods in Cellular Imaging" (A. Periasamy, ed.). Oxford Univ. Press, New York.

Stelzer, E. H. K. (1998). Contrast, resolution, pixilation, dynamic range, and signal-to-noise ratio. J. Microsc. 189, 15-24.

Swedlow, J. R., Goldberg, I., Brauner, E., and Sorger, P. K. (2003). Informatics and quantitative analysis in biological imaging. Science 300, 100-102.

Tsien, R. Y. (1998). The green fluorescent protein. Annu. Rev. Biochem. 67, 509-544.
