Cinematography

A primer on digital cinematography

Cinematography begins with light.

Figure 1.1 The spectrum of visible color.

Cinematography is the art of manipulating, capturing, and recording motion pictures on a medium such as film or, in the case of digital cinematography, on an image sensor such as a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) chip.

Figure 1.2 Charlie Chaplin at his hand-cranked Bell and Howell model 2709 camera.

In order to understand how digital photography works, it is important to understand how visible light is separated by narrow-band color filters into the three primary colors (red, green, and blue) that we use to reproduce images.

Figure 1.3 Visible light divided into red, green, and blue (RGB) components.

In a modern 3-chip CCD camera, light of various color wavelengths is directed by a system of filters and prisms to three individual 1920 × 1080-photosite monochrome sensors: one red-only, one green-only, and one blue-only.

Figure 1.4 How an image is separated into red, green and blue components. A blue sensor (A), a red sensor (B), and a green sensor (C) collect light directed to them by dichroic prism surfaces F1 and F2.

Each of these three sensors collects photons from all its photosites to create three individual pictures of the scene: red only, green only, and blue only.

Three-chip cameras are very efficient collectors of light from the scene they record. Relatively speaking, not much of the light from the scene is wasted, because the photosites are said to be co-sited; they have the ability to sample light in all three color wavelengths from the same apparent place by using a semitransparent dichroic prism system.
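Because the three sensors' photosites are co-sited, building a full-color image is conceptually just a matter of stacking the three monochrome planes. The following Python sketch illustrates the idea; the tiny 2 × 2 planes and their 8-bit sample values are made up for illustration, not real sensor data:

```python
def merge_planes(red, green, blue):
    """Combine three same-sized monochrome planes into an RGB image.

    Because the photosites are co-sited, each (row, col) position
    already has a red, a green, and a blue sample -- no interpolation
    between neighboring sites is needed.
    """
    rows, cols = len(red), len(red[0])
    return [[(red[r][c], green[r][c], blue[r][c])
             for c in range(cols)]
            for r in range(rows)]

# A tiny hypothetical 2x2 frame from each of the three sensors:
R = [[255, 0], [10, 20]]
G = [[0, 255], [30, 40]]
B = [[0, 0], [50, 60]]

rgb = merge_planes(R, G, B)
print(rgb[0][0])  # (255, 0, 0) -- a pure red pixel
```

A real camera works on 1920 × 1080 planes per frame, but the combination step is the same: one sample from each sensor at the same coordinates yields one full-color pixel.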

Every photosite functions like a microscopic light meter: more photons entering and collecting in a light well generate a higher voltage out, whereas fewer photons generate a lower voltage out.

Figure 1.5 Photosites are photon-collection buckets that turn light into voltages.

Thousands of these photon-collecting buckets work like microscopic light meters, giving light readings on a pixel-for-pixel, frame-for-frame basis.

Figure 1.6 Photosites turn light into voltages.

On a frame-by-frame basis, each of the three color sensors generates and measures the individual voltage from each discrete photosite, commensurate with the number of photons that arrived at that photosite. Those voltages are sampled at a very high frequency and converted to digital code values in an analog-to-digital (A-to-D) sampling processor.

Figure 1.7 The process of analog-to-digital conversion.
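The voltage-to-code-value step can be sketched as a simple quantizer. The 10-bit depth and the 0-to-1 volt range below are illustrative assumptions, not the parameters of any particular camera:

```python
def quantize(voltage, v_max=1.0, bits=10):
    """Map an analog photosite voltage onto a digital code value.

    Clamps the voltage to the sensor's usable range, then scales it
    onto the available integer codes (0-1023 for a 10-bit sample).
    """
    levels = (1 << bits) - 1             # e.g. 1023 for 10-bit
    v = min(max(voltage, 0.0), v_max)    # clamp out-of-range voltages
    return round(v / v_max * levels)

print(quantize(0.0))  # 0    -- no photons collected
print(quantize(1.0))  # 1023 -- photosite at full capacity
print(quantize(1.2))  # 1023 -- clipped: the A-to-D can't go higher
```

The clipping at the top of the range is one reason overexposed highlights cannot be recovered after capture: once a photosite's voltage exceeds the converter's maximum, every brighter value maps to the same code.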

In single-chip (monoplanar) sensor cameras, light is directed to a grid of adjacent individual photosites that are optically filtered by microscopic red, green, and blue (RGB) filters at each site. Each photosite captures light from only one of the primary colors while rejecting light from the other two primary colors.

Figure 1.8 Bayer pattern color filter array.

Much of the light (and, therefore, color information) arriving at such a sensor is discarded, rejected by the color filtration scheme, and RGB pixels must be created by combining samples from adjacent, non-co-sited photosites.

Figure 1.9 Filtered red, green, and blue light landing on non-co-sited photosites.

Those photosites are arranged in one of numerous possible patterns according to the dictates of the hardware manufacturer (see Figure 1.10). The light falling on such an array of photosites is largely wasted. A green photosite can only collect the green light that falls on it; red and blue light are rejected. A red photosite can only collect the red light that falls on it; green and blue are rejected. A blue photosite can only collect the blue light that falls on it, rejecting red and green light. The inefficiency of such systems can be fairly easily intuited.

Figure 1.10 A variety of color filter patterns.
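A common layout like this can be generated from photosite coordinates alone, which makes the "one primary per site" limitation easy to see. A minimal sketch, assuming an RGGB Bayer arrangement (other manufacturers' patterns differ):

```python
def bayer_color(row, col):
    """Which primary a photosite samples under an RGGB Bayer pattern.

    Even rows alternate red/green; odd rows alternate green/blue,
    so half the sites are green and a quarter each are red and blue.
    """
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the top-left 4x4 corner of the filter array:
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```

Note that every site keeps only one of the three primaries; the other two thirds of the color information at that location must later be estimated from neighboring sites.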

What Are Pixels?

The word pixel is a contraction of pix (“picture”) and el (for “element”).

A pixel is the smallest addressable full-color (RGB) element in a digital imaging device. The address of a pixel corresponds to its physical coordinates on a sensor or screen.

Pixels are full-color samples of an original image. More pixels provide a more accurate representation of the original image. The color and tonal intensity of a pixel are variable. In digital motion picture cinematography systems, a color is typically represented by three component intensities: red, green, and blue.

Figure 1.11 Only pixels contain RGB (red, green, and blue) information.
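The idea that a pixel's address corresponds to its physical coordinates can be shown with a flat buffer of RGB triples. The 1920 × 1080 resolution matches the sensors discussed above; the framebuffer layout and sample color are illustrative assumptions:

```python
WIDTH, HEIGHT = 1920, 1080

def pixel_index(x, y, width=WIDTH):
    """Flat buffer index for the pixel at coordinates (x, y).

    Row-major addressing: pixels are stored one scanline after
    another, so (x, y) maps to y * width + x.
    """
    return y * width + x

# Each pixel is one full-color (R, G, B) triple of component intensities:
framebuffer = [(0, 0, 0)] * (WIDTH * HEIGHT)
framebuffer[pixel_index(0, 0)] = (255, 128, 0)  # set the top-left pixel

print(pixel_index(0, 0))        # 0       -- first pixel of the frame
print(pixel_index(1919, 1079))  # 2073599 -- last pixel of the frame
```

A full HD frame therefore holds 2,073,600 such RGB samples, and each one has a unique, addressable position.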

Photosites Are Not Pixels!

This is one of the most important distinctions we can make when talking about digital cinema cameras!

Figure 1.12 Red, green, and blue photosites combine to create full-color RGB pixels.

Photosites (or sensels, as they are referred to in camera sensor design) can only carry information about one color. A photosite can only be red or green or blue.

Photosites must be combined to make pixels. Pixels carry tricolor RGB information.
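As the crudest possible illustration of that combination, consider collapsing one RGGB photosite quad into a single pixel by averaging its two green samples. This is a deliberately simplified sketch; real cameras use far more sophisticated demosaicing algorithms that interpolate across many neighboring sites:

```python
def quad_to_pixel(r, g1, g2, b):
    """Combine one RGGB photosite quad into a single RGB pixel.

    Hypothetical simplification for illustration: the red and blue
    samples are used as-is, and the two green samples are averaged.
    """
    return (r, (g1 + g2) // 2, b)

# Four single-color photosite readings become one tricolor pixel:
print(quad_to_pixel(200, 100, 120, 50))  # (200, 110, 50)
```

The key point survives even in this toy version: no single photosite ever holds a color; only the combined result is an RGB pixel.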

Excerpt from Digital Cinematography: Fundamentals, Tools, Techniques, and Workflows by David Stump © 2014 Taylor and Francis Group. All Rights Reserved.


