RETURNING PICTURES FROM SPACE


Objective: This activity introduces the basic ideas of digital pictures. Concepts include math (multiplication and "powers"), computers (1-bit, 2-bit, etc. data), and geology.

Except for some journeys to the Moon, all spacecraft sent to explore the solar system have been one-way trips. This means that all of the information collected, including pictures, has to be sent back to Earth electronically. At first, this would seem to be very easy; after all, television stations beam billions of pictures into homes every day. However, the energy required for a TV broadcast is very high, and the signal gets very weak after a few tens of kilometers. Not only is the distance in space much greater, but the energy available on a spacecraft is tiny--the total is less than what the light bulb in your reading lamp uses. To solve this problem, cameras on spacecraft convert pictures into digital signals that can be sent as a series of ones and zeros using very little electrical power. This exercise will show you how these digital images ("pictures by numbers") work.
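If you are curious how a picture becomes "a series of ones and zeros," here is a minimal sketch in Python (the brightness numbers are made up for illustration, not real spacecraft data):

    # Sketch: turn pixel brightness values (0 = dark, 255 = bright) into a
    # stream of ones and zeros ready to transmit. Values are invented.
    pixel_values = [0, 17, 255, 128]

    bit_stream = "".join(format(value, "08b") for value in pixel_values)
    print(bit_stream)  # 00000000000100011111111110000000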

Because film is not returned from spacecraft, we must use some other means to record a picture. Most spacecraft today use cameras somewhat similar to home-use video cameras, based on charge-coupled devices, or CCDs. CCDs are electronic "chips" divided into a grid (see figure 1). Each cell in the grid is called a picture element, or pixel. To keep track of the pixels, the grid is divided into lines and samples. The area to be "pictured" is focused by a lens onto the CCD, which is sensitive to light--the more light, the higher the electrical charge. The electrical charge of each pixel (identified by line and sample) is measured and recorded in computer memory, then sent back to Earth.
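As a rough sketch of how the computer might keep track of those measurements by line and sample (the brightness numbers below are invented, not real CCD readings):

    # Sketch: a tiny "CCD readout" stored as a grid of brightness values,
    # indexed by line (row) and sample (column). All numbers are invented.
    image = [
        [ 12,  40, 200],   # line 0
        [  8, 180, 220],   # line 1
        [  5,  30, 250],   # line 2
    ]

    line, sample = 1, 2
    print("Brightness at line", line, "sample", sample, "=", image[line][sample])  # 220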


Figure 2 shows an image returned from the Galileo spacecraft's CCD camera. The image is of Europa, one of Jupiter's moons. The area in the box is enlarged to show the pixels of different brightness levels (shades of gray). For simplicity, let us say that only two possible brightness values for each pixel can be returned to Earth, one value for bright pixels and the other value for dark pixels. In computer language, this would be called 1-bit data, meaning 2¹, which equals 2 possible values (bright or dark). The computer attached to the camera keeps track of the value for each pixel by line and sample. Figure 3 shows a line and sample grid and the value for each pixel (blank square = white, or bright; b = black, or dark). Take a pencil and fill in each pixel labeled with a b. What do you see as the result? What type of landform might this be?

_________________________________________________________________. Although this is a crude picture, you should get the idea!
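If you would like a computer to do the filling in for you, here is a minimal Python sketch of the same 1-bit idea (the letter grid is made up for illustration, not the actual Figure 3 data):

    # Sketch: "print" a 1-bit picture from a letter grid.
    # 'b' = black (dark pixel), ' ' = white (bright pixel). Grid is invented.
    grid = [
        "   bb   ",
        "  bbbb  ",
        " bbbbbb ",
        "bbbbbbbb",
    ]

    for line in grid:
        print("".join("#" if cell == "b" else "." for cell in line))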

Now, how could we improve the result shown in Figure 3? Instead of having only black or white pixels, how about having some shades of gray? Remember that we still have to send the information on each pixel back to Earth electronically in computer form. Instead of 1-bit coding (2¹ = 2 values), we will use 2-bit coding, or 2² (2 x 2 = 4 values). Now we can have white, black, and 2 shades of gray. Figure 4 shows a grid "map" on which b = black, g = dark gray, l = light gray, and white is left blank. Use a pencil and shade in these tones. Now what do you see?



Do you see anything that you were unable to detect in the first picture?
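The same shading idea extends to 2-bit data. Here is a small Python sketch that uses four symbols for the four brightness levels (the grid is invented for illustration, not the actual Figure 4 map):

    # Sketch: "print" a 2-bit picture with four brightness levels.
    # b = black, g = dark gray, l = light gray, ' ' = white. Grid is invented.
    shades = {"b": "#", "g": "+", "l": "-", " ": "."}
    grid = [
        "  ll  ",
        " lggl ",
        "lgbbgl",
        " lggl ",
    ]

    for line in grid:
        print("".join(shades[cell] for cell in line))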



We could continue improving the picture by increasing the number of gray levels with 3-bit data (2³, or 2 x 2 x 2 = 8 values), 4-bit data (2⁴, or 2 x 2 x 2 x 2 = 16 values), etc. In fact, most of the pictures you have seen of Mars used 8-bit encoding (2⁸). How many shades of gray does this represent?______________________ Get the idea? The human eye, however, can separate fewer than 2 dozen shades of gray. This means that 8-bit pictures contain much more information than you could detect if you were looking directly at the area. Computer processing of the pictures enables all the data to be used for analysis.
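If you want to check the arithmetic for any bit depth, a short Python sketch does it:

    # Sketch: the number of brightness levels for n-bit encoding is 2 ** n.
    for bits in (1, 2, 3, 4, 8):
        print(bits, "bits ->", 2 ** bits, "gray levels")
    # The last line printed is: 8 bits -> 256 gray levels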

Can you think of some other ways to improve the picture shown in Figure 4? The grid is rather coarse, meaning that there are not many pixels. Figure 5 shows the same scene but with more lines and samples of pixels. The encoding is still the same (2², or 4 levels of brightness). Shade the pixels using the same method as in Figure 4. Now what are you able to see in the picture?



This improvement refers to resolution, meaning the size of the area shown by one pixel. Resolution depends on the size and number of pixels on the CCD chip, the camera lens (telephoto, normal, or wide-angle), and the distance to the scene.
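As a rough sketch of what resolution means in numbers (the figures below are invented, not real camera or mission values), you can estimate the area covered by one pixel from the width of the scene and the number of samples across one line:

    # Sketch: size of the ground area covered by one pixel. Numbers are invented.
    scene_width_km = 100.0   # width of the area being imaged
    samples_across = 800     # number of pixels across one line of the CCD

    km_per_pixel = scene_width_km / samples_across
    print("Each pixel covers about", km_per_pixel, "km")  # 0.125 km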

Now when you look at a picture from space, see if you can recognize the pixels and remember how the image was returned to Earth!