Calibration Image


Pictures are important in science because people are naturally very good at interpreting images. When we observe the Sun, for example, a picture immediately tells us a lot about what is happening and whether our instrument is making sensible measurements. Images are particularly valuable for finding features, looking for changes, and for checking large volumes of data.

A picture is truly worth many megabytes.

But scientific measurements are more than just pretty pictures. Measurements have to be quantitative. We have to be able to tell that today a sunspot is causing a decrease in the Sun's brightness by 50 parts per million while yesterday it was only 46 ppm, or we need to know that today the Sun's equator is rotating at 2045 meters per second and last year it was rotating 2.3 m/s faster. Converting the values we get from an instrument into meaningful numbers in standard units is called calibration.

For example, I keep my alarm clock at home set to the correct time, but my spouse keeps hers 5 minutes fast. When I look at her alarm I have to calibrate the time I read by subtracting 5 minutes or I'll wind up leaving the house 5 minutes early in the morning.

When we make scientific observations we make the same sorts of adjustments, though the errors are not usually voluntary. Think about an image of the Sun that falls on a CCD camera with a million pixels. We might know, for example, that when the camera says pixel 495,241 is 50% as bright today as it was yesterday, the real brightness of the Sun is only 49.5% of yesterday's value, because that pixel's response is not quite correct; it is a little bit non-linear. Or if we compare pixel 951 with pixel 952, they may give different readings for the same solar brightness - the sensitivity is different for each pixel. These kinds of detector errors are fairly easy to measure and correct for - we call that kind of correction flat-fielding. Most people find pictures of the flat field kind of boring.
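The idea behind flat-fielding can be sketched in a few lines. This is a toy example with made-up numbers, not actual MDI data: we simulate a camera whose pixels each have a slightly different sensitivity, then divide by a normalized "flat field" image to remove the pixel-to-pixel variation.

```python
import numpy as np

# Toy flat-fielding sketch (hypothetical numbers, not MDI data).
rng = np.random.default_rng(0)

true_scene = np.full((4, 4), 1000.0)         # a perfectly uniform light source
sensitivity = rng.uniform(0.9, 1.1, (4, 4))  # per-pixel gain errors
raw = true_scene * sensitivity               # what the camera actually records

flat = raw / raw.mean()                      # normalized flat-field image
corrected = raw / flat                       # flat-fielded measurement

# After dividing by the flat, every pixel reports the same brightness again.
print(corrected.std() / corrected.mean())    # fractional scatter, essentially 0
```

In practice the flat field is measured by pointing the camera at a source known to be uniform; here we cheat and build it from the raw frame itself, which is enough to show the arithmetic.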

You can probably imagine all sorts of problems that could introduce errors in a measurement of the Sun's brightness. Some of the errors are due to the instrument: maybe your front lens is getting dirty or maybe there's a reflection of light inside the instrument. The biggest problem in the MDI instrument is that the wavelength filter isn't quite the same everywhere. That non-uniformity means that we're more sensitive to light from certain parts of the Sun - that's the main reason why the calibration image doesn't look symmetric.

Other annoying problems are due to the way we have to observe things. For example, if you observe the Sun at noon in winter it won't be as bright as it was in the summer. The intensity isn't different because the Sun changed, it's different because the winter Sun is lower in the sky. The lower angle means that in winter the sunlight has to travel through more of the Earth's atmosphere, more light is scattered or absorbed along the way, and so the Sun looks dimmer. Once we understand this kind of effect, we can calibrate it out. These are called systematic errors.
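The winter/summer effect described above can be put in rough numbers. This is a simplified sketch assuming a plane-parallel atmosphere, where the path length through air (the "airmass") grows as one over the sine of the Sun's elevation and light is attenuated exponentially; the optical depth `tau` is an illustrative value, not a measurement.

```python
import math

# Toy model of atmospheric dimming (plane-parallel atmosphere assumption).
# tau is a made-up optical depth chosen only for illustration.
def apparent_brightness(elevation_deg, tau=0.1):
    airmass = 1.0 / math.sin(math.radians(elevation_deg))  # path length through air
    return math.exp(-tau * airmass)                        # exponential attenuation

summer = apparent_brightness(70.0)   # Sun high in the sky at noon
winter = apparent_brightness(25.0)   # Sun low in the sky at noon

# Once we know the geometry, we can divide this factor out - that is
# exactly the kind of systematic correction the text describes.
print(winter / summer)               # less than 1: the winter Sun looks dimmer
```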

Fortunately the SOHO spacecraft and the MDI instrument aren't affected by seasons or clouds or lots of other problems associated with being on Earth, but there are other systematic effects for which we do have to correct. One of the largest effects, the one that gives this calibration image it's characteristic shape, is called 'limb darkening.'

Limb Darkening. When you first see a picture of the Sun one of the surprising things you might notice is that the edge, or limb, looks a lot darker than the center. This picture was taken in Hawaii in mid-1997. We know the Sun isn't really dimmer near the edge, because the Sun rotates and our Sun was just as bright a week ago when the dark part that's now on the right side of the picture was in the middle. It just looks dark because we aren't looking at the same place in the Sun's atmosphere. The light that escapes from the Sun is radiated by atoms in what we call the Sun's photosphere. That's the layer where light can first escape into space. The photosphere is pretty thin by solar standards (a few hundred kilometers compared to a diameter of 1.39 million km), but the temperature is hundreds of degrees cooler at the top than it is at the bottom. Even though it's not very dense (you'd still need a space suit), it still absorbs light going through it. As we look toward the edge of the Sun the light rays have to pass through more of the Sun's absorbing atmosphere, so we can't see as deep. The light we see comes from higher in the photosphere, where it's cooler; when something is cooler it is not as bright. So the limb appears dark.
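The geometry in the paragraph above is often summarized by the standard linear limb-darkening law, I(mu) = I_center * (1 - u * (1 - mu)), where mu is the cosine of the angle between our line of sight and the local vertical on the Sun: mu = 1 at disk center and mu approaches 0 at the limb. This sketch uses an illustrative coefficient u, roughly typical for visible light, and is not MDI's actual calibration.

```python
import math

# Linear limb-darkening law sketch: I(mu) = 1 - u * (1 - mu).
# u = 0.6 is an illustrative coefficient, not an MDI-measured value.
def limb_brightness(r, u=0.6):
    """Relative brightness at fractional disk radius r (0 = center, 1 = limb)."""
    mu = math.sqrt(1.0 - r * r)   # projection geometry of a sphere
    return 1.0 - u * (1.0 - mu)

print(limb_brightness(0.0))   # disk center: full brightness
print(limb_brightness(0.9))   # near the limb: noticeably dimmer
```

The brightness drops slowly at first and then quickly near the edge, which is why the darkening is so striking right at the limb.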

Anyway, to see the Sun's surface brightness as it 'really' is, we calibrate our images to correct for all of the known defects, errors, and systematic effects we can think of. The Calibration Image is a pretty graphical representation of the number by which we divide each image to correct for all of those effects. It's quite smooth, so we can represent it with the fairly simple polynomial expansion shown on the main page. The colors are just to make it look appealing.
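Because the calibration surface is smooth, a low-order polynomial captures it well. Here is an illustrative sketch, not the actual MDI expansion: we fit a polynomial to a smooth limb-darkened brightness profile and then divide it out, leaving a nearly flat "true" profile.

```python
import numpy as np

# Sketch of calibration by division (illustrative; the real MDI
# polynomial expansion is the one shown on the main page).
r = np.linspace(0.0, 0.95, 200)                    # fractional disk radius
measured = 1.0 - 0.6 * (1.0 - np.sqrt(1 - r**2))   # limb-darkened profile

coeffs = np.polyfit(r, measured, deg=4)            # smooth polynomial model
calibration = np.polyval(coeffs, r)                # the "calibration image"

corrected = measured / calibration                 # divide the effect out
print(corrected.max() - corrected.min())           # residual variation is small
```

The same idea extends from this one-dimensional profile to the full two-dimensional image: every pixel is divided by the corresponding value of the smooth calibration surface.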


This page was created by Todd Hoeksema, 10 September 1997.
Last revised by JTH on 11 September 1997.