Section 5

The Physics of Light and How We See

Light is composed of photons. Photons are very small packets of electromagnetic energy ( they have no mass ) given off by an electron when it drops from a higher energy state to a lower energy state. The difference between the two energy states is the energy of the photon ( lasers work on this principle ).

We replace air molecule density with photon density.

We replace the vocal system with a battery and light bulb.

We replace the ear with the eye.

When an electric current passes through a light bulb, light is radiated in a stream of photons. You can place your hand or a filter in front of the photon stream as a way to modulate the stream ( change the amplitude and frequency ).

When the photon stream enters the eye, four types of detectors measure its amplitude ( intensity ).

The primary detector has a wide bandwidth and only detects the amplitude of the total stream, not a specific frequency. This detector is referred to as detecting black and white ( in reality, it's a gray scale detector ).

The other three detectors work together as one, each detecting a much smaller bandwidth around a specific frequency. These detectors are referred to as the Red, Blue and Green detectors ( cones ). The three signals of these detectors are combined to create a single value that's interpreted as a color ( hue ).
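The combination step above can be sketched in code. This is only an illustration using the standard RGB-to-hue conversion from Python's colorsys module, not a model of the actual neural processing; the function name and the idea of treating cone readings as RGB channels are assumptions for the sake of the example.

```python
import colorsys

def hue_from_detectors(red, green, blue):
    """Combine three cone-like intensity readings ( each 0.0 to 1.0 )
    into a single hue value ( 0.0 to 1.0 around the color wheel )."""
    # rgb_to_hls returns ( hue, lightness, saturation ); only the hue
    # is the "color" in the sense used above.
    hue, _lightness, _saturation = colorsys.rgb_to_hls(red, green, blue)
    return hue

# Pure red maps to hue 0.0, pure green to 1/3, pure blue to 2/3.
```

Note that the single hue value depends only on the ratio of the three signals, which is why very different absolute intensities can still be perceived as the same color.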

The B&W detectors ( rods ) cover most of the inner wall of the eye. They are also far more sensitive ( detect lower amplitudes ) than the color detectors.

The color detectors are clustered around the center at the back of the eye ( the fovea, the focus area of your vision ).

Under dim lighting conditions ( dusk or dawn ), or outside your focus area, you basically see only in shades of gray.

Interesting to note is that you do not see color the way cameras do, or the way color is used in print or on computer displays. Your brain takes a sample of measurements around a detector in your focus area and weights that measurement, and it repeats this process for every detector. From these weights a color value for each detector is derived relative to the weights of all the other detectors. This relative pattern is what the brain uses to interpret what it sees.

Why does the brain do it that way?

Remember, the detectors only detect the intensity of a photon stream in a narrow bandwidth. That intensity varies with your relationship to the light source and to the object of focus ( remember frame of reference problems ). As you move around, from direct sunlight into a shadow, from outside a building to the inside, or as the light source changes from a nearly white source like the sun to a yellow source like a light bulb, the intensity of the photon stream changes even though the object and its characteristics do not. If your brain worked off of intensity alone, then the color of the object would change with every view.

So the brain makes its color measurement at any given point relative to all the other color measurements. What this means is that color measurement is really a relative measurement, a balance between all the colors seen. Variations due to source changes affect all the color characteristics of an object, so if red is reduced then green and blue are also reduced. The brain's weighting system sees the colors relative to each other rather than as an absolute frequency ( color ) and amplitude ( brightness or intensity ) measurement ( see figure - Relative Color ). This transformation maps two separate frames of reference onto a standard frame of reference.
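A rough computational analogy for this relative weighting is the classic "gray-world" white-balance technique: scale each color channel by its scene-wide average, so an overall tint from the light source cancels out. This is offered only as an analogy under that assumption, not as the brain's actual mechanism.

```python
def gray_world_balance(pixels):
    """pixels: list of ( r, g, b ) tuples with values 0.0 to 1.0.
    Returns the pixels with any overall source tint divided out."""
    n = len(pixels)
    # Scene-wide average of each channel, and the overall mean.
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    overall = sum(avg) / 3.0
    # Scale each channel so its scene average matches the overall mean;
    # a uniform tint ( e.g. a yellow light source ) is thereby removed.
    return [tuple(p[c] * overall / avg[c] for c in range(3))
            for p in pixels]

# A gray patch seen under a yellow-tinted light might read ( 0.6, 0.6, 0.4 );
# after balancing, all three channels come back equal.
```

The key property, matching the text, is that the corrected color of any one point depends on the measurements at every other point, not on its absolute intensity alone.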

This is the major difference between hearing and vision: hearing is absolute and vision is relative.

Because of these transformations in the brain and the limited resolution of the eyes, 3-D objects can be mapped onto a sheet of paper and appear to be floating in space, and can appear to contain more colors than actually exist. When I first started developing games on computers that had color, I used these features to trick the user into seeing what wasn't actually there.

Early computers were generally limited to a handful of colors: black, white and gray ( which are shades, not true colors ), plus red, green, blue, magenta, cyan and yellow. Note that all the colors are related to how we see. In many games I needed a color to represent dirt. The solution was to use a pattern that mixed very small areas of red and black pixels, which the user's vision interpreted as brown. To adjust the brightness of the brown I used a lighter or darker color for the background: white lightened the brown and black darkened it.

The same method works with adding depth to clouds ( black for edges, gray to change the appearance from still to storm and cyan mixed with white to add depth ) and making an elevator appear to be inside an elevator shaft rather than sliding down the face of the monitor. Mixing white with yellow controlled the distance a lightning bolt appeared to strike in relationship to another object.

Our eyes are slightly offset on the X axis, giving us a view of the Z axis ( depth perception ). This is also the primary reason why the Z axis on a sheet of paper is drawn offset relative to the X axis rather than the Y axis. However, without clues, the brain has a difficult time determining which surface of a 3-D object is the closest. If you look at the 3-D cube in figure - Dimensions, your vision system has enough clues to determine the orientation of the cube. Since this is an instructional drawing, it's most likely that the edges follow the axes, which means that the faces lie parallel to each axis plane. In other words, the highest edge is the back and the lowest edge is the front: the cube goes from the bottom-left to the top-right, shrinking as it goes. If you focus so that the smaller face is forward, it will look as if the cube is going from top-right to bottom-left, expanding as it goes.

Without the axes, your vision system has no way of determining the orientation of the cube, so any perceived orientation is correct, because no frame of reference is defined. In fact, any transformation of a higher-dimensional object to a lower dimension is purely subjective: the cube may not even be a cube; it could be just a collection of lines, or irregular polygons, or a combination of many possibilities. This is the problem with a single perspective. By convention, given three axes, an object is 3-D if it looks 3-D, 2-D if it looks 2-D, and so on.


Author: David Bishop

Last updated: Mar 4, 2011
