
Sunday, October 6, 2013

Bigger Pixels

Which is better: bigger pixels or more megapixels? In this blog post, I will explain it all. The answer may not be what you think it is!

Image sensors

Digital cameras use image sensors, which are rectangular grids of photosites mounted on a chip. Most image sensors today in smartphones and digital cameras (intended for consumers) employ a CMOS image sensor, where each photosite is a photodiode.

Now, images on computers are made up of pixels, and fortunately so are sensors. But in the real world, images are actually made up of photons. This means that, like the rods and cones in our eyes, photodiodes must respond to stimulation by photons. In general, the photodiodes collect photons much as our rods and cones integrate them into an electrochemical signal that our vision can interpret.

A photon is the smallest indivisible unit of light. So, if there are no photons, there is no light. But it's important to remember that not all photons are visible. Our eyes (and most consumer cameras) respond only to the visible spectrum of light, roughly between wavelengths of 400 nanometers and 700 nanometers. This means that any photon that we can see will have a wavelength in this range.

Color

The light that we can see has color to it. This is because each individual photon has its own energy that places it somewhere on the electromagnetic spectrum. But what is color, really? Perceived color gives us a serviceable approximation to the spectrum of the actual light.

Objects can be colored, and lights can be colored. But, to determine the color of an object, we must use a complicated equation that involves the spectrum of the light from the light source and the absorption and reflectance spectra of the object itself. This is because light can bounce off, be scattered by, or transmit directly through any object or medium.

But it is cumbersome to store light as an entire spectrum. And, since a spectrum is actually continuous, we must sample it. And this is what causes the approximation. Sampling is a process by which information is lost, of course, by quantization. To make this loss acceptable, we integrate the light spectrum against color-component sensitivity curves to create the serviceable, reliable color components of red, green, and blue. The so-called RGB color representation is trying to approximate how we sense color with the cones in our eyes.

So think of color as something three-dimensional. But instead of X, Y, and Z, we can use R, G, and B.

Gathering color images

The photons from an image are all mixed up. Each photodiode really just collects photons, so how do we sort out the red photons from the green photons from the blue photons? Enter the color filter array.

Let's see how this works.

Each photosite is really a stack of items. On the very top is the microlens.

The microlenses are a layer of entirely transparent material that is structured into an array of rounded shapes. Bear in mind that the dot pitch is typically measured in microns, which means that the rounding of each lens is only approximate. Also bear in mind that there are millions of them.

You can think of each microlens as rounded on the top and flat on the bottom. As light comes into the microlens, its rounded shape bends the light inwards.

The microlens, as mentioned, is transparent to all wavelengths of visible light. This means that an infrared- and ultraviolet-rejecting filter might be required to get true color; the colors will become contaminated otherwise. It is also possible, with larger pixels, that an anti-aliasing filter, usually consisting of two extremely thin layers of lithium niobate, is sandwiched above the microlens array.

Immediately below the microlens array is the color filter array (or CFA). The CFA usually consists of a pattern of red, green, and blue filters. Here we show a red filter sandwiched below.

The CFA is usually structured into a Bayer pattern. This is named after Bryce E. Bayer, the Kodak engineer who thought it up. In this pattern, there are two green pixels, one red pixel, and one blue pixel in each 2 x 2 cell.

A microlens' job is to focus the light arriving at the photosite into a more concentrated region. This allows the photodiode to be smaller than the dot pitch, making it possible for smaller fill factors to work. But a new technology, called Back-Side Illumination (BSI), makes it possible to put the photodiode as the next item in the photosite stack. This means that the fill factors can be quite a bit larger for the photosites in a BSI sensor than for a Front-Side Illumination (FSI) sensor.

The real issue is that not all light comes straight into the photosite. This means that some photons are lost. So a larger fill factor is quite desirable in collecting more light and thus producing a higher signal-to-noise ratio (SNR). Higher SNR means less noise in low-light images. Yep. Bigger pixels mean less noise in low-light situations.

Now, the whole idea of a color filter array consists of a trade-off of color accuracy for detail. So it's possible that this method will disappear sometime in the (far) future. But for now, these patterns look like the one you see here for the most part, and this is the Bayer CFA pattern, sometimes known as an RGGB pattern. Half the pixels are green, the primary that the eye is most sensitive to. The other half are red and blue. This means that there is twice the green detail (per area) as there is for red or blue detail by themselves. This actually mirrors the density of rods vs. cones in the human eye. But in the human eye, the neurons are arranged in a random speckle pattern. By combining the pixels, it is possible to reconstruct full detail, using a complicated process called demosaicing. Color accuracy is, however, limited by the lower count of red and blue pixels and so interesting heuristics must be used to produce higher-accuracy color edges.
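
To make the Bayer idea concrete, here is a minimal sketch in Python (my own illustration, not camera-vendor code) that samples a full-color image through an RGGB mosaic and then reconstructs it with the simplest possible bilinear demosaic. Real demosaicing uses far cleverer edge-aware heuristics, as mentioned above.

import numpy as np
from scipy.signal import convolve2d

def rggb_mosaic(rgb):
    """Sample a full-color image (H, W, 3) through a Bayer RGGB color filter array."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True   # red at even rows, even columns
    mask[0::2, 1::2, 1] = True   # green at even rows, odd columns
    mask[1::2, 0::2, 1] = True   # green at odd rows, even columns
    mask[1::2, 1::2, 2] = True   # blue at odd rows, odd columns
    for c in range(3):
        mosaic[mask[..., c]] = rgb[..., c][mask[..., c]]
    return mosaic, mask

def bilinear_demosaic(mosaic, mask):
    """Reconstruct full RGB by bilinear interpolation of each color plane."""
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue kernel
    out = np.zeros(mask.shape)
    for c, k in zip(range(3), (k_rb, k_g, k_rb)):
        plane = np.where(mask[..., c], mosaic, 0.0)
        out[..., c] = convolve2d(plane, k, mode='same', boundary='symm')
    return out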

How much light?

It's not something you think about every day, but the aperture controls the amount of light let into the camera. The smaller the aperture, the less light the sensor receives. Apertures are measured in f-stops. The lower the f-stop, the larger the aperture. The area of the aperture, and thus the amount of light it lets in, is proportional to the reciprocal of the f-stop squared. For example, after a quick calculation, we can see that an f/2.2 aperture lets in 19% more light than an f/2.4 aperture.
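
If you want to check that 19% figure, the arithmetic is just the ratio of the squared f-stops. A quick sketch:

def light_ratio(f_small, f_large):
    """Relative light gathered by two apertures; area goes as 1 / f-stop^2."""
    return (f_large / f_small) ** 2

print(light_ratio(2.2, 2.4))   # ~1.19, so f/2.2 gathers about 19% more light than f/2.4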

Images can be noisy. This is generally because there are not enough photons to produce a clear, continuous-tone image, and even more because the arrival time of the photons is random. So, the general rule is this: the more light, the less noise. We can control the amount of light directly by increasing the exposure time. And increasing the exposure time directly lets more photons into the photosites, which dutifully collect them until told not to do so. The randomness of the arrival time is less of a factor as the exposure time increases.

Once we have gathered the photons, we can control how bright the image is by increasing the ISO. Now, ISO is just another word for gain: a volume knob for the light signal. We crank up the gain when our subject is dark and the exposure is short. This restores the image to a nominal apparent amount of brightness. But this happens at the expense of greater noise because we are also amplifying the noise with the signal.

We can approximate these adjustments by using the sunny 16 rule: on a sunny day, at f/16, with ISO 100, we use about 1/120 of a second exposure to get a correct image exposure.

The light product is this:

(exposure time * ISO) / (f-stop^2)

This means that, for a given ISO and f-number, the nominal exposure time can be found by metering the light and solving the light product for exposure time.
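
Here is a small sketch of that bookkeeping, using the sunny 16 numbers above as the reference light product. The f/2.2 smartphone camera in the example is just an illustrative assumption.

def light_product(exposure_s, iso, f_stop):
    """The light product described above: (exposure time * ISO) / f-stop^2."""
    return exposure_s * iso / f_stop ** 2

# Sunny 16 reference point: f/16, ISO 100, about 1/120 of a second.
SUNNY_16 = light_product(1.0 / 120.0, 100, 16)

def exposure_time(target_product, iso, f_stop):
    """Solve the light product for exposure time at a given ISO and f-number."""
    return target_product * f_stop ** 2 / iso

# The same sunny scene on a hypothetical f/2.2, ISO 100 smartphone camera:
print(exposure_time(SUNNY_16, iso=100, f_stop=2.2))   # about 1/6300 of a second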

If you have the exposure time as a fixed quantity and you are shooting in low light, then the ISO gets increased to keep the image from being underexposed. This is why low-light images have increased noise.

Sensor sensitivity

The pixel size actually does have some effect on the sensitivity of a single photosite in the image sensor. But really it's more complicated than that.

Most sensors list their pixel sizes by the dot pitch of the sensor. Usually the dot pitch is measured in microns (a micron is a millionth of a meter). When someone says their sensor has a bigger pixel, they are referring to the dot pitch. But there are more factors affecting the photosite sensitivity.

The fill factor is an important thing to mention, because it has a complex effect on the sensitivity. The fill factor is the fraction of each array unit within the image sensor that is devoted to the light-gathering surface of the photodiode. This can easily be only 50%.

The quantum efficiency is the percentage of the photons arriving at the photosite that actually get captured by the sensor. A higher quantum efficiency results in more photons captured and a more sensitive sensor.

The light-effectiveness of a pixel can be computed like this:

DotPitch^2 * FillFactor * QuantumEfficiency

Here the dot pitch squared represents the area of the array unit within the image sensor. Multiply this by the fill factor and you get the actual area of the photodiode. Multiply that by the quantum efficiency and you get a feeling for the effectiveness of the photosite, in other words, how sensitive the photosite is to light.
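
As a rough illustration, here is that effectiveness formula in Python. The fill factors and quantum efficiencies below are made-up numbers for an FSI-style and a BSI-style photosite, not measured specs.

def light_effectiveness(dot_pitch_um, fill_factor, quantum_efficiency):
    """Relative photosite sensitivity: dot pitch squared * fill factor * quantum efficiency."""
    return dot_pitch_um ** 2 * fill_factor * quantum_efficiency

# Hypothetical photosites (the fill factors and QE are assumptions, not vendor data):
fsi = light_effectiveness(1.4, fill_factor=0.5, quantum_efficiency=0.5)
bsi = light_effectiveness(1.5, fill_factor=0.9, quantum_efficiency=0.5)
print(bsi / fsi)   # ~2.1: the BSI photosite gathers roughly twice the light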

Megapixel mania

For years it seemed like the megapixel count was the holy grail of digital cameras. After all, the more megapixels the more detail in an image, right? Well, to a point. Eventually, the amount of noise begins to dominate the resolution. And a little thing called the Airy disc.

But working against the megapixel mania effect is the tiny sensor effect. Smartphones are getting thinner and thinner. This means that there is only so much room for a sensor, depth-wise, owing to the fact that light must be focused onto the plane of the sensor. This affects the size of the sensor package.

The granddaddy of megapixels in a smartphone is the Nokia Lumia 1020, which has a 41MP sensor with a dot pitch of 1.4 microns. This increased sensor size means the phone has to be 10.4mm thick, compared to the iPhone 5S, which is 7.6mm thick. The extra glass in the Zeiss lens means it weighs in at 158g, compared to the iPhone 5S, which is but 115g. The iPhone 5S features an 8MP BSI sensor, with a dot pitch of 1.5 microns.

While 41MP is clearly overkill, they do have the ability to combine pixels, using a process called binning, which means their pictures can have lower noise still. The iPhone 5S gets lower noise by using a larger fill factor, afforded by its BSI sensor.

But it isn't really possible to make the Lumia 1020 thinner because of the optical requirements of focusing on the huge 1/1.2" sensor. Unfortunately, thinner, lighter smartphones are definitely the trend.

But, you might ask, can't we make the pixels smaller still and increase the megapixel count that way?

There is a limit where the pixel size becomes effectively smaller than the wavelength of light. This is called the sub-diffraction limit. In this regime, the wave characteristics of light begin to dominate and we must use wave guides to improve the light collection. The Airy disc creates this resolution limit. This is the diffraction pattern from a perfectly focused, infinitely small spot. This (circularly symmetric) pattern defines the maximum amount of detail you can get in an image from a perfect lens using a circular aperture. The lens being used in any given (imperfect) system will have a larger Airy disc.

The size of the Airy disc defines how many more pixels we can have with a specific size sensor, and guess what? It's not many more than the iPhone has. So the Lumia gets more pixels by growing the sensor size. And this grows the lens system requirements, increasing the weight.

It's also notable that, because of the Airy disc, decreasing the size of the pixel may not increase the resolution of the resultant image. So you have to make the sensor physically larger. And this means: more pixels eventually must also mean bigger sensors and much larger cameras. Below a 0.7 micron dot pitch, the wavelength of red light, this is certainly true.
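
For a feel for the numbers: a perfect lens focuses a point to an Airy disc whose first dark ring has a diameter of about 2.44 times the wavelength times the f-number. A quick sketch, assuming green light and a hypothetical f/2.2 lens:

def airy_disc_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disc out to the first dark ring: 2.44 * wavelength * f-number."""
    return 2.44 * wavelength_um * f_number

# Green light through a hypothetical f/2.2 smartphone lens:
print(airy_disc_diameter_um(0.55, 2.2))   # ~2.95 microns, about twice a 1.5 micron dot pitch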

The human eye

Now, let's talk about the actual resolution of the human eye, computed by Clarkvision to be about 576 megapixels.

That seems like too large a number, and actually it seems ridiculously high. Well, there are about 100 million rods and only about 6-7 million cones. The rods work best in our night vision because they are so incredibly low-light adaptive. The cones are tightly packed in the foveal region, and really only work in lighted scenes. This is the area we see the most detail with. There are three kinds of cones and there are more red-sensitive cones than any other kind. Cones are usually called L (for large wavelengths), M (for medium wavelengths), and S (for small wavelengths). These correspond to red, green, and blue. The color sensitivity is at a maximum between 534 and 564 nanometers (the region between the peak sensitivities of the L and M cones), which corresponds to the colors between lime green and reddish orange. This is why we are so sensitive to faces: the face colors are all there.

I'm going to do some new calculations to determine how many pixels the human eye actually does see at once. I am defining pixels to be rods and cones, the photosites of the human eye. The parafoveal region is the part of the eye you get the most accurate and sharp detail from, with about 10 degrees of diameter in your field of view. At the fovea, the place with the highest concentration, there are 180,000 rods and cones per square millimeter. This drops to about 140,000 rods and cones at the edge of the parafoveal region.

One degree in our vision maps to about 288 microns on the retina. This means that 10 degrees maps to about 2.88 mm on the retina. It's a circular field, so this amounts to 6.51 square millimeters. At maximum concentration with one sensor per pixel, this would amount to 1.17 megapixels. The 10 degrees makes up about 0.1 steradians of solid angle. The human field of vision is about 40 times that at 4 steradians. So this amounts to 46.9 megapixels. But remember that the concentration of rods and cones falls off at a steep rate with the distance from the fovea. So there are at most 20 megapixels captured by the eye in any one glance.
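
Here is the arithmetic of that estimate, spelled out so you can play with the assumptions (the 288 microns per degree, 180,000 sensors per square millimeter, and 0.1 steradian figures are the ones used above):

import math

MICRONS_PER_DEGREE = 288.0      # retinal scale used above
DENSITY_PER_MM2 = 180_000       # rods plus cones at the fovea

# The 10-degree parafoveal field as a circle on the retina
diameter_mm = 10 * MICRONS_PER_DEGREE / 1000.0      # 2.88 mm
area_mm2 = math.pi * (diameter_mm / 2.0) ** 2       # ~6.51 square millimeters
parafoveal_mp = area_mm2 * DENSITY_PER_MM2 / 1e6    # ~1.17 megapixels

# Scale by solid angle: ~0.1 steradian for the patch vs ~4 steradians for the whole field
whole_field_mp = parafoveal_mp * (4.0 / 0.1)        # ~46.9 megapixels at foveal density
print(parafoveal_mp, whole_field_mp)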

It is true that the eye "paints" the scene as it moves, retaining the information for a larger field of view as the parafoveal region sweeps over the scene being observed. It is also true that the human visual system has sophisticated pattern matching and completion algorithms wired in. This probably increases the perceived resolution, but not by more than a factor of two by area.

So it seems unlikely that the human eye's resolution can exceed 40 megapixels. But of course we have two eyes and there is a significant overlap between them. Perhaps we can increase the estimate by 20 percent, to 48 megapixels.

If you imagine the pixel density of a retina display extrapolated to the whole field of view, this is pretty close to what we would get.

So this means that a camera that captures the entire field of view that a human eye can see (some 120 degrees horizontally and 100 degrees vertically, in a sort of oval shape) could have 48 megapixels, and you could look anywhere on the image and be fooled. If the camera were square, it would probably have to be about 61 megapixels to hold a 48 megapixel oval inside. So that's my estimate of the resolution required to fool the human visual system into thinking it's looking at reality.
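
The oval-to-square conversion is just the ratio of a rectangle's area to its inscribed ellipse, which is 4/pi:

import math

oval_mp = 48.0
square_mp = oval_mp * 4.0 / math.pi   # a bounding rectangle is 4/pi times the ellipse's area
print(square_mp)                      # ~61 megapixels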

Whew!

That's a lot of details about the human eye and sensors! Let's sum it all up. To make a valid image with human-eye resolution, due to Airy disc size and lens capabilities, would take a camera and lens system about the size and depth of the human eye itself! Perhaps by making sensors smaller and improving optics to be flexible like the human eye, we can make it twice as good and half the size.

But we won't be able to put that into a smartphone, I'm pretty sure. Still, improvements in lens quality, BSI sensors, wave guide technology, noise reduction, and signal processing continue to push our smartphones to ever-increasing resolution and clarity in low-light situations. Probably we will have to have cameras with monochromatic (rod-like) sensors to be able to compete with the human eye in low-light scenes. The human retinal system we have right now is so low-light adaptable!

Apple and others have shown that cameras can be smaller and smaller, such as the excellent camera in the iPhone 5S, which has great low-light capabilities and a two-color flash for better chromatic adaptation. Nokia has shown that a high-resolution sensor can be placed in a bigger, thicker, heavier phone, with the flexibility for binning and better optics that push smartphone cameras ever closer to human-eye capabilities.

Human eyes are hard to fool, though, because they are connected to pattern-matching systems inside our visual system. Look for image interpretation and clarification algorithms to make the next great leap in quality, just as they do in the human visual system.

So is it bigger pixels or simply more of them? No, the answer is better pixels.

Saturday, March 31, 2012

An Anatomy of Painter's Brushes, Part 1


Painter is famous for its brushes. Most are not duplicated anywhere else, despite some claims to the contrary. But what makes the brushes in Painter different? How do they work?

Well, I can't tell you how they work exactly, but I will share with you some of the decisions made in building Painter's brushes. And show you, hopefully, which brushes are good for what. And, along the way, I hope you will gain a better knowledge of how to use them to suit your artistic task.

Also, this could take several posts, so I will start with the basics, you know, Painter 1 and 2 brushes, and then work my way up to the much newer brushes.

Cover? Buildup?

Painter uses some terms that describe the fundamental ways that the brushes lay their paint down onto the canvas. In the blog post on Color, I described different kinds of color mixing. You can think of cover methods as using interpolative color mixing, which is applied to additive color. The buildup methods use subtractive color mixing; in particular, they use Beer's Law on the three components of the color, Red, Green, and Blue (this part is in the '620 patent, so it's public knowledge). But of course, it's more complicated than that.

Here you see cover (top) and buildup (bottom) strokes. Cover strokes tend to become flat opaque color, while buildup strokes increase their density.
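
Here is a toy sketch of the two ideas, not Painter's actual code: cover mixing interpolates the canvas color toward the brush color, while buildup treats the brush color's components as transmittances in the spirit of Beer's Law, so repeated strokes only get darker.

def cover_mix(canvas_rgb, brush_rgb, opacity):
    """Interpolative (cover) mixing: move the canvas color toward the brush color."""
    return tuple(c + opacity * (b - c) for c, b in zip(canvas_rgb, brush_rgb))

def buildup_mix(canvas_rgb, brush_rgb, opacity):
    """Buildup mixing in the spirit of Beer's Law: each pass multiplies the canvas by a
    per-channel transmittance, so the result keeps darkening and never lightens."""
    return tuple(c * (b ** opacity) for c, b in zip(canvas_rgb, brush_rgb))

canvas = (1.0, 1.0, 1.0)      # white canvas
brush = (0.9, 0.8, 0.2)       # a less saturated yellow, as recommended below
for _ in range(5):
    canvas = buildup_mix(canvas, brush, opacity=0.5)
print(canvas)                 # steadily darkens toward the brush's transmittance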

With buildup brushes, you choose colors that are less saturated, and contain all three color components in them. You must also set their opacity lower to get more levels of build-up. Here you see a version of that brush with the opacity lowered to 4%. As you can see, it takes much longer for the colors to darken.

You use cover methods for things like oils, airbrush, chalk, etc. So it is aptly named, since these media do tend to cover what's behind them.

Other media, like watercolors and felt pens, are applied in a more watery, transparent layer. These media are better suited to modeling with the buildup methods. Sometimes colored pencils can behave this way as well. But don't confuse partitive mixing with buildup.

So, why does charcoal tend to build up when you use it? Charcoal builds up because the tiny grains of charcoal get lodged in the crevices of the paper and the color partitively mixes with the color of the paper. Partitive mixing is also described in the color post. But what happens is that more and more crevices, and eventually even the tops of the grain, get saturated with charcoal and the color gets darker.

There should be more ways of mixing color in Painter, particularly since oil paints don't really mix in either of these ways. But wait... are we talking about mixing or simply laying down color?

Pickup, Mixing, and So Forth

Actual mixing of color on the canvas (and within the brush, it turns out) is modeled by Painter using a color well concept. This is not the same color well concept used by photo sensors that collect electrons converted from photons by a photodiode. This is a concept by which RGB color is collected in a color well, and as other colors get put in, the well models what the mixture of the color is: a kind of local color accumulator.

The color well can be done on a whole-brush basis, or it can be done on a bristle-by-bristle basis.

Painter's modeling is quite sophisticated.

So let's look at how the color well performs. There are three parameters in the Brush Controls:Well section, and you need to know what they mean. Every time Painter lays down a brush dab or a bristle dab, the well is accessed. It knows about the supplied color (which is generally the current color, but it can also be the color from the original image when cloning) and it also knows about the canvas color underneath the dab or bristle. The color in the color well then becomes the brush color.

Resaturation is the capacity of the brush or bristle to be replenished with the supplied color with every dab or bristle that gets laid down. Bleed is a measure of the amount of canvas color that gets picked up by the brush (with every dab or bristle that gets laid down). And dryout is the distance over which the resaturation stops working.

In the color well, resaturation tends to trump all other aspects of the well, and the most useful values for resaturation are generally down between 0 and 4%. This is where you get the fudgy, smeary mixtures of paint. You will need to combine this with the bleed setting. A low bleed setting tends to make the pickup take longer and thus the brush strokes get smearier. A high bleed setting causes less pickup and thus the smears become shorter, and, up around 50%, they become unnoticeable.

So both resaturation and bleed should be kept in the low ends of their ranges for the fullest degree of control. At resaturation of 2% and bleed of 28%, for instance, the bleed trumps the resaturation and causes consistent smears, seen here. The overstroke color is a dun color, barely discernible due to the low resaturation.
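
To make the interplay concrete, here is a toy model of the color well, written from the description above rather than from Painter's source: resaturation pulls the well toward the supplied color and bleed pulls it toward the canvas underneath, once per dab.

def well_update(well_rgb, supplied_rgb, canvas_rgb, resaturation, bleed):
    """One dab's worth of color-well bookkeeping (a toy interpretation, not Painter's code)."""
    mixed = [w + resaturation * (s - w) for w, s in zip(well_rgb, supplied_rgb)]
    mixed = [m + bleed * (c - m) for m, c in zip(mixed, canvas_rgb)]
    return tuple(mixed)

# A smeary setting: 2% resaturation, low bleed, dragging a bluish well through a red patch.
well = (0.2, 0.4, 0.8)
for _ in range(10):
    well = well_update(well, supplied_rgb=(0.2, 0.4, 0.8),
                       canvas_rgb=(0.8, 0.1, 0.1),
                       resaturation=0.02, bleed=0.1)
print(well)   # drifts slowly toward the canvas red, which is what makes the smear long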

The best way to get a real admixture of color is to mix brushes. In other words, lay the color down first and do your admixture afterwards.

Here, I first drew unsullied patches of red, blue, green, and yellow using a higher resaturation (28%) and a comparable bleed (27%). Then I set resaturation to zero, creating a smear brush. Using the smear brush, I mixed the paints together to get the muddy mixtures in between.

Real mixtures with real paints usually keep a bit more saturation (in this case, I mean colorfulness). This is because the scattering term and the absorption term need to be kept separate. And also because actual colorant mixing can't really be properly modeled using only three wavelengths.

Future paint applications will need to do this, I believe. I would think that two-constant Kubelka-Munk theory should suffice as a good second-order approximation.

But, you know what? More and more artists are using digital paint applications. So they are going to expect the paint to mix more like RGB interpolative mixing, and less like actual oil paint mixing. And I am partly to blame for this, I know.

Paper Grain, and Grainy Soft, Flat, Edge, and Hard Methods

Painter's first real advance in 1991 was the paper grains. The way the brushes interact with the grains was also a serious advance over previous types of brush.

Here you see the Grainy Flat Color, Grainy Soft Cover, Grainy Edge Flat Cover, and Grainy Hard Cover methods. Note that I had to increase grain contrast to 400% so you could see the grain in the Grainy Soft Cover method's stroke. Each method has its own unique signature. Of particular interest to me is the Grainy Edge Flat Cover method.

This method can be adjusted by a few slider settings that you should be aware of, and can create a wealth of looks in this way.

Here you see the grain slider adjusted to 28%, 21%, 15%, and 8% (top to bottom).

This allows you to control the size and graininess simultaneously, using this linear brush (with a profile that is straight and pointed at the tip). But what if you want the amount of grain penetration to be controlled while you make the brushstroke?

A couple of features may allow you to do this.

Right next to the grain slider (in the Brush Controls:General section) is an expression pop-up. This allows you to control directly how the grain is animated during the stroke. With this method, grain becomes directly controllable by stylus pressure when you set grain expression to pressure.

This shows a brush stroke created using this technique. But to get this level of expression out of it, I had to do another thing first. I had to go to Preferences:Brush Tracking and adjust the Pressure Power to a setting of 2.04. This maps more of your brush stroke to values closer to zero pressure.

And therefore more grain, because as you saw above, it is the lower grain values that produce the most grainy edges.

When playing with paper grain, you can also adjust the grain itself to your taste as well. It is extremely convenient to be able to scale the grain. This works well with Grainy Edge Flat Cover brushes because it just makes the grains rounder. But you can also adjust the contrast (and thus how pronounced the grain will appear through the brush) and the brightness (in case your brush isn't actually touching the grain at all).

Of all the grainy brush methods, the Grainy Soft Cover method is probably the least useful, I would say. To make more sense of this soft method, you will probably need to adjust your paper texture characteristics.

The Grainy Hard Cover method is about halfway in between the Grainy Soft Cover method and the Grainy Edge Flat Cover method. With this brush, both opacity and grain will have a bearing on how grainy the brush appears when you use it. It is probably the best method for simulating colored pencils.

Here I have set an opacity of 11% and a grain of 30%. This gives us good coverage and a nice grain taper with opacity. I have set grain expression to pressure and opacity expression to none. This one is also sensitive to the Preferences:Brush Tracking adjustments, particularly Pressure Power, because I'm using pressure. It's a pretty good chalk or colored pencil.

Cover Brushes and Opacity

It's time to discuss the problem of how to set the opacity of cover brushes. The problem with many cover brushes is that they use sequential dab overlay. This means that the dabs are laid out along the brush stroke according to the brush spacing. If you set the color to black, the brush spacing to 50% and the opacity to 38%, you get the pattern you see here. What this all means is that it is hard to set the opacity of an airbrush to get the actual opacity you want. It becomes an effective opacity of 100% way too quickly because of the sequential dab overlay effect.

By 50%, I mean that the spacing is measured in terms of the radius of the brush. I am using a one-pixel-edge brush so you can see the placement. So how dark does it get along the stroke? It depends upon the overlap, as you can see, and because of that, upon how many dabs will overlay in one spot. Since a dab spans a full diameter, which is two radii, we can calculate the number of times an overlap will happen as 2/spacing. Each time an overlap happens, the color gets more opaque. The opacity of n overlaps is 1 - (1-opacity)^n. Using these formulas, you can compute the opacity of the brush stroke. In this case, n is clearly 4, and the transparency is 62% to the fourth power, or 14.8%, so the opacity ends up being 85.2%, which seems right.
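
Here are those formulas as a small sketch, so you can try other spacings and opacities:

def overlaps(spacing):
    """Dabs covering one spot: a dab spans two radii, so n = 2 / spacing (spacing in radii)."""
    return int(2.0 / spacing)

def stroke_opacity(dab_opacity, spacing):
    """Effective opacity of a stroke built from sequentially overlaid dabs."""
    n = overlaps(spacing)
    return 1.0 - (1.0 - dab_opacity) ** n

print(stroke_opacity(0.38, 0.5))   # ~0.852, matching the 85.2% worked out above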

How Corel Can Fix This

An aside: what Corel needs to do is to let the user specify the final opacity (called the desired opacity) they want and then invert these formulas to compute the proper opacity for the current spacing. It's easy, and so here's how it can be done:

Here, the specified opacity is the one you used with each dab of the brush.

With airbrushes, which have a soft profile, this may be a more complicated formula. But it will be arranged according to powers of transparency, as I have indicated here. Perhaps the floor won't be necessary, making n a continuous parameter. It turns out that this can be measured empirically and then the opacity setting can be computed from the desired opacity and the spacing using a table lookup. Really, the only coefficient that will need to change is the power term, so you could keep one coefficient per profile. Possibly, though, the spacing (with different profiles) will have a non-linear effect on the overlap and thus the opacity of the stroke. In this case, you might need one coefficient per profile per spacing. Since spacing is a continuous parameter, it must be quantized into a small table.
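
A minimal sketch of the inversion, using the simple hard-edged overlap count from above (a soft airbrush profile would need the empirically fitted power term instead):

def dab_opacity_for(desired_opacity, spacing):
    """Invert the overlay formula: the per-dab opacity that yields the desired stroke opacity."""
    n = 2.0 / spacing                  # may be left continuous, as suggested above
    return 1.0 - (1.0 - desired_opacity) ** (1.0 / n)

print(dab_opacity_for(0.85, 0.5))   # ~0.378: ask for 85% and get back roughly the 38% per-dab setting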

Oh, I love programming!

Next time, maybe I'll write about grain histogram equalization and the problem of getting consistent results with different paper texture patterns. It would really be nice to get a more ergonomic handle on brushes, grain, opacity, and all that jazz.

Thursday, March 22, 2012

Color

In Painter, there is nothing that is more iconic than its color picker. It was designed for the artist, and so it features a circular ring of hues (called a color wheel) and a triangle of single-hued color (called a color page) inside it.

Color Pickers for Artists

In Painter 3, I redesigned the color picker around the concept of the color wheel. Before Painter 3, it was a color triangle above a hue slider.

The pre-Painter 3 color picker was actually clumsy. But I chose the triangle because it was ergonomically easy to use, and it was approximately perceptually arranged. It is good that the triangle has a single point at the top and the bottom for white and black. This shows unambiguously where these colors are. Other color pickers show them as the top and bottom of a square, which is not a correct depiction of color space.

Here we have the Painter 1.2 color picker. My main problem with this is that the hue slider is not really big enough to represent all hues properly.

A set of color swatches is available for quick choice and drawing, like a mini-palette.

I don't like how the color ring on the triangle (that indicates the current color) actually gets hidden by the hue slider. It's a visually-conflicting thing.

In Painter 3, I chose the hue ring to be a little thick, like paint. But even so, I had some issues with it. The position of the colors on the wheel isn't really equally-spaced. Ideally, equal angular changes along the wheel would represent equal perceptual differences in the color.

Look at an RGB color wheel, to the left, and a perceptual color wheel, to the right.

Two things have been done. First, the colors have been spaced perceptually equally. Second, the colors have been chosen to be at approximately the same luminance, or apparent lightness.

Notice on the RGB color wheel, where the colors red, yellow, green, cyan, blue, and magenta are equally spaced at 60-degree angles around the wheel, that the yellow area seems tight, and the green area seems grossly large in comparison. On the perceptual color wheel, care was taken to have equal color increments.

This means that a user can choose colors in the area they want with equal ease.

With the RGB color wheel, on the other hand, the artist always has to adjust the luminance up and down to choose colors at the same apparent lightness, depending upon the hue.

So, if I were to do Painter again, I would probably do some work at making the color picker more ergonomic (or at least have an option for the artist to use an ergonomic color picker).

Color Mixing

Color works in some very interesting ways that most people don't really think about every day.

There are several kinds of color mixture that we like to describe. The first, learned by children when they mix their crayons on white paper, is called subtractive color.

With subtractive color, the more color that gets deposited, the darker and more saturated the combination color gets. This is because the rays of light reflect off the paper. As color gets laid down, the light rays are absorbed by the pigments. The more kinds of color you lay down, the more wavelengths of the light are blocked from reflecting by absorption. So laying down two hues will muddy the color.

Subtractive color is the chosen mixing method for felt tip markers (buildup brushes), for instance.

The second kind of color mixing is additive color. With additive color, it's like you are starting with a dark room and shining lights of different colors.

In fact, in the Apply Lighting effect, this is the method of color mixing that is used.

This is quite different from the way that paints mix, but it does mirror the way light can be split up into a spectrum by a prism: because white light actually consists of the addition of several spectral hues, it may also be broken down into those hues. This is done by a process of refraction. Dispersion is caused by the wavelength-dependence of the index of refraction of the prism material in question.

In Painter, cover brushes are another kind of brushes. How does that work?

The additive color model does apply, but it is complicated by more than just addition. The cover brushes use interpolative color mixing.

With interpolative mixing, suddenly the priority order matters. This actually becomes useful with brushes, and it makes it possible to cover things with successive brush strokes, and this is why they are called cover brushes.

In this image, the ordering from back to front is red, green, turquoise, purple. So the purple color dominates the color in the center, where the four rectangles overlap. A 50% opacity is used in all rectangles.

It is true that, in a cover brush stroke, many dabs of paint overlap to create the final stroke's color. This means we have the luxury of keeping the opacity low for each dab, since multiple overlays quickly converge to near 100% coverage.

There is a strange kind of color mixing, called partitive color mixing. This is the formation of intermediate colors by dividing the view area into many tiny swatches of color, like a mosaic. Partitive color can and does apply to both additive and subtractive color. When it is applied to additive color, you get the very screen you are currently looking at. LCD or CRT, it doesn't matter. All of them use partitive mixing. When it is applied to subtractive color, you get CMYK halftone images.

I have generated a halftone of a 1995 image of myself using the Core Image filter CICMYKHalftone. When you overlay halftones of cyan, magenta, yellow, and black, each pixel of the result can be one of 16 possible colors (because 2 to the 4th power is 16).

Painterly Color Mixing

But, how should color mixing be done to simulate oil paints? Now we are getting into the complex world of actual paint physics simulation. This is done via Kubelka-Munk theory. In this theory, both absorption (which is responsible for subtractive color as mentioned earlier) and scattering (which is responsible for the color of the sky) are taken into account. A mixture pigment has absorption and scattering that is the linear mixture of the absorption and scattering of its component pigments, using the weights that come from the fractions of the pigments that are mixed together. Actually, this is very much like RGB mixing, except that absorption and scattering applies to every wavelength of light, not just the three primary wavelengths. Research has shown that 8 wavelengths produce a much more accurate result than the usual three wavelengths used by RGB mixing, and that not much more improvement is to be had by going to 100 wavelengths.
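
Here is a minimal sketch of that kind of mixing, with eight sample wavelengths and two made-up pigments (the K and S spectra are illustrative, not measured data, and the Saunderson correction is left out):

import numpy as np

WAVELENGTHS_NM = np.linspace(400, 700, 8)   # eight samples across the visible band

def mix_pigments(fractions, K_list, S_list):
    """Two-constant Kubelka-Munk mixing: absorption K and scattering S combine linearly,
    weighted by the fractions of the pigments in the mixture."""
    K = sum(f * K for f, K in zip(fractions, K_list))
    S = sum(f * S for f, S in zip(fractions, S_list))
    return K, S

def km_reflectance(K, S):
    """Infinite-thickness Kubelka-Munk reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    ratio = K / S
    return 1.0 + ratio - np.sqrt(ratio ** 2 + 2.0 * ratio)

# Two made-up pigments: a yellow (absorbs short wavelengths) and a blue (absorbs long ones).
K_yellow = np.array([4.0, 3.0, 2.0, 0.5, 0.1, 0.1, 0.1, 0.1])
K_blue   = np.array([0.2, 0.1, 0.3, 1.0, 3.0, 4.0, 4.0, 3.0])
S_yellow = S_blue = np.ones(8) * 2.0

K_mix, S_mix = mix_pigments([0.5, 0.5], [K_yellow, K_blue], [S_yellow, S_blue])
print(km_reflectance(K_mix, S_mix))   # highest reflectance in the green: yellow + blue = green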

Then, a fellow named Saunderson produced a correction to this formula that allowed for the reflection of light off the transitional boundary between the pigments, when they are layered.

This combination is used for color mixing today, and it is called the two-constant method for color mixing. A single-constant method is also used to approximate the mixing estimation, which assumes absorption divided by scattering to be a single constant, and works from there. This method is less accurate.

Someday I would love to investigate color mixing again.