Thursday, December 12, 2013

The Unstoppable Now

The universe seems to be moving forwards, ever forwards, and there's nothing we can do about it. Or is there? Is the world too tangled to unravel?

Changing political landscapes

We all see the changes in the world. Climate change is the new catchphrase for global warming. Some areas of the world may never sort themselves out: the Koreas, the Middle East, Africa. Yet we can look to the past and see how a divided Germany re-unified, how South Africa eliminated the apartheid government and changed for the better (bless you Nelson Mandela, and may you rest in peace), how Europe has bonded with common currency and economic control.

Good and bad: will Europe solidify or become an economic roller coaster? Will Africa stabilize or continue on its path of tribal and religious genocide? Will Iran become a good neighbor, or will it simply arm itself with nuclear weapons and force a confrontation with Israel?

Despotic secular regimes have been overthrown in the Islamic world (Egypt, Tunisia, and Libya) and social media seems to have become a trigger for change, a tool for inciting revolution. Some regimes are experiencing slight Islamic shifts, like Turkey. But Egypt, having moved in that direction when the Muslim Brotherhood secured the presidency, is now moving away from it in yet another revolution.

The more things change, the more they stay the same.

The reason that social media became an enabler for the changes we are seeing is that people care. Crowdsourced opinion has an increasing effect on government. Imagine that! Democracy in action. Even in countries that have yet to see democracy.

Let's look at one of the biggest enablers for this: the iPhone.

The iPhone and its effect

Yes, this is one of the biggest vehicles for change because it raised the bar on handheld social media, on internet in your pocket, and on the spread of digital photography. The ability to make a difference was propagated with the iPhone and the devices that copied it. Did Steve Jobs know he was starting this kind of change? He knew it was transformative. And he built ecosystems like iTunes, the App Store, and the iBookstore to make it all work. Without the App Store, we'd all still be in the dark ages of social media. The mobile revolution is here to stay.

Holding the first iPhone was like holding a bit of the future in your hands. It was that far ahead of the pack. Its amazing glass keyboard was met with skepticism from analysts at first, but the public was quick to decide it was just fine for them. A phone that was just a huge glass screen was more than an innovation. It was a revolution.

It's remarkable that Steve Ballmer panned the first iPhone when it came out. By doing that, he drew even more attention to the gamble Apple was making, and in retrospect made himself look amazingly short-sighted. And look where it got him! Microsoft's lack of success in the mobile industry seems predictable, once you see this.

Each new iPhone iteration brings remarkable value. Better telephony (3G quickly became 4G and that quickly became LTE), better sensors (accelerometer, GPS, magnetometer, gyroscope, etc.), and a better camera: improved lenses, flashes, and BSI sensors. Bluetooth connectivity makes it work in our cars. Siri makes it work by voice command. Each new feature is so well-integrated that it just feels like it's been there all along. Now that I have used my iPhone 5S for a while, I feel like the fingerprint sensor is part of what an iPhone means now.

This all-in-one device has led to unprecedented spread of pictures. It and its (ahem, copycat) devices supporting Google's Android and more recently Microsoft's Windows Phone 8 have enabled social media to become ever more present, and influential, in our world.

In 2012, a Nielsen report showed that social media growth is driven largely by mobile devices and the mobile apps made by the social media sites.

Hackers, security, whistleblowers

A battle is being fought in the field of security.

Private hackers have been stealing identities and doing so much more to gain attention, and we know why.

Then hackers began attacking companies and countries, plying their expertise for various causes. The Anonymous and LulzSec groups fought Sony over the restrictiveness of its gaming systems, attacked the despotic regime in Iran, and targeted banks they believed were evil.

Enter the criminal hacking consortia, which build programs like Zeus for constructing and tasking botnets using rootkit techniques, and for perpetrating massive credit card fraud.

Then the nation-state hacking organizations began to do their worst, with targeted viruses like Flame, Stuxnet, and Duqu. Whole military organizations are built, like China's military unit 61398, with the sole task of hacking foreign businesses and governments.

Is anybody safe?

It is very much a sign of the times that the latest iPhone 5S features Touch ID. You just need your fingerprint to unlock it. Biometrics like fingerprints and iris scans (something only you are) are becoming a good method for security engineering. There are so many public hacker attacks that individual security is quickly becoming a major problem.

New techniques for securing your data, like multi-factor authentication, are becoming both increasingly popular and increasingly necessary. Accessing your bank and making a money transfer? Enter the passcode for your account (something only you know), then the bank sends a text message to your trusted phone (something only you have) and you enter the code it contains. The second factor makes the transaction more secure because it is more certain to be you and not some interloper spoofing you.
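That second-factor flow can be sketched in a few lines. This is a toy illustration, not how any particular bank implements it; the six-digit code and the function names here are my own invention.

```python
import secrets
import hmac

def send_second_factor() -> str:
    """Generate a one-time 6-digit code, as a bank might text to your trusted phone."""
    return f"{secrets.randbelow(10**6):06d}"

def verify(expected: str, entered: str) -> bool:
    """Constant-time comparison, so response timing doesn't leak the code."""
    return hmac.compare_digest(expected, entered)

code = send_second_factor()     # "sent" to the phone out of band
assert len(code) == 6 and code.isdigit()
assert verify(code, code)       # the real user echoes the texted code back
```

An interloper who knows only your passcode still has to guess the freshly generated code, which is why the second factor adds real security.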

The landscape of security has been forever changed by the whistleblowers. Whole organizations were built to support them (WikiLeaks) and governments, banks, and corporations were targeted. The release of huge sets included confidential data from the US Military, from the Church of Scientology, from the Swiss Bank Julius Baer, from the Congressional Research Service, and from the NSA, via Edward Snowden.

It is notable that WikiLeaks hasn't released secret information from Russia or China. It is most likely that they would be collectively assassinated were that the case. Especially given such events as the death of Alexander Litvinenko.

The founder of WikiLeaks, Julian Assange, is currently a self-imposed captive in the Ecuadorean embassy in London. In an apparent coup, one of the WikiLeaks members, Daniel Domscheit-Berg, decided to leave WikiLeaks. When he left, he destroyed documents containing America's no-fly list, the collected emails of the Bank of America, insider information from 20 right-wing organizations, and proof of torture in an undisclosed Latin American country (unlikely to be Ecuador, and much more likely to be one of its adversaries, such as Colombia). Domscheit-Berg apparently left to start up his own leaks site, but later decided to merely offer information on how to set one up.

The trend is that the general public (or at least a few highly vocal people) increasingly expect all secrets to be revealed. And yet, I expect that they would highly value their own secrets. This is why there is such a trend towards protecting individual privacy.

The reality is that organizations like WikiLeaks are proud to reveal secrets from western democracies like America, but are reluctant to do so for America's adversaries like Russia. Since this creates an asymmetric advantage, these organizations can only be viewed as anti-American. Even if they aren't specifically anti-American, they inevitably have this effect.

So they are playing for the Russians whether they believe it or not.

Does the whistleblower movement have the inherent potential for disentangling the world political situation? Perhaps in the sense that knots can be cut, like the Gordian Knot. But disentangled? No.

The only way that the knots can be unraveled is if everybody begins to play nice. And I don't really see that happening.

Perhaps Raul Castro will embrace America as an ally now that we have shaken hands. Perhaps Iran will stop its relentless bunker-protected quest for uranium enrichment. Perhaps the Islamic militias in Africa will declare a policy of live-and-let-live with their Christian neighbors and stop the wholesale slaughter.

It's good to be idealistic. In idealism, when it is peace-oriented, we see a chance for change. In the social media revolution we see a chance for the moderate majority to be heard.

Only we can stop the unstoppable now.

Tuesday, October 22, 2013

Knots, Part 3

Knots also intertwine, and sometimes present a bit of complexity when rendering them. Separate ends may be intertwined, as when we tie our shoes. But loops can also intertwine, and this creates a kind of impossible figure because they are most difficult to actually make. As with Borromean rings and the Valknut, we can also use twists and loops.

In the previous post on knots, I included what I considered to be the simplest intertwining of a loop containing a twist.

Here a gray four-leaf clover loop with twists at the corners intertwines with a brown loop with inside twists. This creates a form of duality because the brown loop is really a four-leaf clover loop turned inside-out. The over-under rule is used on each thread to produce a maximally tied figure. A bit of woodcut shading is also used.

Now I'd like to show a natural extension of the figure eight intertwined with a simple loop. I designed this form a few days ago, but it took me a few days to get to a proper rendering. I used the same techniques to produce this as I used in the examples from the previous post. Except that I used a spatter airbrush on a gel layer to create the shading when one thread passes under another.

I used a simple airbrush on a screen layer to create the ribbon highlights. As always, I wish I had more time to illustrate!

But this figure shows how four loops can become intertwined in an interesting way by twisting each loop once.

Each knot I draw starts out as a thin black line on a page. I don't even worry about the crossings and their precedence. I just try to get the form right. The final result is very complex and simple at the same time.

Knots have their stylistic origins in antiquity. They were used for ornament and symbology by the Celts, the Vikings, and the ancient Chinese.

A purple loop with three twists intertwines with a blue circle in this knot.

The shines were created using a lighter, more saturated color and mixed into the gel layer using the Just Add Water brush in Painter. It's a bit like a Styptic pencil and was one of the first mixing brushes I created in Painter in 1991.


Saturday, October 12, 2013

Knots, Part 2

In an earlier post, I talked about knots. And knots are entanglement, there is no doubt. They serve to bind, to secure, to tie down, to hang up, and even to keep our shoes on.

In this post I will talk about knots as a way to entangle two threads. I will continue to use the planar method of showing knots, combined with precedence at crossover points. An over-under rule is used to keep the knots maximally entangled.

In addition, I will show how to draw knots using my drawing style, which is a little bit scratchboard-watercolor, a little bit woodcut, and a lot retro. You can find more of my style (and lots more knots) at my Pinterest artwork board.

The over-under rule characterizes one of the best ways to organize the making of a knot. In its simplest form, you can see a less confusing, more iconic representation.

This knot is a clover interleaved with a ring. The ancient name for the clover symbol is the Saint John's Arms. The clover is used to symbolize places of interest on a map, the command key on Macs, and cultural heritage monuments in Nordic countries, Estonia, and a few other places. This symbol has been around for at least 1500 years.

The other day, while working on a complicated programming problem, I drew such a clover absent-mindedly and suddenly realized that I could pass a ring through its loops, hence this figure. When you draw the clover as a knot, it is also called the Bowen knot.

It seemed like the simplest thing at the time. Then I tried to draw it in its current form: not so easy! After a few hours (off and on) with Painter yesterday I finally had this figure smoothed out in nice outlines. Today I shaded and colored it. Sure, maybe the purple is a bit much, but I like the simple forms and the way they intertwine.

After making this figure originally, I went back to my programming. But there was a nagging question in the back of my head. What was the simplest intertwined figure that had a twist in it? I had to think simple, so I drew an infinity as a twisted bit of rope.

Then I wondered how a ring might enter the picture. I tried one way and then it hit me: use the over-under rule.

This is the figure I ended up with. Now that's much simpler than the first, and iconic in its own way, I think. It could be a logo in an even simpler form. O-infinity? Well, there's nothing like a logo created for no particular reason!

But how are such knots created, really? Is there an easy way?

Start with a line drawing showing the paths of the two threads. This is how I started. I put them at an angle because I drew the oval first. This was a natural angle for me to draw it right-handed.

Then I turned the page and drew the infinity so that the oval passed through each of the figure-eight's loops.

It wasn't exactly symmetric. Though I do like symmetry, I like even more to make my drawings a bit imperfect to show that they are hand-drawn. If I were designing for a logo, though, I'm not sure I'd make the same choice.

Next I drew the figure again, but with an indication (by breaking the lines so they don't quite cross over each other) of which thread is on top and which crosses under.

Here is my first attempt.

But there is a basic flaw: if I were to grab the oval and pull it, it would easily come loose from the figure-eight! Needless to say this wasn't the knot I was looking for so I redrew it again using the tried-and-true over-under rule which states this: as you pass along a thread, it must pass first over and then under the other threads, alternating in succession.

Here is the result of redrawing it. As you can see, it has a much nicer integrity. It seems to be entangled properly.

So now I have a basic plan for the entanglement of the knot. Now I must plan to draw the knot using outlines for each thread. This means that each thread must really be two lines that are parallel to each other. I call this the schematic version.

I use the original line drawing as a guide and draw two lines parallel to the original line, one line on each side. Originally I worked in black ultra-fine Sharpie on thick 32# copy paper.

The wide lines drawing, as you can see, is getting a bit complicated. But fortunately I have a legend for which lines to draw in and which lines to erase: the second hidden-line diagram above.

I use this as a template so I can redraw the image, using only the new wide lines. With this I can create a hidden-line version of the wider knot. It is easy to accomplish this by placing the blank sheet over the original and using it as tracing paper.

Of course when I do this, I avoid drawing the centerline. This keeps the drawing simple. In this way, you can see that the centerline was a for-reference-only diagram for what follows.

Here is the wide hidden-line version. This one is much clearer and certainly much closer to what I was trying to create.

But it is a bit flat, like a road. And the crossings are really dimensionless.

I brought this into Painter and smoothed out the lines, making them a bit more consistent. Then I worked a bit of magic by using my woodcut style.

How do I do that?

I'm glad you asked! At each crossover, I draw three or four lines on the "under" sides of the crossover. Then I draw to create wedges of black that meet very close to the "over" lines. Finally I use a small white brush to sculpt the points of the wedges, making them very pointy.

This simulates what could be created using a V-shaped gouge with linoleum or wood.

Well, this process takes a bit of time. If you count, you can see I had to create about 40 wedges, sculpting each of them into a perfect line or curve. But I am patient.

Sometimes I widened the "under" lines to meet the outermost wedges. This makes a more natural-looking woodcut.

Finally, in Painter I use a gel layer and fill in color on top, filling in each area of the thread using a slightly different color.

This gives me the final result, a unified entanglement of two interesting threads! This result is quite similar to the scratchboard-watercolor look that I like. I used the same technique exactly to create the knot at the top of this post. In past posts, I have used this technique to create many illustrations, of course. I like this look because it's easy to print and it is good for creating logos.

For instance, if I take the plain wide line version and blacken the white background, I get a version that can be manipulated into a logo form. After that, I invert the colors of the image and that gives me a clean black logo on white. Then I use a layer in Screen mode to colorize the black segments of the threads.

Here is a logo version of the knot, expressed in colorful tones. But this won't do for O-infinity at all! It might easily be an O in purple and the figure-eight in navy blue. On black.

But that's not my idea of a good company name, so I will leave it like this!

There are plenty of styles for redrawing this knot that make interesting illustrations.

This one is not a knot, really. But it is an interesting redrawing of the figure.

This is called an inline treatment.

Remember the Neuland Inline font that was used for the movie Jurassic Park?

This figure can be used as the start of about 100 different illustrations, depending upon which crossings you want to black in or erase.

I tried several before I realized that it wasn't the direction I wanted to go with the logo.

Trial-and-error is often the way with creativity!

I have other knots I'd like to draw, but they certainly do take time! It's good to be drawing again.

Sunday, October 6, 2013

Bigger Pixels

What is better? Bigger pixels or more megapixels? In this blog post, I will explain all. The answer may not be what you think it is!

Image sensors

Digital cameras use image sensors, which are rectangular grids of photosites mounted on a chip. Most image sensors today in smartphones and digital cameras (intended for consumers) employ a CMOS image sensor, where each photosite is a photodiode.

Now, images on computers are made up of pixels, and fortunately so are sensors. But in the real world, images are actually made up of photons. This means that, like the rods and cones in our eyes, photodiodes must respond to stimulation by photons. In general, the photodiodes collect photons much in the way that our rods and cones integrate the photons into some kind of electrochemical signal that our vision can interpret.

A photon is the smallest indivisible unit of light. So, if there are no photons, there is no light. But it's important to remember that not all photons are visible. Our eyes (and most consumer cameras) respond only to the visible spectrum of light, roughly between wavelengths of 400 nanometers and 700 nanometers. This means that any photon that we can see will have a wavelength in this range.


The light that we can see has color to it. This is because each individual photon has its own energy that places it somewhere on the electromagnetic spectrum. But what is color, really? Perceived color gives us a serviceable approximation to the spectrum of the actual light.

Objects can be colored, and lights can be colored. But, to determine the color of an object, we must use a complicated equation that involves the spectrum of the light from the light source and the absorption and reflectance spectra of the object itself. This is because light can bounce off, be scattered by, or transmit directly through any object or medium.

But it is cumbersome to store light as an entire spectrum. And, since a spectrum is actually continuous, we must sample it. And this is what causes the approximation. Sampling is a process by which information is lost, of course, by quantization. To avoid this loss, we convolve the light spectrum with color component spectra to create the serviceable, reliable color components of red, green, and blue. The so-called RGB color representation is trying to approximate how we sense color with the rods and cones in our eyes.

So think of color as something three-dimensional. But instead of X, Y, and Z, we can use R, G, and B.
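The spectrum-to-RGB convolution described above can be sketched as follows. The bell-shaped sensitivity curves here are stand-ins of my own choosing, not the real CIE matching functions or cone responses; the point is only the shape of the computation.

```python
import math

def bell(wavelength_nm, center_nm, width_nm=50.0):
    """A stand-in sensitivity curve (Gaussian), not a real cone response."""
    return math.exp(-((wavelength_nm - center_nm) ** 2) / (2 * width_nm ** 2))

def rgb_from_spectrum(spectrum):
    """Collapse sampled spectral power, given as (wavelength_nm, power) pairs,
    into three components by weighting with R, G, and B curves."""
    r = sum(p * bell(w, 600) for w, p in spectrum)
    g = sum(p * bell(w, 550) for w, p in spectrum)
    b = sum(p * bell(w, 450) for w, p in spectrum)
    return r, g, b

# A narrow-band light at 550 nm stimulates mostly the green component.
r, g, b = rgb_from_spectrum([(550, 1.0)])
assert g > r > b
```

The continuous spectrum is reduced to three numbers, which is exactly the serviceable approximation the text describes.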

Gathering color images

The photons from an image are all mixed up. Each photodiode really just collects photons and so how do we sort out the red photons from the green photons from the blue photons? Enter the color filter array.

Let's see how this works.

Each photosite is really a stack of items. On the very top is the microlens.

The microlenses are a layer of entirely transparent material that is structured into an array of rounded shapes. Bear in mind that the dot pitch is typically measured in microns, so this means that the rounding of the lens is approximate. Also bear in mind that there are millions of them.

You can think of each microlens as rounded on the top and flat on the bottom. As light comes into the microlens, its rounded shape bends the light inwards.

The microlens, as mentioned, is transparent to all wavelengths of visible light. This means that an infrared- and ultraviolet-rejecting filter might be required to get true color; the colors will become contaminated otherwise. It is also possible, with larger pixels, that an anti-aliasing filter, usually consisting of two extremely thin layers of lithium niobate, is sandwiched above the microlens array.

Immediately below the microlens array is the color filter array (or CFA). The CFA usually consists of a pattern of red, green, and blue filters. Here we show a red filter sandwiched below.

The CFA is usually structured into a Bayer pattern. This is named after Bryce E. Bayer, the Kodak engineer who thought it up. In this pattern, there are two green pixels, one red pixel, and one blue pixel in each 2 × 2 cell.

A microlens' job is to focus the light at the photosite into a more concentrated region. This allows the photodiode to be smaller than the dot pitch, making it possible for smaller fill factors to work. But a new technology, called Back-Side Illumination (BSI), makes it possible to put the photodiode as the next thing in the photosite stack. This means that the fill factors can be quite a bit larger for the photosites in a BSI sensor than for a Front-Side Illumination (FSI) sensor.

The real issue is that not all light comes straight into the photosite. This means that some photons are lost. So a larger fill factor is quite desirable in collecting more light and thus producing a higher signal-to-noise ratio (SNR). Higher SNR means less noise in low-light images. Yep. Bigger pixels means less noise in low-light situations.

Now, the whole idea of a color filter array consists of a trade-off of color accuracy for detail. So it's possible that this method will disappear sometime in the (far) future. But for now, these patterns look like the one you see here for the most part, and this is the Bayer CFA pattern, sometimes known as an RGGB pattern. Half the pixels are green, the primary that the eye is most sensitive to. The other half are red and blue. This means that there is twice the green detail (per area) as there is for red or blue detail by themselves. This actually mirrors the density of rods vs. cones in the human eye. But in the human eye, the neurons are arranged in a random speckle pattern. By combining the pixels, it is possible to reconstruct full detail, using a complicated process called demosaicing. Color accuracy is, however, limited by the lower count of red and blue pixels and so interesting heuristics must be used to produce higher-accuracy color edges.
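Here is a sketch of the RGGB sampling itself (demosaicing is far more involved and is omitted). The image representation, rows of (r, g, b) tuples, is just for illustration.

```python
def bayer_mosaic(rgb):
    """Sample a full-color image (rows of (r, g, b) tuples) through an RGGB
    Bayer filter: each photosite keeps only the channel its filter passes."""
    mosaic = []
    for y, row in enumerate(rgb):
        out = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0 and x % 2 == 0:
                out.append(r)          # red filter site
            elif y % 2 == 1 and x % 2 == 1:
                out.append(b)          # blue filter site
            else:
                out.append(g)          # green: two sites per 2x2 cell
        mosaic.append(out)
    return mosaic

# A uniform patch: the mosaic records R at (0,0), B at (1,1),
# and G at the two remaining sites of the 2x2 cell.
patch = [[(0.9, 0.5, 0.1)] * 2 for _ in range(2)]
assert bayer_mosaic(patch) == [[0.9, 0.5], [0.5, 0.1]]
```

Half the samples are green, which is the extra per-area green detail the paragraph describes.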

How much light?

It's not something you think about every day, but the aperture controls the amount of light let into the camera. The smaller the aperture, the less light the sensor receives. Apertures are measured in f-stops. The lower the f-stop, the larger the aperture. The area of the aperture, and thus the amount of light it lets in, is proportional to the reciprocal of the f-stop squared. For example, after some calculation, we can see that an f/2.2 aperture lets in 19% more light than an f/2.4 aperture.
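That 19% figure falls directly out of the reciprocal-square relationship:

```python
def relative_light(f_stop_a, f_stop_b):
    """How much more light aperture a admits than aperture b.
    Light admitted is proportional to 1 / f_stop**2."""
    return (f_stop_b / f_stop_a) ** 2

# f/2.2 versus f/2.4: about 19% more light.
extra = relative_light(2.2, 2.4) - 1.0
assert round(extra * 100) == 19
```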

Images can be noisy. This is generally because there are not enough photons to produce a clear, continuous-tone image, and even more because the arrival time of the photons is random. So, the general rule is this: the more light, the less noise. We can control the amount of light directly by increasing the exposure time. Increasing the exposure time lets more photons into the photosites, which dutifully collect them until told not to do so. The randomness of the arrival time is less a factor when the exposure time increases.

Once we have gathered the photons, we can control how bright the image is by increasing the ISO. Now, ISO is just another word for gain: a volume knob for the light signal. We crank up the gain when our subject is dark and the exposure is short. This restores the image to a nominal apparent amount of brightness. But this happens at the expense of greater noise because we are also amplifying the noise with the signal.

We can approximate these adjustments by using the sunny 16 rule: on a sunny day, at f/16, with ISO 100, we use about 1/100 of a second (the reciprocal of the ISO) to get a correct image exposure.

The light product is this:

(exposure time * ISO) / (f-stop^2)

This means the nominal exposure time can be found for a given ISO and f-stop by measuring the light and dividing out the other factors.

If you have the exposure time as a fixed quantity and you are shooting in low light, then the ISO gets increased to keep the image from being underexposed. This is why low-light images have increased noise.
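Here is that bookkeeping as a tiny sketch, using 1/100 s as the sunny-day shutter speed that pairs with ISO 100:

```python
def light_product(exposure_s, iso, f_stop):
    """The light product from the text: (exposure time * ISO) / f-stop^2."""
    return exposure_s * iso / f_stop ** 2

def exposure_time(target, iso, f_stop):
    """Solve the light product for exposure time at a chosen ISO and f-stop."""
    return target * f_stop ** 2 / iso

# Sunny 16: f/16, ISO 100, ~1/100 s gives a nominal exposure.
nominal = light_product(1 / 100, 100, 16)

# The same nominal exposure at f/2.2 needs a much shorter exposure time.
t = exposure_time(nominal, 100, 2.2)
assert t < 1 / 100
```

Hold the exposure time fixed in low light, and the only variable left to raise is the ISO, which is where the noise comes from.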

Sensor sensitivity

The pixel size actually does have some effect on the sensitivity of a single photosite in the image sensor. But really it's more complicated than that.

Most sensors list their pixel sizes by the dot pitch of the sensor. Usually the dot pitch is measured in microns (a micron is a millionth of a meter). When someone says their sensor has a bigger pixel, they are referring to the dot pitch. But there are more factors affecting the photosite sensitivity.

The fill factor is an important thing to mention, because it has a complex effect on the sensitivity. The fill factor is the amount of the array unit within the image sensor that is devoted to the surface of the photodiode. This can easily be only 50%.

The quantum efficiency is related to the percentage of photons that are captured of the total that may be gathered by the sensor. A higher quantum efficiency results in more photons captured and a more sensitive sensor.

The light-effectiveness of a pixel can be computed like this:

DotPitch^2 * FillFactor * QuantumEfficiency

Here the dot pitch squared represents the area of the array unit within the image sensor. Multiply this by the fill factor and you get the actual area of the photodiode. Multiply that by the quantum efficiency and you get a feeling for the effectiveness of the photosite, in other words, how sensitive the photosite is to light.
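As a sketch, with fill factors and quantum efficiencies that are purely illustrative:

```python
def light_effectiveness(dot_pitch_um, fill_factor, quantum_efficiency):
    """Relative light-gathering effectiveness of a photosite (arbitrary units):
    dot pitch squared (array unit area) * fill factor * quantum efficiency."""
    return dot_pitch_um ** 2 * fill_factor * quantum_efficiency

# Illustrative numbers only: a 1.5-micron BSI pixel with a generous fill
# factor versus a 1.4-micron FSI pixel with a 50% fill factor.
bsi = light_effectiveness(1.5, 0.9, 0.5)
fsi = light_effectiveness(1.4, 0.5, 0.5)
assert bsi > fsi
```

The comparison shows why dot pitch alone doesn't tell the whole sensitivity story.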

Megapixel mania

For years it seemed like the megapixel count was the holy grail of digital cameras. After all, the more megapixels the more detail in an image, right? Well, to a point. Eventually, the amount of noise begins to dominate the resolution. And a little thing called the Airy disc.

But working against the megapixel mania effect is the tiny sensor effect. Smartphones are getting thinner and thinner. This means that there is only so much room for a sensor, depth-wise, owing to the fact that light must be focused onto the plane of the sensor. This affects the size of the sensor package.

The granddaddy of megapixels in a smartphone is the Nokia Lumia 1020, which has a 41MP sensor with a dot pitch of 1.4 microns. This increased sensor size means the phone has to be 10.4mm thick, compared to the iPhone 5S, which is 7.6mm thick. The extra glass in the Zeiss lens means it weighs in at 158g, compared to the iPhone 5S, which is but 115g. The iPhone 5S features an 8MP BSI sensor, with a dot pitch of 1.5 microns.

While 41MP is clearly overkill, they do have the ability to combine pixels, using a process called binning, which means their pictures can have lower noise still. The iPhone 5S gets lower noise by using a larger fill factor, afforded by its BSI sensor.

But it isn't really possible to make the Lumia 1020 thinner because of the optical requirements of focusing on the huge 1/1.2" sensor. Unfortunately, thinner, lighter smartphones are definitely the trend.

But, you might ask, can't we make the pixels smaller still and increase the megapixel count that way?

There is a limit, where the pixel size becomes effectively smaller than the wavelength of light. This is called the sub-diffraction limit. In this regime, the wave characteristics of light begin to dominate and we must use wave guides to improve the light collection. The Airy disc creates this resolution limit. This is the diffraction pattern from a perfectly focused, infinitely small spot. This (circularly symmetric) pattern defines the maximum amount of detail you can get in an image from a perfect lens using a circular aperture. The lens being used in any given (imperfect) system will have a larger Airy disc.

The size of the Airy disc defines how many more pixels we can have with a specific size sensor, and guess what? It's not many more than the iPhone has. So the Lumia gets more pixels by growing the sensor size. And this grows the lens system requirements, increasing the weight.

It's also notable that, because of the Airy disc, decreasing the size of the pixel may not increase the resolution of the resultant image. So you have to make the sensor physically larger. And this means: more pixels eventually must also mean bigger pixels and much larger cameras. Below a 0.7 micron dot pitch, the wavelength of red light, this is certainly true.
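The standard formula for the Airy disc diameter (to the first dark ring) is 2.44 λN, which makes the comparison with dot pitch concrete:

```python
def airy_disc_diameter_um(wavelength_um, f_stop):
    """Diameter to the first dark ring of the Airy pattern: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_stop

# Green light (0.55 micron) through an f/2.2 lens: roughly 3 microns,
# already about twice a modern smartphone's 1.5-micron dot pitch.
d = airy_disc_diameter_um(0.55, 2.2)
assert 2.9 < d < 3.1
```

With the diffraction spot already spanning two pixels, shrinking the pixels further buys little real resolution.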

The human eye

Now, let's talk about the actual resolution of the human eye, computed by Clarkvision to be about 576 megapixels.

That seems like too large a number, and actually it seems ridiculously high. Well, there are about 100 million rods and only about 6-7 million cones. The rods work best in our night vision because they are so incredibly low-light adaptive. The cones are tightly packed in the foveal region, and really only work in lighted scenes. This is the area we see the most detail with. There are three kinds of cones and there are more red-sensitive cones than any other kind. Cones are usually called L (for large wavelengths), M (for medium wavelengths), and S (for small wavelengths). These correspond to red, green, and blue. The color sensitivity is at a maximum between 534 and 564 nanometers (the region between the peak sensitivities of the L and M cones), which corresponds to the colors between lime green and reddish orange. This is why we are so sensitive to faces: the face colors are all there.

I'm going to do some new calculations to determine how many pixels the human eye actually does see at once. I am defining pixels to be rods and cones, the photosites of the human eye. The parafoveal region is the part of the eye you get the most accurate and sharp detail from, with about 10 degrees of diameter in your field of view. At the fovea, the place with the highest concentration, there are 180,000 rods and cones per square millimeter. This drops to about 140,000 rods and cones at the edge of the parafoveal region.

One degree in our vision maps to about 288 microns on the retina. This means that 10 degrees maps to about 2.88 mm on the retina. It's a circular field, so this amounts to 6.51 square millimeters. At maximum concentration with one sensor per pixel, this would amount to 1.17 megapixels. The 10 degrees makes up about 0.1 steradians of solid angle. The human field of vision is about 40 times that at 4 steradians. So this amounts to 46.9 megapixels. But remember that the concentration of rods and cones falls off at a steep rate with the distance from the fovea. So there are at most 20 megapixels captured by the eye in any one glance.
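The arithmetic above can be reproduced in a few lines of Python; the densities, the 288-micron conversion, and the 40x field ratio are the same figures used in the text:

```python
import math

density_fovea = 180_000              # rods + cones per square millimeter
microns_per_degree = 288             # retinal distance per degree of view

diameter_mm = 10 * microns_per_degree / 1000    # 10-degree field -> 2.88 mm
area_mm2 = math.pi * (diameter_mm / 2) ** 2     # ~6.51 square millimeters

parafoveal_px = density_fovea * area_mm2        # ~1.17 million photosites
whole_field_px = parafoveal_px * 40             # full field vs. 10-degree patch

print(f"{area_mm2:.2f} mm^2 -> {parafoveal_px / 1e6:.2f} MP -> "
      f"{whole_field_px / 1e6:.1f} MP")
```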

It is true that the eye "paints" the scene as it moves, retaining the information for a larger field of view as the parafoveal region sweeps over the scene being observed. It is also true that the human visual system has sophisticated pattern matching and completion algorithms wired in. This probably increases the perceived resolution, but not by more than a factor of two by area.

So it seems unlikely that the human eye's resolution can exceed 40 megapixels. But of course we have two eyes, and there is significant overlap between them. Perhaps we can increase the estimate by 20 percent, to 48 megapixels.

If you work out the pixel density of a retina display at normal viewing distance and extrapolate to the whole field of view, this is pretty close to what you get.

So this means that a camera that captures the entire field of view that a human eye can see (some 120 degrees horizontally and 100 degrees vertically, in a sort of oval shape) could have 48 megapixels, and you could look anywhere on the image and be fooled. If the camera were square, it would probably have to be about 61 megapixels to hold a 48-megapixel oval inside. So that's my estimate of the resolution required to fool the human visual system into thinking it's looking at reality.
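The oval-to-square step is just an area ratio: an inscribed oval fills π/4 of its bounding square, so:

```python
import math

oval_mp = 48.0                       # megapixels estimated for the oval field
square_mp = oval_mp / (math.pi / 4)  # bounding square must be 4/pi larger
print(f"{square_mp:.0f} MP")
```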


That's a lot of detail about the human eye and sensors! Let's sum it all up. Given Airy disc size and lens capabilities, making a valid image at human-eye resolution would take a camera and lens system about the size and depth of the human eye itself! Perhaps by making sensors smaller and improving optics to be flexible like the human eye, we can make it twice as good at half the size.

But we won't be able to put that into a smartphone, I'm pretty sure. Still, improvements in lens quality, BSI sensors, waveguide technology, noise reduction, and signal processing continue to push our smartphones to ever-increasing resolution and clarity in low-light situations. We will probably need cameras with monochromatic (rod-like) sensors to compete with the human eye in low-light scenes. The human retinal system is just that low-light adaptable!

Apple and others have shown that cameras can be made smaller and smaller, such as the excellent camera in the iPhone 5S, which has great low-light capabilities and a two-color flash for better chromatic adaptation. Nokia has shown that a high-resolution sensor can be placed in bigger, thicker, heavier phones that have the flexibility for binning and better optics, pushing smartphone cameras ever closer to human-eye capabilities.

Human eyes are hard to fool, though, because they are connected to pattern-matching systems inside our visual system. Look for image interpretation and clarification algorithms to make the next great leap in quality, just as they do in the human visual system.

So is it bigger pixels or simply more of them? No, the answer is better pixels.

Friday, August 23, 2013

Observing Microsoft, Part 4

This day is an interesting one for Microsoft. First, Ballmer sends out a letter to employees stating that he will resign within 12 months. Then it is announced that a committee of the Microsoft board, including Bill Gates of course, has the responsibility of finding a new CEO. And no, I suspect that Ballmer is not on that committee.

Some writers are saying that Microsoft is not forcing Ballmer out. But think about it. If you had to get rid of a failed CEO who owned 333 million shares of your company's stock, what would you do? It was most certainly a negotiated force-out. With a legal release. And probably some kind of honorary employment that requires Ballmer to only sell within certain windows of time and keeps him on a leash.

Welcome to the mobile revolution.

I must say that this change is way too late. After all, people were already clamoring to fire Ballmer in 2010. And it doesn't clean things up soon enough. Obviously Microsoft's board of directors should have been doing this for the last several years!

The reorganization that Ballmer has been carrying out seems like a smart idea, except that it is trying to make a silk purse out of a sow's ear. It's made for the PC era, which is slowly fading away. Still, the new organization is probably one less thing that a new CEO will have to worry about. That is: if he accepts this vision for the new Microsoft, a vision that depends upon Microsoft succeeding in the mobile revolution. And even with the reorg, Microsoft has a corporate culture that can't simply turn on a dime.

And Windows is exactly the problem.

Energy Efficiency

The mobile revolution has created two very interesting trends in the computing landscape. These are battery longevity and cloud computing. In order for batteries to last a long time, the products they power must be energy-efficient in a system-wide way. In order for cloud computing, with its massive compute farms, to be cost-effective, each server must be singularly power-efficient and generate as little heat as possible since cooling is a power consumption concern as well.

Of course battery longevity also affects electric cars like the Tesla. But, when it comes to computing, the battery longevity comes from three sources: more efficient batteries, hardware systems where power efficiency is an integral part of their design, and finally the economical use of resources in software. In the cloud computing arena, instead of more efficient batteries we are concerned with heat dissipation and cooling strategies.

More efficient batteries are a great thing, when you can get them. But advances in supercapacitors and carbon nanotube electrodes on various substrates have yet to pan out. This means that hardware systems such as SoCs (Systems on a Chip) must be designed with power efficiency in mind. Power management solutions that allow parts of a chip to turn themselves off on demand are one way to help.

Even at the chip level, you can send signals between the various components of an SoC using power-efficient transmission. For example, the MIPI M-PHY physical layer enables lower power consumption even for the high-frequency data transfers that usually chew up so much power. Consider using a camera and processing the data on-chip, or using a scaler that operates from/to on-chip memory. These applications involve images, which are huge resource hogs and must be specially considered in order to save significant amounts of power.

But there's more to this philosophy of power management, and this gets to the very heart of why SoC-based gadgets are so useful in this regard. General tasks that use power by processing large amounts of data are handled increasingly by specialized areas of the SoC. Like image scaling and resampling. Like encrypting and decrypting files. Like processing images from the onboard cameras. Like display processing and animation processing. Like movie codec processing. Each of these applications of modern gadgets is a resource hog. So they must be optimized for power efficiency from the very start, or else batteries simply won't last as long.

Of course, you could simply use a bigger battery. Which makes the product larger. And less elegant!


So what is the problem with Windows? The Wintel architecture wasn't built from the ground up for power efficiency. Or for distributed specialized computing, the way so many gadgets are constructed these days. And now you can see what a daunting process this must be for Microsoft engineers, who basically have to start over to get the job done. It will take quite a bit of time to get Windows to run on an SoC. Almost all implementations of Windows today are built to run on discrete CPUs. The Surface Pro appears to use a regular CPU board with a stock Intel part.

You see, power efficiency isn't just a hardware problem to solve. The software must also have this in mind with everything it does. The consumption of resources is a serious issue with any operating system, and affects the user experience in a huge way. I can't even begin to go into the legacy issues with the Windows operating system. The only way is to rewrite it. One piece at a time.

This problem has led many companies who lead the cloud computing initiatives to use Linux for their server operating systems. Mostly because it can easily be tailored for power efficiency. The server operating system share of Unix-based operating systems is 64%, compared to about 36% for Windows.

Servers are almost certainly going to go the way of the SoC also, with dedicated processors doing the expensive things like video codec processing, web page computation, image processing, etc. But I do see multiple cores and multithreading still being useful in the server market.

But not if they increase the power requirements of the system.

On mobile devices, Windows hasn't done so well either. Windows Phone probably has less than 3% of the mobile space, if that.

The Surface never clicked

Why didn't the Surface RT and the Surface Pro tablets succeed? First off, it's possible that they are simply yet to succeed. I just had to say that.

But more likely they will never succeed. It's hard to move into a market where your competitors have been working on the hardware solutions for years. And when hardware isn't your expertise.

At first, the Surface marketing campaign was all flash and no substance. A video of dancers clicking their tablet covers into their Surface tablets was criticized by more than a few bloggers as vacuous. The main problem was that it stressed the expensive keyboard cover while skirting the issue that the cover is all but required. With the cover, the Surface tablet becomes just a crappy laptop. That you can't really use on your lap, because of the kickstand. Their follow-up video was curt and to the point, but sounds a bit like propaganda, saying "Surface is yours. Your way of working. Your way of playing".

Yeah. Trying to get into the mind of their prospective users.

But it's clear that their strategies were simply not working, because they went to the old adage "if we don't look good, then maybe we should just make them look bad" and started releasing anti-iPad ads. The first one used Siri's voice to sum it up: "do you still think I'm pretty?". They compared the price of the legendary iPad to that of a Surface RT without a cover. I suspect that a Surface RT without a keyboard cover is pretty much useless. The next anti-iPad ad compared features in a less quirky way. But anybody using a Surface RT knew that it didn't support the apps that the iPad has, or really have any of the advanced iOS/iTMS ecosystem in place. Without the keyboard cover it was cheaper, certainly. But you really had to have the cover to get full functionality.

So Microsoft decided to drop the price. This was echoed in the nearly $1-billion charge they took that quarter. Then they followed up by dropping the price of the Surface Pro! It seems desperate to sell their inventory. Otherwise they will be taking another huge charge against Windows revenues like before.

Friday, July 19, 2013

Observing Microsoft, Part 3

When a company chooses a strategy, it is usually important that the strategy must make sense given its existing business model. A strategy of changing the business model, however, is a much harder one to implement and takes years. And that's one of the reasons why I'm observing Microsoft.

OMG there's so much to catch up on! But it's clear the trends I was referring to in my previous installments are being realized. To start with, I looked at their Surface and Windows 8 strategy, and then I looked at their management of the Windows brand, and its subsequent performance in the crucial holiday season.

Converting themselves into a hardware company, in the Apple model, is sheer madness for a software company like Microsoft. It will kill off their business model very quickly, I think. And yet they continue to do it, company culture be damned.

Ballmer is a coach personality, and clearly business looks like a football game to him. I can imagine him saying "if a strategy is not working against our opponent, then we must change it up". But it's clear that it's much easier to do this with a football team than it is to do the same with a company of 100K employees.

So I wonder why Microsoft doesn't just focus on making business simpler? Instead, they have been making it more and more complex by the ever-expanding features of Office, their business suite.

Software, hardware, nowhere

As one of Steve Jobs' favorite artists, Bob Dylan, once said, "the times they are a-changin'". And Steve knew it, too. At the D8 conference in 2010, Steve said that the transition away from PCs in the post-PC era had begun and that it would be uncomfortable for a few of its players. I took this to mean Microsoft, particularly. But how has it played out so far?

Microsoft is a software company that dabbles in hardware. Most of its revenues come from software, but remember that they make keyboards and mice and also a gaming console. These are only dabbling though, because the real innovation and money is to be made in gadgets like phones, tablets, and laptops. But their OEMs make gadgets, which requires a significantly greater level of expertise and design sense. So Microsoft's entry into gadgets can only represent their desire to sell devices, not licenses. They want to be like Apple, but specifically they want to own the mobile ecosystem and sit on top of a pile of cash that comes from device revenues. And the OEMs like HP, Lenovo, Dell, Acer, and Asus are a bit left out; they must compete with their licensor. That can't be good.

So Microsoft is clearly changing its business model to sell hardware and to build custom software that lives on it. Hence Surface RT and Surface Pro. But their first quandary must be a hard one: what can they possibly do with Windows? Windows 8 is their first answer. Unfortunately, the live-tile "Metro" interface is Greek to existing Windows users, and the user experience, with no Start menu, must seem like an alien language to them.

This entire process is beginning to look like a debacle. If it all continues to go horribly wrong, the post-PC era could happen a lot sooner than Steve thought.

Microsoft ignores their core competence as they blithely convert themselves to a hardware company. Specifically, I think that's why they are doing it badly.

They could end up nowhere fast.

Microsoft's numbers

Microsoft is a veritable revenue juggernaut and has done a fairly good job of diversifying their business.  An analysis of Q4 2012 reveals the following breakdown of their business units in revenue out of an $18.05B pie:

23% Windows and Windows Live
28% Server and Tools
35% Business
4% Online Services
10% Entertainment and Devices

This reveals that business is their strongest suit. Servers also speak to the business market. Online services also largely serve businesses. Each division, year over year, had the following increase or decrease as well:

-12.4% Windows and Windows Live
+9.7% Server and Tools
+7.3% Business
+8.1% Online Services
+19.5% Entertainment and Devices

This reveals that Xbox is their fastest-growing area. It is believed that Xbox is leaving the PowerPC and moving to AMD cores and their Radeon GPUs. This could be a bit disruptive, since old games won't work. But most games are developed on the x86/GPU environment these days.

It also shows that their Windows division revenue was down 12.4% during the quarter year over year. This involved a deferral of revenue related to Windows 8 upgrades. Umm, revenue which most likely hasn't materialized, and so you can take the 12.4% as a market contraction.

Why is the market contracting? Disruption is occurring. The tablet and phone market is moving the user experience away from the desktop. That's what the post-PC era really is: the mobile revolution. Tablet purchases are offsetting desktop and laptop PC purchases. And most of those are iPads. It gets down to this: people really like their iPads. It is a job well done. People could live without them, but they would rather not, and that is amazing given that it has only been three years since the iPad was released.

The consequence of this disruption is that PC sales are tumbling. If you dig a little deeper, you can find this IDC report that seems to be the most damning. Their analysis is that Windows 8 is actually so bad that people are avoiding upgrades and thus it is accelerating the PC market contraction. On top of the economic downturn that has people waiting an extra year or two to upgrade their PC.

Microsoft CEO Steve Ballmer stated in September 2012 that in one year, 400 million people would be running Windows 8. To date, it appears that only 80 million have upgraded (or been forced to use it because unfortunately it came installed on their new PC). That's why I said we need to ignore that deferred revenue, by the way.

If you look at OS platforms, Microsoft's future is clearly going to be on mobile devices. Yet they are not doing so well in mobile. In fact, they are becoming increasingly irrelevant, with about 80% of their Windows Phone models on only one manufacturer, Nokia. Soon, I think they may simply have to buy Nokia to prevent them from going to Android.

In the end, you can't argue with the numbers. The PC market is contracting, as evidenced by Windows revenue declining year-over-year. Tablets are not a fad. As the PC market contracts there are several companies that stand to lose a lot.


What is the Microsoft reorganization about? There are three things that I single out.

The first and most noticeable is that the new organization puts each division across devices, so that software development is not device-compartmentalized and Windows for the desktop is written by the same people who write Windows for the devices. At least in principle.

And, of course, games are now running on mobile devices, dominating the console market. And undercutting the prices.

This closely mirrors what Apple has been doing for years. And this clearly points out that Microsoft is envious of the Apple model and its huge profitability.

Second, in reorganizing, Microsoft is able to adjust the reporting of their financial data, to temporarily obfuscate the otherwise embarrassing results of market contraction. This is because if each division reports across devices then the success of a new device will hide the contraction of the old ones. At least, in theory.

But Microsoft made a huge bet in the Surface with Windows RT. And it's not panning out. They have just reported that they had to write off $900M of Surface RT inventory in the channel. The translation is this: it's not selling. They have instituted a price drop for Surface RT. I bet they won't be able to give them away. But when they finally are forced to, they will be the laughing stock of the mobile market.

Today, Microsoft is down 11%. That represents a correction, a re-realization of the capitalization of Microsoft. It reflects a widely-held perception that the consumer market is lost to them.

Third, Ballmer wants the culture of Microsoft to change. They have been having problems between competing divisions. Coach, get your team on the same page! Wait: they should have been on the same page all along. After all, the iPhone came out in 2007, right? Ballmer didn't think too much of it at the time. That's why coaches hire strategy consultants.

A reorg can be even more traumatic than a merger. It's all about culture, which is the life blood of a company. It's what keeps people around in a job market that includes Google and Apple.

Monkey business

I have to give it to Microsoft: they really want to give their tablet market a chance. But they are doing it at the expense of their business market. They are reportedly holding off on their Office for Mac and iOS until 2014. A deeper analysis is here.

This is a big mistake. They need to build that revenue now because BYOD (bring your own device) is on the rise and they need to be firmly in the workplace, not made irrelevant by other technology. If they lag, then other software developers that are a lot more nimble will supplant them in the mobile space. Apple, for instance, offers Pages and Numbers as part of their iWork suite. And those applications read Word and Excel files. And they can also be used for editing and general work.

Microsoft should be focusing on making business simpler. Cut down on the complexity and teach it to the young people. Reinvent business. This entails making business work in the meeting room with tablets and phones. Making business work in virtual meetings.

They certainly had better make their software simpler and easier to use. They must concentrate on honing their main area of expertise: software.

If they don't do it, then somebody else will. Microsoft should stop all this monkey business, trim the fat, and concentrate on what adds the most value. They simply have to stop boiling the ocean to come up with the gold.

The moral

There are some morals to this story. First, don't ever let "coach" run a technology company. Second, focus on your core competence. Third, and most important, create the disruption rather than react to it.

Wednesday, June 26, 2013

Weaponized Computation

Ever since the early 20th century when primitive analog computers were built to help compute solutions for naval gunnery fire control and increasing bomb accuracy, computing machinery has been used for weaponry. This trend continues to accelerate into the 21st century and has become an international competition.

Once upon a time

I had an early gift for mathematics and understanding three-dimensional form. When I was 16 or so, I helped my dad understand and then solve specific problems in spherical trigonometry. It eventually became clear to me that I was helping him verify circuitry specifically designed for suborbital mechanics: inertial guidance around the earth. Later I found out in those years he was working on the Poseidon SLBM for Lockheed, so, without completely understanding it, I was actually working on weaponized computation.

This is the period of my life where I learned about the geoid: the specific shape of the earth, largely an oblate ellipsoid. The exact shape depends upon gravitation, and thus mass concentrations (mascons). Lately the gravitational envelope of the moon caused by mascons has been an issue for the Lunar Orbiters.

At that point in history, rocket science was quite detailed and contained several specialized areas of knowledge, many of which were helped by increasingly complex calculations. But there have been other fields that couldn't have advanced, where specific problems couldn't be solved, without the advances in computation. Ironically, some basic advances in computation we enjoy today owe their very existence to these problems. Consider this amazing article that details the first 25 years or so of the supercomputing initiatives at Lawrence Livermore National Laboratory.


Throughout our computing history, computation has been harnessed to aid our defense by helping us create ever more powerful weapons. During the Manhattan Project at Los Alamos, Stanley Frankel and Eldred Nelson organized the T-5 hand-computing group, a calculator farm populated with Marchant, Friden, and Monroe calculators and the wives of the physicists entering data on them. This group was arranged into an array to provide one of the first parallel computation designs, using Frankel's elegant breakdown of the computation into simpler, more robust calculations. Richard Feynman, a future Nobel prize winner, actually learned to fix the mechanical calculators so the computation could go on unabated by the huge time sink of having to send them back to the factory for repair.

I was fortunate enough to be able to talk with Feynman when I was at Caltech, and we discussed group T-5, quantum theory, and how my old friend Derrick Lehmer was blacklisted for having a Russian wife. He told me that Stanley Frankel was also blacklisted. Also, I found 20-digit Friden calculators particularly useful for my computational purposes when I was a junior in High School.

The hunger for computation continued when Edward Teller began his work on the Super, a bomb organized around thermonuclear fusion. This led John von Neumann, when he became aware of the ENIAC project, to suggest that the complex computations required to properly understand thermonuclear fusion could be carried out on one of the world's first electronic computers.


In the history of warfare, codebreaking has proven itself to be of primary strategic importance. It turns out that this problem is perfectly suited to solution using computers.

One of the most important first steps in this area was taken at Bletchley Park in Britain during World War II. There, in 1939, Alan Turing designed the Bombe, an early electromechanical computer built specifically to break the cipher and recover the daily settings used in the German Enigma machine.

This effort required huge amounts of work and resulted in the discovery of several key strategic bits of information that turned the tide of the war against the Nazis.

The mathematical analysis of codes and encoded information is the science of cryptanalysis, and the work on it is never-ending. At the National Security Agency's Multiprogram Research Facility in Oak Ridge, Tennessee, hundreds of scientists and mathematicians work to construct faster and faster computers for cryptanalysis. And of course there are other special projects.

That seems like it would be an interesting place to work. Except there's no sign on the door. Well, this is to be expected since security is literally their middle name!

And the NSA's passion for modeling people has recently been highlighted by Edward Snowden's leaks of a slide set concerning the NSA's metadata-collecting priorities. And those slides could look so much better!


In the modern day, hackers have become a huge problem for national and corporate security. This is partly because, recently, many advances in password cracking have occurred.

The first and most important advance came when RockYou was hacked with an SQL injection attack and 32 million (14.3 million unique) passwords were posted online. With a corpus like this, password crackers were suddenly able to substantially hone their playbooks to target the keyspaces that contain the most likely passwords.

A keyspace can be something like "a series of up to 8 digits" or "a word of up to seven characters in length followed by some digits" or even "a capitalized word from the dictionary with stylish letter substitutions". It was surprising how many of the RockYou password list could be compressed into keyspaces that restricted the search space considerably. And that made it possible to crack passwords much faster.
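As a rough illustration, here's how small such restricted keyspaces are compared to exhaustive search. The specific keyspace definitions below are my own toy examples, not the ones real cracking tools use:

```python
# Keyspace sizes for two of the pattern families described above.
digits_up_to_8 = sum(10 ** n for n in range(1, 9))   # all 1..8 digit strings
word7_plus_2_digits = 26 ** 7 * 100                  # 7 lowercase letters + 2 digits

print(f"{digits_up_to_8:,} and {word7_plus_2_digits:,} candidates")

# The "stylish letter substitutions" fad is trivially modeled too:
subs = str.maketrans("aeio", "43!0")
print("password".translate(subs))   # -> p4ssw0rd
```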

Popular fads like the stylish substitution of "i" by "1" or "e" by "3" were revealed to be exceptionally common.

Another advance in password cracking comes from the fact that passwords are usually not stored in plaintext form. Instead, a hashing function is used to obfuscate them, and often only the hashed form is kept. So, in 1980, a clever computer security professor named Martin Hellman published a time-memory trade-off technique that vastly sped up the process of password cracking. In its simplest form: precompute the hash codes for an entire keyspace, and then, when you capture a hash code, you just look it up in the table.
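The simplest form of this idea is a straight lookup table, sketched below with a deliberately tiny keyspace of 4-digit PINs hashed with MD5. (Hellman's actual 1980 scheme is cleverer, compressing the table with hash chains, but the precompute-then-look-up move is the core of it.)

```python
import hashlib

# Precompute once: hash every candidate in the keyspace.
table = {
    hashlib.md5(f"{n:04d}".encode()).hexdigest(): f"{n:04d}"
    for n in range(10_000)
}

# Later, inverting any captured hash is a single dictionary lookup.
captured = hashlib.md5(b"7351").hexdigest()
print(table[captured])   # -> 7351
```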

But the advent of super-fast computers means that it is possible to compute billions of cryptographic hashes per second, allowing the password cracker to iterate through an entire keyspace in minutes to hours.

This is enabled by the original design of commonly used hashing functions like SHA-1, MD5, and the DES-based crypt. They were all designed to be exceptionally efficient (and therefore quick) to compute.

So password crackers have written GPU-enabled parallel implementations of the hashing functions. These run on exceptionally fast GPUs like the AMD Radeon series and the nVidia Tesla series.

To combat this, companies have started sending their passwords through thousands of iterations of the hashing function, which dramatically increases the time required to crack them. But really this only means that more computation is required.
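A minimal sketch of that iteration idea (the function name and parameter choices here are mine, purely for illustration):

```python
import hashlib

def stretched_hash(password: bytes, salt: bytes, iterations: int = 100_000) -> str:
    # Feed the hash back into itself many times: every login check, and
    # every guess an attacker makes, now costs ~100,000 hashes instead of one.
    digest = salt + password
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

print(stretched_hash(b"hunter2", b"per-user-salt"))
```

In practice you'd reach for a vetted construction such as PBKDF2 (available as `hashlib.pbkdf2_hmac`), bcrypt, or scrypt rather than rolling your own loop.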

The Internet

Many attacks on internet infrastructure and on targeted sites depend upon massively parallel capabilities. In particular, hackers often use Distributed Denial of Service (DDoS) attacks to bring down perceived opponents. Hackers often use an array of thousands of computers, called a botnet, to access a web site simultaneously, overloading the site's capabilities.

Distributed computing is an emerging technology that depends directly on the Internet. Various problems can be split into clean pieces and solved by independent computation. These include peaceful projects such as the spatial analysis of the shape of proteins (folding@home), the search for direct gravitational wave emissions from spinning neutron stars (Einstein@home), the analysis of radio telescope data for extraterrestrial signals (SETI@home), and the search for ever larger Mersenne prime numbers (GIMPS).

But hackers have not only been using distributed computing for attacks; they have also been using it for password cracking. Distributed computing is well suited to cryptanalysis, too.
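A toy sketch of why the problem splits so cleanly: each worker scans its own slice of the keyspace with no coordination needed. Threads stand in here for the machines of a botnet or cluster, and the 4-digit keyspace is deliberately tiny:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

TARGET = hashlib.md5(b"4821").hexdigest()   # a hypothetical captured hash

def search(chunk: range):
    # Each worker independently brute-forces its slice of the PIN keyspace.
    for n in chunk:
        guess = f"{n:04d}".encode()
        if hashlib.md5(guess).hexdigest() == TARGET:
            return guess.decode()
    return None

chunks = [range(i, i + 2500) for i in range(0, 10_000, 2500)]
with ThreadPoolExecutor(max_workers=4) as pool:
    hits = [h for h in pool.map(search, chunks) if h]
print(hits[0])   # -> 4821
```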

Exascale weapons

Recently there has been discussion of high-performance computing as a strategic weapon. This is not surprising at all, given how much computing gets devoted to the task of password cracking. Now the speculation, with China's Tianhe-2 supercomputer, is that weaponized computing is poised to move up to the exascale. The Tianhe-2 supercomputer is capable of 33.86 petaflops, less than a factor of 30 from the exascale. Most believe that exascale computing will arrive around 2018.

High-performance computing (HPC) has continually been used for weapons research. A high percentage of the most powerful supercomputers over the past decade are to be found at Livermore, Los Alamos, and Oak Ridge.

Whereas HPC has traditionally been aimed at floating-point operations (where real numbers are modeled and used for the bulk of the computation), the focus of password cracking is integer operations. For this reason, GPUs are typically preferred, because modern general-purpose GPUs are capable of integer operations and are massively parallel. The AMD 7990, for instance, has 4096 shaders. A shader is a scalar arithmetic unit that can be programmed to perform a variety of integer or floating-point operations. Because a GPU comes on a single card, this represents an incredibly dense ability to compute. The AMD 7990 achieves 7.78 teraflops while drawing about 375W of power.

So it's not out of the question to amass a system with thousands of GPUs to achieve exascale computing capability.

I feel it is ironic that China has built their fastest computer using Intel Xeon Phi processors. With around 60 cores in each, the Xeon Phi packs about 1.2 teraflops of compute power per chip! And it is a lower-power product than other Xeon processors, at about 4.25 gigaflops/watt. The AMD Radeon 7990, on the other hand, has been measured at 20.75 gigaflops/watt. This is because shaders are much scaled down from a full CPU.
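Those efficiency figures can be sanity-checked with a little arithmetic. The 375 W board power below is my assumption, chosen because it is the value implied by the 20.75 gigaflops/watt measurement:

```python
radeon_tflops = 7.78
radeon_watts = 375                  # assumed board power for the dual-GPU 7990
phi_tflops = 1.2
phi_gflops_per_watt = 4.25

radeon_gflops_per_watt = radeon_tflops * 1000 / radeon_watts
phi_watts = phi_tflops * 1000 / phi_gflops_per_watt   # implied Xeon Phi power

print(f"Radeon 7990: {radeon_gflops_per_watt:.1f} GF/W; "
      f"Xeon Phi draws ~{phi_watts:.0f} W")
```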

What is the purpose?

Taking a step back, I think a few questions should be asked about computation in general. What should computation be used for? Why does it exist? Why did we invent it?

If you stand back and think about it, computation has only one purpose. This is to extend human capabilities; it allows us to do things we could not do before. It stands right next to other machines and artifices of mankind. Cars were developed to provide personal transportation, to allow us to go places quicker than we could go using our own two feet. Looms were invented so we could make cloth much faster and more efficiently than using a hand process, like knitting. Telescopes were invented so we could see farther than we could with our own two eyes.

Similarly, computation exists so we can extend the capabilities of our own brains. Working out a problem with pencil and paper can only go so far. When the problems get large, then we need help. We needed help when it came to cracking the Enigma cipher. We needed help when it came to computing the fission cross-section of uranium. Computation was instantly weaponized as a product of necessity and the requirements of survival. But defense somehow crossed over into offensive capabilities.

With the Enigma, we were behind and trying to catch up. With the A-bomb, we were trying to get there before they did. Do our motivations always have to be about survival?

And where is it leading?

It's good that computation has come out from under the veil of weapons research. But the ramifications for society are huge. Since the mobile revolution, we take problems that any of us might encounter in real life and build an app for them. So computation continues to extend our capabilities in a way that fulfills some need. Computation has become commonplace and workaday.

When I see a kid learn to multiply by memorizing a table of products, I begin to wonder whether these capabilities are really needed, given the ubiquity of computation we can hold in our hands. Many things taught in school seem useless, like cursive writing. Why memorize historical dates when we can just look them up in Wikipedia? It's better to learn why something happened than when.

More and more, I feel that we should be teaching kids how to access and understand the knowledge that is always at their fingertips. And when so much of their lives is spent looking at an iPad, I feel that kids should be taught social interaction and be given more time to play, exercising their bodies.

It is because knowledge is so easy to access that teaching priorities must change. There should be more emphasis on understanding basic concepts and less emphasis on memorization. In the future, much of our memory and history is going to be kept in the cloud.

Fundamentally, it becomes increasingly important to teach creativity. Because access to knowledge is not enough. We must also learn what to do with the knowledge and how to make advancements. The best advancements are made by standing on the shoulders of others. But without understanding how things interrelate, without basic reasoning skills, the access to knowledge is pointless.

Sunday, June 16, 2013

Three-Dimensional Thinking, Part 2

The last time I wrote about three-dimensional thinking, I discussed impossible figures. They are fun ways to challenge our brains to see things in a different way. But to me they signify more than just artwork.

Different Angles

Looking at objects from different angles helps us understand their spatial structure.

Looking at a given subject from different angles is a requirement for creativity. But eventually, in your mind, you realize that reality itself is malleable, and this is the domain of dreams. And dreaming is good for creativity because it helps us get out of the box of everyday experience and use our vision in a new way.

The key

Once I asked myself a question about impossible objects: what is the key to making one?

The key trick used in impossible figures is this: locally possible, globally impossible. In the case of a Penrose triangle (also called a Reutersvärd triangle, because Oscar Reutersvärd was the first to depict it), local corners and pieces of objects are entirely possible to construct, but the way they are globally connected is spatially impossible.

I have constructed another impossible figure which is included above. This figure contains several global contradictions, yet remains locally plausible. However, there are two global levels of impossibility in this figure. Let's consider what they are.

First off, there are plenty of locally plausible geometries depicted in the figure. For instance, the M figure is a totally real and constructible object in the real world.

My original drawing didn't actually have M's at the three corners. It was a Penrose triangle. To make the figure compact, I added the M's on each of the three corners of the Penrose triangle. This doesn't make the figure any more possible though. It just adds a little salt and pepper to the mix; it helps confuse the eye a bit.

The next part shows the three strands connected to the three loops that wrap around the Penrose triangle.

There is really nothing about this strand figure that is impossible either. It can be totally constructed in real space.

Actually, it is a nice figure by itself, standing alone. You can see each block sliding by itself through the set of blocks.

And further, I think this figure would make a good logo. It feels like an impossible figure even though it's perfectly realizable. And it can be depicted from any angle because it is an honest three-dimensional construction. I have an idea to construct one out of lucite or another transparent material.

The next part of the figure is the loop. Each loop wraps around one of the sides of the Penrose triangle and creates an interlocking impossible figure, a concept I have shown examples of before in this blog. For instance, there is the impossible Valknut.

But this is the first level of impossibility. Such a loop is not really constructible without bending the top face. In this way, it is related to the unending staircase of M. C. Escher's Ascending and Descending.

The second level of impossibility is, of course, the Penrose triangle itself. When it comes to levels of impossibility and a clean depiction of impossibility, consider Reutersvärd. Pretty much all of Reutersvärd's art contains this illusion as a key. Though, I would encourage you to look at all of his work, because individual pieces can be both stunning and subtle simultaneously.

The next impossible figure is another modification of the Penrose triangle, showing what happens when the blocks intersect each other.

Any two blocks may certainly intersect each other, but to have all three intersect each other in this way is a clear impossibility.

It would probably have been more striking to make the triangular space in the center a bit larger.

Impossible objects take imagination out of the real world and into a world that maybe could be. Perhaps it's the world of flying cars, of paper that can hold any image and quickly change to any other, or of people whose thoughts are interconnected by quantum entanglement. In such a world, imagination can fly free.

Thursday, June 6, 2013

My Artwork

For those of you who have been reading posts and checking out my artwork for a while, I have a small present. I have posted the full-sized versions of much of my special artwork from this blog on Pinterest.

You can get to it here.

Finally, if you click into them, you can see the details and get an idea of how many hours I spent on these pieces. Sometimes a piece was so complicated it took several days to complete. Which totally explains why my posts take so long!


I have other boards on Pinterest, some with cool patterns, clouds, and collections of things. Just a hobby, and I thought I'd share a bit.

Sunday, June 2, 2013

Mastering Nature's Patterns: Basalt Formations

I love patterns. This all originally stems from my observations of nature's patterns. A lot of the objects I draw (and develop in code mathematically) come directly from nature.

Strikingly, nature will often conspire to produce objects of great beauty, ones which we cannot match without tremendous effort. An example of this is the basalt formation. Created by volcanic upwelling, great pressure leading to crystallization, and fracturing during cooling, basalt formations are nature's brilliant tessellations, awe-inspiring extrusions, and mad ravings simultaneously.

They resemble three-dimensional bar graphs. Their fracture pattern, in two dimensions, is a natural Voronoi diagram. I first saw this pattern in nature while observing the way that soap bubbles join. Without fully understanding it, this observation introduced me to the mathematical laws of geometry when I was very young. Little did I know that I would never stop trying to duplicate it.
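
A Voronoi diagram simply assigns every point of the plane to its nearest seed point; the cell boundaries are exactly those soap-bubble-like joins. A tiny discrete sketch (the seed coordinates here are made up for illustration):

```python
def voronoi_grid(seeds, width, height):
    """Label each grid point with the index of its nearest seed; the
    resulting regions are the cells of a discrete Voronoi diagram."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            dists = [(sx - x) ** 2 + (sy - y) ** 2 for sx, sy in seeds]
            row.append(dists.index(min(dists)))
        grid.append(row)
    return grid

# Three hypothetical seed points on a small grid.
seeds = [(2, 2), (12, 3), (7, 9)]
for row in voronoi_grid(seeds, 16, 12):
    print("".join(str(cell) for cell in row))
```

Each digit in the printout marks which seed owns that point, and the borders between digit regions trace out the fracture pattern.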

In this post, I show you how I duplicated this particular pattern of nature. And I did it in my style, as you can see.

To create a drawing of a basalt formation, I started with a rendered Voronoi diagram, which you see here, and transformed it into a subtle perspective, establishing two vanishing points. Then I made three copies, arranged as layers, to approximate placing them on three-dimensional transparent planes at various depths. This was so I could see the levels, and so the third vanishing point would come out right.

Of course, I used Painter's Free Transform to do this!
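
A free transform of this kind is, mathematically, a projective mapping: a 3x3 matrix applied to homogeneous 2D coordinates. A minimal sketch (the matrix values are a hypothetical example, not Painter's internals):

```python
def apply_homography(h, points):
    """Apply a 3x3 projective transform to 2D points; the divide by w
    is what produces foreshortening and the vanishing points."""
    out = []
    for x, y in points:
        xs = h[0][0] * x + h[0][1] * y + h[0][2]
        ys = h[1][0] * x + h[1][1] * y + h[1][2]
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        out.append((xs / w, ys / w))
    return out

# A made-up matrix: identity except for the bottom row, which tilts
# the plane so that far edges shrink toward two vanishing points.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.001, 0.002, 1.0]]
square = [(0, 0), (100, 0), (100, 100), (0, 100)]
print(apply_homography(H, square))
```

The origin stays fixed while the far corners of the square are pulled inward, which is exactly the effect of dragging the corners in a free transform.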

I kept each layer a little bit transparent so I could get an intuitive feeling for which layer was on the top and which layer was on the bottom. This technique is called depth-cueing.

As you can see, it worked pretty well. I stopped at three layers because I didn't want the drawing project to get too complicated. But, of course, like all of my projects, it soon did!

Next, on a new layer, I drew lines on top of the lines that I wanted to represent the three-dimensional surface of the basalt formation. This meant choosing a three-dimensional height for each cell. The base layer that extended to the outside of the drawing was the lowest height, of course, and a second and third layer were built on top of it.

This causes cells to rise out of the base layer and appear to become extruded.

When I consulted some real images of basalt formations as a guide, I found that they were quite imperfect and usually were cracked, damaged, or eroded in some way.

I really wanted my drawing to represent a perfect un-eroded result.

I used an extra transparent layer (behind the layer with the lines) and marked each cell with a three-dimensional height index so I could be sure which height corresponded to each cell. This told me where to put the shading and also how to interpret the extrusion lines.

This layer was for informational purposes only. You see here the original small layer with crudely drawn lines. It's actually kind of hard to see the three-dimensional relative positions of the cells in some cases, which is another reason I labelled each cell with a height index.

Once I had designed it, I found that the drawing was way too small to shade the way I like to (using a woodcut technique) and so I resized the image and went over each of the lines by hand to make it crystal clear at the new resolution.

That only took a few days.

Why? After resizing the image, I found that each line was unusually soft. This meant that I had to go over the lines with a small brush, darkening and resolving the line. Then I had to go around it with white to create a clean edge. This is what really took the time!

Naturally I do lots of things other than draw, and so I had to use extra minutes here and there. I kept the Painter file on my laptop and brought my Wacom tablet with me in my bag.

I spent probably ten or twenty hours drawing this image.

Once the lines were perfect, the next step was shading. But of course it had to be in my style, and this also took quite a bit of time.

I used woodcut shading to create shadows and accessibility shading. This created a very nice look.

To do this, I drew parallel lines at a desired spacing, taking care to make them correspond in length and position to the shading and shadows that would result from a light coming from the left side.

I thickened the lines at their base, and made them a bit triangular. Then at the end, I used a small white brush to erode and sharpen the point and clean the sides of each shading line to get the right appearance.

The final step was coloring the tops and the sides, using a gel layer.

I colored each layer using a different shade of slightly bluish gray. The top layer got the lightest shade.

Here you can see a close-up of the final image, which was very high resolution indeed.

Even though I started out with a computer-generated fracturing pattern, I was able to retain a hand-wrought look in the final image. None of the lines are really computer-perfect.

Yes, nature's patterns often take a bit of time to master!