
Thursday, December 12, 2013

The Unstoppable Now

The universe seems to be moving forwards, ever forwards, and there's nothing we can do about it. Or is there? Is the world too tangled to unravel?

Changing political landscapes

We all see the changes in the world. Climate change is the new catchphrase for global warming. Some areas of the world may never sort themselves out: the Koreas, the Middle East, Africa. Yet we can look to the past and see how a divided Germany re-unified, how South Africa eliminated the apartheid government and changed for the better (bless you Nelson Mandela, and may you rest in peace), and how Europe has bonded with a common currency and economic control.

Good and bad: will Europe solidify or become an economic roller coaster? Will Africa stabilize or continue on its path of tribal and religious genocide? Will Iran become a good neighbor, or will it simply arm itself with nuclear weapons and force a confrontation with Israel?

Despotic secular regimes have been overthrown in the Islamic world (Egypt, Tunisia, and Libya) and social media seems to have become a trigger for change, a tool for inciting revolution. Some regimes are experiencing slight Islamic shifts, like Turkey. But Egypt, having moved in that direction when the Islamic Brotherhood secured the presidency, is now moving away from it in yet another revolution.

The more things change, the more they stay the same.

The reason that social media became an enabler for the changes we are seeing is that people care. Crowdsourced opinion has an increasing effect on government. Imagine that! Democracy in action. Even in countries that have yet to see democracy.

Let's look at one of the biggest enablers for this: the iPhone.

The iPhone and its effect

Yes, this is one of the biggest vehicles for change because it raised the bar on handheld social media, on internet in your pocket, and on the spread of digital photography. The ability to make a difference was propagated with the iPhone and the devices that copied it. Did Steve Jobs know he was starting this kind of change? He knew it was transformative. And he built ecosystems like iTunes, the App Store, and the iBookstore to make it all work. Without the App Store, we'd all still be in the dark ages of social media. The mobile revolution is here to stay.

Holding the first iPhone was like holding a bit of the future in your hands. It was that far ahead of the pack. Its amazing glass keyboard was met with skepticism from analysts at first, but the public was quick to decide it was just fine for them. A phone that was just a huge glass screen was more than an innovation. It was a revolution.

It's remarkable that Steve Ballmer panned the first iPhone when it came out. By doing so, he drew even more attention to the gamble Apple was making, and in retrospect made himself look amazingly short-sighted. And look where it got him! Microsoft's lack of success in the mobile industry seems predictable, once you see this.

Each new iPhone iteration brings remarkable value. Better telephony (3G quickly became 4G, and that quickly became LTE), better sensors (accelerometer, GPS, magnetometer, gyroscope, etc.), and better cameras, lenses, flashes, and BSI sensors. Bluetooth connectivity makes it work in our cars. Siri makes it work by voice command. Each new feature is so well-integrated that it feels like it's been there all along. Now that I have used my iPhone 5S for a while, the fingerprint sensor feels like part of what an iPhone means.

This all-in-one device has led to unprecedented spread of pictures. It and its (ahem, copycat) devices supporting Google's Android and more recently Microsoft's Windows Phone 8 have enabled social media to become ever more present, and influential, in our world.

In 2012, a Nielsen report showed that social media growth is driven largely by mobile devices and the mobile apps made by the social media sites.

Hackers, security, whistleblowers

A battle is being fought in the field of security.

Private hackers have been stealing identities and doing so much more to gain attention, and we know why.

Then hackers began attacking companies and countries, plying their expertise for various causes. The Anonymous and LulzSec groups fought Sony over the restrictiveness of its gaming systems, fought the despotic regime in Iran, and attacked banks they believed were evil.

Enter the criminal hacking consortia, which build programs like Zeus to construct and task botnets using rootkit techniques, perpetrating massive credit card fraud.

Then the nation-state hacking organizations began to do their worst, with targeted viruses like Flame, Stuxnet, and Duqu. Whole military organizations have been built, like China's military Unit 61398, with the sole task of hacking foreign businesses and governments.

Is anybody safe?

It is very much a sign of the times that the latest iPhone 5S features Touch ID. You just need your fingerprint to unlock it. Biometrics like fingerprints and iris scans (something only you are) are becoming a good method for security engineering. There are so many public hacker attacks that individual security is quickly becoming a major problem.

New techniques for securing your data, like multi-factor authentication, are becoming increasingly both popular and necessary. Accessing your bank and making a money transfer? Enter the passcode for your account (something only you know), then they send your trusted phone (something only you have) a text message and you enter it into the box. The second factor makes it more secure because it is more certain to be you and not some interloper spoofing you.
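
To make the second factor concrete, here is a minimal sketch in C (using OpenSSL for the HMAC) of a time-based one-time password (TOTP), the rotating six-digit code that authenticator apps generate. It is an illustration only: the shared secret is a made-up placeholder, and a text-message code works the same way in spirit, by proving possession of the trusted device.

/* Minimal TOTP (RFC 6238) sketch. Assumes OpenSSL for HMAC-SHA1.
 * The shared secret below is a placeholder, not a real credential. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/hmac.h>

static uint32_t totp_code(const uint8_t *secret, size_t secret_len, time_t now)
{
    uint64_t counter = (uint64_t)(now / 30);     /* 30-second time step */
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) {               /* big-endian counter  */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }

    uint8_t digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg, digest, &digest_len);

    int offset = digest[digest_len - 1] & 0x0f;  /* dynamic truncation  */
    uint32_t bin = ((digest[offset]     & 0x7f) << 24)
                 | ((digest[offset + 1] & 0xff) << 16)
                 | ((digest[offset + 2] & 0xff) << 8)
                 |  (digest[offset + 3] & 0xff);
    return bin % 1000000;                        /* six-digit code      */
}

int main(void)
{
    const uint8_t secret[] = "placeholder-secret-provisioned-to-phone";
    printf("one-time code: %06u\n",
           totp_code(secret, sizeof secret - 1, time(NULL)));
    return 0;
}

The server holds the same secret, computes the code for the current 30-second window, and compares. Knowing the account passcode alone is not enough without the device that holds the secret.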

The landscape of security has been forever changed by the whistleblowers. Whole organizations were built to support them (WikiLeaks), and governments, banks, and corporations were targeted. The huge data sets released included confidential data from the US military, from the Church of Scientology, from the Swiss bank Julius Baer, from the Congressional Research Service, and from the NSA, via Edward Snowden.

It is notable that WikiLeaks hasn't released secret information from Russia or China. It is most likely that they would be collectively assassinated were that the case. Especially given such events as the death of Alexander Litvinenko.

The founder of WikiLeaks, Julian Assange, is currently a self-imposed captive in the Ecuadorean embassy in London. In an apparent coup, one of the WikiLeaks members, Daniel Domscheit-Berg decided to leave WikiLeaks, and when he left, he destroyed documents containing America's no-fly list, the collected emails of the Bank of America, insider information from 20 right-wing organizations, and proof of torture in an undisclosed Latin American country (unlikely to be Ecuador, and much more likely to be one of its adversaries, such as Colombia). Domscheit-Berg apparently left to start up his own leaks site, but later decided to merely offer information on how to set one up.

The trend is that the general public (or at least a few highly-vocal people) increasingly expect all secrets to be revealed. And yet, I expect that they would highly value their own secrets. This is why there is such a trend towards protecting individual privacy.

The reality is that organizations like WikiLeaks are proud to reveal secrets from western democracies like America, but are reluctant to do so for America's adversaries like Russia. Since this creates an asymmetric advantage, these organizations can only be viewed as anti-American. Even if they aren't specifically anti-American, they inevitably have this effect.

So they are playing for the Russians whether they believe it or not.

Does the whistleblower movement have the inherent potential for disentangling the world political situation? Perhaps in the sense that knots can be cut, like the Gordian Knot. But disentangled? No.

The only way that the knots can be unraveled is if everybody begins to play nice. And I don't really see that happening.

Perhaps Raul Castro will embrace America as an ally now that we have shaken hands. Perhaps Iran will stop its relentless bunker-protected quest for uranium enrichment. Perhaps the Islamic militias in Africa will declare a policy of live-and-let-live with their Christian neighbors and stop the wholesale slaughter.

It's good to be idealistic. In idealism, when it is peace-oriented, we see a chance for change. In the social media revolution we see a chance for the moderate majority to be heard.

Only we can stop the unstoppable now.

Monday, October 22, 2012

Creativity and Invention

Invention is the act of making something entirely new or of discovering an entirely new way of accomplishing something, and so often this is a result of trying many different approaches. For me, when one method doesn't work or achieve the results I need, I just try something else. Yet what will make an approach different from someone else's approach is the spark of creativity. To solve the problem, try applying a technique or a principle that, at first glance, doesn't seem to apply.

When I invent things, I know I'm trying to solve a problem. I'm exhausting all of the possible ways to solve it. I'm looking for an efficient way to make use of the information or progress that has been made so far. I'm finding a better way to do it. Or a way to do it at all.

Try Something Unlikely

In ancient Egypt, blacksmiths were good at forming swords and other rudimentary tools by holding a piece of iron in a fire to make it malleable and beating it with a hammer. The hammer and anvil had already been in use for many years. But sometime around 1450 BCE in ancient Egypt, during the reign of Thutmosis III, somebody decided that a leather bag could serve as a bellows, and that the forced air would make the fire hotter. Because of this, metal became more malleable, and could even be melted.

This is a clear example of using an unlikely object in common use for something else entirely. A leather bag, used for carrying things, becomes a bellows for metallurgy. Many inventions, in fact, require this kind of discovery.

To make these kinds of discoveries, we must learn about as many things as possible, but perhaps not in depth. Absorbing a little about plenty of subjects is food for invention. It helps you make connections between things that are, for all intents and purposes, not connected in the first place.

For instance: knowing about Voronoi diagrams helped me figure out how best to render fascinating patterns like those produced by raindrops on a windshield. My blog post on where ideas come from is helpful in understanding how to exercise your brain to make such connections.
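
As a toy illustration of that connection, here is a small C sketch (purely illustrative, not the renderer in question): scatter a handful of random "droplet" centers, then shade each pixel by which center is nearest, which is exactly a Voronoi partition. It writes a plain PGM image to standard output; the sizes and counts are arbitrary.

/* Toy Voronoi partition: shade each pixel by its nearest random seed. */
#include <stdio.h>
#include <stdlib.h>

#define W 256
#define H 256
#define SEEDS 24

int main(void)
{
    int sx[SEEDS], sy[SEEDS];
    srand(12345);                                 /* fixed seed for repeatability */
    for (int i = 0; i < SEEDS; i++) {
        sx[i] = rand() % W;
        sy[i] = rand() % H;
    }

    printf("P2\n%d %d\n255\n", W, H);             /* plain PGM header */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            long best = -1;
            int  who  = 0;
            for (int i = 0; i < SEEDS; i++) {     /* brute-force nearest seed */
                long dx = x - sx[i], dy = y - sy[i];
                long d  = dx * dx + dy * dy;
                if (best < 0 || d < best) { best = d; who = i; }
            }
            printf("%d\n", (who * 255) / (SEEDS - 1));  /* one flat tone per cell */
        }
    }
    return 0;
}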

Try Try Again

But even more discoveries happen a small bit at a time. And the light bulb is the perfect example. Most people associate Thomas Edison with the discovery of the light bulb. But really, he only participated in part of the invention: the part that made it practical.

In 1800, Humphry Davy, in Britain, discovered that applying electricity to a carbon filament could make it glow, demonstrating the electric arc. Some 77 years later, American Charles Francis Brush manufactured carbon arc lamps to illuminate Cleveland, keeping the filament in a glass bottle. Two years later, Thomas Alva Edison discovered that filaments in an oxygen-free bulb would still glow. Then he tried literally thousands of materials before settling upon carbonized bamboo for the filament. The new bulb could last 1200 hours. And it had a screw-in base! But it wasn't until 1911, when modern sintered ductile tungsten filaments were invented at General Electric, that their useful lifetime was increased substantially. Then, in 1913, Irving Langmuir started using inert (chemically nonreactive) gases like argon (instead of a vacuum) inside the bulb, which increased luminosity by a factor of two and also reduced bulb blackening. Nitrogen, xenon, argon, neon, and krypton are routinely used inside bulbs today. However, when mercury vapor is used, the gas itself is the conductor, producing a blue-green electric arc.

Of course, light bulbs are being reinvented every few years now. Fluorescent bulbs are used in businesses largely because they are four to six times as efficient as incandescent bulbs. Then there were compact fluorescent light (CFL) bulbs, sharing the same efficiency advantage, but in a compact light bulb form factor. And now light-emitting diode (LED) lighting. These new bulbs save about 80-90% of the energy (over incandescent bulbs) required to illuminate us. And they last about 25 times longer than incandescent bulbs.

The future is going to be just as much about conserving energy as it is about producing it.

Try Harder

The main use for creativity in invention is simply so you can solve the hardest problems of all. These are the problems that don't have an apparent solution.

Two supreme examples of this kind of problem are computer vision and computer cognition. Teaching a computer to understand everyday objects like faces, kinds of clothing, the make and model of a car, and even something as simple as a tree is incredibly difficult. Humans do this very well, of course, and this belies its complexity. Teaching a computer to read and understand a book is also hard beyond comprehension. Small parts of this, like optical character recognition and a small amount of natural language processing have been accomplished. But for the computer to actually understand the subject matter and discuss it, or even better to learn from it, is practically impossible. People dedicate their lives to solving this problem.

A small example of the problem of computer cognition is what I once dreamt about: subject space. I envisioned a space where all concepts are related in different ways. Each concept is a node in the graph of subject space and arcs between the nodes relate them.

Here I show is-a relations as a green arrow between two objects. So the green arrow between FLEA and BUG represents the information that a flea is a kind of bug. Similarly, meat, rice, and carrot are kinds of food. This is a subset relationship. Another kind of relationship has to do with ownership or possession. A cyan arrow from one object to another means that the source object can possess the destination. A dog has legs, and so does a bug. A has relation can have other information associated with it. For instance, a dog has 4 legs and a bug has 6 or 8 legs. Any relation, which generally is where the verbs live in this space, can have additional information associated with it, in the form of an adverb. For instance, the eats relation can have quickly associated with it.

Action relations concern a direct or indirect object. These are shown in indigo. Legs walk on the floor. A human buys food, and a dog eats the food. A flea lives on the dog. In this way buys, walk-on, lives-on, and eats are relations. And by definition, those relations can have a timestamp associated with them. The sequence in which actions occur affects the semantics, sometimes in a causal way.

Very complicated relations are two-way arcs, like the dog-master relationship. There are other obvious relationships, like is-an-attribute-of, where appropriate adjectives may be associated with subjects. Even idiomatic expressions get their representations here. For instance, hair of the dog is slang for an alcoholic drink.

Note that a human has legs but I didn't include an arc for that relationship. This shows that subject space is not planar. In fact, it is n-dimensional.

Such a graph is useful in understanding and parsing the grammar of text or spoken language. A sentence can then be encoded into a series of factual semantic concepts. For instance, if you know the man buys food, then you will have to determine what the food consists of. Based on this graph, it could be meat, carrot, or rice, or some combination of them.

Also, the relation eats really means can eat. When parsing text, the fact that a given dog is eating or has eaten food is yet to be discovered. Once discovered, this subject space graph helps the semantic understanding system codify the actions that occur.
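
Here is one way such a graph might be represented, as a minimal C sketch. It is purely illustrative; the concepts and relation kinds are just the examples described above, with an optional annotation slot for counts like "4" legs or adverbs like "quickly".

/* A tiny slice of "subject space": typed relations between concept nodes. */
#include <stdio.h>

typedef enum { IS_A, HAS, ACTION } RelKind;

typedef struct {
    RelKind     kind;
    const char *from;
    const char *to;
    const char *verb;        /* for ACTION relations: "eats", "buys", ... */
    const char *annotation;  /* count or adverb, NULL if none             */
} Relation;

static const Relation subject_space[] = {
    { IS_A,   "flea",  "bug",  NULL,       NULL      },
    { IS_A,   "meat",  "food", NULL,       NULL      },
    { HAS,    "dog",   "legs", NULL,       "4"       },
    { HAS,    "bug",   "legs", NULL,       "6 or 8"  },
    { ACTION, "human", "food", "buys",     NULL      },
    { ACTION, "dog",   "food", "eats",     "quickly" },
    { ACTION, "flea",  "dog",  "lives-on", NULL      },
};

int main(void)
{
    size_t count = sizeof subject_space / sizeof subject_space[0];
    for (size_t i = 0; i < count; i++) {
        const Relation *r = &subject_space[i];
        const char *verb = r->kind == IS_A ? "is a" :
                           r->kind == HAS  ? "has"  : r->verb;
        if (r->annotation)
            printf("%s %s %s (%s)\n", r->from, verb, r->to, r->annotation);
        else
            printf("%s %s %s\n", r->from, verb, r->to);
    }
    return 0;
}

A real system would index these arcs in both directions and attach timestamps to action relations as they are discovered in text.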

Sometimes the solution, however complex, can come to you in a dream. And this shows a creatively-applied technique, graph theory, and how it is applied to a nearly impossible problem, computer understanding.

Trial and Error

It is quite remarkable when a discovery gets made by accident!

Physicist Henri Becquerel was looking for X-rays from naturally-fluorescent materials in 1896. He knew that phosphorescent materials collect energy by being exposed to sunlight. And he had a naturally-fluorescent material: uranium. But there was one main problem: it was winter and the days were all overcast.

So he put his materials together in a drawer, including a bit of uranium and a photographic plate, and waited for a day when the sun would come out. When that day came, he removed the materials from the drawer and soon found that the photographic plate had been affected by the uranium without ever being exposed to sunlight.

And radioactivity was discovered.

My point is that sometimes a discovery is the result of unintended consequences. As for me, I have invented a few effects by accidentally creating a bug in a program I wrote. This is part of the pleasure of working in graphics. In fact, the cool visual effect in my Mess and Creativity post was discovered as the result of a bug in a program that computed image directions.

Trials and Tribulations

One problem, the lofting problem, eluded me for years. I spent a lot of time constructing better and faster Gaussian Blur algorithms, and even learned of a few new ones from such people as Michael Herf and Ben Weiss. But it wasn't until late 2004 that Kok Chen suggested that I apply constraints to the blur, and an iterative algorithm to solve this problem was born. This is detailed in my Hard Problems post.



Sunday, September 2, 2012

Keep Adding Cores?

There is a trend among the futurists out there that we just need to keep adding cores to our processors to make multi-processing (MP) the ultimate solution to all our computing problems. I think this comes from the conclusions concerning Moore's Law and the physical limits that we seem to be reaching at present.

But, for gadgets, it is not generally the case that adding cores will make everything faster. The trend is, instead, toward specialized processors and distribution of tasks. When possible, these specialized processing units are placed on-die, as in the case of a typical System-on-a-Chip (SoC).

Why specialized processors? Because using some cores of a general CPU to do a specific computationally-intensive task will be far slower and use far more power than using a specialized processor designed to do the task in hardware. And there are plenty of tasks for which this will be true. On the flip side, the tasks we are required to do are changing, so specific hardware will not necessarily be able to do them.

What happens is that tasks are not really the same. Taking a picture is different from making a phone call or connecting to wi-fi, which is different from zooming into an image, which is different from real-time encryption, which is different from rendering millions of textured 3D polygons into a frame buffer. Once you see this, it becomes obvious that you need specialized processors to handle these specific tasks.

The moral of the story is this: one processor model does not fit all.

Adding More Cores

When it comes to adding more cores, one thing is certain: the amount of die space on the chip will go up, because each core uses its own die space. Oh, and heat production and power consumption also go up as well. So what are the ways to combat this? The first seems obvious: use a smaller and smaller fabrication process to design the multiple-core systems. So, if you started at a 45-nanometer process for a single CPU design, then you might want to go to 32-nanometer process for a dual-CPU design. And a 22-nanometer process for a 4-core CPU design. You will have to go even finer for an 8-core design. And it just goes up from there. The number of gates you can place on the die goes up roughly as one over the square of the ratio of the new process to the old process. So when you go from 45 nm to 32 nm, you get the ability to put in 1.978x the number of gates. When you go from 32 nm  to 22 nm, you get the ability to put in 2.116x as many gates. This gives you room for more cores.
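
That rule of thumb is easy to verify with a small C sketch:

/* Gate budget grows roughly as (old process / new process) squared. */
#include <stdio.h>

static double gate_factor(double old_nm, double new_nm)
{
    return (old_nm / new_nm) * (old_nm / new_nm);
}

int main(void)
{
    printf("45 nm -> 32 nm: %.3fx the gates\n", gate_factor(45.0, 32.0)); /* ~1.978 */
    printf("32 nm -> 22 nm: %.3fx the gates\n", gate_factor(32.0, 22.0)); /* ~2.116 */
    return 0;
}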

A change in process resolution gives you more gates and thus more computation per square inch. But it also requires less power to do the same amount of work. This is useful for gadgets, for which the conservation of power is paramount. If it takes less power, then it may also run cooler.

But wait, we seem to be at the current limits of process resolution, right? Correct, 22 nm is about the limit at the current time. So we will have to do something else to increase the number of cores.

The conventional wisdom for increasing the number of cores is to use a Reduced Instruction Set Computer (RISC) design. ARM uses one, and so does the PowerPC, but Intel really doesn't.

When you use a RISC processor, it generally takes more instructions to do something than on a non-RISC processor, though your experience may vary.

Increasing the die size also can allow for more cores, but that is impractical for many gadgets because the die size is already at the maximum they can bear.

The only option is to agglomerate more features onto the die. This is the typical procedure for an SoC. Move the accelerometer in. Embed the baseband processor, the ISP, etc. onto the die. This reduces the number of components and allows more room for the die itself. This is hard because your typical smartphone company usually just buys components and assembles them. Yes, the packaging for the components actually takes up space!

Heat dissipation becomes a major issue with large die sizes and extreme amounts of computation. This means we have to mount fans on the dies. Oops. This can't be useful for a gadget. They don't have fans!

Gadgets

Modern gadgets are going the way of SoCs. And the advantages are staggering for their use cases.

Consider power management. You can turn each processor on and off individually. This means that if you are not taking a picture, you can turn off the Image Signal Processor (ISP). If you are not making a call (or, even more useful, if you are in Airplane Mode), then you can turn off the baseband processor. If you are not zooming the image in real time, then you can turn off a specialized scaler, if there is one. If you are not communicating using encryption, like under VPN, then you can turn off the encryption processor, if you have one. If you are not playing a point-and-shoot game, then maybe you can even turn off the Graphics Processing Unit (GPU).

Every piece you can turn off saves you power. Every core you can turn off saves you power. And the more power you save, the longer your battery will last before it must be recharged. And the amount of time a device will operate on its built-in battery is a huge selling point.

Now consider parallelism. Sure, four cores are useful for increasing parallelism. But the tendency is to use all the cores for a computationally-intensive process. And this ties up the CPU for noticeable amounts of time, which can make UI slow. By using specialized processors, you can free up the CPU cores for doing the stuff that has to be done all the time, and finally the device can actually be a multitasking device.

Really Big Computers

Massive parallelization does lend itself to a few really important problems, and this is the domain of the supercomputing center. When one gets built these days, thousands, if not millions, of CPUs are added in to make a huge petaflop processing unit. The Sequoia unit, a BlueGene/Q parallel array of 1,572,864 cores, is capable of 16.32 petaflops.

But wait, the era of processing specialization has found its way into the supercomputing center as well. This is why many supercomputers are adding GPUs into the mix.

And let's face it, very few people use supercomputers. The computing power of the earth is measured in gadgets these days. In 2011, there were about 500 million smartphones sold on the planet. And it's accelerating fast.

The Multi-Processing Challenge

And how the hell do you code on multi-processors? The answer is this: very carefully.

Seriously, it is a hard problem! On GPUs, you set up each shader (what a single processor is called) with the same program and operate them all in parallel. Each small set of shaders (called a work group) shares some memory and also can share the texture cache (where the pixels come from).

It takes some fairly complex analysis and knowledge of the underlying structure of the GPU to really make any kind of general computation go fast. The general processing issue on GPUs is called the GPGPU problem. The OpenCL language is designed to meet this challenge and bring general computation to the GPU.
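
As a flavor of what that looks like, here is a small, hypothetical OpenCL C kernel. Each work-item handles one element, work-items in the same work group stage data in shared local memory, and the host-side setup (context, queue, buffers) is omitted.

/* Hypothetical OpenCL C kernel: scale an image buffer in parallel. */
__kernel void brighten(__global const float *src,
                       __global float       *dst,
                       const float           gain,
                       __local float        *tile)  /* shared within one work group */
{
    size_t gid = get_global_id(0);   /* which element this work-item owns    */
    size_t lid = get_local_id(0);    /* position within the work group       */

    tile[lid] = src[gid];            /* stage the value in fast local memory */
    barrier(CLK_LOCAL_MEM_FENCE);    /* wait for the whole group             */

    dst[gid] = tile[lid] * gain;     /* every work-item runs the same code   */
}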

On multiple cores, you set up a computation thread on one of the cores, and you can set up multiple threads on multiple cores. Microthreading is the technique used to make multiple threads operate efficiently on one core. Which technique you use depends upon how the core is designed. With hyperthreading, one thread can be waiting for data or stalled on a branch prediction while the other is computing at full bore, and vice-versa. On the same core!

So you need to know lots about the underlying architecture to program multiple cores efficiently as well.

But there are general computation solutions that help you to make this work without doing a lot of special-case thought. One such method is Grand Central Dispatch on Mac OS X.
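
A minimal Grand Central Dispatch sketch in C (compiled with clang, which provides the blocks extension): dispatch_apply hands the loop's iterations to the system, which spreads them across the available cores. The per-iteration work here is just filler.

/* Parallelize a loop across cores with GCD's dispatch_apply. */
#include <stdio.h>
#include <dispatch/dispatch.h>

#define COUNT 8

int main(void)
{
    double results[COUNT];
    double *out = results;           /* the block captures this pointer */
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* Each index may run on a different core; GCD decides the schedule.
       dispatch_apply returns only after all iterations have finished.  */
    dispatch_apply(COUNT, queue, ^(size_t i) {
        double sum = 0.0;
        for (int k = 0; k < 1000000; k++)
            sum += (double)k * (double)(i + 1);
        out[i] = sum;
    });

    for (int i = 0; i < COUNT; i++)
        printf("slice %d -> %g\n", i, results[i]);
    return 0;
}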

At the Cellular Level

There is a multi-core architecture, a massively-parallel model, that departs from simply adding more cores. The Cell Architecture does this by combining a general processor (in this case a PowerPC) with multiple cores for specific hard computation. This architecture, pioneered by Sony, Toshiba, and IBM, targets such applications as cryptography, matrix transforms, lighting, physics, and Fast Fourier Transforms (FFTs).

Take a PowerPC processor and combine it with eight Synergistic Processing Elements capable of excellent (but simplified) Single-Instruction, Multiple-Data (SIMD) floating-point operations, and you have the Cell Broadband Engine, a unit capable of 256 Gflops on a single die.

This architecture is used in the Sony PlayStation 3. But there is some talk that Sony is going to a conventional multi-core-with-GPU model, possibly supplied by AMD.

But what if you apply a cellular design to computation itself? The GCA model for massively-parallel computation is a potential avenue to consider. Based on cellular automata, each processor has a small set of rules to perform in the cycles in between the communication with its neighboring units. That's right: it uses geometric location to decide which processors to talk with.

This eliminates little complications like an infinitely fast global bus, which might be required by a massively parallel system where each processor can potentially talk to every other processor.

The theory is that, without some kind of structure, massively parallel computation is not really possible. And its proponents are right, because there is a bandwidth limitation to any massively parallel architecture that eventually puts a cap on the number of petaflops of throughput.

I suspect a cellular model is probably a good architecture for at least two-dimensional simulation. One example of this is weather prediction, which is mostly a two-and-a-half dimensional problem.
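
A toy version of that cellular idea, as a C sketch: each cell of a coarse grid (think of it as a temperature field) updates using only its four neighbors, so no global communication is ever needed. On real hardware each cell, or tile of cells, could be its own processor; here it is just a loop.

/* Neighbor-only update rule on a small wrap-around grid. */
#include <stdio.h>
#include <string.h>

#define N 16

static void step(double cur[N][N], double next[N][N])
{
    for (int y = 0; y < N; y++) {
        for (int x = 0; x < N; x++) {
            double up    = cur[(y + N - 1) % N][x];
            double down  = cur[(y + 1) % N][x];
            double left  = cur[y][(x + N - 1) % N];
            double right = cur[y][(x + 1) % N];
            next[y][x] = 0.5 * cur[y][x] + 0.125 * (up + down + left + right);
        }
    }
}

int main(void)
{
    double a[N][N] = { { 0 } }, b[N][N];
    a[N / 2][N / 2] = 100.0;            /* a single hot spot               */

    for (int t = 0; t < 50; t++) {      /* local rules, applied repeatedly */
        step(a, b);
        memcpy(a, b, sizeof a);
    }
    printf("center after 50 steps: %.3f\n", a[N / 2][N / 2]);
    return 0;
}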

So, in answer to another question "how do you keep adding cores?" the response is also "very carefully".

Thursday, August 9, 2012

Paper

I have a piece of paper on my desk, and it is white, 8.5" by 11", letter size. I have a pen in my hand, and I draw on the paper in clean crisp lines. Oops, that line was wrong, so I can zoom in within the paper, using a reverse-pinch, and correct the line using more pen strokes. I can dropper white or black from the paper to draw in white or black for correction.

But, if I really don't like that line, I can undo it and try again. All on what appears to be a regular piece of paper!

Wait, this is just like a paint app on an iPad!

Yes, this is how paper will be in the future: just a plain piece of paper. Plus.

The drawing can be finished and cleaned up and then saved using an extremely simple interface. Touching the paper with my finger brings up this interface. Touching the paper with the pen allows me to draw.

When I bring up the interface, I can save the drawing. Into the cloud.

Smaller and Smaller

How did this come to be? Simple: miniaturization.


I think the computer concept, stemming from WW II and afterwards, is the transformative concept of our lifetimes. The web, though amazingly useful, is just an offshoot of computing; it's a natural consequence. We have seen computers go from house-sized monstrosities during the war to room-sized beasts during the 50s and 60s to refrigerator-sized cabinets with front-panel switch-based consoles in the 70s to TV-sized personal computers in the 80s to portable laptops in the 90s to handheld items in the 2000s to wearable items in the 2010s.

It's perfectly clear to me where this is going.

Computers are going to be embedded in everyday objects in our lifetime. When I was born, computers were room-sized and required punched cards to communicate with them. When I die, computers will be embedded in everything and will require but a word or a touch to make them do what we require.

Gadgetizing Ordinary Objects

In the future, the world I live in has objects with their own ability to compute, like modern gadgets, but they are impossibly thin, apparently lacking a power source, and can transmit and receive effortlessly through the ether into the cloud. So, let's summarize what they need in order to be a full-functioning gadget:
  1. computation - a processor or a distributed system of computation
  2. imaging - the ability to change its appearance, at least on the surface
  3. sensing - the ability to respond to touch, light, sound, movement, location
  4. transmission/reception - the ability to communicate with the Internet
  5. storage - the ability to maintain local data
  6. power - perhaps the tiny size means the light shining on the object will be enough to power it
You know what? I don't need as many pieces of paper as I used to. This saves trees, which grow outside all over because we are no longer chopping them down except to control overgrowth. Even paper used to wrap boxes rarely exists, because the outsides of boxes also act this way.

The same paper can be used to read the local news feed or to check the weather. But, unlike a newspaper, it is updated in real time. I can even look at the satellite image.

It becomes clear that the "internet of things" is necessary to make this vision happen.

Yet To Do

It's amazing to think so, but most of this magic already works on an iPad. The only conceptual leaps that need to be made are these:
  1. the display becomes a microscopically-thin layer, reflecting light rather than producing it
  2. the computation, sensing, transmission, and reception must use organic, paper-thin processors
  3. touch interfaces must learn to discern between fingers and pen-points
  4. the paper powers itself, using capacitance or perhaps with a paper-thin power source
In 1, like existing eInk and ePaper solutions used in eBooks, power is only used to change the inherent color of a spot on the paper. Normally, power doesn't get used at all when the display is stable and unchanging. In 2, the smaller the processors are, the less power they will use. We can already envision computation at the atomic level, and also in quantum computers. In 4, maybe the light you see the paper with can power the device (a fraction of the light gets absorbed by the paper, particularly where you have drawn black).

Why Change People When We Can Change Objects

Now go through this scenario with any object you are familiar with. Why couldn't it be done using computing, imaging, sensing, transmission, storage, power, etc.?

Things like undo, automatic save and recall, global communication, and information retrieval become the magic that is added to real-world objects. It's like a do-what-I-mean world.

But what might be different from a current iPad? Turning your image. Imagine turning your image using current applications like Painter. You can turn it using space-option to adjust the angle of the paper you are drawing onto so your pen strokes can be at ergonomic angles.

But with a paper computing device, you just turn the paper!

The ergonomics of paper use are exactly like those of existing paper, which solves some problems right off the bat.

Also imagine that you lay the paper on something and it can copy exactly what is underneath it. It's like a chameleon.

So objects like paper become more useful in the future. And we are just the same people, but we are enabled to do so much more than we can do now. And the problems of ergonomics can be solved in the way they have already been solved: with the objects we use in everyday life.

Any solution that doesn't require the human being to change can be accepted. The easier it is, the more likely it will be accepted. The closer to the way it's already done in a non-technological way, the more likely it is that anybody can use it.

Solutions that do require the human to change, like implants, connectors, and ways to "jack into" the matrix, seem to me to lead to a very dystopian future. But remember there are those who are disabled and who will probably need a better way to communicate, touch, talk, hear, or see.

Hmm. I Never Thought Of That!

Cameras are interesting to make into a paper-thin format. Maybe there are some physics limitations that make this unlikely. When eyes get small, they become like a fly's eyes. Perhaps some answer is to be found in mimicking that technology.

Low-power transmission is a real unknown. There may be a massive problem with not having enough power unless some resonance-based ultra-low-power transmission trick gets discovered. Perhaps there are enough devices nearby that only low-power transmission needs to be done. Maybe the desk can sense the paper, or the clipboard has a good transceiver.

And if (a fraction of) the light being used to view the device is not enough to power it? Hmm. Let's take a step back. How much power is really needed to change the state of the paper at a spot? Perhaps less power than is needed to deposit plenty of graphite atoms on the surface: the friction of contact may supply enough energy to operate the paper device. There are plenty of other sources of energy: piezoelectrics from movement, torsion, and tip pressure on the paper, heat from your hand, inductive power, the magnetic field of the earth, etc.

Still, I think that computing is becoming ubiquitous, and that one of the inevitable products of this in the future is the gadgetization of everyday objects.

Friday, April 6, 2012

New Ideas, Old Ideas

We have talked about where ideas come from, and that serves to illuminate the process of how new ideas come about. But what of old ideas? And how can old become new?

Old ideas can still be of use, but they must be constantly rethought. Legacy gets boiled away in the frying pan of technology, leaving only the useful bits. The best practices of technology are constantly changing, though, which has a drastic effect on what is possible, and also on how much the consumer must spend keeping up with it.

Tastes can change or differ between demographics as well, which leads studios to make and remake the same old plot line, songwriters to rearrange their songs, and DJs to remix them. What was great on a PC can be even better using the interface advantages of Mac OS X, and now it can be more widely used by moving it to the multitouch environment of iOS.

So old ideas are generally only of use when they represent something that a user still wants to do, but which has not yet been possible with current technology, or has not been ported to a new platform. Oh, there are plenty of these things, like flying cars and instant elsewhere. And porting desktop software, like Painter, to iOS might also provide a tool that is useful to a wider class of users. But is that the only way legacy can continue to be useful?

No. There is the real world to consider. In the gadget world, things can only change so fast because of the constraints placed on technology. We have talked about what accelerates technological advancements, and also what holds them back. Are some of the constraints placed on technology actually valuable?

Standards

Light bulbs serve to illustrate this issue. While the electric light is over a century old, it can now be reinvented with such technologies as compact fluorescents and even LED lighting. But new ideas aren't enough. They must still screw into the same old sockets, have the same form factor, and utilize the same electricity, otherwise they won't be useful in the standard enclosures. Sockets and enclosures have been designed to standards that come from the 1950s and 1960s.

OK, standards aren't the same worldwide. For instance, power plugs differ from country to country. But they are still standards, and they must be considered when building something new.

The persistence of a standard helps us in one very important way: it lets us build to a specification. This balances customization against factory production. If we can build something in a factory, it can lead to cheaper, more plentiful goods. When you are building homes, for instance, it is necessary to source your building materials. Things like light bulb sockets all meet a certain set of specifications. These are important, because without them, each house would have to be designed using custom parts. Standards and specifications lead to modularity and thus ease of building.

While standards persist, they can still be changed over time. All these cars using gasoline don't have to be retrofitted to use hydrogen fuel cells or batteries: rather, they will become obsolete and then be recycled. Each car has an obsolescence period, which means that even if a new fuel source were adopted right now, the transition would still be slow.

So standards changes must be evolutionary to make economic sense.

Ergonomics

Some specifications come right from our own bodies: ergonomics.

While standards can change evolutionarily, there is no changing the ergonomic requirements. These are set in stone. This can involve some differences from person to person, true. Otherwise there would not be several sizes of clothes, shoes, and even hats. But there is such a thing as a standard observer that controls how displays, detail, and color should appear. And there are standard sizes, sometimes referred to as the canon, for target audiences, like children, adults, men, women, and so forth.

There are preferences that differ from region to region. I heard once that European magazines print skin tone quite a bit darker than Americans, for instance, in beauty models. What I considered garish in an ad for suntan lotion was considered normal in Germany. But preferences change a bit like standards. This process has been known as westernization in the literature, though there have been other kinds of changing preference trends over the years.

Preferences also can be a matter of taste, and can be specific to a given demographic, as I have mentioned before. These do change, and can be influenced. Runaway leaders in any given area, like Apple in the gadget world, like the Beatles, Lady Gaga, or Adele in the music world, like Toyota's Prius in the automotive world, even Emperor Augustus in the world of infrastructure, security, and political stability, do influence taste and preference. This has been happening for thousands of years, but it is happening much faster now than it was in Emperor Augustus' time!

Ah, to have a slice of that kind of fame: the persistent kind.

Physical Limitations

We probably take for granted that there is one thing that is the same for all people: gravity. But even a "given" like gravity may have to be re-examined in the light of something like space travel. For instance, those aboard the ISS live in a microgravity environment. Air and the presence of oxygen is another thing that must be quite similar for all people. Sure, it can be thinner at high elevations.

All these things present the basic boundary conditions of all technology: constraints that they must exist under. And some of them are more than constraints, but actually requirements, like oxygen. This is why there are portable oxygen cylinders for exploring under the ocean, oxygen bars for people who want to invigorate their brains at the end of the day, and such. It is interesting that gravity represents both a limitation and a requirement.

Still, anti-gravity technology would still be extremely useful. Or anti-momentum. Or anti-entropy.

We think of this aspect as design constraints. They are the fixed givens that represent things we can't change. Which is why changing them would be such a game-changer. Nothing would ever be the same if we figured out how to polarize gravity or extract free power from dark energy.

Revolution, Evolution

How could something like lighting change even further, now that the world is changing quickly towards LEDs and even simpler technologies? Well, you might need a standard for a light panel.

No, I'm not talking about color panels. I'm talking about a part of the wall that produces light. Touch what initially appears to be an inert wall and a UI appears where you touch it. Slide the right widget and the light turns on, ramping from low brightness to usable light, promoting accessibility. This is the kind of panel that doubles as a wall and also as a television. And make it the kind of material that's durable enough for kids to bounce basketballs off of.

A standard might not actually depend upon the technology. For instance, such a panel could have LED lighting or perhaps even some kind of future lighting that uses OLEDs or new technologies, like nanotechnology. Or variable-opacity materials using polarization like liquid crystals.

My point is that we should consider what we want when constructing new standards. Not what exists currently. This will take time to become adopted, of course, which is why it is evolutionary.

Why does some science fiction become dated, even though the ideas are sound? Simple. The ideas and their portrayal no longer match our understanding of how they must work given modern technology. They must evolve along with the world.

Even with Apple's designs, evolution is the ticket to revolution, it seems.

Revolutionary changes like electric vehicles are a great concept, and we must really understand them more so we can make the great leaps and bounds. But you should have a standard first, which addresses what you want to do with them. For instance: how do you charge them? How does it fit into the existing infrastructure? How long do the components last? How can they be replaced?

Tesla seems to be asking those questions and making informed decisions that help to solve the real problems implied by these questions. But there are areas in this field where technology really needs to catch up.

You have to ask the questions that the users will ask, and also the ones that they will ask once they have bought and have used the product. Your standards must be arranged around the best answers to these questions.

Technology Finally Caught Up

Nobody can miss the exhilaration of the first iPhone, of the first iPad. Technology finally caught up with what I want to do!

Ideas abound. They are first called fantasies, like science fiction. Slates that you can view videos on (in 2001: A Space Odyssey) or that you can review documents on (Star Trek: The Next Generation). This shows that they can be mocked up by designers and clever futurists well ahead of the possibility of one being actually made to work. Then, someday, they are called reality. When technology catches up.

This is probably why you can't patent what you want to do, only how it actually can be done: it shouldn't be possible to patent fantasy.

With Painter, there are several instances of this principle. When I started working on Painter in September 1990, I was using a 16 MHz single-core 68020 in my Mac II. By 1998, the PowerMac G3 on my desk was using a 300 MHz PowerPC RISC processor, which was probably 25 times faster when you consider processor speed, the number of clocks per instruction, the memory speeds, and improvements to caches. When I started, to get the brushes to work in real time I had to code the inner loops in 68020 assembler, so the inner loop of depositing a dab of the brush within a stroke could be as fast as possible. But with 25 times the compute power, I could do huge amounts of computation when depositing a dab of a brush, and even completely rewrite how the brush stroke was rendered, without even resorting to assembler recoding. Some ideas that I had earlier that weren't practical had finally become possible. Technology had caught up with my ideas. At least some of them.
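
This is emphatically not Painter's code, just a toy C sketch of what "depositing a dab of the brush within a stroke" means: walk the pen's path at a fixed spacing and stamp a soft circular dab at each step. The canvas size, radius, spacing, and opacity are arbitrary.

/* Toy stroke renderer: a stroke is a sequence of soft circular dabs. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define W 64
#define H 64

static unsigned char canvas[H][W];

static void deposit_dab(double cx, double cy, double radius, double opacity)
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            double d = hypot(x - cx, y - cy);
            if (d > radius)
                continue;
            double a = opacity * (1.0 - d / radius);      /* soft falloff to the rim */
            double v = canvas[y][x] + a * (255 - canvas[y][x]);
            canvas[y][x] = (unsigned char)(v + 0.5);
        }
    }
}

int main(void)
{
    memset(canvas, 0, sizeof canvas);

    /* Walk a straight stroke from (8,8) to (56,40), one dab every 2 pixels. */
    double x0 = 8, y0 = 8, x1 = 56, y1 = 40, spacing = 2.0;
    int steps = (int)(hypot(x1 - x0, y1 - y0) / spacing);
    if (steps < 1) steps = 1;
    for (int i = 0; i <= steps; i++) {
        double t = (double)i / steps;
        deposit_dab(x0 + t * (x1 - x0), y0 + t * (y1 - y0), 6.0, 0.35);
    }

    printf("stroke rendered with %d dabs\n", steps + 1);
    return 0;
}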

The machine I'm using right now is about 3000 times as powerful as the Mac II I was using in 1990 (or more, because of multiple cores and also a massively parallel GPU). So, if I were to write Painter today, I would make an entirely new set of design decisions. And they would be based on what I want to accomplish, not nearly as much on the limitations of the hardware I'm using.

Moore's Law is not guaranteed to continue, without rearranging the ways we compute. We must begin to use massively parallel and also individually powered (meaning individually able to be turned off) architectures. This has already begun in the GPU and gadget worlds.

Soon enough computation will produce more heat than we can handle, and the conversion of waste heat to more power to compute is also going to count.

Everything that is happening now points to the indisputable fact that local computing is necessary. Some years ago there was a theory that computing would be done somewhere else and all you would need is a dumb terminal and a fast internet connection to do anything you wanted. Now it is becoming clear that local computing is important, and so we will continue to push the limits of Moore's law and also the limits of the minimization of power consumption.

Even in server farms, in the cloud, we will continue to push the limits of these two pacing factors for the technology of computation. Speed, memory, and power consumption will be paramount, while the moving of more and more data in pipes with more and more bandwidth will also be a pacing factor for the cloud.

People still have plenty of ideas with which technology is yet to catch up. You see, old ideas do still count. But they may need to evolve a bit.

Wednesday, March 14, 2012

Post-PC

On June 1, 2010, at D8, Steve Jobs proclaimed that we were entering the post-PC era. This was quite a shock to some, but not terribly surprising to others. He had just introduced the magical iPad on January 27, and on April 3 Apple sold 300,000 iPads on their first day of availability in the US. Still, when Steve mentioned that the post-PC era was coming, I totally got it.

He didn't mean that the desktop was dead, just that fewer and fewer people would be using them.

My take on it was that you could carry your digital life around with you wherever you went. On June 1, I was already using it for email, web browsing, and my calendar. I knew that this constituted much of what people do with computers. Yes, I figured the vast majority of users could do what they needed on an iPad.

On day one, the iPad was activated by connecting it to iTunes on your desktop, tethering it to your PC. But on October 12, 2011, with iOS 5's release, Apple rectified this by allowing you to activate your iOS devices over the air. They cut the cord. This was done simultaneously with the iCloud release, which enabled users to sync their data to the cloud.

So, last year I was thinking, who needs a computer?

Well, that thought really only occurred to me in passing, because I used a desktop computer quite a bit for computer programming development. I wasn't your typical user.

But, for the average user, a desktop or laptop might not seem necessary at all. And this is the essence of the post-PC transition. This is evidenced in the cannibalization of desktop PC, laptop, and netbook sales by iPad.

With every release of iOS, the notion that you don't need a computer is becoming clearer and clearer. Recently, Apple released iPhoto for iPad and iPhone, and I can tell you it is quite effective.

Naysayers

It wasn't at all surprising that a few people were attempting to debunk Steve's point of view immediately. Take Steve Ballmer, Microsoft's CEO. I imagine Microsoft would have everything to lose if Steve Jobs' proclamation turned out to be true. So what did he say?

At the same conference, D8, two days later, Walt Mossberg asked "Is the iPad a PC?" and Steve Ballmer answered "of course it is". The buzz was that the iPad was for consumption of media and that Windows tablets would be appropriate for creation.

Of course, history shows that Apple came out with iWork and most of iLife on the iPad very quickly. And now iPhoto is available. These are all creation apps. And now there are many, many creation apps on iPad for bloggers, artists, composers, and others.

Another of my favorite incorrect predictions was from Microsoft's chief research and strategy officer, Craig Mundie, in March 2011, when, referring to iPads and other tablets, he said, "personally, I don't know whether I believe that that space will be a persistent one or not". And he continued with "Today those things are primarily being used in a consumptive model, because they're not very good for creating stuff".

So it's not surprising that Microsoft, with everything to lose, has applied a full court press to the iPad. What's surprising is that they have had so little effect.

It turns out to be hard to make a tablet that is as good as the iPad.

Having been around at the time, I can say it took years to transform Mac OS X into iOS (yes, that's actually where it came from). So moving a huge boat anchor like Windows onto a tablet is going to be quite fun to watch. And, by the way, I wouldn't hold my breath waiting for it if I were you.

For Windows users, all the news is bad. First off, the interface is totally different. Second off, it can't run your Wintel applications. Third off, Microsoft is not building their own tablet, so it's just going to be a comedy of errors when it does come out. Oops, scrolling isn't smooth. Ooops, the baseband is too slow, and bogs down the processor. Ooooops, did you say you want pictures on this thing?

What I can't understand is why Microsoft hasn't fired Ballmer. Microsoft has lost position, profits, and prestige in both the smartphone and the tablet markets, all under Ballmer's watch.

But it gets weirder still.

When Steve Ballmer brought in Bill Gates to check out the ill-fated Courier tablet, Bill had "an allergic reaction" to the content creation side of the device, questioning the logic behind such a positioning.

So it's just possible that Ballmer's decisions were simply influenced by Bill Gates' bad call on tablets.

And it is also possible that Microsoft has gotten too large and entrenched in Windows-desktop-centric Office-using views to mobilize itself against the threat of the iPad, which they probably still will downplay using obviously stupid and childish observations until they become the purveyor of dead OS's.

Well, I'm not the only one to think this, apparently. Goldman Sachs saw the iPad as a serious threat to Microsoft, and downgraded them in October 2010.

Multitouch Revolution

The thing about iPad and iPhone both is that they employ a radically new way of interacting: multitouch. It is as easy as using your fingers to type, scroll, browse the web, and do pretty much everything else.

Really, as soon as the iPhone came out in June, 2007, competitors worked to duplicate the multitouch experience. Before the iPhone, smartphones were all keyboard. Buttons everywhere.

Perhaps it's RIM, with its BlackBerry phones, that has had the most difficult time, since typing on glass turned out to be pretty easy and requires less finger movement. I don't have any problem at all typing on an iPad, especially if I am using the smart cover to tilt it to an ergonomic typing angle. Again, email is a pleasure on iPad.

But the main thing about multitouch is pretty clear: everything is going that way. All the laptops are going to multitouch, and it works pretty well.

Gadget Requirements

There are other things that make gadgets easier to use than desktop computers. The portability aspect makes it possible to take pictures of things you see. This is a big one. The integrated GPS makes it possible to find your way, check traffic, and even check in to social media sites like foursquare.

Battery life is a big issue with gadgets. Never mind changing the battery: if you have to carry an extra one, the gadget simply isn't as useful as one that lasts all day.

Slow devices aren't useful. Scrolling must be seamless. Movies must be real-time. And, most importantly, it should turn on instantly. This implies that plenty of flash memory, essentially a solid-state disk (SSD), is a necessary feature.

Connectivity is key in any gadget. Without it, the device might just as well be an expensive paperweight. So, the more kinds of connectivity, the better. I'm talking about wi-fi, 3G, 4G, and LTE.

My point is that you can't just have one or two of these things. You have to have all of these things.

Where Is It All Going?

My take is that multitouch is here and it will continue to pervade everyday life. Pretty soon cars will have multitouch control panels (check out the Tesla Model S). But don't expect multitouch to make its way onto your desktop screen. Holding your hand up to the screen is just plain unergonomic, and quickly becomes tiring.

But touchscreens for common tasks like getting directions on the subway will be highly desirable.

Gadgets will have to replace wallets also. Use your iPhone to buy stuff at the grocery store and it debits your account. Walk into a restaurant and get the menu on your iPad. Near-field communication (NFC) technologies like RFID seem a likely option.

Apple shows us with Siri in the iPhone 4S that voice command technology is quickly maturing. I find it useful for dictation, when I want to write an email or speak a text message.

Several pundits are predicting that Siri will make its way into other common everyday objects.

The march of technology is relentless. And it's accelerating.

Sunday, January 15, 2012

Future, Part 1

Is technology advancement accelerating? What's holding it back? When can I get my flying car? Enquiring minds want to know!

To answer these questions and others that shape the future, let's look at a concrete example of technological advancement and see what it tells us.

Display Panels

In 2007, when I got my first iPhone, I knew I was holding the future in my hands. And when the iPad arrived, it seemed that Apple single-handedly propelled us into the 24th century. But these inventions also depended upon the relentless advancement of technology: capacitive touch panels, software and hardware for multitouch processing, thin display panels, battery technology, architecture for economical power consumption, MEMS, and so many other cool things. We will look at thin display panels for a moment just to get an idea of how technology advances. This will give us a time frame that we can use to understand how fast the future might arrive.

Thinner, brighter display panels that consume less power, clearly necessary for smartphones and tablets, are one invention that has taken years and years. Let's consider the timeline from conception to real-world commercial availability.

George du Maurier's illustration in Punch, 1879
Between 1851 and 1855, Czar Nicholas I had a prototype Pantelegraph installed between Moscow and Saint Petersburg, and about 5,000 faxes were sent between those dates. So people were certainly interested in sending images, at least in the form of faxes.

On December 9, 1878, George du Maurier's sketch of the Telephonoscope appeared in the Punch Almanack for 1879. It showed a window-sized display of video transmitted from a distant source, with people talking to each other at a great distance, like FaceTime. Although it was intended as a spoof of Edison's inventions, it indicates that people were already thinking of this as something they wanted.

Philo Farnsworth and the first television
The first real television was demonstrated to the press on September 1, 1928 by Philo Farnsworth. But RCA Corporation disputed his patent, and it was stalled in the US for ten years. However, German companies licensed it in 1935 and sets were produced in limited numbers. The 1939 World's Fair in New York City brought a public demonstration of the technology. Farnsworth's patent was finally licensed by RCA and Gaumont that year, but World War II stalled development once again. In 1948, after Farnsworth's patent finally ran out, television became commercially available in the US.

But I remember our first color TV when I was a kid, and it was quite large, and even had tubes inside. Well, all CRTs have at least one tube, the Cathode Ray Tube it is named for. At some point CRTs were replaced almost entirely by flat panel displays. Did that happen right away?

Not at all. The first flat plasma panel displays were introduced in 1964 at the University of Illinois at Urbana-Champaign. It took another 33 years until the first large color plasma panel display was introduced by Fujitsu.

Liquid crystals have been researched since the 1880s, but LCD panels didn't start appearing until 1972, when Westinghouse demonstrated the first active-matrix LCD panel.

Because technology marches on in separate but simultaneous paths, plasma panels were the dominant television flat-panel technology from about 2000 through 2008, when LCD panels finally took more than a 50% share of the flat-panel television market.

Now companies are producing thin, bright TVs that appear to be bringing us directly into the world of Total Recall, where the walls are just displays. Sharp Electronics is bringing us ultra-thin displays from its factory that builds 10th-generation panels. And an ID card has been demonstrated with an embedded OLED panel, activated by RFID, that shows a 3D image of its holder. Just like Total Recall.

In fact, the movie is now being remade, in part because its technology is realizable and just doesn't seem so much like the future any more.

Apple iPhone 4
Today, with LED-backlit LCD panels in virtually every smartphone, tablet, laptop, computer monitor, and television, resolution has become even more important, particularly since the introduction of the iPhone 4, with its Retina display, in 2010. In October 2011, Toshiba announced a 2560-by-1600 panel in a 6.1-inch display, a resolution of nearly 500 pixels per inch.
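As a quick back-of-the-envelope check (assuming the 6.1 inches refers to the diagonal of the active area), the pixel density works out to:

ppi = sqrt(2560^2 + 1600^2) / 6.1 ≈ 3019 / 6.1 ≈ 495

That is roughly 50% denser than the iPhone 4's 326 ppi Retina display.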

So, to sum it up


  • 161 years ago people first started transmitting images
  • 135 years ago people first imagined having a transmitted image display on a wall
  • 84 years ago people first demonstrated an all-electronic display
  • 64 years ago television became commercially available in the US
  • 48 years ago people first demonstrated a flat panel display
  • 40 years ago companies first started marketing LCD panels
  • 29 years ago Seiko introduced the first hand-held TVs
  • 20 years ago portable computers first featured flat panel displays
  • 16 years ago Fujitsu commercially introduced a 42" plasma display panel
  • 9 years ago Kodak and Sanyo introduced the first AMOLED color panel
  • 5 years ago Apple introduced the iPhone


My first point is that technology advancement definitely accelerates over time. My second point is that sociological, political, and economic forces also hold technology back. A third point, not specifically illustrated by the display panel example, is that external requirements can force progress.

Why Technology Accelerates

My theory is that there is a copy effect, a synergy effect, and a forcing effect, and that together they accelerate technology.

One of the basic principles of technology advancement is that once a technology has been demonstrated, it is only a short time before someone else can duplicate it. This I call the copy effect. Whether it happens because information is stolen, or simply because there are a large number of clever people, is a good question. Either way, people are motivated by the understanding that the advancement is highly desirable.

In 1945, the secrets of the atom bomb were smuggled out of Los Alamos by Klaus Fuchs and Sergeant David Greenglass through Harry Gold, and delivered directly to Julius and Ethel Rosenberg and from them to Anatoly Yakovlev, their Soviet contact. When there is desire, information finds its way out.

Today, information doesn't need to be smuggled. In order to transmit it, all one needs is an internet cafe. There is evidence that information doesn't even need to be encrypted to be disseminated widely. So all it takes is one whistleblower to move technological secrets.

Although it is not about technology per se, the WikiLeaks scandal shows how quickly Bradley Manning and Julian Assange were able to move large amounts of secret information.

Another basic principle of technology advancement, demonstrated admirably by the display panel example, is that technology is created by standing upon the shoulders of those who have come before. I call this the synergy effect, particularly when it is accelerated by the free dissemination of information. In other words, the internet.

Why synergy? With synergy, 2+2=5: the whole is greater than the sum of its parts. When person A discovers something and person B learns of it, person B may be able to improve upon it in some way that makes it truly useful.

For instance, the invention of money enabled us to advance beyond a barter system. The invention of the electronic exchange of money enabled banks to conduct commerce on a larger scale. But it wasn't until the invention of point-of-sale systems for transacting commerce, including credit and debit cards and the systems for reading them, that the promise of electronic commerce became really useful for everyone.

A third basic principle guiding progress is that necessity is the mother of invention. Once the telegraph was in common use, the need to convey emotion and intent forced the invention of the telephone. This is the forcing effect.

Many technological inventions have been made in order to gain the upper hand in matters of conflict. The creation of armor emboldened the knights of the Crusades. Attacks by large numbers of people spurred advancements in defense: castles, heavy stone walls, towers, moats, and traps. Advancements in defense forced the creation of new siege technologies, such as trebuchets, siege towers, and siege hooks. The American Civil War led to the invention of the Gatling gun and later the machine gun, which was prominently used in World War I. And then came the dawn of the nuclear age, when the atom bomb became the deciding technology that ended World War II.

It continues to this day, with man-in-the-loop systems, precision-guided munitions and bombs, and UAVs.

When you put these three principles together and into the hands of billions of people, it becomes impossible for technology to be held back. At some point, the spread of information will reach a maximum limit, where everybody knows everything as soon as it is known. But notice also that some events can simultaneously hold technology back and push it forwards.

All in all, this is still good news for the future, if we survive it.

Why Technology Gets Held Back

Public sentiment is a very good first reason that technology can get held back. Right now, we seem poised on the brink of new methods of portable energy storage, like fuel cells. But the amount of electricity required to generate enough hydrogen for mass fuel-cell adoption is large. Where will we get that electricity? One technology that seems almost certain to be able to provide it is nuclear energy.

But such events as Three Mile Island and Chernobyl, and more recently the effect of the March 11, 2011 Tohoku tsunami on the Fukushima nuclear power plant, are turning public sentiment against nuclear power. The dangers associated with the storage of high-level waste (HLW), such as spent fuel rods, are also widely known problems, and their implications for future generations cannot be ignored. This has led to the rejection of the Yucca Mountain facility in Nevada (though it's not over yet), and also to the creation of better HLW storage facilities, such as the Östhammar Forsmark facility in Sweden, which could be completed in 2015.

Political turmoil is a second reason that technology can get held back. As discussed earlier, World War II held back the advancement of television. It also held back jet engines.

Periodically, purges have caused huge destruction of information. The burning of the Library of Alexandria was one example: it is speculated that plans for mechanical inventions, perhaps including the Antikythera mechanism for predicting astronomical positions, were destroyed accidentally by Julius Caesar in 48 BC. This disrupted scientific progress, since huge stores of knowledge were lost.

When the Qin dynasty ordered the burning of books between 213 and 206 BC, and then ordered more than 460 scholars to be buried alive, they nevertheless decided to keep the military technology.

Pressure from economic interests is a third reason that technology can get held back. Existing investments in infrastructure can quickly be made obsolete by disruptive technology. Companies wishing to retain control over a market can buy up the rights to inventions to keep them from coming to market, or simply suppress them.

For instance, General Electric engineer Ed Hammer invented the compact fluorescent light (CFL) in 1976, but GE failed to bring the device to market or to prioritize its research. It is believed that they feared it would disrupt their incandescent light bulb business. In reality, they might have owned that market for the many years before LED light bulbs were introduced, and saved the world plenty of energy in the meanwhile. But they were also selling nuclear reactors, you see.

It isn't a real stretch of the imagination to think that petrochemical energy companies might not want alternative energy sources to come to light. Some of these speculations border on conspiracy theory, but such incidents have certainly happened in the past.

Flying Cars

One of the most common predictions of the future is the flying car. In fact, we have flying machines today, in the form of airplanes. And we have magnetic levitation and induction, used in bullet trains. But to realize the flying car without using the ground effect or a rocket to keep it aloft (both rather a problem for those underneath it) requires something different.

It requires antigravity.

Anti-gravity seems like so much science fiction today, but what would it really entail? We know gravity is one of the four fundamental, non-contact forces, along with electromagnetism, the strong nuclear force, and the weak nuclear force. In the hypothetical Theory of Everything (ToE), the gravitational force is unified with the other three forces by a single theory that clarifies the origins of all forces.

If force unification can be achieved, then it may be possible to treat gravity like any other force. There is indirect experimental evidence that gravity travels in waves (the slow orbital decay of the Hulse-Taylor binary pulsar matches the energy that gravitational waves would carry away), and gravitation is believed to propagate at the speed of light. So, if gravity can be treated like electromagnetism, then perhaps it can be polarized or cancelled.

We always assume that a vacuum is empty, that space is completely devoid of all matter. Gravitational waves are interesting because of how they must propagate: through the curvature of space-time itself. This suggests that the vacuum is not empty at all, but is permeated with energy (known as dark energy). In one theory, the Superfluid Vacuum Theory, space is actually made up of a Bose-Einstein condensate, a dilute gas of weakly interacting subatomic particles. This theory might be a basis for quantum gravity, which attempts to explain the gravitational force through the quantum interactions between these particles.

The duality of photons, tiny bits of light, as either particles or waves may also testify to the internal workings of space. Since photons can be polarized, it is not a stretch of the imagination to think that gravity can also be polarized, and thus components of gravity that act in a particular direction might be cancelled.

The evidence for dark matter, matter that has mass but doesn't interact with light or any other electromagnetic radiation, shows us that some kinds of matter can exist outside the Standard Model of particle physics, which in turn indicates that we have a lot to learn about physics in general.

Communication Through the Earth

Experiments verifying quantum teleportation show how a quantum state can be transferred between two entangled photons. It has been demonstrated through free space over distances of many kilometers. However, several problems make the process currently unsuitable for transmitting classical information. First, only a quantum state can be transmitted. Second, the information is not transported instantly; because the protocol also requires a classical channel, it is limited to the speed of light.

Yet, at the end of the day, a quantum state does get transmitted between the two entangled photons without interacting with the intermediate space. This is clearly evidence for the non-Cartesian connectedness of the fabric of space-time, at least at the quantum level.

While this technology does not accomplish zero-time transmission, it does hold the promise of transmitting information from point to point without the possibility of an intermediate interloper. Such a technique is extremely important for secure transmission, and would employ quantum key distribution, in which the two parties derive a shared secret key from the entanglement of quantum states.

Using such a system, you could communicate with a satellite in orbit at arbitrary bandwidths, regardless of whether or not it was on the other side of the planet. To intercept the information being transmitted, you would have to be at one end or the other. And even then, you couldn't get the information, because it would depend upon highly randomized quantum states that are kept in sync between the photons at either end.

Perfect for keeping secrets.

To create such an entangled pair of photons, called an Einstein-Podolsky-Rosen (EPR) pair, you would need a source for single photons that operates at room temperature. NASA is sponsoring the creation of such a device.
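To make the key-distribution idea a little more concrete, here is a minimal sketch in Python of the basis-sifting step behind entanglement-based key agreement (in the spirit of the E91 protocol). It is only a toy classical simulation under idealized assumptions: perfect, noiseless pairs, no eavesdropper, and a function name I made up for illustration. It is not how real quantum hardware is driven.

import secrets

def entanglement_key_sketch(rounds=64):
    """Toy simulation of the sifting step in entanglement-based key agreement."""
    key = []
    for _ in range(rounds):
        alice_basis = secrets.randbelow(2)  # Alice picks a measurement basis at random
        bob_basis = secrets.randbelow(2)    # Bob does the same, independently
        outcome = secrets.randbelow(2)      # an ideal pair yields one shared, random bit
        if alice_basis == bob_basis:        # keep only rounds where the bases match;
            key.append(outcome)             # mismatched rounds would feed a Bell test instead
    return key

print("".join(str(bit) for bit in entanglement_key_sketch()))

On average only about half the rounds survive the sifting, so you need roughly twice as many entangled pairs as key bits, and a real protocol would also sacrifice some rounds to check for eavesdropping.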

Monday, December 26, 2011

Disruptive Technology

Welcome to the future

Its t's are crossed and its i's are dotted with disruptive technology. And so, by the way, goes our past. In the 16th century, the Spanish conquered the indigenous Mesoamerican civilizations by using the technological innovations of horses, trained dogs, gunpowder, and particularly steel. This led to bloodshed and subjugation. In other words, to the disruption of an entire set of civilizations.

Fortunately, today, disruptive technology is a bit more gentle. We can see its force in lots of places.

Brick and Mortar

One example is online shopping. It is leading to increased availability of goods, and also to the demise of many brick-and-mortar stores. Online shopping, along with the prevalence of the superstore (WalMart, Target, Home Depot, Best Buy, Costco, etc.), is accelerating the disappearance of the mom-and-pop shops of yesterday. Amazon has a new app called Price Check that may do much to kill off brick-and-mortar altogether.

OK, it's really only a trend. But there are markets that have been changed forever (or are on their way to being so). Let's look at a few.

Bank Branches

Consider bank branches. With automatic deposit, the prevalence of ATMs, and banking on your iDevice, who needs 'em? Perhaps banks will phase out paper checks. Paper checks are almost unheard of in Sweden, I have heard.

Record Stores, Video Stores

Now let's look at some kinds of stores that really have been killed off by disruptive technology. Consider the record store. Record stores were just filled with disrupted technologies. I used to go to Tower Records over in Campbell quite often. First to get records. Then, when records were disrupted by CDs, I went to get CDs. I remember getting VHS tapes there for hit movies. Then it was DVDs that I was shopping for, because the DVD disrupted the VHS market. Eventually I stopped going to that store completely. Because of iTunes. Oh, and Netflix. Also, many people now get their movies from on-demand systems built into their cable service. These technologies are disrupting entertainment media, and this change is leading to the unbundling of packaged media.

Bookstores

Now consider two more venues for another commodity: books. Bookstores were partly eaten up by online sellers like Amazon, and partly by superstores like Barnes and Noble and Borders. But now both are poised to be defeated by online book delivery. The Amazon Kindle and the Apple iPad are two devices leading this charge. Right now, the bookstore is on its way to extinction, like the record store.

Libraries

The second venue on its way to extinction is the library. This one is the most troubling, because most books are not yet digitized. Google is having significant difficulty doing just that. But if it succeeds, the library is all but dead.

What could replace the library? Banks of servers containing all the printed information in existence, that's what. Stored redundantly. And, hopefully, in a format that is as close to permanent as possible. Books, by the way, are far from permanent. For instance, about 25 percent of the 14 million books at the Library of Congress are presently too brittle for normal use. We are going to need these information banks soon, folks.

Newsstands

Magazines are the next form of printed media to be replaced by their downloadable cousins. As smartphones and tablets become more prevalent, so will the consumption of media through these devices. Some are predicting the death of printed media altogether. I haven't bought a newspaper in years. When Google News does just as well, why would I?

Gadgets

Now let's look at gadgets. The smartphone is replacing a whole host of gadgets and thus disrupting all sorts of markets.

PNDs

Personal navigation device (PND) sales have been on the decline. An iPhone contains a GPS receiver, which seems to be displacing a whole bunch of hand-held GPS units. TomTom, Garmin, and Magellan devices have all seen lower sales since 2009, and those companies are now starting to make apps for the smartphone market. Disruption.

MP3 Players

Smartphones, particularly the iPhone, have MP3-playing capability built in. So, why would you need an iPod if you had an iPhone? Well, it seems that the cannibalization hasn't become too bad yet, but we do see a decline in iPod sales.

Portable Gaming Consoles

Here it seems that the number of people using the iPhone and iPad as portable gaming devices is on the rise. This may be due to the inclusion of fast graphics processing unit (GPU) hardware, and also of MEMS 3-axis accelerometers and gyroscopes. Sales of portable game software and hardware are definitely moving towards iOS. It has been estimated that Nintendo lost $925 million to iOS in six months alone, and cannibalization of that market is expected to continue in 2012.

Cameras

The iPhone's cameras have been getting better and better, and so its owners are using it more and more as their camera of choice, their camera of convenience. Consider Flickr uploads: the iPhone 4 is the most popular camera for uploads on that site. With a smartphone in your pocket, a point-and-shoot is a hard sell these days. That's not true for real camera enthusiasts, of course. Ongoing issues such as rolling shutter and image stabilization are still drawbacks for smartphone cameras, as is the absence of mechanical features like optical zoom and aperture control.

PCs

Are we in the post-PC era? It appears that the ascent of the processor under Moore's Law is nearing an end. Now it is less about how fast the main CPU runs, and more about what a computing device can do. Nonetheless, if it is too slow, many would say it simply cannot do it.

The configuration of the computing device is changing to accommodate the limits on the growth of computing power, and frankly to accommodate the tasks that people want to perform with their devices. Specialized processors for graphics (GPUs) and image processing (ISPs), along with digital baseband processors, are placed on systems-on-a-chip (SoCs) together with application processors, some of which are starting to have multiple cores. Audio and video codec hardware is usually present as well.

Many of these organizational changes in computing devices are happening because of the relentless pace of improvements required for smartphones and tablets. But what of the desktop PC? In 2008, laptop shipments first exceeded desktop PC shipments in the US. Now, in 2011, even laptop shipment growth is slowing, due to tablets and smartphones.

Mobile computing devices are explosively more popular and important than desktop computing devices. This is particularly so for Apple.

Rotating Disk Storage Media

I probably can't list all the storage media that were outmoded and disrupted by new technology, but three things are clear: portability, media size, and access time are critical factors in desirability. Hard disks have increasingly fulfilled those requirements for decades. Now a new technology is destined to take over: flash memory. Flash memory is the main component of solid-state disks (SSDs), and it is the kind of memory that makes up the local storage of smartphones and tablets. For a few years now, it has also been taking over laptop storage and even desktop storage. I see disruption happening here. Hybrid hard disk/SSD drives will probably provide an interim solution, just as hybrid internal combustion/electric vehicles are flourishing at present. But I expect that power and weight requirements will win out, and hard disks will become fully disrupted over time.

Displays

Once upon a time, I used a storage scope as the display on a CAD system. These were eventually replaced by raster CRT devices, and those were replaced by LCD screens of various kinds (with compact fluorescent, and now LED, backlights). My opinion is that the technology which produces a crisp, accurate, bright, high-contrast color picture while using the least power will win. So display technology is constantly being disrupted. Power requirements are paramount for displays (called panels) because batteries in mobile devices such as tablets hold a limited amount of energy.