This day is an interesting one for Microsoft. First, Ballmer sends out a letter to employees stating that he will resign within 12 months. Then it is announced that there is a committee on the Microsoft board, including Bill Gates of course, with the responsibility of finding a new CEO. And no, I suspect that Ballmer is not on that committee.
Some writers are saying that Microsoft is not forcing Ballmer out. But think about it. If you had to get rid of a failed CEO who owned 333 million shares of your company's stock, what would you do? It was most certainly a negotiated force-out. With a legal release. And probably some kind of honorary employment that requires Ballmer to only sell within certain windows of time and keeps him on a leash.
Welcome to the mobile revolution.
I must say that this change is way too late. After all, people were already clamoring to fire Ballmer in 2010. And it doesn't clean things up soon enough either. Obviously Microsoft's board of directors should have been doing this for the last several years!
The reorganization that Ballmer has been carrying out seems like a smart idea, except that it is trying to make a silk purse out of a sow's ear. It's made for the PC era, which is slowly fading away. Still, the new organization is probably one less thing that a new CEO will have to worry about. That is: if he accepts this vision for the new Microsoft. A vision that depends upon Microsoft succeeding in the mobile revolution. Even with the reorg, Microsoft has a corporate culture that can't simply turn on a dime.
And Windows is exactly the problem.
Energy Efficiency
The mobile revolution has created two very interesting trends in the computing landscape. These are battery longevity and cloud computing. In order for batteries to last a long time, the products they power must be energy-efficient in a system-wide way. In order for cloud computing, with its massive compute farms, to be cost-effective, each server must be singularly power-efficient and generate as little heat as possible since cooling is a power consumption concern as well.
Of course battery longevity also affects electric cars like the Tesla. But, when it comes to computing, the battery longevity comes from three sources: more efficient batteries, hardware systems where power efficiency is an integral part of their design, and finally the economical use of resources in software. In the cloud computing arena, instead of more efficient batteries we are concerned with heat dissipation and cooling strategies.
More efficient batteries are a great thing, when you can get them. But advances in supercapacitors and carbon nanotube electrodes on various substrates have yet to pan out. This means that hardware systems such as SoCs (Systems on a Chip) must be designed with power efficiency in mind. Power management solutions that allow parts of a chip to turn themselves off on demand are one way to help.
Even at the chip level, you can send signals between the various components of an SoC using power-efficient transmission. For example, the MIPI M-PHY physical layer enables lower power consumption for the transmission of the high-frequency data that usually chews up so much power. Consider using a camera and processing its data on-chip, or a scaler that operates from and to on-chip memory. These applications involve images, which are huge resource hogs and must be specially considered in order to save significant amounts of power.
But there's more to this philosophy of power management, and this gets to the very heart of why SoC-based gadgets are so useful in this regard. General tasks that use power by processing large amounts of data are handled increasingly by specialized areas of the SoC. Like image scaling and resampling. Like encrypting and decrypting files. Like processing images from the onboard cameras. Like display processing and animation processing. Like movie codec processing. Each of these applications of modern gadgets is a resource hog. So they must be optimized for power efficiency at the very start or else batteries simply won't last as long.
Of course, you could simply use a bigger battery. Which makes the product larger. And less elegant!
Windows?
So what is the problem with Windows? The Wintel architecture wasn't built from the ground up for power efficiency. Or for distributed specialized computing, the way so many gadgets are constructed these days. And now you can see what a daunting process this must be for the Microsoft engineers who basically have to start over to get the job done. It will take quite a bit of time to get Windows to run on an SoC. Almost all implementations of Windows today are built to run on discrete CPUs. The Surface Pro appears to use a regular CPU board with a stock Intel part.
You see, power efficiency isn't just a hardware problem to solve. The software must also have this in mind with everything it does. The consumption of resources is a serious issue with any operating system, and affects the user experience in a huge way. I can't even begin to go into the legacy issues with the Windows operating system. The only way is to rewrite it. One piece at a time.
This problem has led many of the companies leading the cloud computing initiative to use Linux for their server operating systems. Mostly because it can easily be tailored for power efficiency. The server operating system share of Unix-based operating systems is 64%, compared to about 36% for Windows.
Servers are almost certainly going to go the way of the SoC also, with dedicated processors doing the expensive things like video codec processing, web page computation, image processing, etc. But I do see multiple cores and multithreading still being useful in the server market.
But not if they increase the power requirements of the system.
On mobile devices, Windows hasn't done so well either. Windows Phone probably has less than 3% of the mobile space, if that.
The Surface never clicked
Why didn't the Surface RT and the Surface Pro tablets succeed? First off, it's possible that they are simply yet to succeed. I just had to say that.
But more likely they will never succeed. It's hard to move into a market where your competitors have been working on the hardware solutions for years. And when hardware isn't your expertise.
At first, the Surface marketing campaign was all flash and no substance. A video of dancers clicking their tablet covers into their Surface tablets was certainly criticized by a few bloggers as vacuous. The main problem was that it stressed the expensive keyboard cover, and skirted the issue that the cover is totally needed. With the cover, the Surface tablet becomes just a crappy laptop. That you can't really use on your lap, because of the kickstand. Their follow-up video was curt and to the point, but sounds a bit like propaganda, saying "Surface is yours. Your way of working. Your way of playing."
Yeah. Trying to get into the mind of their prospective users.
But it's clear that their strategies were simply not working, because they went to the old adage "if we don't look good, then maybe we should just make them look bad". And they started releasing anti-iPad ads. The first one used Siri's voice to sum it up "do you still think I'm pretty?". They compared the price of the legendary iPad to the Surface RT without a cover. I suspect that a Surface RT without a keyboard cover is pretty much useless. The next anti-iPad ad compared features in a less quirky way. But anybody using a Surface RT knew that it didn't support the apps that the iPad has, or really have any of the advanced iOS/iTMS ecosystem in place. And without the keyboard cover it was cheaper, certainly. But you really had to have the cover to get full functionality.
So Microsoft decided to drop the price. This was echoed in the nearly $1-billion charge they took that quarter. Then they followed up by dropping the price of the Surface Pro! It seems they are desperate to sell their inventory. Otherwise they will be taking another huge charge against Windows revenues like before.
Friday, July 19, 2013
Observing Microsoft, Part 3

OMG there's so much to catch up on! But it's clear the trends I was referring to in my previous installments are being realized. To start with, I looked at their Surface and Windows 8 strategy, and then I looked at their management of the Windows brand, and its subsequent performance in the crucial holiday season.
Converting themselves into a hardware company, in the Apple model, is sheer madness for a software company like Microsoft. It will kill off their business model very quickly, I think. And yet they continue to do it, company culture be damned.
Ballmer is a coach personality, and clearly business looks like a football game to him. I can imagine him saying "if a strategy is not working against our opponent, then we must change it up". But it's clear that it's much easier to do this with a football team than it is to do the same with a company of 100K employees.
So I wonder why Microsoft doesn't just focus on making business simpler? Instead, they have been making it more and more complex by the ever-expanding features of Office, their business suite.
Software, hardware, nowhere
As one of Steve Jobs' favorite artists, Bob Dylan, once sang, "the times they are a-changin'". And Steve knew it, too. At the D8 conference in 2010, Steve said that the transition away from PCs in the post-PC era had begun and that it would be uncomfortable for a few of its players. I took this to mean Microsoft, particularly. But how has it played out so far?
Microsoft is a software company that dabbles in hardware. Most of its revenues come from software, but remember that they make keyboards and mice and also a gaming console. These are only dabbling though, because the real innovation and money is to be made in gadgets like phones, tablets, and laptops. But their OEMs make gadgets, which requires a significantly greater level of expertise and design sense. So Microsoft's entry into gadgets can only represent their desire to sell devices, not licenses. They want to be like Apple, but specifically they want to own the mobile ecosystem and sit on top of a pile of cash that comes from device revenues. And the OEMs like HP, Lenovo, Dell, Acer, and Asus are a bit left out; they must compete with their licensor. That can't be good.
So Microsoft is clearly changing its business model to sell hardware and to build custom software that lives on it. Hence Surface RT and Surface Pro. But their first quandary must be a hard one: what can they possibly do with Windows? Windows 8 is their first answer. The live-tiles "Metro" style interface is unfortunately like Greek to existing Windows users. The user experience, with no Start menu, must seem like an alien language to them.
This entire process is beginning to look like a debacle. If it all continues to go horribly wrong, the post-PC era could happen a lot sooner than Steve thought.
Microsoft ignores their core competence as they blithely convert themselves to a hardware company. Specifically, I think that's why they are doing it badly.
They could end up nowhere fast.
Microsoft's numbers
Microsoft is a veritable revenue juggernaut and has done a fairly good job of diversifying their business. An analysis of Q4 2012 reveals the following breakdown of their business units in revenue out of an $18.05B pie:
23% Windows and Windows Live
28% Server and Tools
35% Business
4% Online Services
10% Entertainment and Devices
This reveals that business is their strongest suit. Servers also speak to the business market. Online services also largely serve businesses. Each division, year over year, had the following increase or decrease as well:
-12.4% Windows and Windows Live
+9.7% Server and Tools
+7.3% Business
+8.1% Online Services
+19.5% Entertainment and Devices
This reveals that Xbox is their fastest-growing area. It is believed that Xbox is leaving the PowerPC and moving to AMD cores and their Radeon GPUs. This could be a bit disruptive, since old games won't work. But most games are developed on the x86/GPU environment these days.
It also shows that their Windows division revenue was down 12.4% during the quarter year over year. This involved a deferral of revenue related to Windows 8 upgrades. Umm, revenue which most likely hasn't materialized, and so you can take the 12.4% as a market contraction.
Why is the market contracting? Disruption is occurring. The tablet and phone market is moving the user experience away from the desktop. That's what the post-PC era really is: the mobile revolution. Tablet purchases are offsetting desktop and laptop PC purchases. And most of those are iPads. It gets down to this: people really like their iPads. It is a job well done. People could live without them, but they would rather not, and that is amazing given that it has only been three years since the iPad was released.
The consequence of this disruption is that PC sales are tumbling. If you dig a little deeper, you can find this IDC report that seems to be the most damning. Their analysis is that Windows 8 is actually so bad that people are avoiding upgrades and thus it is accelerating the PC market contraction. On top of the economic downturn that has people waiting an extra year or two to upgrade their PC.
Microsoft CEO Steve Ballmer stated in September 2012 that in one year, 400 million people would be running Windows 8. To date, it appears that only 80 million have upgraded (or been forced to use it because unfortunately it came installed on their new PC). That's why I said we need to ignore that deferred revenue, by the way.
If you look at OS platforms, Microsoft's future is clearly going to be on mobile devices. Yet they are not doing so well in mobile. In fact, they are becoming increasingly irrelevant, with about 80% of Windows Phone sales coming from only one manufacturer, Nokia. Soon, I think they may simply have to buy Nokia to prevent them from going to Android.
In the end, you can't argue with the numbers. The PC market is contracting, as evidenced by Windows revenue declining year-over-year. Tablets are not a fad. As the PC market contracts there are several companies that stand to lose a lot.
Reorganization
What is the Microsoft reorganization about? There are three things that I single out.
The first and most noticeable is that the new organization puts each division across devices so the software development is not device-compartmentalized, and so that Windows for the desktop is written by the same people who write Windows for the devices. At least in principle.
And, of course, games are now running on mobile devices, dominating the console market. And undercutting the prices.
This closely mirrors what Apple has been doing for years. And this clearly points out that Microsoft is envious of the Apple model and its huge profitability.
Second, in reorganizing, Microsoft is able to adjust the reporting of their financial data, to temporarily obfuscate the otherwise embarrassing results of market contraction. This is because if each division reports across devices then the success of a new device will hide the contraction of the old ones. At least, in theory.
But Microsoft made a huge bet in the Surface with Windows RT. And it's not panning out. They have just reported that they had to write off $900M of Surface RT inventory in the channel. The translation is this: it's not selling. They have instituted a price drop for Surface RT. I bet they won't be able to give them away. But when they finally are forced to, they will be the laughing stock of the mobile market.
Today, Microsoft is down 11%. That represents a correction: a revaluation of Microsoft's capitalization. It reflects a widely-held perception that the consumer market is lost to them.
Third, Ballmer wants the culture of Microsoft to change. They have been having problems between competing divisions. Coach, get your team on the same page! Wait: they should have been on the same page all along. After all, the iPhone came out in 2007, right? Ballmer didn't think too much of it at the time. That's why coaches hire strategy consultants.
A reorg can be even more traumatic than a merger. It's all about culture, which is the lifeblood of a company. It's what keeps people around in a job market that includes Google and Apple.
Monkey business
I have to give it to Microsoft: they really want to give their tablet market a chance. But they are doing it at the expense of their business market. They are reportedly holding off on their Office for Mac and iOS until 2014. A deeper analysis is here.
This is a big mistake. They need to build that revenue now because BYOD (bring your own device) is on the rise and they need to be firmly in the workplace, not made irrelevant by other technology. If they lag, then other software developers that are a lot more nimble will supplant them in the mobile space. Apple, for instance, offers Pages and Numbers as part of their iWork suite. And those applications read Word and Excel files. And they can also be used for editing and general work.
Microsoft should be focusing on making business simpler. Cut down on the complexity and teach it to the young people. Reinvent business. This entails making business work in the meeting room with tablets and phones. Making business work in virtual meetings.
They certainly had better make their software simpler and easier to use. They must concentrate on honing their main area of expertise: software.
If they don't do it, then somebody else will. Microsoft should stop all this monkey business, trim the fat, and concentrate on what adds the most value. They simply have to stop boiling the ocean to come up with the gold.
The moral
There are some morals to this story. First, don't ever let "coach" run a technology company. Second, focus on your core competence. Third, and most important, create the disruption rather than react to it.
Wednesday, June 26, 2013
Weaponized Computation

Once upon a time
I had an early gift for mathematics and understanding three-dimensional form. When I was 16 or so, I helped my dad understand and then solve specific problems in spherical trigonometry. It eventually became clear to me that I was helping him verify circuitry specifically designed for suborbital mechanics: inertial guidance around the earth. Later I found out in those years he was working on the Poseidon SLBM for Lockheed, so, without completely understanding it, I was actually working on weaponized computation.
This is the period of my life where I learned about the geoid: the specific shape of the earth, largely an oblate ellipsoid. The exact shape depends upon gravitation, and thus mass concentrations (mascons). Lately the gravitational envelope of the moon caused by mascons has been an issue for the Lunar Orbiters.
At that point in history, rocket science was quite detailed and contained several specialized areas of knowledge, many of which were helped by increasingly complex calculations. But there have been other fields that couldn't have advanced, where specific problems couldn't be solved, without the advances in computation. Ironically, some basic advances in computation we enjoy today owe their very existence to these problems. Consider this amazing article that details the first 25 years or so of the supercomputing initiatives at Lawrence Livermore National Laboratory.

Throughout our computing history, computation has been harnessed to aid our defense by helping us create ever more powerful weapons. During the Manhattan Project at Los Alamos, Stanley Frankel and Eldred Nelson organized the T-5 hand-computing group, a calculator farm populated with Marchant, Friden, and Monroe calculators and the wives of the physicists entering data on them. This group was arranged into an array to provide one of the first parallel computation designs, using Frankel's elegant breakdown of the computation into simpler, more robust calculations. Richard Feynman, a future Nobel prize winner, actually learned to fix the mechanical calculators so the computation could go on unabated by the huge time sink of having to send them back to the factory for repair.
I was fortunate enough to be able to talk with Feynman when I was at Caltech, and we discussed group T-5, quantum theory, and how my old friend Derrick Lehmer was blacklisted for having a Russian wife. He told me that Stanley Frankel was also blacklisted. Also, I found 20-digit Friden calculators particularly useful for my computational purposes when I was a junior in High School.
The hunger for computation continued when Edward Teller began his work on the Super, a bomb organized around thermonuclear fusion. This led John von Neumann, when he became aware of the ENIAC project, to suggest that the complex computations required to properly understand thermonuclear fusion could be carried out on one of the world's first electronic computers.

In the history of warfare, codebreaking has proven itself to be of primary strategic importance. It turns out that this problem is perfectly suited to solution using computers.
One of the most important first steps in this area was taken at Bletchley Park in Britain during World War II. There, in 1939, Alan Turing designed the Bombe, an early electromechanical machine built specifically to break the cipher and recover the daily settings used in the German Enigma machine.
This effort required huge amounts of work and resulted in the discovery of several key strategic bits of information that turned the tide of the war against the Nazis.
The mathematical analysis of codes and encoded information is actually the science of decryption. The work on this is never-ending. At the National Security Agency's Multiprogram Research Facility in Oak Ridge, Tennessee, hundreds of scientists and mathematicians work to construct faster and faster computers for cryptanalytic analysis. And of course there are other special projects.
That seems like it would be an interesting place to work. Except there's no sign on the door. Well, this is to be expected since security is literally their middle name!
And the NSA's passion for modeling people has recently been highlighted by Edward Snowden's leaks of a slide set concerning the NSA's metadata-collecting priorities. And those slides could look so much better!

In the modern day, hackers have become a huge problem for national and corporate security. This is partly because, recently, many advances in password cracking have occurred.
The first and most important advance was when RockYou.com was hacked with an SQL injection attack and 32 million (14.3 million unique) passwords were posted online. With a corpus like this, password crackers suddenly were able to substantially hone their playbooks to target the keyspaces that contain the most likely passwords.
A keyspace can be something like "a series of up to 8 digits" or "a word of up to seven characters in length followed by some digits" or even "a capitalized word from the dictionary with stylish letter substitutions". It was surprising how much of the RockYou password list could be compressed into keyspaces that restricted the search space considerably. And that made it possible to crack passwords much faster.
Popular fads like the stylish substitution of "i" by "1" or "e" by "3" were revealed to be exceptionally common.
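To make the notion of a keyspace concrete, here is a minimal sketch of my own (purely illustrative; the word list and substitution map are made up for the example) that enumerates one such keyspace: a capitalized dictionary word with the popular "stylish" substitutions, optionally followed by a couple of digits.

```python
from itertools import product

# A toy dictionary; a real cracker would load a full wordlist.
WORDS = ["password", "dragon", "monkey", "sunshine"]

# The "stylish" substitutions that turned out to be so common.
SUBS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def stylish_variants(word):
    """Yield every combination of applying or not applying each substitution."""
    options = [(c, SUBS[c]) if c in SUBS else (c,) for c in word.lower()]
    for combo in product(*options):
        yield "".join(combo)

def keyspace():
    """Keyspace: capitalized stylish word, optionally followed by one or two digits."""
    for word in WORDS:
        for variant in stylish_variants(word):
            cap = variant.capitalize()
            yield cap
            for digits in range(100):
                yield f"{cap}{digits}"

candidates = list(keyspace())
print(f"{len(candidates)} candidates, e.g. {candidates[:3]}")
```

Even with generous substitutions, a keyspace like this is tiny compared with the space of all eight-character strings, which is exactly why honing the playbook pays off.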
Another advance in password cracking comes because passwords are usually not sent in plaintext form. Instead, a hashing function is used to obfuscate them. Perhaps they are only stored in hashed form. So, in 1980 a clever computer security professor named Martin Hellman published a technique that vastly sped up the process of password cracking. All you need to do is keep a table of the hash codes around for a keyspace. Then, when you get the hash code, you just look it up in the table.
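Here is a toy sketch of that lookup-table idea. (Hellman's 1980 paper actually describes a subtler time-memory trade-off that uses hash chains so the table can be far smaller than the keyspace, but the flavor is the same: pay for the computation once, then answer every future query with a lookup.) The keyspace here, all strings of one to four digits, is deliberately tiny.

```python
import hashlib
from itertools import product

def md5_hex(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# One-time precomputation over a tiny illustrative keyspace:
# every string of 1 to 4 digits (11,110 entries in all).
table = {}
for length in range(1, 5):
    for digits in product("0123456789", repeat=length):
        candidate = "".join(digits)
        table[md5_hex(candidate)] = candidate

# Later, cracking any captured hash from that keyspace is just a dictionary lookup.
stolen = md5_hex("0042")        # pretend this hash came from a breached database
print(table.get(stolen))        # -> "0042"
```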
But the advent of super-fast computers means that it is possible to compute billions of cryptographic hashes per second, allowing the password cracker to iterate through an entire keyspace in minutes to hours.
This is enabled by the original design of the commonly used hashing functions, like SHA, DES, and MD5: they were all designed to be exceptionally efficient (and therefore quick) to compute.
So password crackers have written GPU-enabled parallel implementations of the hashing functions. These run on exceptionally fast GPUs like the AMD Radeon series and the NVIDIA Tesla series.
To combat these, companies have started sending their passwords through thousands of iterations of the hashing function, which dramatically increases the time required to crack passwords. But really this only means that more computation is required to crack them.
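A minimal sketch of that iterated hashing, often called key stretching. The iteration count and salt handling here are simplified assumptions; real systems use vetted constructions such as PBKDF2 (which ships with Python as hashlib.pbkdf2_hmac), bcrypt, or scrypt rather than a hand-rolled loop.

```python
import hashlib, os

def stretched_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Feed the digest back through SHA-256 many times to make each guess costly."""
    digest = salt + password.encode()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

salt = os.urandom(16)                  # stored alongside the hash, not secret
stored = stretched_hash("correct horse", salt)

# A legitimate login pays this cost once per attempt;
# an attacker pays it for every single candidate password in the keyspace.
assert stretched_hash("correct horse", salt) == stored
```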

Many attacks on internet infrastructure and on targeted sites depend upon massively parallel capabilities. In particular, hackers often use Distributed Denial of Service (DDoS) attacks to bring down perceived opponents. Hackers often use an array of thousands of computers, called a botnet, to access a web site simultaneously, overloading the site's capabilities.
Distributed computing is an emerging technology that depends directly on the Internet. Various problems can be split into clean pieces and solved by independent computation. These include peaceful projects such as the spatial analysis of the shape of proteins (folding@home), the search for direct gravitational wave emissions from spinning neutron stars (Einstein@home), the analysis of radio telescope data for extraterrestrial signals (SETI@home), and the search for ever larger Mersenne prime numbers (GIMPS).
But not only have hackers been using distributed computing for attacks, they have also been using the capability for password cracking. Distributed computing is well suited to cryptanalysis also.

Recently it has been discussed that high-performance computing has become a strategic weapon. This is not surprising at all given how much computing gets devoted to the task of password cracking. Now the speculation is, with China's Tianhe-2 supercomputer, that weaponized computing is poised to move up to the exascale. The Tianhe-2 supercomputer is capable of 33.86 petaflops, less than a factor of 30 from the exascale. Most believe that exascale computing will arrive around 2018.
High-performance computing (HPC) has continually been used for weapons research. A high percentage of the most powerful supercomputers over the past decade are to be found at Livermore, Los Alamos, and Oak Ridge.
Whereas HPC has traditionally been aimed at floating-point operations (where real numbers are modeled and used for the bulk of the computation), the focus of password cracking is integer operations. For this reason, GPUs are typically preferred because modern general-purpose GPUs are capable of integer operations and they are massively parallel. The AMD 7990, for instance, has 4096 shaders. A shader is a scalar arithmetic unit that can be programmed to perform a variety of integer or floating-point operations. Because a GPU comes on a single card, this represents an incredibly dense ability to compute. The AMD 7990 achieves 7.78 teraflops while using 375 W of power.
So it's not out of the question to amass a system with thousands of GPUs to achieve exascale computing capability.
I feel it is ironic that China has built their fastest computer using Intel Xeon Phi processors. With its dozens of cores, the Xeon Phi packs about 1.2 teraflops of compute power per chip! And it is a lower power product than other Xeon processors, at about 4.25 gigaflops/watt. The AMD Radeon 7990, on the other hand, has been measured at 20.75 gigaflops/watt. This is because a shader is much scaled down from a full CPU core.
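A quick back-of-the-envelope check, using only the figures quoted above (so treat the results as rough estimates): how many Radeon 7990-class cards would an exaflop take, and what would it draw at each efficiency?

```python
EXAFLOP = 1e18            # floating-point operations per second

radeon_flops = 7.78e12    # per card, as quoted above
radeon_eff   = 20.75e9    # flops per watt
phi_eff      = 4.25e9     # flops per watt

cards = EXAFLOP / radeon_flops
print(f"Radeon 7990 cards for an exaflop: {cards:,.0f}")                       # ~128,535
print(f"Power at 20.75 gigaflops/watt: {EXAFLOP / radeon_eff / 1e6:.0f} MW")   # ~48 MW
print(f"Power at  4.25 gigaflops/watt: {EXAFLOP / phi_eff / 1e6:.0f} MW")      # ~235 MW
```

Either way it is a power-plant-scale undertaking, which is exactly why flops per watt matters so much at this scale.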
What is the purpose?
Taking a step back, I think a few questions should be asked about computation in general. What should computation be used for? Why does it exist? Why did we invent it?
If you stand back and think about it, computation has only one purpose. This is to extend human capabilities; it allows us to do things we could not do before. It stands right next to other machines and artifices of mankind. Cars were developed to provide personal transportation, to allow us to go places quicker than we could go using our own two feet. Looms were invented so we could make cloth much faster and more efficiently than using a hand process, like knitting. Telescopes were invented so we could see farther than we could with our own two eyes.
Similarly, computation exists so we can extend the capabilities of our own brains. Working out a problem with pencil and paper can only go so far. When the problems get large, then we need help. We needed help when it came to cracking the Enigma cipher. We needed help when it came to computing the cross-section of Uranium. Computation was instantly weaponized as a product of necessity and the requirements of survival. But defense somehow crossed over into offensive capabilities.
With the Enigma, we were behind and trying to catch up. With the A-bomb, we were trying to get there before they did. Do our motivations always have to be about survival?
And where is it leading?
It's good that computation has come out from under the veil of weapons research. But the ramifications for society are huge. Since the mobile revolution, we take problems that any of us can run into in real life and build an app for them. So computation continues to extend our capabilities in a way that fulfills some need. Computation has become commonplace and workaday.
When I see a kid learn to multiply by memorizing a table of products, I begin to wonder whether these capabilities are really needed, given the ubiquity of computation we can hold in our hands. Many things taught in school seem useless, like cursive writing. Why memorize historical dates when we can just look them up in Wikipedia? It's better to learn why something happened than when.
More and more, I feel that we should be teaching kids how to access and understand the knowledge that is always at their fingertips. And when so much of their lives is spent looking at an iPad, I feel that kids should be taught social interaction and be given more time to play, exercising their bodies.
It is because knowledge is so easy to access that teaching priorities must change. There should be more emphasis on the understanding of basic concepts and less emphasis on memorization. In the future, much of our memories and histories are going to be kept in the cloud.
Fundamentally, it becomes increasingly important to teach creativity. Because access to knowledge is not enough. We must also learn what to do with the knowledge and how to make advancements. The best advancements are made by standing on the shoulders of others. But without understanding how things interrelate, without basic reasoning skills, the access to knowledge is pointless.
Sunday, June 16, 2013
Three-Dimensional Thinking, Part 2
The last time I wrote about three-dimensional thinking, I discussed impossible figures. They are fun ways to challenge our brains to see things in a different way. But to me they signify more than just artwork.
First off, there are plenty of locally plausible geometries depicted in the figure. For instance, the M figure is a totally real and constructible object in the real world.
The next part shows the three strands connected to the three loops that wrap around the Penrose triangle.
The next part of the figure is the loop. Each loop wraps around one of the sides of the Penrose triangle and creates an interlocking impossible figure, a concept I have shown examples of before in this blog. For instance, there is the impossible Valknut.
The next impossible figure is another modification of the Penrose triangle, showing what happens when the blocks intersect each other.
Looking at objects from different angles helps us understand their spatial structure.
Looking at a given subject from different angles is a requirement for creativity. But eventually, in your mind, you realize that reality itself is malleable, and this is the domain of dreams. And dreaming is good for creativity because it helps us get out of the box of everyday experience and use our vision in a new way.
The key
Once I asked myself a question about impossible objects: what is the key to making one?
The key trick used in impossible figures is this: locally possible, globally impossible. In the case of a Penrose triangle (also called a Reutersvärd triangle because Oscar Reutersvärd was the first to depict it), local corners and pieces of objects are entirely possible to construct, but the way they are globally connected is spatially impossible.
I have constructed another impossible figure which is included above. This figure contains several global contradictions, yet remains locally plausible. However, there are two global levels of impossibility in this figure. Let's consider what they are.

My original drawing didn't actually have M's at the three corners. It was a Penrose triangle. To make the figure compact, I added the M's on each of the three corners of the Penrose triangle. This doesn't make the figure any more possible though. It just adds a little salt and pepper to the mix; it helps confuse the eye a bit.

There is really nothing about this strong figure that is impossible either. It can be totally constructed in real space.
Actually, it is a nice figure by itself, standing alone. You can see each block sliding by itself through the set of blocks.
And further, I think this figure would make a good logo. It feels like an impossible figure even though it's perfectly realizable. And it can be depicted from any angle because it is an honest three-dimensional construction. I have an idea to construct one out of Lucite or another transparent material.

But this is the first level of impossibility. Such a loop is not really constructible without bending the top face. In this way, it is related to the unending staircase of M. C. Escher's Ascending and Descending.
The second level of impossibility is, of course, the Penrose triangle itself. When it comes to levels of impossibility and a clean depiction of impossibility, consider Reutersvärd. Pretty much all of Reutersvärd's art contains this illusion as a key. Though, I would encourage you to look at all of his work, because individual pieces can be both stunning and subtle simultaneously.

Any two blocks may certainly intersect each other, but to have all three intersect each other in this way is a clear impossibility.
It would probably have been more striking to make the triangular space in the center a bit larger.
Impossible objects take imagination out of the real world and into a world that maybe could be. Perhaps it's the world of flying cars, of paper that can hold any image and quickly change to any other, or of people whose thoughts are interconnected by quantum entanglement. In such a world, imagination can fly free.
Thursday, June 6, 2013
My Artwork
For those of you who have been reading posts and checking out my artwork for a while, I have a small present. I have posted the full-sized versions of much of my special artwork from this blog on Pinterest.
You can get to it here.
Finally, if you click into them, you can see the details and get an idea of how many hours I spent on these pieces. Sometimes a bit of work was so complicated it took several days to complete. Which totally explains why my posts take so long!
Enjoy!
I have other boards on Pinterest, some with cool patterns, clouds, and collections of things. Just a hobby, and I thought I'd share a bit.
Sunday, June 2, 2013
Mastering Nature's Patterns: Basalt Formations
I love patterns. This all originally stems from my observations of nature's patterns. A lot of the objects I draw (and develop in code mathematically) come directly from nature.
Strikingly, nature will often conspire to produce objects of great beauty, ones which we cannot match without tremendous effort. Examples of this are basalt formations. Created by volcanic upwelling, great pressure leading to crystallization, and fracturing during cooling, they are nature's brilliant tessellations, awe-inspiring extrusions, and mad ravings simultaneously.
They resemble three-dimensional bar graphs. Their fracture pattern, in two dimensions, is a natural Voronoi diagram. I first saw this pattern in nature while observing the way that soap bubbles join. Without fully understanding it, this observation introduced me to the mathematical laws of geometry when I was very young. Little did I know that I would never stop trying to duplicate it.
In this post, I show you how I duplicated this particular kind of nature. And I did it in my style, as you can see.
To create a drawing of a basalt formation, I started with a rendered Voronoi diagram, which you see here, and transformed it into a subtle perspective, establishing two vanishing points. Then I made three copies arranged as layers in a way that approximated placing them on three-dimensional transparent layers at various depths. This was so I could see the levels, and so the third vanishing point could be right.
Of course, I used Painter's Free Transform to do this!
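If you'd like to generate a starting pattern of your own, here is a minimal sketch of how a Voronoi diagram can be rendered: scatter random seed points, then label every pixel with its nearest seed. (My own rendering was produced with different tools, so this is purely an illustration of the idea.)

```python
import random

WIDTH, HEIGHT, SEEDS = 80, 40, 12
random.seed(1)
seeds = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT)) for _ in range(SEEDS)]

def nearest_seed(x, y):
    """Index of the closest seed point; pixels sharing an index form one Voronoi cell."""
    return min(range(SEEDS), key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)

# Crude text rendering: each Voronoi cell gets its own letter.
for y in range(HEIGHT):
    print("".join(chr(ord("a") + nearest_seed(x, y)) for x in range(WIDTH)))
```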
I kept each layer a little bit transparent so I could get an intuitive feeling for which layer was on the top and which layer was on the bottom. This technique is called depth-cueing.
As you can see, it worked pretty well. I stopped at three layers because I didn't want the drawing project to get too complicated. But, of course, like all of my projects, it soon did!
Next, on a new layer, I drew lines on top of the lines that I wanted to represent the three-dimensional surface of the basalt formation. This meant choosing a three-dimensional height for each cell. The base layer that extended to the outside of the drawing was the lowest height, of course, and a second and third layer were built on top of it.
This causes cells to rise out of the base layer and appear to become extruded.
When I consulted some real images of basalt formations as a guide, I found that they were quite imperfect and usually were cracked, damaged, or eroded in some way.
I really wanted my drawing to represent a perfect un-eroded result.
I used an extra transparent layer (behind the layer with the lines) and marked each cell with a three-dimensional height index so I could be sure which height corresponded with each cell. This told me where to put the shading and also told me how to interpret the extrusion lines.
This layer was for informational purposes only. You see here the original small layer with crudely drawn lines. It's actually kind of hard to see the three-dimensional relative positions of the cells in some cases, which is another reason I labelled each cell with a height index.
Once I had designed it, I found that the drawing was way too small to shade the way I like to (using a woodcut technique) and so I resized the image and went over each of the lines by hand to make it crystal clear at the new resolution.
That only took a few days.
Why? After resizing the image, I found that each line was unusually soft. This meant that I had to go over the lines with a small brush, darkening and resolving the line. Then I had to go around it with white to create a clean edge. This is what really took the time!
Naturally I do lots of other things than just draw all the time, and so I had to use extra minutes here and there. I kept the Painter file on my laptop and brought my Wacom tablet with me in my bag.
I spent probably ten or twenty hours drawing this image.
Once the lines were perfect, the next step was shading. But of course it had to be in my style, and this also took quite a bit of time.
I used woodcut shading to create shadows and accessibility shading. This created a very nice look.
To do this, I drew parallel lines at a desired spacing, taking care to make them correspond in length and position to the shading and shadows that would result from a light coming from the left side.
I thickened the lines at their base, and made them a bit triangular. Then at the end, I used a small white brush to erode and sharpen the point and clean the sides of each shading line to get the right appearance.
The final step was coloring the tops and the sides, using a gel layer.
I colored each layer using a different shade of slightly bluish gray. The top layer got the lightest shade.
Here you can see a close-up of the final image, which was very high resolution indeed.
Even though I started out with a computer-generated fracturing pattern, I was able to retain a hand-wrought look in the final image. None of the lines are really computer-perfect.
Yes, nature's patterns often take a bit of time to master!
Tuesday, May 21, 2013
Security, Part 1
As much as we'd like it to be true, security is not all about ciphers; it's also about physical security, the human factor, and an often overlooked area called side channels.
Physical Security
We all know that you need a password to keep a computer secure, right? But what happens when the hard drive is stolen? Your data can walk right out the door, that's what!
But even the transmission of secret keys and plain text is an issue. For instance, a keystroke logging program can easily intercept all the passwords you type. So you want to make sure that such a program never gets onto your computer.
With some ciphertext, the more of it you get, the easier it is to decode. While this usually describes not-so-good security, feedback-shift-register XOR techniques are still employed in stream ciphers. To combat this, the feedback shift register must be re-initialized periodically to prevent the code from being broken. This is usually done by using a more secure encryption technique, like an RSA public-key cryptosystem.
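Here is a minimal sketch of that kind of stream cipher. The register width and tap positions are toy choices, nothing like a production cipher: a linear feedback shift register generates a keystream, the plaintext is XORed with it, and the register is periodically re-seeded with a key exchanged by something stronger, such as RSA.

```python
def lfsr_keystream(seed: int, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR over a 16-bit state; yields one keystream byte at a time."""
    state = seed & 0xFFFF
    while True:
        byte = 0
        for _ in range(8):
            # Feedback bit is the XOR of the tapped positions (1-indexed from the LSB).
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            byte = (byte << 1) | (state & 1)   # output the low bit
            state = (state >> 1) | (fb << 15)  # shift and insert the feedback bit
        yield byte

def xor_cipher(data: bytes, seed: int) -> bytes:
    ks = lfsr_keystream(seed)
    return bytes(b ^ next(ks) for b in data)

msg = b"attack at dawn"
ct = xor_cipher(msg, seed=0xACE1)
assert xor_cipher(ct, seed=0xACE1) == msg   # the same keystream decrypts
# Re-seeding periodically (with keys agreed via RSA or similar) limits how much
# keystream an eavesdropper can ever collect under any one seed.
```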
But the best thing would be to make the transmission un-interceptable. This leads to the use of quantum key cryptography.
The Human Factor
The mobile computing revolution didn't invent the need for accessing your data externally, but it did make it a lot more common. So we use passwords to protect our data.
Passwords are secret keys that are possible to remember. But humans are frail and forgetful and so often they use passwords that are easy to guess. Ones they can't forget. Like 12345. I talk about just how insecure these kinds of passwords are in my first post on hackers.
But humans are always doing dumb, insecure things, like leaving doors unlocked or ajar, leaving a key under the flower pot, or leaving the keys to the car behind the visor. This kind of behavior happens out of force of habit to some people and represents a massive security breach.
But the most powerful kinds of attacks are called social engineering attacks.
Side Channels
This is the most interesting kind of insecurity, because it really describes an indirect attack.
One side channel consists of the signals emanating from a device like an LCD screen. The video signals generally leak out and can be intercepted and reconstructed for spying on the device. For CRTs, a fellow named Wim Van Eck demonstrated in 1985 that he could display on a TV monitor the contents of a CRT screen, captured from hundreds of meters away, just by tuning into the video frequency emanations. The technique, known as Van Eck phreaking, can work on any display hardware.
When it comes to radio frequency (RF) emanations, a standard, known as TEMPEST since the 1960s, covers the techniques and methods used in shielding devices and components from being surveilled in this way.
Simple things like wi-fi are easily broken into, in a process called wardriving. There are published approaches for how to crack WEP and other security protocols used in wi-fi. But other methods can also be used to gain the password. Once the wi-fi is accessed, then anything transported on the wi-fi is also accessible. Google got in trouble for accessing wi-fi from their street view vehicles, but the fact is it is too easy to collect data in this manner. Thus, the mobile computing revolution introduces a whole new set of insecurities.
Another side channel concerns cryptography, and this one is a doozy: just by observing the process that is encrypting or decrypting some data, you can infer information about, for instance, the size of the prime numbers used in an RSA public-key cryptosystem. If you can tell how long it takes to divide the public key by a secret key, you can infer some valuable information about the size and bitwise complexity of the secret key. If, when producing a prime number pair, you can determine how long it took to produce it, you can tell a bit about the algorithm used to produce them. Each bit of information is useful in chopping away at the space of all possible answers to the question of what the secret is.
The data you observe about the cryptography process can be power consumption, the timing, or really anything that can be measured externally. With a power consumption curve, you can do differential analysis to get really precise information about how big the multiply was, and even which parts of the multiply are more complicated than others.
And you can also measure thermal and acoustic signatures as well. For instance, by focusing an infrared camera at a chip during a certain computation, you can determine which parts of the chip are active and at what times.
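To make the timing idea concrete, here is a toy sketch, a simplified stand-in for the RSA timing analysis described above rather than that attack itself: a comparison routine that bails out at the first mismatch runs measurably longer the more leading characters a guess gets right, so an attacker can recover a secret one character at a time.

```python
import time

SECRET = "hunter2"   # the value the attacker is trying to learn

def insecure_check(guess: str) -> bool:
    """Early-exit comparison: running time grows with the length of the matching prefix."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)   # exaggerate the per-character cost so the leak is easy to see
    return True

def time_guess(guess: str) -> float:
    start = time.perf_counter()
    insecure_check(guess)
    return time.perf_counter() - start

# Recover the secret one position at a time by picking the slowest candidate.
recovered = ""
for pos in range(len(SECRET)):
    timings = {c: time_guess(recovered + c + "a" * (len(SECRET) - pos - 1))
               for c in "abcdefghijklmnopqrstuvwxyz0123456789"}
    recovered += max(timings, key=timings.get)
print(recovered)   # prints "hunter2" on most runs
```

The real attacks are statistical and far more subtle, but the principle, that measurable behavior leaks information about secrets, is the same.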