
Saturday, October 13, 2012

How Old Is Your Software?

Let's look at software vulnerability. What kinds of software are the most vulnerable?

Well, duh! The oldest, cruftiest kinds, of course! Whenever you add onto software year after year, you unwittingly create opportunities for exploitation. We say that our data are secure, yet we do not test software in anywhere near the rigorous fashion it requires!

This leaves us with highly functional yet completely vulnerable software. And the users don't even realize it. Business users, corporate users, individual users, you.

Which Software Is the Most Vulnerable?

Means: To become a hacker, a programmer needs only a connection to the Internet and a computer capable of being programmed. That describes basically every person on the planet outside the least-developed nations. So let's just say there is a large pool of possible hackers.

Motive: To be vulnerable, you also have to be hiding something desirable, interesting, or perhaps embarrassing. In other words: valuable, even to someone who just needs some street cred. What holds this kind of data? Your computer, your hard disk, your database, all managed by operating systems and by software that routinely gets installed or updated, including distributed database servers that protect huge amounts of data. For more motives for hacking, see my first blog post on Hackers.

Opportunity: So, let's look at software that has enjoyed release after release year after year. These releases are generally done for the purposes of:
  • increasing their feature set
  • making them faster
  • fixing their security holes
So let's examine systems which do this. Operating systems, like Windows, Mac OS X, iOS, and Android, certainly are updated quite often. System software for supporting desirable things like video is updated often as well, like Adobe's Flash. So are things like Adobe's suite of programs, the Creative Suite. In business, Oracle's database server is updated quite often also, to add features and, more often, to patch vulnerabilities. Programming platforms like Java get updated a lot also. Even GNU, the Free Software Foundation's operating system, which declares proudly that GNU's Not Unix (though it is identical to it in every way I can see), is updated quite often.

These are the most vulnerable software systems on the planet, merely because they are updated so often. And because so many people and businesses use them.

What Makes These Vulnerabilities?

The best positive marketing driver is the first one: increasing their feature set. To do this, it is often necessary to allow other developers to add to their feature set. We see this in nearly every OS platform in history. Supporting Applications. Allowing Plug-ins. Enabling programmability.

Being able to program something is highly desirable. It is also exactly what causes the vulnerabilities.

In 1984, I bought my first Macintosh. Actually it was an original 128K Mac. And the first thing I did was to take it apart, with a long Torx screwdriver and some splints to crack open the shell. My business partner in Fractal Software, Tom Hedges, was doing the exact same thing in the very same room. We both came to the conclusion that it needed a real hard drive, which was an interesting hardware task. We also came to the conclusion that we wanted to program it.

I wanted to create a new application.

We met an Apple person, Owen Densmore, at Siggraph that year and he put us in touch with a key developer, Bill Duvall, who had built the Consulair C system with a text editor. Owen gave us the external terminal debugging capability, called TermBugA, that we could use to debug our applications. He put us in touch with Steve Jasik, who authored MacNosy, and had disassembled the entire ROMs in a Mac. We built our first apps for the Mac within a couple of weeks and began our development career.

This is the old school method. The very ability to program a device has a name now: pwn. This means "owning it" but it also has a whiff of programmability to it.

If a device is a computer of any kind, then the desire to program it freely is a natural consequence of these old school ways.

But those ways must change.

How Are The Vulnerabilities Exploited?

The goal is to become a privileged user on the computer. This will enable the hacker to install their programs, get access to whatever data is available without restriction, and basically to take over the computer. Once this is done, then malware can be installed. Things that log your keystrokes. Or watch you through your webcam. Or check which web sites you use, remembering whatever passwords you use to access them.

This enables them to steal your identity or your money. Or you can be blackmailed with whatever incriminating data is present. In other words, criminal activity that exploits you, your business, or your customers.

But overwhelmingly, your computer can become something that is not under your control: a base for expansion, for virus propagation, or for supporting DDoS attacks.

How do they get control of your computer? Often it is with a very small bug.

Now, software above a certain size always has bugs in it, and that's the problem in a nutshell.

The kind of bugs that hackers look for are primarily buffer overrun bugs. Because nearly all machines are von Neumann machines, data is stored in the same memory as code. This means that all the hacker needs to do is insert their code into your system and transfer control to it.

A buffer overrun bug allows them to do this because, by definition, once a buffer (a fixed-size place in memory to store data) is overrun then the program has lost control of what is going into memory. With a little cleverness, after overrunning the buffer, the data will go someplace that is a tender spot. This can cause another bug to happen or it can be a spot where program control will end up soon enough in the future.
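Here is a minimal sketch in C of the classic pattern (the function and the buffer size are invented for illustration):

  #include <stdio.h>
  #include <string.h>

  /* A classic buffer overrun: the buffer holds 16 bytes, but strcpy
     copies however many bytes the caller supplies. Extra bytes spill
     past the buffer onto the stack, where the saved return address
     lives -- overwrite that, and control transfers to an address of
     the attacker's choosing. */
  void greet(const char *name) {
      char buffer[16];           /* fixed-size place in memory */
      strcpy(buffer, name);      /* no length check: the bug */
      printf("Hello, %s\n", buffer);
  }

  /* The fix is equally small: refuse to copy more than fits. */
  void greet_safely(const char *name) {
      char buffer[16];
      strncpy(buffer, name, sizeof(buffer) - 1);
      buffer[sizeof(buffer) - 1] = '\0';  /* strncpy may not terminate */
      printf("Hello, %s\n", buffer);
  }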

And voilà, the hacker is running their own native code on your computer.

Their next trick is to become a superuser. This is sometimes referred to as becoming root. These terms come from UNIX, which is the basis for Mac OS X and the model for Linux, among many other operating systems.

This can be done several ways, but the most effective way is apparently to masquerade as a routine install of familiar software. Like Photoshop, Flash, a Windows Service Pack, etc.

But taking over a computer, typically with the help of a rootkit, is often a several-step process.

Perhaps the computer becomes a bot, simply running jobs for the hacker: sending email spam at random times, using the computer's position in the network to attack other local computers, or making the computer part of a Distributed Denial of Service (DDoS) attack.

Perhaps the hacker only wants the data on that computer. The easiest way is to gain superuser access, which grants the privileges to access all the files. Maybe the hacker just wants to watch the user and gain information like bank account numbers and passwords.

Sometimes the hacker just wants to get access to databases. The databases contain information that might be sensitive, like credit card numbers and telephone numbers. Since these databases are generally SQL servers, a specific kind of attack is used: the SQL injection attack.

Poorly-written SQL can have statements in it that build up a string and execute it, rather than running code with pre-specified bind variables. It is these strings that make SQL vulnerable to being co-opted by a hacker, who can modify the SQL program simply by changing its parameters. When the string gets changed to SQL code of the hacker's choice, it gets executed, and the hacker can, for instance, extract all of the database records, instead of the usual case where only the records of a certain date may be accessed. Or the hacker can change the fields that get extracted to all the fields instead of a small number of them.

How Do We Combat This?

It is easy to say there is no way to fight system vulnerabilities, but that would be wrong.

The strongest way to stop it is curation. One form of curation is the ability of a supervisor to prevent malware from being installed on a system. When a system allows plug-ins and applications, these must be curated and examined for malware and for the backdoors and errors that allow malware to take hold. And they must be limited in their scope to prevent conscription of the operating system and of the applications that run them.

In the case of Apple, curation means examining every App built for its platform for malware or even the whiff of impropriety. And this is a really good thing in itself, because it means that far less malware attacks iOS than attacks Android.

In the case of SQL injection attacks, rewrite your SQL to not use executed strings.
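As a hedged sketch of the difference, here are the two styles side by side using SQLite's C API (the table and column names are invented for illustration):

  #include <sqlite3.h>
  #include <stdio.h>

  /* VULNERABLE: building SQL by pasting in a user-supplied string.
     If user_id arrives as "0 OR 1=1", the WHERE clause matches
     every row in the hypothetical accounts table. */
  void query_unsafe(sqlite3 *db, const char *user_id) {
      char sql[256];
      snprintf(sql, sizeof(sql),
               "SELECT card_number FROM accounts WHERE id = %s", user_id);
      sqlite3_exec(db, sql, NULL, NULL, NULL);  /* executes whatever it got */
  }

  /* SAFER: a prepared statement with a bind variable. The parameter
     is treated strictly as data and is never parsed as SQL. */
  void query_safe(sqlite3 *db, const char *user_id) {
      sqlite3_stmt *stmt;
      sqlite3_prepare_v2(db,
          "SELECT card_number FROM accounts WHERE id = ?", -1, &stmt, NULL);
      sqlite3_bind_text(stmt, 1, user_id, -1, SQLITE_STATIC);
      while (sqlite3_step(stmt) == SQLITE_ROW)
          printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
      sqlite3_finalize(stmt);
  }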

But general practices need to be followed religiously. Make sure your passwords are not guessable. Use firewalls to prevent unintended connections. Beware phishing attacks.


Thursday, August 9, 2012

Paper

I have a piece of paper on my desk, and it is white, 8.5" by 11", letter size. I have a pen in my hand, and I draw on the paper in clean crisp lines. Oops, that line was wrong, so I can zoom in within the paper, using a reverse-pinch, and correct the line using more pen strokes. I can dropper white or black from the paper to draw in white or black for correction.

But, if I really don't like that line, I can undo it and try again. All on what appears to be a regular piece of paper!

Wait, this is just like a paint app on an iPad!

Yes, this is how paper will be in the future: just a plain piece of paper. Plus.

The drawing can be finished and cleaned up and then saved using an extremely simple interface. Touching the paper with my finger brings up this interface. Touching the paper with the pen allows me to draw.

When I bring up the interface, I can save the drawing. Into the cloud.

Smaller and Smaller

How did this come to be? Simple: miniaturization.


I think the computer concept, stemming from WW II and afterwards, is the transformative concept of our lifetimes. The web, though amazingly useful, is just an offshoot of computing; it's a natural consequence. We have seen computers go from house-sized monstrosities during the war to room-sized beasts during the 50s and 60s to refrigerator-sized cabinets with front-panel switch-based consoles in the 70s to TV-sized personal computers in the 80s to portable laptops in the 90s to handheld items in the 2000s to wearable items in the 2010s.

It's perfectly clear to me where this is going.

Computers are going to be embedded in everyday objects in our lifetime. When I was born, computers were room-sized and required punched cards to communicate with them. When I die, computers will be embedded in everything and will require but a word or a touch to make them do what we require.

Gadgetizing Ordinary Objects

In the future, the world I live in has objects with their own ability to compute, like modern gadgets, but they are impossibly thin, apparently lacking a power source, and can transmit and receive effortlessly through the ether into the cloud. So, let's summarize what they need in order to be a full-functioning gadget:
  1. computation - a processor or a distributed system of computation
  2. imaging - the ability to change its appearance, at least on the surface
  3. sensing - the ability to respond to touch, light, sound, movement, location
  4. transmission/reception - the ability to communicate with the Internet
  5. storage - the ability to maintain local data
  6. power - perhaps the tiny size means the light shining on the object will be enough to power it
You know what? I don't need as many pieces of paper as I used to. This saves trees, which grow outside all over because we are no longer chopping them down except to control overgrowth. Even paper used to wrap boxes rarely exists, because the outsides of boxes also act this way.

The same paper can be used to read the local news feed or to check the weather. But, unlike a newspaper, it is updated in real time. I can even look at the satellite image.

It becomes clear that the "internet of things" is necessary to make this vision happen.

Yet To Do

It's amazing to think so, but most of this magic already works on an iPad. The only conceptual leaps that need to be made are these:
  1. the display becomes a microscopically-thin layer, reflecting light rather than producing it
  2. the computation, sensing, transmission, and reception must use organic, paper-thin processors
  3. touch interfaces must learn to discern between fingers and pen-points
  4. the paper powers itself, using capacitance or perhaps with a paper-thin power source
In 1, like existing eInk and ePaper solutions used in eBooks, power is only used to change the inherent color of a spot on the paper. Normally, power doesn't get used at all when the display is stable and unchanging. In 2, the smaller the processors are, the less power they will use. We can already envision computation at the atomic level, and also in quantum computers. In 4, maybe the light you see the paper with can power the device (a fraction of the light gets absorbed by the paper, particularly where you have drawn black).

Why Change People When We Can Change Objects?

Now go through this scenario with any object you are familiar with. Why couldn't it be done using computing, imaging, sensing, transmission, storage, power, etc.?

Things like undo, automatic save and recall, global communication, and information retrieval become the magic that is added to real-world objects. It's like a do-what-I-mean world.

But what might be different from a current iPad? Turning your image. In current applications like Painter, you turn the image using space-option to adjust the angle of the paper you are drawing on, so your pen strokes can be at ergonomic angles.

But with a paper computing device, you just turn the paper!

The ergonomics of such a device are exactly those of existing paper, which solves some problems right off the bat.

Also imagine that you lay the paper on something and it can copy exactly what is underneath it. It's like a chameleon.

So objects like paper become more useful in the future. And we are just the same people, but we are enabled to do so much more than we can do now. And the problems of ergonomics can be solved in the way they have already been solved: with the objects we use in everyday life.

Any solution that doesn't require the human being to change can be accepted. The easier it is, the more likely it will be accepted. The closer to the way it's already done in a non-technological way, the more likely it is that anybody can use it.

Solutions that do require the human to change, like implants, connectors, ways to "jack into" the matrix seem to me to lead to a very dystopian future. But remember there are those who are disabled and who will probably need a better way to communicate, touch, talk, hear, or see.

Hmm. I Never Thought Of That!

Cameras are interesting to make into a paper-thin format. Maybe there are some physics limitations that make this unlikely. When eyes get small, they become like flies' eyes. Perhaps some answer is to be found in mimicking that technology.

Low-power transmission is a real unknown. There may be a massive problem with not having enough power unless some resonance-based ultra-low-power transmission trick gets discovered. Perhaps there are enough devices nearby that only low-power transmission needs to be done. Maybe the desk can sense the paper, or the clipboard has a good transceiver.

And if (a fraction of) the light being used to view the device is not enough to power it? Hmm. Let's take a step back. How much power is really needed to change the state of the paper at a spot? Perhaps less power than is needed to deposit plenty of graphite atoms on the surface: the friction of contact may supply enough energy to operate the paper device. There are plenty of other sources of energy: piezoelectrics from movement, torsion, and tip pressure on the paper, heat from your hand, inductive power, the magnetic field of the earth, etc.

Still, I think that computing is becoming ubiquitous, and that one of the inevitable products of this in the future is the gadgetization of everyday objects.

Saturday, May 12, 2012

Pieces

Pieces, the separate parts of a whole, help us understand the logical process of construction. The relationships between the pieces, such as how well they fit, help us understand the workings and character of the parts. The individual pieces' limitations can bear on the capabilities of the finished product.

A cohesive design is almost always made up of separate pieces.

In a good design there are no inessential pieces: each piece is necessary for the design to be complete. Each piece does what it should and also as much as it can do.

Interrelationships Between Pieces

Also, the relationship between the pieces is key. In organization, there are requirements for one department that are produced by another department. In development, one module produces a result that is used by one or more other modules. In three-dimensional objects, the objects can fit together like a dovetail joint.

In a drawing, the pieces can be shaded to fully reveal their form. They can shadow other pieces to show their inter-positioning. When you see a drawing, it can make you think about how the figures in the drawing are placed, and what message is intended by the artist. In a still-life this may be of little consequence. In an Adoration of the Magi, this can be of great consequence.

Cycles

The interconnection of pieces can be cyclic, producing an induction. This cycle should be essential to the concept of the design. In programming, the loop should be essential to the working of the program, an iteration that converges on a desired result.

In a drawing, the interrelationship becomes essential to the piece as well, as indicated by this impossible triangle, copied loosely from Oscar Reutersvärd, the Swedish artist. Sometimes we can highlight something different than what was originally intended, as in this case: we indicate how the figure can be made of three L-bends that mutually depend upon each other. Impossible figures often make an excellent illustration of cyclic structures.

Also, though, looking at cycles in different ways can reveal to us more about the problem than we originally knew.

Development In Pieces

In development, we first conceive of a problem to solve and then sketch out a structure of how we will solve it. Then it helps to divide the problem into pieces. It suits us best if each piece is well-defined. We know its inputs, its results, and how it will produce them. When a piece is too complex, we can divide it up into smaller pieces.
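As a sketch of what "well-defined" might mean in practice, here is a hypothetical piece of an imaging project pinned down in C before any implementation exists (all names are invented for illustration):

  /* resample.h -- one well-defined piece of a hypothetical imaging project */

  typedef struct {
      float *pixels;           /* grayscale samples, row-major */
      int    width, height;
  } Bitmap;

  /* Input:  a source bitmap and a target size.
     Result: a newly allocated bitmap of exactly new_w x new_h,
             or NULL if allocation fails.
     The source bitmap is never modified. */
  Bitmap *resample(const Bitmap *src, int new_w, int new_h);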

The nature of each piece can then be worked on individually. Either sequentially by one person, or concurrently by multiple people in a workgroup. Because each piece of the problem has a different nature, this lends itself to specialization, which is suited to modern workgroups. Each piece can then be tracked separately. The interrelationship between the pieces will need to be known by the manager to properly chart the progress of the development.

Most large projects are done this way. When they are done by one person, then that person needs to understand the workings of the project as a whole, and this can lead to a huge, unmanageable situation. But not always. When a problem gets too large for one person, the pieces of the problem lend themselves to adding extra people to help, and so project division is essential to minimizing unpredictable schedules.

When Pieces Fail To Connect

When conceptualizing the division of a project into pieces, it is sometimes not possible to foresee each and every wrinkle in the workings of each of the pieces. This can lead to a situation where a piece can not be constructed or where some pieces can't be connected properly.

It is times like these when it's important to stand back, take stock of what you have learned, and integrate that into the design. Sometimes this necessitates a redivision of the project into new pieces. Sometimes the redivision only affects a few neighboring pieces. This is part of the art of project design.

Development Strategies

The pieces of a project represent the result of top-down decomposition, which usually works as a division process. Once you have a project split into pieces, and the pieces implemented, then it becomes a problem of making sure that each piece works as it should.

This entails isolation of the piece, testing its inputs, and validating its results.
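A minimal sketch of that discipline, with a trivially small hypothetical piece chosen only for brevity: isolate the piece, feed it known inputs, and validate its results.

  #include <assert.h>
  #include <stdio.h>

  /* The piece under test: a hypothetical stand-alone function
     with well-defined inputs and results. */
  static int clamp(int x, int lo, int hi) {
      return x < lo ? lo : (x > hi ? hi : x);
  }

  int main(void) {
      /* Exercise its inputs and validate its results in isolation. */
      assert(clamp(-5, 0, 255) == 0);     /* below range clips to lo */
      assert(clamp(300, 0, 255) == 255);  /* above range clips to hi */
      assert(clamp(128, 0, 255) == 128);  /* in range passes through */
      printf("all piece tests passed\n");
      return 0;
  }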

In a workable system, it is essential to be able to view the intermediate results of each piece. In a graphics system, this means literally viewing them on a screen to visually verify that the result is correct. And sometimes, the ability to view each minute detail is also required.

In a system that is constructed in pieces, one problem which is presented to the authors is this: how can we add a new feature or behavior to the project? This is important because usually it is necessary to construct a simplified version of the project and then make it more complex, adding features, until it is complete.

A useful capability is this: build a simplified version of a piece for testing with the other pieces. Then, each developer can work with the entire project and flesh out their piece independently. Or, even better, a new version of the piece can be checked in, adding essential capabilities, while more complex behavior gets worked on independently.
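Here is a hedged sketch in C of what such a simplified piece might look like (the imaging module and all names are hypothetical): the stub satisfies the interface, so everyone else can build and run against it from day one.

  #include <stdio.h>

  /* Hypothetical imaging piece: the interface every version must satisfy. */
  typedef struct {
      float *pixels;
      int    width, height;
  } Image;

  typedef void (*BlurFn)(Image *img, float radius);

  /* Stub: does nothing but honor the contract, so the rest of the
     project can be built and exercised against it immediately. */
  static void blur_stub(Image *img, float radius) {
      (void)img; (void)radius;   /* identity: output equals input */
  }

  /* Callers hold a function pointer, so checking in the real piece
     later, under the same signature, changes no calling code. */
  BlurFn blur = blur_stub;

  int main(void) {
      float data[4] = { 1, 2, 3, 4 };
      Image img = { data, 2, 2 };
      blur(&img, 2.0f);          /* works today, improves later */
      printf("first pixel: %g\n", img.pixels[0]);
      return 0;
  }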

Performing the Division

I mentioned top-down decomposition as a useful tool in dividing up a project into pieces. But this must be tempered with other considerations. For instance, the necessity that each piece do exactly what it needs to do, no more and no less. Another example is the requirement that the inner loops be as simple as possible, which means factoring out the extraneous and more complex cases. This means that the subdivision must be judicious, to achieve local economy within each piece. I have been on many projects where this goal was a critical factor in deciding how to divide the problem up into pieces. This can also serve as a razor which cuts away inessential parts, leaving only a minimal interconnection of pieces.

You also want to make sure the project is organized so that, if a piece fails, you can verify this directly by turning the piece on and off and observing its effect on the entire result. This is particularly useful when each piece is a pass of the total process, as in a graphics pipeline or a compiler.

Also, it is useful to construct a test harness with a UI, so that each piece can be independently controlled, preferably with real-time adjustment. This is a great way to exercise the project. I have used this many times.
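A full UI is beyond a sketch, but here is a minimal command-line stand-in in C for the same idea (all names hypothetical): each pass of a multi-pass process carries an enabled flag, so any piece can be switched on or off and its intermediate result inspected.

  #include <stdio.h>

  typedef int (*PassFn)(int value);

  static int sharpen(int v) { return v * 2; }   /* stand-in pass */
  static int denoise(int v) { return v - 1; }   /* stand-in pass */

  typedef struct {
      const char *name;
      PassFn      run;
      int         enabled;    /* flip to isolate a suspect piece */
  } Pass;

  int main(void) {
      Pass pipeline[] = {
          { "sharpen", sharpen, 1 },
          { "denoise", denoise, 1 },   /* set to 0 to test without it */
      };
      int value = 10;
      for (int i = 0; i < 2; i++) {
          if (!pipeline[i].enabled) continue;
          value = pipeline[i].run(value);
          /* view the intermediate result after each piece */
          printf("after %s: %d\n", pipeline[i].name, value);
      }
      return 0;
  }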

Taking Stuff Apart

Moving from development to three-dimensional construction, the disassembly process can reveal a tremendous amount about the problems encountered in producing the object, device, or mechanism. When I was a kid, I liked to take things apart. Of course, putting them back together took a bit longer.

In modern times, there are entire companies that specialize in taking gadgets apart, and even slicing open chips to reveal their inner workings. This is the process of reverse-engineering. Examples of companies that do this are chipworks.com and iSuppli.

Gadgets

I was going to do a section on gadgets and the pieces thereof, but I realized that my knowledge of such things is really not up for grabs, nor is it for public consumption.

It's really too bad since gadgets are a classic example of how each part needs to do as much as possible with as few resources as can be spared. This is one of the basic design decisions that govern the division of a project.

Often the most remote considerations suddenly become of primary importance in the division process.

Code

A friend wishes to divide up code in such a way that module authorship can be retained and the usage monitored so royalties can trickle in the proper way back to the source. Very distributed-economy. This reminds me of the App market in a way, and I'll tell you why.

In the early days of software, there was much custom software that cost huge amounts of money. There were accounting systems and mainframes. These would often cost a hundred thousand dollars. The CAD systems I worked on in the 70s were very expensive as well, and specialized software, such as all-angle fracturing software, could cost plenty. It's funny how big business still maintains this model, with distributed systems still costing lots of money. This will certainly be replaced by a distributed app-based model. Some believe that the gadgets are only the front end to a giant database; this model will be replaced by the cloud model.

In the 80s, personal computers' penetration increased and software became a commodity that was sold on the shelves of computer stores. This drove the average price down to hundreds of dollars, but some software still could command up to a thousand dollars. Consider Photoshop and the huge bundles of software that have become the Creative Suite. As time went by, lots of software was forced into bundles, in what I call shovelware: software that comes with too much extraneous stuff in it, to convince the buyer that it is a wonderful deal. I'm thinking of Corel Draw! in those days. Nowadays, sometimes computers are bundled with crapware, which is the descendant of shovelware.

The commoditization of software was just a step in the progress of applications. Now, applications are sold online for the most part, even with over-the-air delivery. This is because much computing has gone mobile and desktop usage is on the decrease. Many desktops have in fact been replaced by laptops, which was one step in the process.

But the eventual result was that software is now sold for a buck and the market has consequently been widened to nearly everyone.

To do this, the software had to become easier, and the model for using the software had to become easier. The usefulness of an application had to become almost universal for this to occur, and applications had to become more finely grained. Apps now sell for anywhere from free to ten bucks. But on the average, perhaps a complex app will cost a piddling two dollars.

Is it realistic for the remuneration of code authorship to also go into the fine-grained direction from the current vanguard of open-source software? Nowadays, many app authors receive royalties for their work. The market for applications has exploded and the number of app designers has also exploded: widely viewed as the democratization of programming. This is the stirring story of how app development penetrated the largest relevant market. Can the programmers themselves become democratized?

The applications of today live in a rich compendium of capabilities that includes cameras, GPS, magnetic sensors, accelerometers, gyros, and so much more. For code itself to go down a democratization path, I expect that the API it lives under will have to be just as rich.

Unfortunately, the API is owned by the platforms. And even, as in the case of Java (as we have found out this last week), by the company that bought it (Oracle). Apparently an API can be copyrighted, which is a sticky wicket for Google. The vast majority of apps are written for iOS today. But, if this won't be true forever, then at least it has clearly indicated how to create an incredibly successful business model around applications. And it indicates that APIs will certainly be heavily guarded and controlled.

The spread of technology is never as simple as entropy and thermodynamics, though the concepts may certainly bear on the most profitable use case.

Either way, the democratization of code could possibly solve the litigation problem, at least when it comes to applications built on top of APIs, because the new model might in some sense replace the patent model by reducing ownership to a revenue stream, democratizing software developers. But the APIs could not be a part of this solution as long as the platform developers considered them to be proprietary.

So, in the end, I don't think system software can be a client for this model. Unless it's the GNU folks.