Showing posts with label software. Show all posts

Tuesday, November 8, 2016

Analysts: What Are These?

Analysts are not always a savvy breed. In fact, sometimes they are downright stupid. Their general types of stupidity can be broken down into classes. I'll just name a few.

The first class, the show-offs, often throw around terms like disruption, logistics, zero-inventory and so forth without actually knowing their implications. Showing off is an empty pretense of prowess unless it delivers valuable insight. Usually this class misses the forest for the trees.

The complainers just have axes to grind about their specific issues. They consider their beefs to be of paramount importance while ignoring the majority of users. A specific kind of complainer is the port complainer. They have whined about their disappearing serial port, FireWire port, headphone jack, and old-style USB port. But, hey, things change. It's disruption in action. Old media becomes obsolete, like vinyl records, cassette tapes, and CDs: this is because media is now delivered online. Cords disappear and wireless connections dominate: this is because virtually all updates are now accomplished over-the-air (OTA).

Then there are trolls. They know that the generation of disinformation creates knee jerk reactions that budge stock price. Close your eyes and imagine for a minute that many of them are simply Russians from the St. Petersburg Troll Factory and you will be just about right!

The feature creatures are typically Windows people who just care about feature lists and spec bullet points. They count ports, processors, gigahertz, and keys on the keyboard. They are the ones who think shovelware makes for good workflow. If they actually used the features they write about, they would know better. It's the user experience that leads to user satisfaction and commands user loyalty.

I don't want to forget the price people. To them price is everything. Forget about surprise and delight, user experience, or even quality! I can't tell you how annoying these people are. Their inevitable assertion is that the cheapest product always wins, which as we know already is totally wrong. Even if you're selling refrigerators! It's the product that gives the best value that wins. If you get into a price war, you've already lost.

The market-share obsessives are yet another class of flawed analysts. To them, it's only about units, no matter if those units are used for limited purposes, left in a drawer, or even catching fire. They totally avoid the issue of who is actually profiting and thus who will see consistent growth. For instance, Apple has 12.1% of the smartphone market yet makes 104% of the profit, while Android has 87.5% market share. How can this be? The Android hardware makers' collective profit is negative. Yep, they are losing money.
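That 104% figure looks impossible until you remember that profit share is measured against the industry's net profit, which losses elsewhere shrink. A quick sketch with purely illustrative numbers (not actual financials):

```python
# Illustrative numbers only (in billions of dollars), not actual financials.
apple_profit = 8.5
android_makers_profit = -0.3   # a collective net loss

# The denominator is the industry's *net* profit, shrunk by the losses.
industry_profit = apple_profit + android_makers_profit

apple_share = apple_profit / industry_profit * 100
print(f"Apple's share of industry profit: {apple_share:.0f}%")
```

Once one group of players loses money, the profitable player's share of the net arithmetically exceeds 100%.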

The software profiteers subscribe to the 90s Microsoft model: just build the software and let other idiots kill each other making cheaper and cheaper hardware; there's no profit in hardware, right? Wrong! If there's no profit in hardware then who is going to make it? By the way, the hardware makers often want their own unique look, defeating the standardized software. Also consider that software prices are plummeting. With the introduction of the App Store, Apple has turned software into a $2 commodity. This has forced the software profiteer into the subscription model.

Finally I give you the walled garden haters. These are descended from the people who like to build their own computers and hack them. They want freedom from carriers, authoritarian systems, and so forth. They want to pwn their hardware. In their minds all software is free, regardless of the time and effort expended by software developers. This class doesn't fundamentally grok the concept of an ecosystem, or why ecosystems are essential to the survival of modern hardware. The hubris of these haters is in ignoring that hacking, device security, and identity theft have become the defining problems of our time. All this for one reason: walled gardens are inherently more secure. IT people figured this out long ago.


It's disappointing to find that so many analysts are last-millennium-thinkers, and they have themselves become disrupted. They're still betting on Microsoft for God's sake! Don't let their investment firms get ahold of your portfolio!

Friday, August 23, 2013

Observing Microsoft, Part 4

This day is an interesting one for Microsoft. First, Ballmer sends out a letter to employees stating that he will resign within 12 months. Then it is announced that there is a committee on the Microsoft board, containing Bill Gates, of course, with the responsibility of finding a new CEO. And no, I suspect that Ballmer is not on that committee.

Some writers are saying that Microsoft is not forcing Ballmer out. But think about it. If you had to get rid of a failed CEO who owned 333 million shares of your company's stock, what would you do? It was most certainly a negotiated force-out. With a legal release. And probably some kind of honorary employment that requires Ballmer to only sell within certain windows of time and keeps him on a leash.

Welcome to the mobile revolution.

I must say that this change is way too late. After all, in 2010 people were already clamoring to fire Ballmer. And it doesn't clean things up soon enough. Obviously Microsoft's board of directors should have been doing this for the last several years!

The reorganization that Ballmer has been carrying out seems like a smart idea, except that it is trying to make a silk purse out of a sow's ear. It's made for the PC era, which is slowly fading away. Still, the new organization is probably one less thing that a new CEO will have to worry about. That is: if he accepts this vision for the new Microsoft. A vision that depends upon Microsoft succeeding in the mobile revolution. Even with the reorg, Microsoft has a corporate culture that can't simply turn on a dime.

And Windows is exactly the problem.

Energy Efficiency

The mobile revolution has created two very interesting trends in the computing landscape. These are battery longevity and cloud computing. In order for batteries to last a long time, the products they power must be energy-efficient in a system-wide way. In order for cloud computing, with its massive compute farms, to be cost-effective, each server must be singularly power-efficient and generate as little heat as possible since cooling is a power consumption concern as well.

Of course battery longevity also affects electric cars like the Tesla. But, when it comes to computing, the battery longevity comes from three sources: more efficient batteries, hardware systems where power efficiency is an integral part of their design, and finally the economical use of resources in software. In the cloud computing arena, instead of more efficient batteries we are concerned with heat dissipation and cooling strategies.

More efficient batteries are a great thing, when you can get them. But advances in supercapacitors and carbon-nanotube electrodes on various substrates have yet to pan out. This means that hardware systems such as SoCs (Systems on a Chip) must be designed with power efficiency in mind. Power management schemes that allow parts of a chip to turn themselves off on demand are one way to help.

Even at the chip level, you can send signals between the various components of an SoC (System on a Chip) using power-efficient transmission. For example, the MIPI M-PHY physical layer lowers the power consumed by the high-frequency data transfers that usually chew up so much of it. Consider using a camera and processing the data on-chip, or using a scaler that operates from and to on-chip memory. These applications involve images, which are huge resource hogs and must be specially considered in order to save significant amounts of power.

But there's more to this philosophy of power management, and this gets to the very heart of why SoC-based gadgets are so useful in this regard. General tasks that use power by processing large amounts of data are handled increasingly by specialized areas of the SoC. Like image scaling and resampling. Like encrypting and decrypting files. Like processing images from the onboard cameras. Like display processing and animation processing. Like movie codec processing. Each of these modern gadget applications is a resource hog. So they must be optimized for power efficiency from the very start or else batteries simply won't last as long.

Of course, you could simply use a bigger battery. Which makes the product larger. And less elegant!

Windows?

So what is the problem with Windows? The Wintel architecture wasn't built from the ground up for power efficiency. Or for distributed specialized computing, the way so many gadgets are constructed these days. And now you can see what a daunting process this must be for Microsoft engineers, who basically have to start over to get the job done. It will take quite a bit of time to get Windows to run on an SoC. Almost all implementations of Windows today are built to run on discrete CPUs. The Surface Pro appears to use a regular CPU board with a stock Intel part.

You see, power efficiency isn't just a hardware problem to solve. The software must also have this in mind with everything it does. The consumption of resources is a serious issue with any operating system, and affects the user experience in a huge way. I can't even begin to go into the legacy issues with the Windows operating system. The only way is to rewrite it. One piece at a time.

This problem has led many of the companies leading the cloud computing initiatives to use Linux for their server operating systems, mostly because it can easily be tailored for power efficiency. The server operating system share of Unix-based operating systems is 64%, compared to about 36% for Windows.

Servers are almost certainly going to go the way of the SoC also, with dedicated processors doing the expensive things like video codec processing, web page computation, image processing, etc. But I do see multiple cores and multithreading still being useful in the server market.

But not if they increase the power requirements of the system.

On mobile devices, Windows hasn't done so well either. Windows Phone probably has less than 3% of the mobile space, if that.

The Surface never clicked

Why didn't the Surface RT and the Surface Pro tablets succeed? First off, it's possible that they are simply yet to succeed. I just had to say that.

But more likely they will never succeed. It's hard to move into a market where your competitors have been working on the hardware solutions for years. And when hardware isn't your expertise.

At first, the Surface marketing campaign was all flash and no substance. A video of dancers clicking their tablet covers into their Surface tablets was certainly criticized by a few bloggers as vacuous. The main problem was that it stressed the expensive keyboard cover while skirting the issue that the cover is all but required. With the cover, the Surface tablet becomes just a crappy laptop. One that you can't really use on your lap, because of the kickstand. Their follow-up video was curt and to the point, but sounds a bit like propaganda, saying "Surface is yours. Your way of working. Your way of playing."

Yeah. Trying to get into the mind of their prospective users.

But it's clear that their strategies were simply not working, because they turned to the old adage "if we don't look good, then maybe we should just make them look bad" and started releasing anti-iPad ads. The first one used Siri's voice to sum it up: "do you still think I'm pretty?". They compared the price of the legendary iPad to the Surface RT without a cover. I suspect that a Surface RT without a keyboard cover is pretty much useless. The next anti-iPad ad compared features in a less quirky way. But anybody using a Surface RT knew that it didn't support the apps that the iPad has, or really have any of the advanced iOS/iTMS ecosystem in place. And without the keyboard cover it was cheaper, certainly. But you really had to have the cover to get full functionality.

So Microsoft decided to drop the price. This was echoed in the nearly $1-billion charge they took that quarter. Then they followed up by dropping the price of the Surface Pro! It smacks of desperation to clear their inventory. Otherwise they will be taking another huge charge against Windows revenues like before.

Saturday, October 13, 2012

How Old Is Your Software?

Let's look at software vulnerability. What kinds of software are the most vulnerable?

Well, duh! The oldest, most crufty kinds of course! Whenever you add onto software year after year, you unwittingly create opportunities for exploitation. We say that our data are secure, yet we do not test software in anywhere near the rigorous fashion it requires!

This leaves us with highly-functional yet completely-vulnerable software. And the users don't even realize it. Business users, corporate users, individual users, you.

Which Software is the Most Vulnerable?

Means: to become a hacker, a programmer only needs an Internet connection and a computer capable of being programmed. That describes basically every person on the planet in all but the least-developed nations. So let's just say there is a large pool of possible hackers.

Motive: to be vulnerable, you also have to be hiding something desirable, interesting, or perhaps embarrassing. In other words: valuable to someone who just needs some street cred. What holds this kind of data? Your computer, your hard disk, your database, all managed by operating systems and by software that routinely gets installed or updated, as well as things like distributed database server software that protects huge amounts of data. For more motives for hacking, see my first blog post on Hackers.

Opportunity: So, let's look at software that has enjoyed release after release year after year. These releases are generally done for the purposes of:
  • increasing their feature set
  • making them faster
  • fixing their security holes
So let's examine systems which do this. Operating systems, like Windows, Mac OS X, iOS, and Android certainly are updated quite often. System software for supporting desirable things like videos are updated often as well, like Adobe's Flash. So are things like their suite of programs the Creative Suite. In business, the Oracle SQL Server is updated quite often also, to add features and, more often, to patch vulnerabilities. Programming capabilities like Java site updated a lot also. Even GNU, the Free Software Foundation's operating system, which declares proudly that GNU's Not Unix (though it is identical to it in every way I can see) is updated quite often.

These are the most vulnerable software systems on the planet, merely because they are updated so often. And because so many people and businesses use them.

What Makes These Vulnerabilities?

The best positive marketing driver is the first one: increasing their feature set. To do this, it is often necessary to allow other developers to add to their feature set. We see this in nearly every OS platform in history. Supporting Applications. Allowing Plug-ins. Enabling programmability.

Being able to program something is highly desirable. It is also exactly what causes the vulnerabilities.

In 1984, I bought my first Macintosh. Actually it was an original 128K Mac. And the first thing I did was to take it apart, with a long Torx screwdriver and some splints to crack open the shell. My business partner in Fractal Software, Tom Hedges, was doing the exact same thing in the very same room. We both came to the conclusion that it needed a real hard drive, which was an interesting hardware task. We also came to the conclusion that we wanted to program it.

I wanted to create a new application.

We met an Apple person, Owen Densmore, at Siggraph that year and he put us in touch with a key developer, Bill Duvall, who had built the Consulair C system with a text editor. Owen gave us the external terminal debugging capability, called TermBugA, that we could use to debug our applications. He put us in touch with Steve Jasik, who authored MacNosy, and had disassembled the entire ROMs in a Mac. We built our first apps for the Mac within a couple of weeks and began our development career.

This is the old school method. The very ability to program a device has a name now: pwn. This means "owning it" but it also has a whiff of programmability to it.

If a device is a computer of any kind, then the desire to program it freely is a natural consequence of these old school ways.

But those ways must change.

How Are The Vulnerabilities Exploited?

The goal is to become a privileged user on the computer. This will enable the hacker to install their programs, get access to whatever data is available without restriction, and basically to take over the computer. Once this is done, then malware can be installed. Things that log your keystrokes. Or watch you through your webcam. Or check which web sites you use, remembering whatever passwords you use to access them.

This enables them to steal your identity or your money. Or you can be blackmailed with whatever incriminating data is present. In other words, criminal activity that exploits you, your business, or your customers.

But overwhelmingly, your computer can become something that is not under your control and can be used as a base for expansion, virus propagation, or as a machine to support DDoS attacks as well.

How do they get control of your computer? Often it is with a very small bug.

Now, software above a certain size always has bugs in it, and that's the problem in a nutshell.

The kind of bugs that hackers look for are primarily buffer overrun bugs. Because virtually all machines are von Neumann machines, data is stored in the same memory as code. This means that all the hacker needs to do is insert their code into your system and transfer control to it.

A buffer overrun bug allows them to do this because, by definition, once a buffer (a fixed-size place in memory to store data) is overrun then the program has lost control of what is going into memory. With a little cleverness, after overrunning the buffer, the data will go someplace that is a tender spot. This can cause another bug to happen or it can be a spot where program control will end up soon enough in the future.
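The mechanism can be sketched in a few lines. Here is a toy model in Python that simulates a fixed-size buffer sitting directly before a saved "return address" in memory; real overruns happen in native code, but the principle is the same:

```python
# Toy model of a buffer overrun. We lay out 16 bytes of "memory" where an
# 8-byte input buffer sits directly before a saved return address, roughly
# the way a stack frame might. An illustration of the mechanism only.
memory = bytearray(16)
BUF_START, BUF_SIZE = 0, 8
RET_OFFSET = 8  # the field adjacent to the buffer

# The program stores a legitimate return address after the buffer.
memory[RET_OFFSET:16] = (0x1000).to_bytes(8, "little")

def unsafe_copy(data: bytes) -> None:
    """Copies input with no bounds check -- exactly the bug hackers look for."""
    memory[BUF_START:BUF_START + len(data)] = data

# 16 bytes written into an 8-byte buffer: the overflow lands on the return
# address, redirecting control to an attacker-chosen location.
unsafe_copy(b"A" * BUF_SIZE + (0xBAD).to_bytes(8, "little"))

ret = int.from_bytes(memory[RET_OFFSET:16], "little")
print(hex(ret))  # 0xbad -- no longer the legitimate 0x1000
```

Once the program "returns" through that overwritten address, control lands wherever the attacker chose.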

And voilà, the hacker is running their own native code on your computer.

Their next trick is to become a superuser. This is sometimes referred to as becoming root. These terms come from UNIX, which is the basis for many operating systems, like Mac OS X and Linux.

This can be done several ways, but the most effective way is apparently to masquerade as a routine install of familiar software. Like Photoshop, Flash, a Windows Service Pack, etc.

But the process of taking over a computer, which comprises a rootkit, is often a several-step process.

Perhaps the computer becomes a bot, simply running jobs for the hacker: sending email spam at random times, using the computer's position in the network to attack other local computers, making the computer be part of a Distributed Denial of Service (DDoS) attack.

Perhaps the hacker only wants to get the data in that computer. The easiest way is to gain superuser access, and then you have the privileges to access all the files. Maybe the hacker just wants to watch the user and gain information like bank account numbers and passwords.

Sometimes the hacker just wants access to databases. The databases contain information that might be sensitive, like credit card information and telephone numbers. Since these databases are generally SQL servers, a specific kind of attack is used: the SQL injection attack.

Poorly written SQL can contain statements that evaluate a string and execute it, rather than running code with pre-specified bind variables. It is these strings that make SQL vulnerable to being co-opted by a hacker, who can modify the SQL program simply by changing its parameters. When the string gets changed to SQL code of the hacker's choice, it can be executed and the hacker can, for instance, extract all of the database records, instead of the usual case where only the records for a certain date may be accessed. Or the hacker can change the fields that get extracted to all the fields instead of a small number of them.
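A minimal sketch of the attack and its fix, using Python's sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-xxxx'), ('bob', '5500-xxxx')")

# Vulnerable: the query is assembled by string concatenation, so a crafted
# "name" parameter rewrites the SQL itself.
evil = "nobody' OR '1'='1"
leaked = conn.execute(
    "SELECT card FROM users WHERE name = '" + evil + "'"
).fetchall()
print(len(leaked))  # 2 -- every record comes back, not just one user's

# Safe: a pre-specified bind variable; the input is treated purely as data.
safe = conn.execute("SELECT card FROM users WHERE name = ?", (evil,)).fetchall()
print(len(safe))    # 0 -- no user is literally named that string
```

The bind-variable version never evaluates the input as SQL, which is exactly the rewrite recommended below.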

How Do We Combat This?

It is easy to say there is no way to fight system vulnerabilities, but that would be wrong.

The strongest way to stop it is curation. One form of curation is the ability of a supervisor to prevent malware from becoming installed on a system. When a system allows plug-ins and applications, these must be curated and examined for malware and the backdoors and errors that allow malware to take hold. And they must be limited in their scope to prevent conscription of the operating system and applications that run them.

In the case of Apple, curation means examining every App built for its platform for malware or even the whiff of impropriety. And this is a really good thing in itself, because it means that far less malware attacks iOS than does Android.

In the case of SQL injection attacks, rewrite your SQL to not use executed strings.

But general practices need to be followed religiously. Make sure your passwords are not guessable. Use firewalls to prevent unintended connections. Beware phishing attacks.


Saturday, May 12, 2012

Pieces

Pieces, the separate parts of a whole, help us understand the logical process of construction. The relationships between the pieces, such as how well they fit, help us understand the workings and character of the parts. The individual pieces' limitations can bear on the capabilities of the finished product.

A cohesive design is almost always made up of separate pieces.

In a good design there are no inessential pieces: each piece is necessary for the design to be complete. Each piece does what it should and also as much as it can do.

Interrelationships Between Pieces

Also, the relationship between the pieces is key. In organization, there are requirements for one department that are produced by another department. In development, one module produces a result that is used by one or more other modules. In three-dimensional objects, the objects can fit together like a dovetail joint.

In a drawing, the pieces can be shaded to fully reveal their form. They can shadow other pieces to show their inter-positioning. When you see a drawing, it can make you think about how the figures in the drawing are placed, and what message is intended by the artist. In a still-life this may be of little consequence. In an Adoration of the Magi, this can be of great consequence.

Cycles

The interconnection of pieces can be cyclic, producing an induction. This cycle should be essential to the concept of the design. In programming, the loop should be essential to the working of the program, an iteration that converges on a desired result.
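As a minimal sketch of such a loop in Python, here is an iteration (Newton's method) that converges on a desired result; the loop is not incidental to the program, it is the program:

```python
def sqrt_newton(x: float, tol: float = 1e-12) -> float:
    """Iterate until the guess converges; the loop is the essence of the design."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tol:
        guess = (guess + x / guess) / 2  # Newton's update for f(g) = g*g - x
    return guess

print(sqrt_newton(2.0))  # ~1.4142135623730951
```

Each pass around the cycle depends on the previous one, and the design is complete only because the cycle converges.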

In a drawing, the interrelationship becomes essential to the piece as well, as indicated by this impossible triangle, copied loosely from Oscar Reutersvärd, the Swedish artist. Sometimes we can highlight something different than what was originally intended, as in this case: we indicate how the figure can be made of three L-bends that mutually depend upon each other. Impossible figures often make an excellent illustration of cyclic structures.

Also, though, looking at cycles in different ways can reveal to us more about the problem than we originally knew.

Development In Pieces

In development, we first conceive of a problem to solve and then sketch out a structure of how we will solve it. Then it helps to divide the problem into pieces. It suits us best if each piece is well-defined. We know its inputs, its results, and how it will produce them. When a piece is too complex, we can divide it up into smaller pieces.

The nature of each piece can then be worked on individually. Either sequentially by one person, or concurrently by multiple people in a workgroup. Because each piece of the problem has a different nature, this lends itself to specialization, which is suited to modern workgroups. Each piece can then be tracked separately. The interrelationship between the pieces will need to be known by the manager to properly chart the progress of the development.

Most large projects are done this way. When they are done by one person, then that person needs to understand the workings of the project as a whole, and this can lead to a huge, unmanageable situation. But not always. When a problem gets too large for one person, the pieces of the problem lend themselves to adding extra people to help, and so project division is essential to minimizing unpredictable schedules.

When Pieces Fail To Connect

When conceptualizing the division of a project into pieces, it is sometimes not possible to foresee each and every wrinkle in the workings of each of the pieces. This can lead to a situation where a piece can not be constructed or where some pieces can't be connected properly.

It is times like these when it's important to stand back, take stock of what you have learned, and integrate that into the design. Sometimes this necessitates a redivision of the project into new pieces. Sometimes the redivision only affects a few neighboring pieces. This is part of the art of project design.

Development Strategies

The pieces of a project represent the result of top-down decomposition, which usually works as a division process. Once you have a project split into pieces, and the pieces implemented, then it becomes a problem of making sure that each piece works as it should.

This entails isolation of the piece, testing its inputs, and validating its results.

In a workable system, it is essential to be able to view the intermediate results of each piece. In a graphics system, this means literally viewing them on a screen to visually verify that the result is correct. And sometimes, the ability to view each minute detail is also required.

In a system that is constructed in pieces, one problem presented to the authors is this: how can we add a new feature or behavior to the project? This is important because usually it is necessary to construct a simplified version of the project and then make it more complex, adding features, until it is complete.

A useful capability is this: build a simplified version of a piece for testing with the other pieces. Then, each developer can work with the entire project and flesh out their piece independently. Or, even better, a new version of the piece can be checked in, adding essential capabilities, while more complex behavior gets worked on independently.
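A hypothetical sketch of this in Python (the piece names are invented for illustration): a simplified stand-in honors the same interface as the real piece, so the rest of the project can run end-to-end while the complex behavior is fleshed out independently.

```python
# Hypothetical sketch: a "piece" is anything exposing a process() method.
class Sharpener:
    """The real piece, still under development."""
    def process(self, image: list) -> list:
        raise NotImplementedError("complex behavior being developed independently")

class SharpenerStub:
    """A simplified stand-in with the same interface and trivial behavior."""
    def process(self, image: list) -> list:
        return list(image)  # identity: pass the pixels through untouched

def pipeline(piece, image):
    # Downstream code cares only about the interface, not which version runs.
    return piece.process(image)

print(pipeline(SharpenerStub(), [1, 2, 3]))  # [1, 2, 3]
```

When the real Sharpener is ready, it is checked in under the same interface and nothing downstream changes.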

Performing the Division

I mentioned top-down decomposition as a useful tool in dividing up a project into pieces. But this must be tempered with other considerations. For instance, the necessity that each piece do exactly what it needs to do, no more and no less. Another example is the requirement that the inner loops be as simple as possible, necessitating the factoring of extraneous and more complex cases out. This means that the subdivision must be judicious, to achieve local economy within each piece. I have been on many projects where this goal was a critical factor in deciding how to divide the problem up into pieces. This can also serve as a razor which cuts away inessential parts, leaving only a minimal interconnection of pieces.

You also want to make sure the project is organized so that, if a piece fails, we can directly verify this by turning it on and off, and seeing the result of its action and the effect of it on the entire result. This is particularly useful when each piece is a pass of the total process, like in a graphical problem, or in a compiler.
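A hypothetical Python sketch of that organization (the pass names are invented): each pass is a named piece that can be switched off individually, so a bad result can be isolated pass by pass.

```python
# Each pass of the process is a named piece that can be toggled off, so a
# bad result can be isolated by comparing output with and without a piece.
passes = [
    ("denoise", lambda xs: [round(x) for x in xs]),
    ("scale",   lambda xs: [x * 2 for x in xs]),
    ("clamp",   lambda xs: [min(x, 10) for x in xs]),
]

def run(data, disabled=()):
    for name, fn in passes:
        if name in disabled:
            continue  # piece turned off; observe its effect on the whole result
        data = fn(data)
    return data

print(run([1.4, 5.6]))                      # [2, 10]
print(run([1.4, 5.6], disabled={"clamp"}))  # [2, 12] -- clamp caused the difference
```

Toggling each piece and diffing the final result pinpoints which pass is misbehaving, exactly as in a multi-pass graphics process or compiler.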

Also, it is useful to construct a test harness that contains UI so that each piece can be independently controlled, preferably with real-time adjustment. This is a great way to exercise the project. I have used this many times.

Taking Stuff Apart

Moving from development to three-dimensional construction, the disassembly process can reveal a tremendous amount about the problems encountered in producing the object, device, or mechanism. When I was a kid, I liked to take things apart. Of course, putting them back together took a bit longer.

In modern times, there are entire companies that specialize in taking gadgets apart, and even slicing open chips to reveal their inner workings. This is the process of reverse-engineering. Examples of companies that do this are chipworks.com and iSuppli.

Gadgets

I was going to do a section on gadgets and the pieces thereof, but I realized that my knowledge of such things is really not up for grabs, nor is it for public consumption.

It's really too bad since gadgets are a classic example of how each part needs to do as much as possible with as few resources as can be spared. This is one of the basic design decisions that govern the division of a project.

Often the most remote considerations suddenly become of primary importance in the division process.

Code

A friend wishes to divide up code in such a way that module authorship can be retained and the usage monitored so royalties can trickle in the proper way back to the source. Very distributed-economy. This reminds me of the App market in a way, and I'll tell you why.

In the early days of software, there was much custom software that cost huge amounts of money. There were accounting systems on mainframes. These would often cost a hundred thousand dollars. The CAD systems I worked on in the 70s were very expensive as well, and specialized software, such as all-angle fracturing software, could cost plenty. It's funny how big business still maintains this model, with distributed systems still costing lots of money. This will certainly be replaced by a distributed, app-based model. Some believe that the gadgets are only the front end to a giant database; that model will be replaced by the cloud model.

In the 80s, personal computers' penetration increased and software became a commodity that was sold on the shelves of computer stores. This drove the average price down to hundreds of dollars, but some software could still command up to a thousand dollars. Consider Photoshop and the huge bundles of software that have become the Creative Suite. As time went by, lots of software was forced into bundles in what I call shovelware: software that comes with too much extraneous stuff in it, to convince the buyer that it is a wonderful deal. I'm thinking of Corel Draw! in those days. Nowadays, computers are sometimes bundled with crapware, which is the descendant of shovelware.

The commoditization of software was just a step in the progress of applications. Now, applications are sold online for the most part, even with over-the-air delivery. This is because much computing has gone mobile and desktop usage is on the decrease. Many desktops have in fact been replaced by laptops, which was one step in the process.

But the eventual result was that software is now sold for a buck and the market has consequently been widened to nearly everyone.

To do this, the software had to become easier. The model for the use of the software had to become easier. The usefulness of an application had to become almost universal for this to occur and for applications to become more finely grained. Apps now sell for anywhere from free to ten bucks. But on the average, perhaps a complex app will cost a piddling two dollars.

Is it realistic for the remuneration of code authorship to also go into the fine-grained direction from the current vanguard of open-source software? Nowadays, many app authors receive royalties for their work. The market for applications has exploded and the number of app designers has also exploded: widely viewed as the democratization of programming. This is the stirring story of how app development penetrated the largest relevant market. Can the programmers themselves become democratized?

The applications of today live in a rich ecosystem of capabilities that includes cameras, GPS, magnetic sensors, accelerometers, gyros, and so much more. For code itself to go down a democratization path, I expect that the API it lives under will have to be just as rich.

Unfortunately, the API is owned by the platforms. And even, as in the case of Java (as we have found out this last week), by the company that bought it (Oracle). Apparently an API can be copyrighted, which is a sticky wicket for Google. The vast majority of apps are written for iOS today. But, if this won't be true forever, then at least it has clearly indicated how to create an incredibly successful business model around applications. And it indicates that APIs will certainly be heavily guarded and controlled.

The spread of technology is never as simple as entropy and thermodynamics, though the concepts may certainly bear on the most profitable use case.

Either way, the democratization of code could possibly solve the litigation problem, at least when it comes to applications built on top of APIs, because the new model might in some sense replace the patent model by reducing ownership to a revenue stream, democratizing software developers. But the APIs could not be a part of this solution as long as the platform developers considered them to be proprietary.

So, in the end, I don't think system software can be a client for this model. Unless it's the GNU folks.