As much as we'd like it to be true, security is not all about ciphers; it's also about physical security, the human factor, and an often overlooked area called side channels.
Physical Security
We all know that you need a password to keep a computer secure, right? But what happens when the hard drive is stolen? Your data can walk right out the door, that's what! Unless the data on the drive is encrypted, the login password protects nothing once the hardware is gone.
But even the transmission of secret keys and plaintext is an issue. For instance, a keystroke-logging program can easily intercept all the passwords you type. So you want to make sure that such a program never gets onto your computer.
With some ciphertext, the more of it an attacker collects, the easier it becomes to break. While this usually describes not-so-good security, feedback-shift-register techniques, in which the plaintext is XORed with the register's output bits, are still employed in stream ciphers. To combat this, the feedback shift register must be re-initialized periodically to prevent the code from being broken. This is usually done by exchanging a fresh seed using a more secure encryption technique, like an RSA public-key cryptosystem.
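To make this concrete, here is a minimal sketch of such a cipher in Python; the 16-bit register width, tap positions, and seed are purely illustrative. The reason re-keying matters: an attacker who recovers a modest run of consecutive keystream bits can solve for the register's taps and state (the Berlekamp-Massey algorithm does exactly this) and then predict every bit that follows.

    def lfsr_keystream(state, taps, width):
        # Fibonacci LFSR: emit the low bit, then shift in the XOR (parity)
        # of the tapped bits as the new high bit.
        while True:
            yield state & 1
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = (state >> 1) | (feedback << (width - 1))

    def xor_crypt(data, keystream):
        # XOR each byte with 8 keystream bits; one function both
        # encrypts and decrypts, since XOR is its own inverse.
        out = bytearray()
        for byte in data:
            bits = 0
            for _ in range(8):
                bits = (bits << 1) | next(keystream)
            out.append(byte ^ bits)
        return bytes(out)

    # Both ends share the seed; decrypting just re-runs the same keystream.
    ct = xor_crypt(b"attack at dawn", lfsr_keystream(0xACE1, (0, 2, 3, 5), 16))
    pt = xor_crypt(ct, lfsr_keystream(0xACE1, (0, 2, 3, 5), 16))
    assert pt == b"attack at dawn"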
But the best thing would be to make the transmission itself un-interceptable. This leads to quantum key distribution, in which any eavesdropper disturbs the quantum states carrying the key and thereby reveals the interception.
The Human Factor
The mobile computing revolution didn't invent the need to access your data remotely, but it did make it a lot more common. So we use passwords to protect our data.
Passwords are secret keys that a human can actually remember. But humans are frail and forgetful, so they often use passwords that are easy to guess. Ones they can't forget. Like 12345. I talk about just how insecure these kinds of passwords are in my first post on hackers.
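Just how insecure? A back-of-the-envelope sketch in Python makes it plain; the guessing rate is an assumption, roughly what a single modern GPU rig can manage against fast offline password hashes.

    GUESSES_PER_SECOND = 1e10   # assumed offline cracking rate

    def seconds_to_crack(alphabet_size, length):
        # Worst case: try every string of this length over the alphabet.
        return alphabet_size ** length / GUESSES_PER_SECOND

    print(seconds_to_crack(10, 5))            # "12345": 0.00001 seconds
    print(seconds_to_crack(62, 12) / 3.15e7)  # random 12 chars: ~10,000 years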
But humans are always doing dumb, insecure things, like leaving doors unlocked or ajar, leaving a key under the flower pot, or leaving the keys to the car behind the visor. For some people this kind of behavior is pure force of habit, and it represents a massive security risk.
But the most powerful kinds of attacks are social engineering attacks, in which the attacker simply talks a human into opening the door: a phishing email, say, or a phone call from someone claiming to be IT support who needs you to "verify" your password.
Side Channels
This is the most interesting kind of insecurity, because it really describes an indirect attack.
One side channel consists of signals emanating from a device like an LCD screen. The video signals leak out as electromagnetic emanations and can be intercepted and reconstructed for spying on the device. For CRTs, a fellow named Wim van Eck demonstrated in 1985 that he could display on a TV monitor the contents of a CRT screen, captured from hundreds of meters away, just by tuning into the video-frequency emanations. The technique, known as Van Eck phreaking, can in principle be applied to other display hardware as well.
When it comes to radio frequency (RF) emanations, a set of standards known as TEMPEST, dating to the 1960s, covers the techniques and methods used to shield devices and components from being surveilled in this way.
Even Wi-Fi is easily broken into; driving around hunting for vulnerable networks is called wardriving. There are published attacks for cracking WEP and other Wi-Fi security protocols, and other methods, like plain guessing or social engineering, can also yield the password. Once the Wi-Fi is accessed, anything transported over it is accessible too. Google got in trouble for collecting Wi-Fi data from its Street View vehicles, but the fact is that it is simply too easy to gather data in this manner. Thus the mobile computing revolution introduces a whole new set of insecurities.
Another side channel concerns cryptography, and this one is a doozy: just by observing the process that is encrypting or decrypting some data, you can infer information about the secret key, for instance the size of the prime numbers used in an RSA public-key cryptosystem. If you can tell how long the modular exponentiation with the private key takes, you can infer some valuable information about the size and bit pattern of that key. If, when a prime-number pair is being produced, you can determine how long it took to produce it, you can tell a bit about the algorithm used to produce it. Each bit of information is useful in chopping away at the space of all possible answers to the question of what the secret is.
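Here is a sketch of why timing leaks key bits, using the textbook square-and-multiply loop at the heart of RSA (this is the naive algorithm, not any particular library's code): it does one extra multiply for every 1-bit of the exponent, so the total running time correlates with the key's bit pattern.

    import timeit

    def modexp(base, exponent, modulus):
        # Left-to-right binary exponentiation.
        result = 1
        for bit in bin(exponent)[2:]:
            result = (result * result) % modulus    # always square
            if bit == "1":
                result = (result * base) % modulus  # extra multiply leaks timing
        return result

    # Two 2048-bit exponents: one has a single 1-bit, the other all 1-bits.
    sparse = 1 << 2047
    dense = (1 << 2048) - 1
    m = (1 << 2048) - 159   # arbitrary odd modulus, just for the demo
    print(timeit.timeit(lambda: modexp(3, sparse, m), number=10))
    print(timeit.timeit(lambda: modexp(3, dense, m), number=10))   # noticeably slower

Real implementations defend against this with constant-time arithmetic and blinding, which is itself an admission that the side channel is real.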
The data you observe about the cryptographic process can be power consumption, timing, or really anything that can be measured externally. With a power-consumption curve, you can do differential analysis to get remarkably precise information about how big a multiply was, and even which parts of the multiply are more complicated than others.
You can measure thermal and acoustic signatures as well. For instance, by focusing an infrared camera on a chip during a certain computation, you can determine which parts of the chip are active, and at what times.
Tuesday, May 21, 2013
Hackers, Part 6: Methods of Entry
I have often wondered how hackers gain control of your system when you are just browsing the web. It's actually an interesting process, and knowing about it can help you be aware of the threats.
Through the rabbit hole
Understanding what happens when you get infected by a virus or some other sort of malware is a bit like going through the rabbit hole. Computer programming can be a dark corridor to the average person, a place they don't usually go.
Have you heard of compromised websites? Well, I was surprised to learn that almost any website can be compromised through a number of techniques. The main thing needed is for the website to serve a link that directs you to another website, and this can easily be done, for instance, in an ad. Another route: websites that talk to a database build SQL queries out of user input, and sloppy query-building code is susceptible to SQL injection exploits. Or perhaps the hacker gains access to the website's administration via a cracked password or some other mistake in configuration.
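SQL injection deserves a quick illustration. This is a minimal, self-contained sketch using Python's built-in sqlite3 module (the table and the attack string are made up for the demo); the whole exploit is that pasting user input into the query text lets the input rewrite the query's logic.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 1)")

    name = "x' OR '1'='1"   # attacker-supplied "name"

    # VULNERABLE: the input becomes part of the SQL text, so the
    # injected OR clause matches every row in the table.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % name).fetchall()
    print(rows)             # [('alice', 1)] -- the filter was bypassed

    # SAFE: a parameterized query treats the input purely as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print(rows)             # [] -- nobody is literally named "x' OR '1'='1"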
When you visit a compromised website, you don't really notice the intrusion. Actually it's supposed to be that way. They want to catch you unaware. So you will probably just see the website's normal content. But somewhere in the HTML stream, a malicious URL is included. This is what directs you to another website.
Wait: if it directs you to another website, then you should see your browser loading another page, right? No. Pointing you off to that website does not necessarily mean loading a page from it, so you may not notice anything at all. It can mean merely accessing a file at a specific URL on that website. But even accessing a single file can cause code to be executed. Yes, before the file is loaded, a bit of script that checks which kind of computer you are running and which OS version you are running gets executed first. This makes sure you are an intended victim: one with the vulnerability that is being exploited. And then a file is accessed, and loaded.
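That fingerprinting step can also happen on the attacker's server. Here is a hypothetical sketch in Python of the server-side logic (the function and the targeting list are inventions for illustration): the exploit site reads the browser's User-Agent header and serves its payload only to configurations it can actually exploit, so everyone else sees nothing odd.

    # "Windows NT 5.1" is the User-Agent token for Windows XP.
    TARGETED_TOKENS = ("Windows NT 5.1",)

    def choose_response(user_agent):
        # Hypothetical dispatch inside the exploit server.
        if any(token in user_agent for token in TARGETED_TOKENS):
            return "serve exploit file"   # matches a known vulnerability
        return "serve harmless 404"       # everyone else is passed over

    print(choose_response("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1)"))
    print(choose_response("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8)"))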
And Flash files are the most common kind of file chosen.
Flash: what's happening there?
The file being loaded is specially crafted to make use of a buffer overrun or another specific security hole in Flash Player. This is the kind of fault that seems to get patched nearly every month by Adobe. A recent update fixed a priority 1 (critical) security flaw, initially reported through MITRE's CVE process. Apparently it's quite a problem. When logging into Yahoo a while ago, I was prevented from doing so until I installed the most recent version of Flash Player.
However it happens, once you load this Flash file, the inevitable process of being infected with a virus has begun.
Eventually, an unsuspecting Windows XP user ends up downloading an EXE file which gets run and the virus is now installed.
When you examine these SWF Flash files, it becomes clear that hackers like to obfuscate their code internally, usually by XORing parts of it with an 8-bit key. This renders the content unreadable to the casual observer, or to anti-virus code that scans for dangerous byte patterns.
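That kind of obfuscation is trivially weak, which tells you it is aimed at pattern-matching scanners rather than at analysts. A short Python sketch (the hidden string is made up, and .example is a reserved demo domain): since XOR is its own inverse and an 8-bit key has only 256 values, brute force recovers the key instantly.

    def xor_bytes(data, key):
        # XOR every byte with the same 8-bit key; applying it twice
        # restores the original, so this both hides and reveals.
        return bytes(b ^ key for b in data)

    hidden = xor_bytes(b"http://evil.example/payload", 0x5A)

    # An analyst (or a smarter scanner) just tries all 256 keys and
    # looks for readable output.
    for key in range(256):
        guess = xor_bytes(hidden, key)
        if guess.startswith(b"http"):
            print(key, guess)   # 90 b'http://evil.example/payload'
            break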
Steve Jobs, in April of 2010, noted that Adobe Flash Player was the number one reason for Macs crashing. Why is this?
One reason is that Flash allows code to be embedded in an animation file, which gets cached locally in your Flash cache folder and run. So just loading an animation file can cause actual code to be run! This code can be malware, of course. It can even be encrypted so it can't be detected by virus-scanning software. And that presumes the virus-scanning software even gets a look at that file.
Ah, but is this still true? Not exactly. Adobe has implemented a Protected View sandbox that prevents malware from being executed. But, as the recent security patch indicates, the wrinkles in this approach are still being ironed out. Still, it represents some progress.
It is well-documented that, in 2010, security experts denounced Flash.
And nearly every computer has it installed. So Adobe has had a lot to lose.
Adobe updated Flash once again a few days ago, plugging memory-corruption flaws that can be exploited to let malware insert its own code.
Building secure software
But treating security flaws as a perception problem is the flawed core of a public-relations way of dealing with security. Sandboxing, internal file fuzzing, and white-box testing are the proper ways of dealing with such issues. You can also hire a tiger team of professionals whose job is to break the software in question and use it to compromise test websites. In other words, be the hacker. A regimen of code review is useful as well; some would say absolutely necessary, particularly close to a release, when it is impossible for QA people to properly assess the security of the software. It is also necessary to have the latest in compilers: one that rigorously and continuously performs deep semantic analysis, testing for logical flaws that can lead to insecurities such as buffer overruns, enumerating cases that weren't handled, spotting unlikely code scenarios, and so forth. People who program make mistakes all the time. It is unconscionable (and just plain stupid) to use a compiler that does not perform as many checks as possible.
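For a sense of what file fuzzing involves, here is a minimal harness sketched in Python; the parser under test is a hypothetical stand-in for something like an SWF loader. Flip a few random bytes in a valid input, then confirm the parser rejects the damage cleanly instead of crashing. Real fuzzers such as AFL and libFuzzer add coverage feedback, but the principle is this simple.

    import random

    def fuzz(parse, valid_bytes, iterations=10000):
        for i in range(iterations):
            data = bytearray(valid_bytes)
            for _ in range(random.randint(1, 8)):
                data[random.randrange(len(data))] = random.randrange(256)  # corrupt
            try:
                parse(bytes(data))      # a robust parser never crashes here
            except ValueError:
                pass                    # clean rejection of bad input is fine
            except Exception as err:    # anything else is a potential security bug
                print("iteration", i, "uncovered a bug:", err)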
When management doesn't embrace the methods of building secure software, then the users are the ones that lose. This is because the software's insecurities cause the users to be compromised. And then the software manufacturer loses as well. Because users won't buy it. These days, word spreads pretty fast about insecurity. It's all over the news. So, even in the case of Flash, where it is a significant part of the workflow of the web, this problem can lead to market share slippage and eventual replacement by transparent standard technologies, like HTML5.
For many years, Adobe treated the problem like a public relations problem. I speculate that is because they were concerned merely with getting releases out and reaping the revenue. In other words managers were concerned with making the quarterly revenue. Not with the future viability of their product.
Those who use secure software methodologies can see the forest for the trees. They know that sustainability is important. Perhaps the page has turned at Adobe.
Back to public relations. How should public relations work when dealing with perceptions of security failures? It's hopeless unless the company being represented takes a proactive stance in preventing attacks on its security. When the hackers laugh at your security, you are going to be a big target, because word will spread through the hacker community that you are low-hanging fruit. Ripe for the picking. You get it.