Solving malware problems by adding hardware, again

We in the security community tend to reach for hardware as the ultimate solution to securing systems against unwanted modification: hardware cannot be manipulated by software. Back when I was young, or, more precisely, back before I even was young, computers ran one program at a time, and that program had access to all resources. When a program finished, the next program could be run. Then we developed timesharing facilities: a computer would run several programs at the same time. We needed a way to regulate access to resources so that programs could run independently of one another, without one program interfering with the resources used by another. There were PhD dissertations on the topic, and several implementations of protection mechanisms. Multics, as an example, was a system that did a lot of things right.

And every time we thought we had solved the problem of separating programs from each other, we added more features and opened up formerly closed and protected systems.

For good reasons we added features to our systems. Users wanted to get things done and to re-use the results of one program in another. Programs had to share data, needed input from each other, and used shared libraries. Implementing a feature was often faster than thinking through its security implications, and, as one of the speakers at NISK2010 put it, "the marketing guys always beat the security guys".

By the time security-conscious developers come up with a solution, they tend to see the existing system as fundamentally flawed and beyond repair, and propose to replace or supplement it by adding another component. The new component will be rigorously analyzed, planned, implemented, and – this time – used, unmodified, only for a limited purpose. It is typically a new piece of hardware.

Some examples:

  • Electronic signatures on personal computers are seen as susceptible to malicious software attacks, so smart card terminals are outfitted with their own PIN pad.
  • The display of banking transaction details might be manipulated on a personal computer, so the user must confirm transactions on a small separate display, carried around on its own or integrated into a smart card terminal.
  • Programs on a personal computer might be attacked by computer viruses, so mobile phones promised a virus-free secure execution environment.
  • Programs on smart phones might be attacked by mobile malware, so why not force users to buy a new dedicated device as an execution environment for all security-sensitive operations?
  • A personal computer might be compromised, so let the user boot from a CD into a new dedicated single-application session on the same hardware. That way, we are actually back in a pre-timesharing situation.

In all cases, functionality is limited, which often leads to an unfavorable price/performance ratio. The market does not value security highly enough for the devices mentioned above to see widespread adoption for their stated purpose. So, to increase their value, either a significant amount of functionality must still be provided by untrustworthy systems, or the devices will be opened up to more applications, thereby compromising their original level of protection.

I do not have a universal solution at hand, but throwing hardware at the problem of malicious software has not solved it in the past and is unlikely to do so in the future.

In my eyes, more promising approaches lie in accepting the operating system infrastructure we have (it is not that bad), using and extending it where necessary, and enforcing more accountability and liability for all the actors who compose a system.

About Author: Hanno Langweg
