September 11th, 2005 01:40 AM
Six Dumb Ideas In Computer Security
Judging by the rest of the site and the reviews of his book, this guy seems more sound and fury than sound advisor, but I thought this article was worth a read.
Let me introduce you to the six dumbest ideas in computer security. What are they? They're the anti-good ideas. They're the braindamage that makes your $100,000 ASIC-based turbo-stateful packet-mulching firewall transparent to hackers. Where do anti-good ideas come from? They come from misguided attempts to do the impossible - which is another way of saying "trying to ignore reality." Frequently those misguided attempts are sincere efforts by well-meaning people or companies who just don't fully understand the situation, but other times it's just a bunch of savvy entrepreneurs with a well-marketed piece of junk they're selling to make a fast buck. In either case, these dumb ideas are the fundamental reason(s) why all that money you spend on information security is going to be wasted, unless you somehow manage to avoid them.
September 11th, 2005 02:03 AM
Points one and two are really two perspectives on the same thing, and arguably belong in the other order. Using a default-permit system implies that you are selectively rejecting, and one can assume that the selection is based on enumerated "badness." Regardless, the overall point made by 1 and 2 is a good one.
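To make the contrast between the two points concrete, here is a toy packet filter in Python. It is only an illustrative sketch: the port numbers and function names are invented for this example, not a recommended ruleset.

```python
# "Default permit" enumerates badness; "default deny" enumerates goodness.

BLOCKED_PORTS = {23, 135, 445}   # enumerating badness: the list is never complete
ALLOWED_PORTS = {22, 80, 443}    # enumerating goodness: small and auditable

def default_permit(port: int) -> bool:
    # Anything not explicitly blocked gets through, including whatever
    # new service an attacker thought of yesterday.
    return port not in BLOCKED_PORTS

def default_deny(port: int) -> bool:
    # Anything not explicitly allowed is dropped.
    return port in ALLOWED_PORTS

assert default_permit(31337)      # unknown traffic passes: the failure mode
assert not default_deny(31337)    # unknown traffic is dropped
```

The asymmetry is the whole argument: the blocklist has to anticipate every attack, while the allowlist only has to describe the handful of services you actually run.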
Point three is ok-ish... he is basically right, but almost accidentally, as he doesn't seem to have a good grasp of the relationships between vulnerabilities that lead to an exploitable security flaw. He seems to take a programmer/administrator perspective rather than a system perspective... I guess most people just assume that the system models are static. While a frequently practical approach, it is nonetheless an unfortunate one.
The remaining three points are, I suppose, good enough... though I can think of much bigger and more common issues. Having Information Security report to Information Technology, for example.
Interesting link though.
September 12th, 2005 05:00 PM
I am very familiar with the author. He typically stands on the "developer" soapbox simply because that is where his roots are. He developed what was the old Gauntlet firewall.
Also, catch, he did a large presentation last year at IANETSEC on separating computer security from the IT branch of the org chart. To appreciate what he is saying, you should take a broader look at some of the things he has already spoken and written about.
Yeah, the homeland security book, well, that was a rotten egg. LOL.
this guy seems more sound and fury than sound advisor
He typically takes this tone no matter what he speaks about.
Our scars have the power to remind us that our past was real. -- Hannibal Lecter.
Talent is God given. Be humble. Fame is man-given. Be grateful. Conceit is self-given. Be careful. -- John Wooden
September 12th, 2005 06:14 PM
It's part of a larger philosophical debate in engineering between two approaches.
- Get it right the first time
- Start with a prototype and improve it with feedback
Getting it right the first time sounds like the obvious choice, but we live in a changing world.
No sooner have you introduced the light bulb than someone suggests that your choice
of voltage was unwise. Edison and Westinghouse fought over DC vs. AC. AC was better, so
the earlier systems had to be scrapped.
If a software company didn't release a product until it was perfect, it would never hit the
market, because "perfect" is a moving target. An OS that was adequate in 1995 is
a joke today. If Microsoft had waited to perfect Win95 before releasing it, some
competitor would be the OS leader instead of them.
Security is a new field. The only way we could "get it right the first time" would be to turn
back the clock, and, armed with hindsight, design yesterday's software with today's knowledge.
Since we can't do that, we'll keep on kludging.
I came in to the world with nothing. I still have most of it.
September 12th, 2005 07:02 PM
Security is not a new field... check out the Anderson report from 1973 (!)
Security is a new field. The only way we could "get it right the first time" would be to turn back the clock, and, armed with hindsight, design yesterday's software with today's knowledge. Since we can't do that, we'll keep on kludging.
Consider what the field had already established decades ago:
- Higher assurance software design practices
- Higher assurance programming languages
- The idea of the security kernel and reference monitor
- Formal models to define the security policy
- Formal models to map state changes, ensuring the security kernel is correct
- Automated intrusion detection and survivability
- "Insufficient argument validation"
Yup, for 32 years we have known about all of this stuff... yet...
- The vast majority of software projects still use ad hoc development
- Low assurance programming languages account for nearly all of the software we use
- No standard OS utilizes a security kernel or reference monitor
- No standard OS utilizes a formally verified to be effective security model
- No standard OS utilizes any sort of formal state mapping
- Intrusion survivability has only just been introduced and is nowhere near production-ready
- What percentage of BugTraq is about "Insufficient argument validation"? 90%?
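Since "insufficient argument validation" carries so much of BugTraq, here is a minimal sketch of the pattern in Python. The base directory, function names, and the path-traversal payload are all hypothetical; the point is only the difference between trusting an argument and validating it.

```python
import os

BASE = "/var/www/files"   # hypothetical document root

def read_file_unsafe(name: str) -> str:
    # Trusts the argument: a name like "../../etc/passwd" escapes BASE.
    with open(os.path.join(BASE, name)) as f:
        return f.read()

def read_file_checked(name: str) -> str:
    # Validates the argument before use: resolve the path, then confirm
    # it still lives under BASE.
    path = os.path.realpath(os.path.join(BASE, name))
    if not path.startswith(BASE + os.sep):
        raise ValueError("path escapes base directory")
    with open(path) as f:
        return f.read()
```

The unsafe version is the 1973 bug class still shipping in 2005; the checked version is a few lines, which is rather the poster's point.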
The field isn't new... your standard computer user/developer just hasn't been paying attention. It isn't their fault; they are 30 years behind, and that is a lot of reading... most of the docs are hard to come by. Most universities don't even touch formal security until the master's level (even at top tech schools with InfoSec programs).
The feedback is that security is more complicated than most people want to deal with... so it has been shoved to the back burner, and now most of the security field doesn't want to fess up to the fact that nearly everything they know about security is flawed... so they keep beating the dead horse: addressing the vulnerabilities of only one set, without ever looking at how code vulnerabilities without model vulnerabilities are not exploitable.
Start with a prototype and improve it with feedback
People then look to qmail and OpenBSD as the apex of security... unfortunately they are not. They are the apex of a single ideal of security, and consequently they work very well against that kind of attacker. Think about it... the only kinds of attacks you ever hear about are simple defacements, or occasionally some credit card hack, but only if the hackee was dumb. What about compromises of hardened R&D data servers at Fortune 100 companies? What percentage of those are reported to the media? What percentage of those are even discovered by the relevant InfoSec teams? Those are the kind of attackers that turn theoretical weakness into practical weakness. Those are the attackers that take advantage of castles built on swamps by the very fact that they are built on a swamp... no open window required.
Until the industry as a whole realizes that such incidents actually occur, there will be no true increase in industry-wide security. Until then the author is right: bad approaches and crap on top of crap.
September 13th, 2005 11:39 AM
Personally, I wouldn't want a computer that was secure by design.
It would take all the fun out of it. It would be a single-purpose appliance
that is incorrigible, having a mind of its own. Prolly out of my price range too.
Conspiracy guys have always said that you could have a 100 mpg car,
but the evil and/or incompetent industry people refuse. If you examine the
engineering and physics, there's a grain of truth to it, but the question is
would the consumer, in a free market, choose the 100 mpg model
over the other car that is more "comfortable"?
I worked on cars for years and constantly railed at the engineers:
"the SOBs should be forced to work on this crap themselves."
But it's useless to suggest a different design, because the engineer
and the CEO have fifty reasonable answers for why they do it the way
they do, all boiling down to "if our customers requested it, we would do
it" -- and they're probably right.
Actually, failures are a good thing. They instruct us where to put
more resources, what to do differently in the next version.
Perfection is a waste of resources. The Brooklyn Bridge will
never collapse. We will never know, until we deliberately
demolish it, how much concrete was wasted making it stronger
than it really needed to be.
Likewise, a perfect computer security system would cost us in
convenience and features. As long as those paying the bills feel
that the level of loss through security breaches is less than
the total cost of designing, building and living with a "perfect"
system, then what the heck?
I came in to the world with nothing. I still have most of it.
September 13th, 2005 02:58 PM
See... that is the myth. That is following the OpenBSD security philosophy and trying to lock down a bad design by cutting out features.
Likewise, a perfect computer security system would cost us in convenience and features.
Why not just use a security kernel? That way one very small body of code handles all of the security, rather than putting the burden on each application. The system would be both more secure and more feature-rich, since developers would not need to give security any concern at all. They could just trust the OS to protect itself from applications that go belly up.
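The idea above can be sketched in a few lines. This is not how any shipping OS implements it; the subjects, objects, and policy table below are invented purely to show the shape of a reference monitor sitting inside a security kernel: one small, central check that mediates every access, so applications carry no security logic of their own.

```python
# A hypothetical policy table: (subject, object) -> permitted actions.
POLICY = {
    ("webapp", "userdata"): {"read"},
    ("backup", "userdata"): {"read", "write"},
}

def reference_monitor(subject: str, obj: str, action: str) -> bool:
    # Complete mediation: every access request funnels through this one
    # small function, and anything not granted by the policy is denied.
    return action in POLICY.get((subject, obj), set())

assert reference_monitor("webapp", "userdata", "read")
assert not reference_monitor("webapp", "userdata", "write")  # denied by default
```

Because the check is tiny and centralized, it is the one piece of code you can realistically analyze or formally verify, which is exactly the argument for it over scattering security decisions across every application.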
To look at the problem otherwise is just insane... it is expecting different results from doing a lot more of the same activities.