
Thread: An argument against OpenBSD, qmail, et al.

  1. #11
    Senior Member
    Join Date
    Jan 2002
    Posts
    1,207
    As far as I'm aware, none of these systems have anything even remotely approaching formal methods used in any of their development; they're merely well-checked systems. Formal methods don't really work for real software, and that's pretty much the end of it.

    If you saw how much effort it takes to develop even the most trivial of functions by formal methods, you'd understand. As you're just an IT manager blinded by words, you won't.

    Slarty

  2. #12
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Originally posted here by Slarty
    As far as I'm aware, none of these systems have anything even remotely approaching formal methods used in any of their development; they're merely well-checked systems. Formal methods don't really work for real software, and that's pretty much the end of it.
    As far as you are aware, you are just plain wrong. SecureOS's Type Enforcement comes directly from Honeywell's TCSEC-A1 LOCK system. That is about as formal as you can get.
    The security policies for the other systems are formally validated as well (even NT was designed specifically against the TCSEC-C2 criteria).

    Originally posted here by Slarty
    If you saw how much effort it takes to develop even the most trivial of functions by formal methods, you'd understand. As you're just an IT manager blinded by words, you won't.
    I am not an IT manager and I am not sure where you pulled that from. I am an Information Security Director currently in charge of new product development for a multi-billion dollar defense firm's Trusted Systems & ICE (Info/Cyber/Electronic) Warfare department. (Though to be fair, I am searching for a new job.) So I might have a smidge of experience in this area, though I suppose if I were blinded by words, I wouldn't know it... but then again neither would you or anyone.

    I am sorry that you feel the need to insult me to make a point; that is just sad.

    catch

  3. #13
    Senior Member gore's Avatar
    Join Date
    Oct 2002
    Location
    Michigan
    Posts
    7,177
    Originally posted here by Tiger Shark
    *COUGH*

    You might want to view my profile before you start claiming that my "pony-tailed head" is "thick"..... The damned thing is _solid_...... and I'm not for open source in a business environment for many of the reasons already laid down here.....
    Uhhh, you're about as much of an Open Source guy as the coder of Windows. Besides, your pony tail is more like "I own a comic book store and think a hot date is a six pack and a good connection".

    As for Open Source and liability.... Aren't those words just as offensive as "reward on results" is to a typical manager? SUSE and other companies do give you a bit more in this area than, say, Slackware, and they have indemnification in case hell freezes over and SCO wins.

    And catch, I should point out that you're not a real Unix guy; is it possible that your opinion is swayed? And while on topic, do you know of any GOOD docs on SecureOS and the others you've mentioned? I could google for them but I don't want crap, I want something you've looked over and said was good.

    Another thing, on permissions, which I'm sure have something to do with security: comparing NT permissions to Unix ones, can you honestly set custom permissions on NT without Registry hacking? Or... editing? Unix is quite easy for this without any re-learning.

    I say re-learning because in NT you set accounts and they set permissions, but with Unix-based stuff it's chmod and chown... Ah, I'm running around here.
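
    For what it's worth, here's roughly what the Unix side looks like when you script it in Python; the path and names below are made up, and you'd need the right privileges, but the whole job is two calls:

        # Set custom permissions on a file: one call for the mode, one for
        # ownership. The path and user/group names are hypothetical.
        import os
        import stat
        import shutil

        report = "/srv/share/report.txt"

        # rw for the owner, read-only for the group, nothing for others (0640)
        os.chmod(report, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

        # hand the file to another owner and group (requires privilege)
        shutil.chown(report, user="gore", group="staff")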

  4. #14
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Gore, I'm not a UNIX guy? UNIX is fine; nearly all secure operating systems run something functioning reasonably close to a Single UNIX Specification compliant environment. I do have a few qualms with a few elements of traditional UNIX security, but that is totally an aside. (Besides, you know I quite like AIX, and IRIX and various Linuxes/other Unices are very powerful systems, just not right for my needs.)

    However, all of that has nothing to do with this thread. Here the point was to indicate that a strong "tested secure" history is little assurance of a secure future. In fact, in my original thread I indicated that the two example products have done an excellent job at delivering relevant security.

    I'll PM you about the docs in a bit.

    More or less all NT security functionality can be modified in the security/group policy editor and the given object's properties editor. The set permissions/privileges for given accounts in NT are just templates that can easily be changed. In fact, in the case of SYSTEM/Administrator vs. root, I think you'd be hard pressed to find anyone who thinks changing root's permissions/privileges is the simpler task.

    cheers,

    catch

  5. #15
    Senior Member
    Join Date
    Mar 2003
    Posts
    245
    The enterprise security topics that I am pretty hot on right now are Role Based Access Control (RBAC) and true privilege separation. Unfortunately both of these things have the same problem: they are typically proprietary (e.g. Sun's RBAC) or they don't really 'work' like they should across multiple platforms (e.g. NIS+ netgroups).

    Funny that these are classic enterprise problems: "How do I give the helpdesk the ability to restart only in.ftpd?", or "How do I limit remote login access to the financial servers to just these three machines?" There have been some pretty good attempts at these problems, e.g. sudo and netgroups, but none that I am aware of provides a top-down level of administration.
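
    To make the idea concrete, here is a toy sketch of RBAC in Python. Every name in it (roles, users, actions) is invented for illustration; real implementations like Sun's RBAC are far richer, and making this work across platforms is the hard part:

        # Users act only through roles; each role carries an explicit set of
        # permitted (action, target) pairs. All names here are invented.
        ROLE_ACTIONS = {
            "helpdesk": {("restart", "in.ftpd")},  # this daemon, nothing else
            "dba": {("restart", "oracle"), ("read", "audit-log")},
        }
        USER_ROLES = {"alice": {"helpdesk"}, "bob": {"dba"}}

        def allowed(user: str, action: str, target: str) -> bool:
            """True if any role held by the user permits (action, target)."""
            return any((action, target) in ROLE_ACTIONS.get(role, set())
                       for role in USER_ROLES.get(user, set()))

        assert allowed("alice", "restart", "in.ftpd")     # helpdesk may bounce ftpd
        assert not allowed("alice", "restart", "oracle")  # but may not touch oracle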

    I was asked by a SOX inspector what my procedure was for having Sun Field Service personnel rack and maintain my gold maint. hardware. It took me a while to understand what he was trying to get at, but it dawned on me that he was suggesting that because I don't stand there myself and watch it happen, our data could be 'compromised' by a rogue Sun guy. That my phone rings off the hook, my pager never stops buzzing on my belt, I have hundreds of emails to read, and dozens of machines to build before I go home doesn't seem to register with this guy.

    Point is that some security measures that may be 'secure practices' are too *****ing impractical to even consider. Same goes with software: if an extremely elite black-hat wants into your network, the only thing you can do to stop them is pull the plug on your uplink. If a VP wants a sysadmin to do something questionable or unethical, and asks in a very threatening way, guess who will win that battle.

    -- spurious
    Get OpenSolaris http://www.opensolaris.org/

  6. #16
    Banned
    Join Date
    May 2003
    Posts
    1,004
    I fear my point here has been missed.

    I did not say OpenBSD and qmail suck. (In fact I said they do what they do quite well.)
    I did not make any mention of open source being bad with regards to corporate assurances.
    I did not even suggest that more companies or products should use formal V&V (validation and verification).

    The idea here was to raise the point: if a security hole is the system acting in a way other than it was intended... how do you know what was intended if there is no top level specification? Validation is then used to ensure that the product's security policy will actually meet those needs, and verification ensures that the security policy not only does the right thing, but performs correctly.

    Verification is all about bugs and exploits:
    httpd should not be able to be broken in a manner that propagates its privileges
    users shall not be able to change the permissions on files they do not own

    OpenBSD and qmail do this quite well. (Not perfectly, mind you, but very well, and exceptionally well considering their price.)
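
    To picture what verification means in practice, think of testing the implementation against one of those properties. A throwaway Python sketch, where the File model is invented purely for illustration, not anyone's real code:

        # Check the second property above: a non-owner must not be able to
        # change a file's permissions.
        class File:
            def __init__(self, owner: str, mode: int):
                self.owner, self.mode = owner, mode

            def chmod(self, user: str, mode: int) -> None:
                if user != self.owner:
                    raise PermissionError("only the owner may change permissions")
                self.mode = mode

        f = File(owner="alice", mode=0o644)
        try:
            f.chmod("mallory", 0o777)  # the property says this must be refused
            raise AssertionError("verification failed: non-owner changed mode")
        except PermissionError:
            pass                       # behaviour matches the stated property
        assert f.mode == 0o644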

    Validation on the other hand is about design.
    A user maliciously emails away a private document.

    Did the user violate the security policy? Does the FTLS (the formal top-level specification) mention anything about what kind of data the product can export? If it doesn't, technically the system has operated in the manner it was designed to, and this cannot be considered a hole.

    Another example.
    User A places a trojan horse in a shared directory that, when run by user B, copies all of B's private files to a directory accessible by user A.

    Does this violate the security policy? No bugs were exploited; each user performed an action perfectly legal as far as the security policy is concerned. User A created an executable and placed it in a directory they had write permission on. User B executed a file they had execute permission on. That process read files it had read permission on and finally copied them to a directory it had write permission on. Does the FTLS state that rights shall not be transitive? If not, technically this cannot be considered a hole.
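
    If it helps, the whole scenario reduces to a few lines of permission bookkeeping. All paths and names are invented; the point is that every single access is legal under the policy, yet the data still leaks:

        # Each check asserts the access is permitted; none of them fails,
        # because nothing in the policy says rights are not transitive.
        perms = {  # path -> {user: set of rights}
            "/shared/tool":    {"A": {"w"}, "B": {"x"}},
            "/home/B/private": {"B": {"r", "w"}},
            "/shared/drop":    {"A": {"r"}, "B": {"w"}},
        }

        def check(user: str, right: str, path: str) -> None:
            assert right in perms[path].get(user, set()), "policy violation"

        check("A", "w", "/shared/tool")     # A plants the trojan
        check("B", "x", "/shared/tool")     # B runs it...
        check("B", "r", "/home/B/private")  # ...it reads B's private files...
        check("B", "w", "/shared/drop")     # ...and copies them out
        check("A", "r", "/shared/drop")     # A collects them; no rule was broken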

    Do statements about "No holes in X years!" still retain the same weight in the absence of an FTLS?

    Without an FTLS, and subsequently validation of the security policy, how do you know what the system will and will not protect? How do you know if it will meet your needs?

    cheers,

    catch

  7. #17
    AO Curmudgeon rcgreen's Avatar
    Join Date
    Nov 2001
    Posts
    2,716
    "If my net can't catch it, it ain't fish"

    If I have a concept of security, and an OS meets that spec, then it is
    secure (to me). If you have a (different) concept of security, and your chosen
    OS meets your spec, then you will regard it as secure.

    But what is the absolute, objective definition of security? Is it that the
    software was designed to be secure, according to an approved set
    of methods, assumptions and accountability policies? Or does an historical
    record count for nothing?

    All industry is based on feedback. You design a product to the best of your
    ability, but not being God, you may make errors. Everyone has heard the
    old jokes about English cars. One Jaguar mechanic finally put it into
    perspective for me: "It's engineered about 80-90%," he said. "The rest is
    up to the mechanic."

    You could embark on the quest of engineering your product 100%, but would fail.
    The last 10%, and especially the last 5%, of perfection tends to require more
    time than the first 90 or 95%. The last infinitesimal bit would take an
    infinite amount of time.

    If you have very high security needs, you spend more time and effort
    on design, because the costs of failure are high, but 100% is still an illusion
    and a good track record is still the only yardstick recognizable on this
    planet.
    I came in to the world with nothing. I still have most of it.

  8. #18
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,255
    I'm not going to bother getting into the details of whether OpenBSD (which was receiving government funding until very recently, if memory serves) or QMail are verifiably secure, because that depends largely on their use.

    On the contrary, to say that they are bad examples of secure software because they haven't had what *you* consider proper Validation and Verification is rather amusing to me, especially given the evidence to the contrary. In a world of strictly theory I might agree with your idea, Catch; however, we have a track record, we have evidence to indicate they are indeed secure software.

    Now, considering the elements here:
    - Perhaps not the best theoretical implementation
    - A very good practical implementation
    Is it a good idea to be discouraging programmers from considering similar design elements or programming techniques when writing their own applications or services? IMO, no.
    Chris Shepherd
    The Nelson-Shepherd cutoff: The point at which you realise someone is an idiot while trying to help them.
    "Well as far as the spelling, I speak fluently both your native languages. Do you even can try spell mine ?" -- Failed Insult
    Is your whole family retarded, or did they just catch it from you?

  9. #19
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Originally posted here by Chris Shepherd
    On the contrary, to say that they are bad examples of secure software because they haven't had what *you* consider proper Validation and Verification is rather amusing to me, especially given the evidence to the contrary.
    Again, I did not say they were bad examples of secure software. I merely raised the question: their track records are very strong, but what does that mean? I have repeatedly said they do what they do well and have a very successful history.

    The idea here is not to look at security as how many holes something has... because really, what does that even mean? Without knowing what kind of security the product is trying to provide, you cannot possibly count its holes.

    Is the example of emailing a private document out of the system good for security? Is it a hole?
    Is the example of the trojan horse good for security? Is it a hole? This ambiguity taints the whole track record.

    Clearly both of these examples are flaws in the system's security policy, but since they are absent from the system specification, no one considers them to be holes. This could be failed validation, since whatever security policy the system does have does not adequately address issues of disclosure. Or it could be failed verification, since the system fails to enforce the implicit rules of disclosure. Which is it?

    Now clearly if these confidentiality issues are not part of your requirements, such a system would be fine. To not count them as holes is fine, but to consider such a system to be a very (many even say "the most") secure system?

    All of that aside OpenBSD and qmail (among others) are very good practical implementations. I have no desire to discourage programmers from anything... what I would like to do is encourage programmers to learn more about formal development models, system requirements mapping and specification development. If those developers were not constantly reinventing the wheel, imagine what they'd have time to work on.

    cheers,

    catch

  10. #20
    Senior Member
    Join Date
    Nov 2001
    Posts
    1,255
    Originally posted here by catch
    Again, I did not say they were bad examples of secure software. I merely raised the question: their track records are very strong, but what does that mean? I have repeatedly said they do what they do well and have a very successful history.
    Actually you are; you suggested that perhaps QMail and OpenBSD, while not being particularly vulnerable pieces of software, may be being used incorrectly (your confidential email example, as one example of what I'm talking about). For the following discussion let's be extremely clear about this:
    - Secure shall refer to overall implementation and use, from a very high level.
    - Vulnerable shall refer to the actual vulnerability state of a particular piece of software.

    I think you are somewhat confusing the issue by discussing security from a 1000 ft perspective in the same paragraphs as security from a 10 ft perspective. A lot of people seem to have mistaken your statements as referring to vulnerable software, whereas I do understand you are talking about secure software.

    Originally posted here by catch
    Clearly both of these examples are flaws in the system's security policy, but since they are absent from the system specification, no one considers them to be holes. This could be failed validation, since whatever security policy the system does have does not adequately address issues of disclosure. Or it could be failed verification, since the system fails to enforce the implicit rules of disclosure. Which is it?
    The real issue you are trying to put forth here is not anything OpenBSD or QMail have dominion over, and lies wholly under the control of the administrator. If you are simply saying that vulnerability-free software isn't enough to ensure a secure system, well duh, I thought that was obvious.

    Originally posted here by catch
    All of that aside OpenBSD and qmail (among others) are very good practical implementations. I have no desire to discourage programmers from anything... what I would like to do is encourage programmers to learn more about formal development models, system requirements mapping and specification development. If those developers were not constantly reinventing the wheel, imagine what they'd have time to work on.
    Quite frequently in my experience as a developer, I've encountered situations where formal development models fail rather easily: you get one piece of bad analysis, and an application is broken and probably stays that way until a redesign.

    Most of the top ten lists that are kicking around focus on the more tangible technical faults in software design for a reason: we (people) are the other element in all of these processes. The industry is at the point (IMO) where the realization has set in that people are more harmful to systems than vulnerable software, which is why best-practice guides and sites dedicated to helping people get better have proliferated.
    Chris Shepherd
    The Nelson-Shepherd cutoff: The point at which you realise someone is an idiot while trying to help them.
    "Well as far as the spelling, I speak fluently both your native languages. Do you even can try spell mine ?" -- Failed Insult
    Is your whole family retarded, or did they just catch it from you?
