
Thread: A question

  1. #11
    Senior Member
    Join Date
    Dec 2004
    Posts
    3,171
    Hi Tiger Shark,

    Well...actually there were two questions asked...and I was primarily responding to the first...

    1. " Which would you label as "more secure"? "

    and...

    2. " Which is more important in evaluating security? Potential for loss, or potential for attack? "

    To which my response was 1. A 2. B

    I said B for the second one simply based on the generality that, under normal circumstances, the best way to gauge your strengths and weaknesses is while and after being attacked.

    My answers are, as stated, based on non-computer-related comparisons. I'll leave the computer facts to you guys!

  2. #12
    AO Ancient: Team Leader
    Join Date
    Oct 2002
    Posts
    5,197
    Egal:

    My response wasn't "pointed" at any previous response or person... but I have to make the following comment since you brought it up:

    I said B for the second one simply based on the generality that, under normal circumstances, the best way to gauge your strengths and weaknesses is while and after being attacked.
    Yes, you can say that is true... But the point of the risk assessment is to determine the "value" of the information to the organization. If the "value" is high then there are two things you can do: spend a lot of money and time to protect it while making it publicly available or, simply, not make it publicly available.

    The word "publicly" is very important here... I _can_ make data available to any IP on the internet, thus making it "publicly" available but I _will_ make you authenticate yourself twice, in two different ways, before you have access to the data. If you can cross both authentication schemes without playing games that my IDS or other systems can't alert on then _I_ have a problem.
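    The two-layer authentication gate described above might be sketched like this. To be clear, this is an illustration and not the poster's actual setup: the password-plus-HMAC-challenge pairing, function names, and parameters are all assumptions standing in for "authenticate yourself twice, in two different ways".

    ```python
    import hashlib
    import hmac

    # Illustrative two-factor gate: access requires BOTH a password check and a
    # separate token check, so defeating one scheme alone is not enough.

    def hash_password(password: str, salt: bytes) -> bytes:
        # PBKDF2 with a per-user salt; 100k iterations as a common baseline.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def check_password(password: str, salt: bytes, stored: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(hash_password(password, salt), stored)

    def check_token(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
        # Second, independent factor: an HMAC response to a server challenge.
        expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    def authenticate(password, salt, stored, shared_key, challenge, response):
        # Both factors must pass before any access is granted.
        return (check_password(password, salt, stored)
                and check_token(shared_key, challenge, response))
    ```

    An attacker who steals the password database still has to beat the challenge-response scheme, which is the point being made about crossing both schemes.
    
    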

    Your comment implies that, regardless of the risk assessment's conclusions, you would place critical data into the public domain and learn how it gets attacked as you go... That's a flawed principle, since you are relying on your ability to see and recognize an attack by watching _every_ packet... We both know that can't be done... So the simpler principle of minimizing your exposure applies.

    That's my 2c.... FWIW...
    Don't SYN us.... We'll SYN you.....
    "A nation that draws too broad a difference between its scholars and its warriors will have its thinking done by cowards, and its fighting done by fools." - Thucydides

  3. #13
    Senior Member
    Join Date
    Dec 2004
    Posts
    3,171
    [quote]My response wasn't "pointed" at any previous response or person....[/quote]

    I know your comments weren't directed at my comments...and mine weren't directed at you either...except to say there were two questions asked...

    Then I was just restating my answers and clarifying why I said B to the second one (which was basically a boxing/sports answer: you can't judge your or your opponent's strengths and weaknesses until you actually get into the ring).


    I'm definitely not going to argue with you about computers. I'll save you the trouble: if we ever argue about computers, I concede. You win, end of story, KO'ed in the first round!

  4. #14
    Senior Member
    Join Date
    Mar 2004
    Posts
    557
    Hi

    Soda, excellent question. All I write is based on personal views and experiences.

    Just an initial remark: executives often are not aware of proper actions to
    deal with system risks, such as the manipulation and loss of data. Controlling
    might only be able to put a number on the consequences (process failures, temporary shrinking
    of human resources, even the loss of reputation, etc.), but in order to deal and
    cope with system risks, a security risk planning model, education in security
    awareness, and a countermeasure analysis [1, "Coping with Systems Risk: Security Planning
    Models for Management Decision Making" (rather old but worth a read)] are mandatory.

    Here we mainly deal with a security risk planning model. In order to be able to talk
    about something like "more secure", we need to define some "measure" - this definition
    is, as mentioned, quite subjective. However, we can try to confine that definition of
    the measure to a region of applicability to get a "feeling". This "feeling" might then
    allow us to deduce/propose some model. Such regions might be 'personal environment', 'SMBs',
    ..., 'global players'. In each of these regions, different weights can be given to a list
    of factors: direct costs (money <-> time, nerves, ...), consequential costs (loss of
    partners and customers, regathering of data (missing backups), ...), to name a few (which
    basically means I cannot think of more at the moment...).
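    The weighted "measure" idea might be sketched like this. The factor names, environments, and weights are all invented for illustration; the only point is that each region assigns its own weights to a shared list of cost factors, yielding a crude comparable number.

    ```python
    # Hypothetical weighted security-cost measure. Each environment weights a
    # common list of cost factors differently; the weighted sum is the "measure".
    # All names and numbers below are illustrative assumptions.

    FACTORS = ["direct_money", "direct_time", "lost_partners", "data_regathering"]

    WEIGHTS = {
        "personal":      {"direct_money": 1, "direct_time": 3,
                          "lost_partners": 0, "data_regathering": 2},
        "smb":           {"direct_money": 3, "direct_time": 2,
                          "lost_partners": 4, "data_regathering": 2},
        "global_player": {"direct_money": 2, "direct_time": 1,
                          "lost_partners": 5, "data_regathering": 3},
    }

    def weighted_cost(environment: str, costs: dict) -> float:
        # Sum of (environment-specific weight) x (estimated cost per factor).
        w = WEIGHTS[environment]
        return sum(w[f] * costs.get(f, 0) for f in FACTORS)
    ```

    With identical raw costs, the same incident "weighs" differently per environment, which is the subjectivity the poster is pointing at.
    
    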

    A good moment to give an example, the simplest: the 'personal environment'. What is the impact
    of the loss of data? (Personally I take the view that the potential for loss should be
    quantified rather than the potential for attack.) The loss of data can be caused by a
    deeply critical vulnerability or by a less critical one, and thus does not by itself settle the
    bill. The loss of nerves (of my father owning the box, or of me having to fix it) is caused by
    the constant failure of functionality, which is more frequently caused by less critical
    vulnerabilities than by deep ones (this depends on the definition of deeply critical/less critical
    and the current state of statistics -> should be checked). These are the main factors
    (I assume so for simplicity). Hence, in this environment, I go with B) rather than A).
    People making use of A) (depending on its definition) often "know what they are doing",
    usually leaving the box intact - the attack goes unnoticed by my father. My father is unaware
    of possible malicious activity starting on his box. There is another issue - which data
    is stored on the hard disk and could imply expenses on my father's side. These are
    soft factors and bear on the measure as well as on the awareness issue. I skip them
    here.

    For the other environments, it is not so simple. Just to give some input: for a multinational SMB
    a few years back, the daily-life issue was B) as well - causing a steady stream of "costs" under
    such a measure. In a military environment, A) was more critical due to an immense weighting
    factor for classified data. B) could not do much harm - human resources were of lower priority
    and awareness programs were "hot".

    I hope the example and thoughts gave at least an idea of my line of thought in this context, and
    maybe it is of some use for modelling better and more multi-faceted measures. Or maybe it was
    just another bunch of blah (once you have repeatedly heard the management talking, you start to
    talk like them )


    Cheers

    [1] http://dstraub.cis.gsu.edu:88/
    If the only tool you have is a hammer, you tend to see every problem as a nail.
    (Abraham Maslow, Psychologist, 1908-70)

  5. #15
    In my opinion, if you believe that A is more secure, then you put value into obscurity for defense. Personally, I believe that security isn't so much resilience to attack, but rather resilience to loss. (If an attack succeeds, how much is lost?)

    So in this case, an application running as root being exploited is less secure than an application running in a lesser-privileged account or sandbox. The vulnerability is the same, but the security is different.
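    The root-vs-sandbox point can be put in numbers with a toy expected-loss model. The probabilities and loss figures below are invented for illustration; the point is only that an identical vulnerability yields different expected loss depending on what the compromised account can reach.

    ```python
    # Toy model of "resilience to loss": the same vulnerability (same chance of
    # a successful exploit) costs far more when the process runs as root than
    # when it runs in a confined account. All numbers are illustrative.

    def expected_loss(p_successful_attack: float, loss_if_compromised: float) -> float:
        # Classic expected-value risk: probability x impact.
        return p_successful_attack * loss_if_compromised

    P_EXPLOIT = 0.25            # identical vulnerability in both deployments
    LOSS_AS_ROOT = 100_000      # root compromise: whole host, all data
    LOSS_SANDBOXED = 2_000      # sandboxed service account: one app's data

    risk_root = expected_loss(P_EXPLOIT, LOSS_AS_ROOT)       # 25000.0
    risk_sandbox = expected_loss(P_EXPLOIT, LOSS_SANDBOXED)  # 500.0
    ```

    Same hole, two very different risk numbers: "the vulnerability is the same, but the security is different."
    
    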

    So with these thoughts laid out, is it more effective to evaluate a code's security by reviewing coding practices, or by review of its design?

    edit: oh boy, that's a long reply, sec_ware. Gimme a few mins

  6. #16
    Regal Making Handler
    Join Date
    Jun 2002
    Posts
    1,668
    So with these thoughts laid out, is it more effective to evaluate a code's security by reviewing coding practices, or by review of its design?
    document.write('HELLO WORLD');

    is a piece of code; how do I make it more secure?
    What happens if a big asteroid hits the Earth? Judging from realistic simulations involving a sledge hammer and a common laboratory frog, we can assume it will be pretty bad. - Dave Barry

  7. #17
    Senior Member
    Join Date
    Mar 2004
    Posts
    557
    Hi

    Originally posted here by Soda_Popinsky
    reviewing coding practices, or by review of its design
    I think definitively by design. As an OS developer you could design to control
    what a program is able to do (even in the "context of root"; maybe some particular
    hardware is needed), but you cannot control the coding practices of all product
    manufacturers (especially 3rd party). I have the feeling I may be misunderstanding your
    question...


    [offtopic]

    edit: oh boy, that's a long reply, sec_ware. Gimme a few mins

    Yeah, sorry about the length, but somehow I (almost) never manage
    to keep it short. I might believe that I am developing the idea while
    writing - causing a lot of blah. Isn't it that brevity is the soul of wit...?

    I am often wondering how people can be so brief and at the same time exact.
    Those guys have my admiration (executives as well as techies). Oops,
    I did it again.
    [/offtopic]

    Keep us updated!

    Cheers
    If the only tool you have is a hammer, you tend to see every problem as a nail.
    (Abraham Maslow, Psychologist, 1908-70)

  8. #18
    Senior Member nihil's Avatar
    Join Date
    Jul 2003
    Location
    United Kingdom: Bridlington
    Posts
    17,188
    Hi Soda~,

    I would agree with sec_ware on this one.

    I think definitively by design. As an OS developer you could design to control what a program is able to do
    Code is basically language, and I am not aware of any language that contains security flaws per se. It is how the language (code) is used that causes the potential for weakness.

    Now, it is conceivable that a coding error could cause a vulnerability. A crude analogy would be a night latch installed the wrong way round, such that the latch mechanism is on the outside of the door. Any passer-by could see that and operate the mechanism to open the door.
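    A software analogue of the reversed night latch might look like the following Python sketch. The unsafe/safe pairing is my own illustration of a coding error, not anything from the thread: the intended function is identical in both versions, but one coding choice exposes the mechanism to whoever supplies the input.

    ```python
    import ast

    # A coding error can expose the "latch mechanism" to the outside world:
    # eval() on untrusted input lets a visitor run arbitrary code, while
    # ast.literal_eval() accepts only plain Python literals. Same intent
    # (parse a value from text); only the coding choice differs.

    def parse_setting_unsafe(text: str):
        # BUG: attacker-controlled text is executed as code.
        return eval(text)

    def parse_setting_safe(text: str):
        # Literals only; anything executable raises ValueError instead.
        return ast.literal_eval(text)
    ```

    Both functions "work" on friendly input, which is why such errors survive testing; only the hostile case reveals the reversed latch.
    
    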

    In the case of applications and OS code, there tends to be rather a lot of it, and the source is frequently not available, which leaves discovering a code-based weakness very much to chance (with a low probability, given the volume and complexity of the code).

    In the case of open source, a coding error would likely be spotted and corrected quickly, which leads me to suggest that one of the factors in your model should be the presence of, and adherence to, coding standards, and the rigor of the testing and QA processes.

    Design faults are a totally different scenario, as the potential attacker can see what the product does and have a good idea of how it works. They can then determine potential holes and probe for them. In this case it does not matter how well the product is coded, as the flaw is in the design.

    Over twenty-five years ago I can remember a colleague commenting: "700 man-years in developing the product, and half a man-day in thinking about security" (it was an IBM product).

    So my conclusion is that design, and a commitment to build in security from the outset is paramount.

    Just my thoughts................

  9. #19
    There are many good points and issues raised here. I think I might be able to structure things a little bit by listing the essential issues that must be addressed:

    1. How "secure" is my application, server, etc.? That is, if its vulnerabilities are attacked, what is the chance that the attack will be successful?

    2. How likely is it that the vulnerabilities will be discovered? In our case this probably doesn't need much discussion, since the vulnerabilities of common operating systems and other software are routinely discussed on the web.

    3. How much do I care? If an attack is successful, what is lost? Will it cost me an hour or two to rebuild/restore with no other consequences, or does the attacker gain the keys to my kingdom?

    4. How much would I have to spend (money, time, explanations to the boss, carping from the users, etc.) to improve protection?

    Ultimate action is based on a balanced combination of three of these (as noted below, #2 can usually be skipped). I really don't care if some systems are compromised: when I see it, I'll fix them or just get rid of the whole thing. Some are worth investing modest effort in, because the cost-benefit for that effort is good. Some I just have to protect or I'm out of business.

    In the context of #1 above, it doesn't matter whether a system has been compromised or not, because the question is "how easy is it?" not "has it been done yet?" Let's skip #2. #3 is a biggie. If I can replace a system or its function in a little while (for example, buy a new PC and restore its operation from a common restore disk), it doesn't make too much sense to spend a lot of time and effort in protecting it. #4 is hard to deal with, because the consequences are not all in dollars, even though the upgrades are. These are big questions and are unlikely to be resolved as a general proposition.
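    The triage described above can be folded into a toy decision rule. The thresholds and numbers are invented, and this is a sketch of the cost-benefit balancing, not a standard formula: the three outcomes mirror "don't care", "modest effort", and "must protect".

    ```python
    # Toy triage rule: compare the expected annual loss of leaving a system
    # as-is against the cost of protecting it. Thresholds are illustrative.

    def annualized_loss(p_compromise_per_year: float,
                        loss_per_incident: float) -> float:
        # Expected loss per year: probability of compromise x cost to recover.
        return p_compromise_per_year * loss_per_incident

    def triage(expected_loss: float, protection_cost: float) -> str:
        if expected_loss < protection_cost / 10:
            return "accept"         # cheap to rebuild; just fix it when seen
        if expected_loss < protection_cost:
            return "modest effort"  # good cost-benefit for some hardening
        return "must protect"       # out of business otherwise
    ```

    The hard part, as the poster notes, is that neither the probability nor the loss is all in dollars, so the inputs to any such rule stay judgment calls.
    
    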
