
Thread: Need Help with Research into IS Security Failures

  1. #1
    Junior Member
    Join Date
    Jul 2003
    Posts
    4

    Need Help with Research into IS Security Failures

    I'm conducting research into IS security failures and countermeasures for my MSc dissertation at South Bank University, London.
    The main part of this research is a survey of UK-based companies, establishing the number, underlying reasons, and impact of security failures, plus an overview of the employee-facing security measures and their efficacy.
    This is done under the premise that most security failures have their root in humans not following the rules and procedures (i.e. a failure at the human-computer interface of the IS), and not in technological failures or malicious intent. Note that this does not mean technology is unimportant; quite the contrary. It means that technology is worthless if people don't apply or use it correctly (a simple example: ever seen a password on a Post-it note stuck to the monitor?).

    What I'm looking for are UK-based IS security managers who would be willing to spend an hour or two contributing to this research, in return for an external high-level review of their security practices and feedback on how well they work. I will not need access to your systems, technical setup details, or any other confidential information that could jeopardize the security of your systems if it got out.

    Please contact me if you are interested! I'd also be happy to discuss the topics indicated above on this list if anyone would like to hear more or has experiences to the contrary.

    Thanks

  2. #2
    Banned
    Join Date
    May 2003
    Posts
    1,004
    Hmm, actually the majority of IS security technology is flawed and does lead to a good number of failures. The following paper was put out by the NSA and discusses this point:

    "The Inevitability of Failure:
    The Flawed Assumption of Security in Modern Computing Environments"

    http://www.cs.utah.edu/flux/fluke/ht...vitability.htm

    I am not UK based, but I am a sr. risk manager and I am very familiar with BS7799, so if you are unable to get anyone more local, feel free to let me know.

    catch

  3. #3
    Junior Member
    Join Date
    Jul 2003
    Posts
    4
    Thanks for the link!

    Originally posted here by catch
    Hmm, actually the majority of IS security technology is flawed and does lead to a good number of failures.
    Point taken - but aren't these flaws the result of either human failure (e.g. the coder) or of management failure (e.g. a focus on features instead of secure system development)?
    Not that you or I as humble IS managers can do anything about this, of course. So we resort to security policies and procedures to work around the fact that our IT is fallible. All failures you encounter now are again either human failures (e.g. a patch not applied, despite a procedure telling you to do so) or management failures (forgetting to put a procedure in place, or deciding not to based on a risk/value assessment). Frameworks like BS7799 help us think of the most common areas to evaluate and address.
    I know it's not the most common viewpoint - what's anyone's feeling on this?

    In regards to the research - I'm not stuck on UK-only participation; the only issue is that all the face-to-face work would need to be replaced by other means (email, phone, instant messaging, etc.). Get in contact if you're interested, and I'll send you additional information on what I'm looking to do in detail.

  4. #4
    Originally posted here by mthierst
    Point taken - but aren't these flaws the result of either human failure (e.g. the coder) or of management failure (e.g. a focus on features instead of secure system development)?
    Well, if it's going to be taken that far back, then of course it's human failure... you need to establish a base state somewhere along the chain as a starting point, and then evaluate from there what activities should have been performed.

    Perhaps have two categories, one for in-house software and one for off-the-shelf software. Off the shelf, you could probably have the base state as 'as purchased'. Inhouse of course would start much further back, at design level, where it would almost always be human error.

  5. #5
    AntiOnline Senior Member souleman
    Join Date
    Oct 2001
    Location
    Flint, MI
    Posts
    2,883
    Every security failure can be traced back to human failure. For some, it's a matter of the admin not patching the system. For others, the original coder did a poor job of writing secure code. And for still others, some of the older processors and other hardware had flaws because of their human designers.

    Though you could say that sometimes the flaws are there on purpose. Sometimes it's not a flaw but a back door, in which case it's not a "human error". When ssh was trojaned, the people who applied updates installed the back door. It wasn't actually human error - they did what they were supposed to do. But the error was still caused by humans.

    Now I guess, on a rare occasion, it could be a failure based on something else, i.e. problems with power or something, but that is very, very unusual.
    "Ignorance is bliss....
    but only for your enemy"
    -- souleman

  6. #6
    Junior Member
    Join Date
    Jul 2003
    Posts
    4
    Originally posted here by souleman
    Though you could say that some times the flaws are there on purpose. Sometimes its not a flaw but a back door, in which case its not a "human error". When ssh was trojaned, people that made updates installed the back door. It wasn't actually human error, they did what they were supposed to do.
    Besides the cases of 'human error', where errors occur in the underlying technology or in the application of its usage procedures, there is another class of error, similar to the one pointed out by souleman - management error. It might be a deliberate decision by management to focus on system features rather than security (which OS comes to mind?), or it could be a wrong risk assessment or implementation leading to the decision not to address a certain vulnerability for (incorrect) business reasons, or it can be a simple oversight - forgetting to do something (e.g. monitoring of security procedure compliance).
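
    The root-cause categories discussed across this thread (human error, management error, technology flaw, environmental events) could be sketched as a simple decision rule. This is purely an illustrative sketch - the category names and the decision logic are my own assumptions for this thread's discussion, not part of BS7799 or any formal taxonomy:

    ```python
    # Illustrative toy taxonomy of the root-cause buckets discussed in this
    # thread. Names and logic are assumptions for illustration only, not
    # drawn from BS7799 or any formal standard.

    def classify_failure(procedure_exists: bool,
                         procedure_followed: bool,
                         external_event: bool = False) -> str:
        """Assign a security failure to one of the thread's root-cause buckets."""
        if external_event:
            return "environmental"   # e.g. power loss - the rare case
        if not procedure_exists:
            return "management"      # nothing was put in place to follow
        if not procedure_followed:
            return "human"           # rule existed but was ignored
        return "technology"          # everything done right, tech still failed

    # Example: patch not applied despite a patching procedure -> human error
    print(classify_failure(procedure_exists=True, procedure_followed=False))
    ```

    The point of the sketch is the ordering: under the thread's premise, a failure only counts as a pure technology flaw after procedural and management causes have been ruled out.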
