Thread: Ethical Disclosure

  1. #1

    Ethical Disclosure
    (at least how I do it)
    by Soda_Popinsky

    Introduction
    When one comes across a vulnerability, they can do one of four things:
    • Full Disclosure
    • Ethical Disclosure to vendor/author
    • No Disclosure
    • 0-Day Exploitation of vulnerability

    This is the process I use for disclosing a vulnerability, and it's pretty simple. I believe that the first party to receive technical information about a vulnerability should be the vendor, author, or service responsible for creating it.

    Discovery

    First off (after I discover a vulnerability) I confirm that the vulnerability actually exists, so I don't waste anyone's time or ruin my own credibility with a false alarm (a quick sanity-check sketch follows the list below). Afterwards, I make two contacts:
    • Vendor or author of the asset with vulnerability
    • Secunia
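
    On that confirmation step: here's a minimal, hypothetical Python sketch of the kind of non-destructive check I mean. The target URL, the "Server" banner format, and the version list are all made up for illustration; the point is just to rule out a false alarm before anyone gets an email.

    Code:
    # Hypothetical sketch: sanity-check a suspected vulnerability without
    # exploiting anything, by comparing a version banner against versions
    # believed to be affected. Target and versions are placeholders.
    from urllib.request import urlopen

    TARGET = "http://app.example.com/"        # hypothetical target
    VULNERABLE_VERSIONS = {"1.0.2", "1.0.3"}  # hypothetical affected versions

    with urlopen(TARGET, timeout=10) as resp:
        banner = resp.headers.get("Server", "")  # e.g. "ExampleApp/1.0.2"

    version = banner.partition("/")[2]
    if version in VULNERABLE_VERSIONS:
        print(f"Banner reports affected version {version}; worth confirming further.")
    else:
        print(f"Banner reports {version!r}; this may be a false alarm.")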

    Why contact Secunia? Because there's always the chance that the author or vendor won't have the capacity to handle (or give a **** about) the vulnerability. I make contact with Secunia so they can use their muscle (aka an @secunia.com email address) to convince a vendor to get the wheels turning on a fix. This doesn't happen as much as you might think, however.

    I give everything regarding the vulnerability over to Secunia, and they then hold on to the vulnerability information and delay the advisory until I give them the go-ahead to release it. Secunia could obviously be replaced by another advisory service (like iDefense, if you want to whore it out), though not by something like the BugTraq or Full Disclosure mailing lists, and I can't vouch for anyone else's willingness to delay an advisory the way Secunia does. I delay the advisory, and I then move on to the vendor...

    Dealing with the Vendor

    If you are looking for a place to contact the vendor, first browse the website for any email address that looks at all relevant (developers, security) or general (info, general, contactus@). If that's a no-go, whois their domain and look for the abuse contact (usually abuse@vendor.com):

    http://www.whois.sc
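
    If digging through whois records by hand gets old, a few lines of Python will do it for you. A rough sketch, assuming the standard whois command-line tool is installed; the domain itself is a placeholder:

    Code:
    # Sketch: pull candidate contact addresses out of a domain's whois
    # record. Assumes the `whois` command-line tool is installed; the
    # domain is a placeholder.
    import re
    import subprocess

    domain = "vendor.example.com"  # hypothetical vendor domain
    record = subprocess.run(["whois", domain],
                            capture_output=True, text=True).stdout

    # Grab anything that looks like an email, preferring abuse@/security@.
    emails = set(re.findall(r"[\w.+-]+@[\w.-]+\.\w+", record))
    for addr in sorted(emails,
                       key=lambda a: not a.startswith(("abuse@", "security@"))):
        print(addr)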

    Be prepared to communicate over PGP-encrypted email. If you don't have a key, get one (this isn't a public key tutorial; you figure it out).

    http://www.pgp.com/
    http://www.gnupg.org/
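
    For what it's worth, once keys are set up, encrypting and signing a report is one call to the gpg command line. A minimal sketch, assuming GnuPG is installed, you have a secret key, and the vendor's public key is already imported; the recipient address and filename are placeholders:

    Code:
    # Sketch: encrypt and sign a disclosure report for the vendor's key
    # using the gpg command line. Assumes GnuPG is installed, you have a
    # secret key, and the vendor's public key is imported; recipient and
    # filename are placeholders.
    import subprocess

    subprocess.run([
        "gpg", "--armor",
        "--sign",                             # sign with your default key
        "--encrypt",
        "--recipient", "security@vendor.example.com",
        "advisory.txt",                       # writes advisory.txt.asc
    ], check=True)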

    If possible, ask them to sign their emails so they can't deny that you contacted them (you might be talking to a help desk crony who doesn't have a clue). Just a precaution for later. Send them whatever is needed (PoC code, text, etc.) for them to understand the vulnerability once they ask for it... the quicker they understand, the quicker the response.
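
    Checking a signed reply is one more gpg call. Another small sketch under the same assumptions (GnuPG installed, vendor's public key imported; the filename is a placeholder):

    Code:
    # Sketch: verify a signed reply so the vendor can't later deny the
    # exchange. Same assumptions as above; the filename is a placeholder.
    import subprocess

    result = subprocess.run(["gpg", "--verify", "reply.txt.asc"],
                            capture_output=True, text=True)
    # gpg reports verification details on stderr and exits non-zero
    # if the signature is bad or missing.
    print(result.stderr)
    print("Signature OK" if result.returncode == 0 else "Signature BAD/missing")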

    If they ask for it, they imply they are in a position to deal with it. Don't just randomly send off PoC exploits to all the email addresses you can find at a vendor.

    If the vendor is cooperative, then delay the release of the advisory until there is an official fix available. Ask them for this time frame and offer your help in the development of the fix.

    There is always the possibility that they won't understand the threat, won't have a functioning bug reporting system, or are just straight up unresponsive... In that case...

    ...They Don't Care!

    This is where our contact at Secunia/wherever is useful. After the initial email, I'll give the vendor two weeks to respond. I'll then tell Secunia that they aren't responsive, and Secunia will also attempt to make the contact (they have connections).

    Two weeks later, I'll tell Secunia to (or Secunia will themselves) release the advisory and the news will travel to Full Disclosure or Bugtraq soon after.
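
    For the record, the timeline above is nothing fancy; a few lines of Python will keep track of it. A sketch, with the two-week windows from this post hard-coded:

    Code:
    # Sketch of the timeline described above: two weeks for the vendor to
    # respond, two more before the advisory goes public. Dates count from
    # whenever the first contact email goes out.
    from datetime import date, timedelta

    first_contact = date.today()
    escalate_to_secunia = first_contact + timedelta(weeks=2)
    advisory_release = escalate_to_secunia + timedelta(weeks=2)

    print("First contact:       ", first_contact)
    print("Escalate to Secunia: ", escalate_to_secunia)
    print("Advisory goes public:", advisory_release)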

    Credit
    Getting credit for published advisories is common. For the infosec analyst, an advisory isn't worth much beyond childish IRC/forum/classroom street cred or whatever... unless it's handled responsibly. It has landed me a few jobs, and I'm glad I've been handling them the way I have. It shows character and proves that an individual is worth trusting.

    Conclusion
    This process lowers the amount of exploitation to a level that's acceptable to my own conscience, allows the vendor to respond at their own rate, and gives sysadmins full details of the vulnerability when an official fix is ready (as opposed to dirty hacks, like we saw with the WMF exploit).

    Web services (such as AntiOnline) are tough... because users can't apply fixes themselves. I don't have the answers for that, but I've been lucky not to have had to deal with an unresponsive provider in that area.

    That's all I got. I'm sure there will be discussion, considering the events of the past week or so. Regardless, I'm selfish and have a huge ego, so I prefer this method for my own reputation.

    Ain't that right Neg?

  2. #2
    Senior Member
    Join Date
    Dec 2003
    Location
    Pacific Northwest
    Posts
    1,675
    Good Stuff Soda,

    Demonstrates responsibility and class. At the same time, you can put them on the spot to at least acknowledge their problem and give them a reasonable opportunity to fix it. And if they don't..... here comes the pain

    cheers
    Connection refused, try again later.

  3. #3
    Junior Member
    Join Date
    Jan 2006
    Posts
    25
    How did you decide that this method is ethical, or more ethical than other methods?

    I would say the opposite is true: in a free market environment, this type of disclosure does more harm than good. It protects vendors and encourages the idea that people who do not help protect sloppy vendors are somehow unethical.

    By punishing these vendors and forcing their hand in replying to widely known unpatched vulnerabilities, you will change the cost structure. Perhaps someday vendors will be held accountable for making such faulty products, if enough major incidents occur and the blame is placed where it should be and not shifted to "unethical" disclosure. Companies will learn it is cheaper to use better software development methods, and hopefully one day better software design methods and software configuration mechanisms.

    To make this clearer, one must realize that just because a patch is available does not mean that all installations can use it. Sometimes unusual or obscure but good software can fail when patches are applied. So who are you helping? A select few organizations that acquire all of their software from McCoders? This damages diversity and ultimately damages the overall market.

    Considering how many software flaws can be prevented from becoming vulnerabilities through the proper application of a good security policy, the discontinuation of "ethical" disclosures would guide organizations and users alike to be more reliant on proper configuration and less reliant on inevitably flawed applications.

  4. #4
    Hi MS

    How did you decide that this method is ethical, or more ethical than other methods?
    From a utilitarian perspective... here's why everyone is happy:
    1. Exploits / Worms have the least chance of success (non-victims are happy)
    2. Reliable forms of patching are available (sysadmins are happy)
    3. Advisory author is not held responsible for aftermath of early advisory (happy analyst)

    I would say the opposite is true: in a free market environment, this type of disclosure does more harm than good.

    It protects vendors and encourages the idea that people who do not help protect sloppy vendors are somehow unethical. By punishing these vendors and forcing their hand in replying to widely known unpatched vulnerabilities, you will change the cost structure.
    By that logic, when you release an advisory... you are punishing the users of that vendor. They are then forced to rely on half-baked patches until an official patch is ready.

    Do you develop software? How familiar are you with software development processes? Are you saying that maintenance should be punished?

    To make this clearer, one must realize that just because a patch is available does not mean that all installations can use it. Sometimes unusual or obscure but good software can fail when patches are applied. So who are you helping? A select few organizations that acquire all of their software from McCoders? This damages diversity and ultimately damages the overall market.
    "Unusual or obscure"? Sounds more like "unsupported". If software isn't supported in an enviroment, expect it to fail... If a company creates bad patches, then they're punishing themselves... Let bad software be bad software, just don't let it effect the innocent users of that software.

    Considering how many software flaws can be prevented from becoming vulnerabilities through the proper application of a good security policy, the discontinuation of "ethical" disclosures would guide organizations and users alike to be more reliant on proper configuration and less reliant on inevitably flawed applications.
    If you think improving software quality by assisting attacks against a vendor's users is ethical, then I will have to disagree. It's like blackmail; that's what immediate full disclosure comes down to in my mind. Sure, proper policy may make application-level security somewhat pointless, but in releasing an advisory prematurely you are putting others at risk.

    If you're going to punish a vendor, I don't agree it should be done by assisting the exploitation of its users so it loses market share. Fine 'em or something...

    Unless of course, they ignore the disclosure. Then by all means hop on a soapbox.

  5. #5
    Senior Member
    Join Date
    Jan 2003
    Posts
    3,915
    Hey Hey,

    I'd have to say that I'm sitting on the fence with this issue. After all, wasn't the old slogan here "Hackers Know the Weaknesses to Your System... Shouldn't You?".. How often have we seen MS release patches for vulns that were reported to them months and months ago.. Even if the vendor responds to you, it doesn't mean they'll take action in a timely manner... Also, someone else could be holding that same vuln, ready to exploit it... this means that people won't be prepared...

    I'd also have to say that I don't agree with the name you chose... "Ethical Disclosure"... A lot of people think it's unethical to provide anything other than Full Disclosure... I'd say call it "Responsible Disclosure" but even that.. some people think it's irresponsible not to provide Full Disclosure... However I think that Responsible is a better term... for this reason.. I, ethically speaking, think that a vuln should be released under the aspect of Full Disclosure... however I wouldn't because I think it would be irresponsible to do that... The responsible thing is to give the vendor time to prepare a response... limited time... but at least a time frame..

    However, as long as you're only disclosing the vuln it's all good.. it's when you start immediately releasing a PoC that the problems start.. A lot of times you'll see companies say that they won't provide a PoC for 3 months from report time, or from patch release.. or sometimes just from a public announcement, if they're Full Disclosure types..

    I think that as long as you don't give the details, you're being responsible... Prime example.. a problem with a certain chipset of wireless card looking for randomly named APs.. and it's the driver that causes it to do this.. this was just reported... However the name of the chipset and the names of the APs were not released... however the vendor was also contacted directly.. It is Full Disclosure.. but it's done responsibly..

    It's like releasing the information with your own fix or a 3rd-party patch.. as long as you can do that, you're being responsible... you're providing protection.. Even if it's just some Snort filters and a firewall rule...

    Then again, sometimes the company just needs a kick in the ass... How many companies have vulns reported to them all the time and never do anything about them? Sometimes you just have to go public in a way that will make them look bad because of their lack of response in the past..

    again... these are just my opinions... but I'm entitled to them.

    Peace,
    HT

  6. #6
    Junior Member
    Join Date
    Dec 2005
    Posts
    13

    Double Edged Sword

    A little history about Christopher Columbus: after "discovering" the New World, he was back in the queen's court when a detractor/heckler was minimizing and deriding his accomplishment. Columbus asked for an egg and then asked that this person make the egg stand on end. The man made several attempts, to no avail. Columbus then took the egg and very gently tapped it on one end, cracking the shell enough to allow the egg to stand. Columbus's comment (paraphrased): "Once accomplished, anyone can do it."

    That statement holds true for PoCs/vulns. Once the vulnerability is published, you see an increase in the number of attempts to exploit it.

    I believe the most responsible and ethical thing that can be done is to allow a company a little time to fix/patch it. If they are unresponsive, publish a Proof of Concept. I know there are nuances to producing a fix, and that some solutions cannot be delivered very quickly. Sometimes, however, a swift kick is required, e.g. the recent WMF exploit.

    Is it ethical for a company to release a fix on their own schedule? I was greatly pleased to see public pressure get the WMF fix released ahead of schedule. Unfortunately, there are vulnerabilities out there that the companies have known about for some time. The only way they will be fixed is if public pressure is brought to bear.

    Hence the double-edged sword: the script kiddies will immediately start exploiting it, but if I don't know about it, then I cannot take any measures to mitigate or reduce the risk involved.

    For me, the ethical thing to do is give me the knowledge so that I may make an informed decision on my course of action.
    Epitaph: What lies here beneath is just the shell; the nut is gone.

  7. #7
    Junior Member
    Join Date
    Jan 2006
    Posts
    25
    By that logic, when you release an advisory... you are punishing the users of that vendor. They are then forced to rely on half-baked patches until an official patch is ready.
    Your logic is short-sighted.

    Yes, users will be punished in the short term, and that is unfortunate, but it is the only way to bring about change. Your "ethical" approach only empowers the status quo by protecting vendors and users of bad software who most likely did not establish a good security policy. The fault for damages from vulnerabilities is redirected to unethical disclosure and hackers, and the finger is never pointed back at the vendor for providing vulnerable software or at the users for not securing the software properly.

    The only time a vendor catches heat is if they fail to respond swiftly to a published flaw. This is the wrong approach. The public fails to grasp that the software has been vulnerable since the flaw was introduced, not since it was published. If a vendor responds swiftly to a disclosure, the public takes a "no harm, no foul" approach, which contains no push for change.

    Do you develop software? How familiar are you with software development processes? Are you saying that maintenance should be punished?
    I am too young to develop software professionally, but I do for school. I am familiar with the waterfall, V&V, and spiral development process models. I am also familiar with the COCOMO models, the function point model, and IDEAL and CMMI. I am not saying that maintenance should be punished. I am saying that by putting the effects of the flaws directly onto the public, the resulting loss of customers will force companies to change their approach to security. Only when it becomes more expensive to have flaws than not to have flaws will this happen.

    "Unusual or obscure"? Sounds more like "unsupported". If software isn't supported in an enviroment, expect it to fail... If a company creates bad patches, then they're punishing themselves... Let bad software be bad software, just don't let it effect the innocent users of that software.
    I would say "unsupported" is too strong a word. It is unacceptable how many patches alter the way some elements of affected APIs work. If this were not the case, testing before applying patches would not be required.

    Users, innocent or not, must be punished. You are providing short-term comfort and enabling a long-term problem. Vendors have no incentive to stop building so much software that requires patches. Vendors have no incentive to make security configuration guides more available to users. The only way to create a cost is to hurt the customers. How is this different from any other industry? By protecting vendors from their faults, you further bad software.

    If you think improving software quality by assisting attacks against a vendor's users is ethical, then I will have to disagree. It's like blackmail; that's what immediate full disclosure comes down to in my mind. Sure, proper policy may make application-level security somewhat pointless, but in releasing an advisory prematurely you are putting others at risk.
    I think the long-term advantages of forcing tighter competition on quality, and not just features, outweigh the immediate damage caused by not protecting the bad vendors.

    If you're going to punish a vendor, I don't agree it should be done by assisting the exploitation of its users so it loses market share. Fine 'em or something...
    Customers won't decide with their checkbooks that they demand better software unless they are forced to. The presence of open source software and the international nature of the situation prevent fining from being applicable at this time. Also, the mindset that exploits are the fault of unethical disclosure provides vendors a scapegoat to avoid being fined.

  8. #8
    I have to take sides with Soda on this. The main problem with the "immediate release" approach championed by MS Security is that it assumes 20-20 (or better) foresight by all application and OS developers.

    The problems encountered, vulnerabilities found, and such aren't 100 percent avoidable during development, because our crystal balls are not always perfectly polished. Much of an application's original development is done a year or more before release. Sometimes the application is used for years before a vulnerability is found. Buffer overflows, sloppy type checking, and development methodology aside, sometimes someone finds a vulnerability in a part of an application or OS that no one else thought of before.

    Sometimes, something that wasn't a target before becomes a target. Vulnerabilities that were there for years suddenly come to the surface because more attention is paid to the product.

    Forcing adoption of adequate security policies (which I think is overly simplistic as a solution) would not be the end result of the type of disclosure you posit, and it wouldn't be realistic. Take a look at the varying approaches to security policy, security methods, security programs and standards. What works in one place may or may not work in another. Besides, the majority of the developers/vendors have security policies and practices in place and it is within this model that Soda is trying to work.

    I get the feeling you think that out of chaos will come order, MS. Keep in mind that it is from the chaos of the early years of information technology that we have emerged into the chaos of today. Nor will customers make valid purchasing decisions based on the security policies implemented by the vendor of their software (assuming they understand those policies at all). The collateral damage is likely too high, and the resulting instability in the industry may cause worse problems than we have now.
