
Thread: Centralized / Decentralized Reporting

  1. #1

    Centralized / Decentralized Reporting

    I'm putting together something similar to a logging architecture. It's not a network I'm dealing with (not servers, firewalls, routers...) but something a little different. The locations of the devices are secret and very valuable.

    For discussion, let's assume I have 10 devices creating logs. I have the option of pushing those logs to a central point, or leaving the logs on those devices.

    To read the logs, I'd either be:
    Reading the logs locally
    Reading the logs remotely

    Clearly it's easier to secure a centralized point. However, a centralized point of failure is not good either. If the centralized location is discovered, all of the device locations are discovered (in this scenario, the business is destroyed). I figure a decentralized option may have benefits for that reason (if one device's logs are compromised, the locations of the other devices are still hidden).

    By temporarily pulling the logs in to a central location (for the benefits of centralized reporting), I feel I can prevent the disclosure of all the devices by keeping that aggregated information non-existent for as much of the time as possible.
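    Roughly what I have in mind, sketched in Python (the device hostnames, log path, and the scp-based transfer are made-up placeholders, not a final design):

        import shutil
        import subprocess
        import tempfile
        from pathlib import Path

        # Hypothetical device list -- the real locations are exactly what stays secret.
        DEVICES = ["device01.example", "device02.example"]  # ... up to 10

        def build_report(staging: Path) -> None:
            # Stub: summarize whatever the report needs without persisting raw logs.
            for logfile in staging.rglob("*"):
                if logfile.is_file():
                    print(logfile.name, logfile.stat().st_size, "bytes")

        def pull_report_purge() -> None:
            """Pull logs in, build the report, then destroy the aggregated copy."""
            staging = Path(tempfile.mkdtemp(prefix="collect-"))
            try:
                for host in DEVICES:
                    dest = staging / host
                    dest.mkdir()
                    # scp is a stand-in for whatever secured transfer mechanism gets used.
                    subprocess.run(
                        ["scp", "-q", f"{host}:/var/log/honeypot/*", str(dest)],
                        check=True,
                    )
                build_report(staging)   # centralized reporting happens here
            finally:
                shutil.rmtree(staging)  # the aggregated copy only exists briefly

        pull_report_purge()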

    The devices are owned by different third parties and have to meet specifications provided by me to ensure what they report is solid. If they don't meet those specifications, then they can't report in. This creates a high level of distrust, which also points toward the decentralized option (if they mess up, it's their problem... their logs are worthless then and we can continue monitoring the other surviving devices).

    I know this sounds abnormal. But what are the benefits of decentralized logging, and are there any other scenarios you can think of where this is practiced?

    Summary: The devices are honeypots reporting in data from third party organizations. There is a high likelihood that they will be compromised. I want to prevent any discovery of the central logging location.

  2. #2
    Master-Jedi-Pimps0r & Moderator thehorse13 (Washington D.C. area)
    We have a simple and elegant solution to this already in production. We stream the same data to three central locations, if that makes sense. In other words, instead of having devices 1, 2, 3 report here and 4, 5, 6 over there, we simply pipe the data to three locations, none of which have paths or avenues to one another. Hard to explain but easy to see on paper. The theory goes that if the records on all three data collectors don't match up, there is cause for concern.
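    To illustrate the shape of it, a minimal Python sketch, assuming plain UDP syslog and three made-up collector addresses (just the idea, not how the production setup is actually built):

        import socket

        # Made-up collector addresses; none of these hosts can reach one another.
        COLLECTORS = [("10.1.1.10", 514), ("10.2.2.10", 514), ("10.3.3.10", 514)]

        def stream(message: str) -> None:
            """Send the same syslog datagram to every collector."""
            payload = f"<134>{message}".encode()  # facility local0, severity info
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                for addr in COLLECTORS:
                    sock.sendto(payload, addr)
            finally:
                sock.close()

        # A separate cross-check job compares the three record sets; any mismatch
        # between collectors is the cause for concern mentioned above.
        stream("device01: inbound connection from 203.0.113.5")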

    To date, we've had no issues with compromise. It proved valuable when one central server dumped due to hardware failure. We didn't skip a beat.

    fwiw

    --TH13
    Our scars have the power to remind us that our past was real. -- Hannibal Lecter.
    Talent is God given. Be humble. Fame is man-given. Be grateful. Conceit is self-given. Be careful. -- John Wooden

  3. #3
    Computer Forensics
    With a centralized model you will always run the risk of your log collection point being noticed. In fact, it can be another point of honeypotting: setting up a honeypot syslog server can be extremely beneficial.

    What do you do if the logs are cleared from a honeypot and you don't have a centralized location where logs are sent? Honestly, by the time they discover the centralized loghost, you should have enough information collected about the compromise to disconnect the attacker and collect your evidence.

    Your best bet is to maintain a centralized location that is hardened, and leave copies of the logs on the honeypots. This gives you the best of both worlds. It allows you to monitor from the loghost, rather than logging in to a host to monitor its logs. (You think you wouldn't be noticed then? How often do you see an attacker check who is on the machine?)

    Secure the logfile transmission and the loghost, and you should be fine. As for the single point of failure: back up the loghost.
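    A minimal sketch of that "best of both worlds" setup using Python's stdlib logging, with a made-up loghost address (the transport still needs to be secured separately, e.g. TLS at the syslog daemon or a tunnel):

        import logging
        import logging.handlers
        import socket

        LOGHOST = ("10.9.9.9", 514)  # hypothetical hardened central loghost

        logger = logging.getLogger("honeypot")
        logger.setLevel(logging.INFO)

        # One copy stays on the honeypot itself.
        logger.addHandler(logging.FileHandler("/var/log/honeypot/local.log"))

        # The same events also stream to the central loghost over TCP.
        logger.addHandler(logging.handlers.SysLogHandler(
            address=LOGHOST, socktype=socket.SOCK_STREAM))

        logger.info("connection attempt from 203.0.113.5 on port 22")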
    Antionline in a nutshell:
    "You're putting the fate of the world in the hands of a bunch of idiots I wouldn't trust with a potato gun"

    Trust your Technolust

  4. #4
    Horse - If the compromised device has access to all three locations... wouldn't all three locations be at risk? Or are you hoping they wouldn't notice and would ignore the other two locations... causing an exception which would alert you?

    hog - I like your idea about leaving the logs in both places.
    "How often do you see an attacker check who is on the machine?"
    Very, very often.

  5. #5
    AO Ancient: Team Leader
    You could also bring them to a central location and then have a stealthed machine (no services bound to the NIC) running Snort that filters for syslog messages only and logs them. That way the existence of the secondary log system is totally hidden from an attacker; he can change the logs all he likes and you still have the Snort record of every log entry sent. In fact, it would be most beneficial to be able to see what the attacker removed...
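    If you wanted to prototype that passive listener without Snort, a rough stdlib-only Python sketch might look like this (Linux only, needs root, and assumes the capture NIC has nothing bound to it):

        import socket
        import struct

        ETH_P_IP = 0x0800
        sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_IP))

        with open("/var/log/shadow-syslog.log", "ab") as out:
            while True:
                frame, _ = sock.recvfrom(65535)
                ip_header_len = (frame[14] & 0x0F) * 4   # IHL field, after the 14-byte Ethernet header
                if frame[14 + 9] != 17:                  # IP protocol 17 = UDP
                    continue
                udp_offset = 14 + ip_header_len
                dest_port = struct.unpack("!H", frame[udp_offset + 2:udp_offset + 4])[0]
                if dest_port != 514:                     # only syslog traffic
                    continue
                out.write(frame[udp_offset + 8:] + b"\n")  # raw syslog payload
                out.flush()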
    Don't SYN us.... We'll SYN you.....
    "A nation that draws too broad a difference between its scholars and its warriors will have its thinking done by cowards, and its fighting done by fools." - Thucydides

  6. #6
    Elite Hacker
    I don't know if we're talking *nix or Windows here (probably both), but it seems like in *nix you should be able to write a kernel module that will log to the remote server, and hide all indications of it doing so. If this is the case, you could have the logs on the central log server and the honeypots, and if all goes well the attacker will never know about the central log server.

  7. #7
    Master-Jedi-Pimps0r & Moderator thehorse13 (Washington D.C. area)
    We offload data to "middle man" collection points and then forward the data on from there. For instance, our perimeter firewall dumps syslog data to three different hosts, which then forward it to the central collectors. Because of routing, ACLs and layered security, there is no way that all three sources can be compromised.
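    The relay itself can be very small. A Python sketch of one middle-man forwarder, with made-up addresses (in practice the syslog daemon does the forwarding, and the routing and ACLs between the tiers do the real work):

        import socket

        LISTEN = ("0.0.0.0", 514)                    # where the edge device dumps syslog
        CENTRAL_COLLECTORS = [("172.16.1.10", 514),  # made-up central collector addresses
                              ("172.16.2.10", 514)]

        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv_sock.bind(LISTEN)
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        # Forward every datagram unchanged; the collectors never see the edge directly.
        while True:
            data, _src = recv_sock.recvfrom(65535)
            for collector in CENTRAL_COLLECTORS:
                send_sock.sendto(data, collector)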

    Administrative overhead isn't as bad as you may think. The only thing that takes a small hit is bandwidth and processing.

    --TH13
