
Thread: Fast Packet Capturing

  1. #1
    Senior Member
    Join Date
    Jun 2003
    Posts
    236

    Fast Packet Capturing

    I'm trying to implement some fast packet capturing mechanisms.
    I am using Snort (whose pattern-matching functionality I believe is the main bottleneck)
    and a modified libpcap.
    Some of the things I am reading say to have an additional NIC dedicated to capturing packets. The card is supposed to be IP-less and unable to transmit packets.
    I've got the IP-less part, but I cannot work out how to configure a card so that it does not transmit packets. When I rebuild the kernel (2.6.6) I cannot find anything for this under network options or anywhere else.
    Can anyone shed light on this?

    Also, I am looking at maybe trying real-time Linux (RTLinux), and I was wondering if anyone knew anything about improved performance with it. Since Snort has been the bottleneck, I am assuming it would run with the highest priority and be non-preemptible, but will that really make a difference?

    Any other ideas on speeding up packet capturing and processing would be greatly appreciated. So far I am limited to what I can use with Snort. I have managed to get it not to drop packets at up to about 300 Mb/s, but I really want to break 500 Mb/s.

    Any other technologies I should look at?

    Thanks
    That which does not kill me makes me stronger -- Friedrich Nietzsche

  2. #2
    Senior Member
    Join Date
    Jan 2002
    Posts
    1,207
    Setting up an interface to be IP-less and to transmit no packets is tautological - the first already gives you the second.

    An interface which is "up" but isn't configured with IP or another protocol (say IPX, or AppleTalk DDP) should never transmit any packets unless an application explicitly sends them out of it. And Snort won't (nor should anything else).
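
    A minimal libpcap sketch of that point, just for illustration ("eth1" is a placeholder for the dedicated capture NIC): the handle is opened in promiscuous mode and only ever read from, so nothing goes out on the wire unless the program explicitly calls something like pcap_sendpacket(), which a sniffer never does.

    Code:
    #include <pcap.h>
    #include <stdio.h>

    /* Capture-only use of an interface: it can be "up" with no address
     * bound to it; libpcap just reads frames off it. */
    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* "eth1" = the dedicated, IP-less capture NIC (placeholder name) */
        pcap_t *h = pcap_open_live("eth1", 1514, 1 /* promiscuous */, 500, errbuf);
        if (h == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        struct pcap_pkthdr *hdr;
        const u_char *data;
        int i;
        /* Read a few frames.  We never call any send/inject function,
         * so this process puts nothing on the wire. */
        for (i = 0; i < 10; i++) {
            if (pcap_next_ex(h, &hdr, &data) == 1) {
                (void)data;   /* payload unused in this sketch */
                printf("captured %u bytes\n", hdr->caplen);
            }
        }
        pcap_close(h);
        return 0;
    }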

    As far as optimising throughput is concerned, my best guesses would be:

    1. Recompile Snort with absolutely every processor-specific optimisation turned on. Link it statically.
    2. Optimise your Snort rules as much as possible.
    3. Disable swap (get enough memory first) - see the sketch after this list.
    4. Ensure that logging is not blocking Snort, i.e. make sure there are no I/O bottlenecks on the logging (hopefully you won't see so many intrusions that it matters).
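
    A small illustration of the swap point (not something Snort does out of the box, just a sketch): rather than disabling swap system-wide, the capture process itself can be locked into RAM with mlockall(), so it can never be paged out even if the box does have swap configured.

    Code:
    #include <sys/mman.h>
    #include <stdio.h>

    /* Lock the calling process's current and future pages into RAM so the
     * capture/analysis process can never be paged out to swap.
     * Needs root (or CAP_IPC_LOCK). */
    int main(void)
    {
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }
        printf("process memory locked; it will not be swapped out\n");
        /* ... capture/analysis work would go here ... */
        return 0;
    }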

    But it's really anybody's guess.

    I wouldn't bother with realtime scheduling; you will only see a small change, and you could end up starving other tasks of the CPU they need in order to work - unless it's a multiprocessor system, in which case you could confine it to just one CPU.

    On the other hand, if it's a multi-CPU system it might be worth running several copies of Snort with different rulesets to share the load between CPUs (I don't know whether Snort can do multithreading natively; if not, this is a way around it). A sketch of the priority and CPU-pinning bits follows below.
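
    For what it's worth, a stock 2.6 kernel can already do both of these without RTLinux - roughly like this (just a sketch; the priority value and CPU number are arbitrary, and both calls need root):

    Code:
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Give the current process realtime (SCHED_FIFO) priority and pin it
     * to CPU 0, leaving the other CPU(s) free for the rest of the system. */
    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };   /* 1-99, arbitrary */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                                  /* CPU 0 only */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
            perror("sched_setaffinity");

        /* ... exec or run the capture workload from here ... */
        return 0;
    }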

    Slarty

  3. #3
    AO Ancient: Team Leader
    Join Date
    Oct 2002
    Posts
    5,197
    As Slarty says, no protocols bound to the interface = no packets transmittable. Libpcap will open the interface in promiscuous mode and listen away quite happily. Look at the interface stats after an hour and the worst you will see is one packet transmitted, and that usually seems to get sent at startup on Win32.

    I'm not sure how you get a bottleneck with Snort other than the theoretical one - the app _has_ to do something, therefore it is a bottleneck. On a 1 GHz Pentium with 256 MB of RAM and Windows 2K as the OS, watching the internal interface of my primary Snort box, Snort processes some 1500 packets per second at a maximum CPU usage of 8% with an extensive rule set. With the OS, two Snort engines, syslog logging from some 20 remote locations, and a couple of other packet capture/analysis apps running too, the box itself (at rest, i.e. when I'm not doing anything on it myself) rarely exceeds 20% total CPU. If I run an ad hoc log analysis, Snort does not drop any packets even though the CPU usage of the analysis engine exceeds 95%. Snort is beautifully written to buffer the incoming packets rather efficiently in memory if it can or drop it to disk if absolutely necessary.

    Unless you are using hardware that is inadequate for the network traffic you are trying to monitor I really don't see how you are experiencing a bottleneck.
    Don't SYN us.... We'll SYN you.....
    "A nation that draws too broad a difference between its scholars and its warriors will have its thinking done by cowards, and its fighting done by fools." - Thucydides

  4. #4
    Senior Member
    Join Date
    Jun 2003
    Posts
    236
    Thanks Slarty,
    I've done 2 of your 4 recommendations. I wasn't aware there were compile-time options for the preprocessors; I had only been using runtime options to improve performance. Do you have a link to info on disabling swap?

    Tiger_Shark,
    I think it's pretty well known that the pattern-matching facility in Snort is a major bottleneck. I've read several papers about it, but I found the white papers here to have the best technical info on it:
    http://www.ist-scampi.org/
    Also,
    http://www.ntop.org
    has some info on packet capturing speeds in general.

    I am also running on a GigE network. Snort works fine on a 100 Mb/s network, but once you exceed that with a default Snort you will start dropping packets. Also, Snort actually reports the captured traffic incorrectly (I should report this to the devel list but I'm too lazy).
    When I saturate my network with about 800 Mb/s of traffic, Snort will hit 100% CPU. I was getting about 80-90% packet loss with the default Snort and libpcap, but with kernel-level modifications (i.e. drastically reducing interrupts) I have been able to reduce this to about 10%, and I am trying to get it to 0%. My hardware is rather nice too: dual P4 Xeons with 2 GB of RAM running a Debian 2.6 kernel. I am using Intel GigE cards with MMAP polling and NAPI enabled.
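
    For anyone following along, the "MMAP polling" side of this is the kernel's PACKET_RX_RING: the kernel writes frames into a ring buffer that is mmap()ed straight into the capture process, so there is no per-packet recvfrom() copy. A rough, stripped-down sketch of the TPACKET (v1) setup on a 2.6 kernel - error handling omitted, the interface name and ring sizes are just placeholders, and this is not the modified libpcap itself:

    Code:
    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <net/if.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>
    #include <poll.h>

    int main(void)
    {
        /* Raw packet socket capturing every protocol */
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        /* Ask the kernel for a shared RX ring (sizes here are arbitrary) */
        struct tpacket_req req;
        req.tp_block_size = 4096;                  /* one page per block      */
        req.tp_block_nr   = 64;                    /* 64 blocks = 256 KB ring */
        req.tp_frame_size = 2048;                  /* 2 frames per block      */
        req.tp_frame_nr   = 64 * 2;
        setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

        /* Map the ring into this process: frames land here without a
           per-packet copy into userspace */
        unsigned char *ring = mmap(NULL, req.tp_block_size * req.tp_block_nr,
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* Bind to the dedicated capture NIC ("eth1" is a placeholder) */
        struct sockaddr_ll ll;
        memset(&ll, 0, sizeof(ll));
        ll.sll_family   = AF_PACKET;
        ll.sll_protocol = htons(ETH_P_ALL);
        ll.sll_ifindex  = if_nametoindex("eth1");
        bind(fd, (struct sockaddr *)&ll, sizeof(ll));

        unsigned int i = 0;
        for (;;) {
            struct tpacket_hdr *hdr =
                (struct tpacket_hdr *)(ring + i * req.tp_frame_size);
            if (hdr->tp_status & TP_STATUS_USER) {
                /* Frame data starts at (unsigned char *)hdr + hdr->tp_mac;
                   hand it to the analyser here. */
                printf("frame of %u bytes\n", hdr->tp_len);
                hdr->tp_status = TP_STATUS_KERNEL;   /* give the slot back */
                i = (i + 1) % req.tp_frame_nr;
            } else {
                struct pollfd pfd = { .fd = fd, .events = POLLIN };
                poll(&pfd, 1, -1);    /* sleep until the kernel fills a slot */
            }
        }
    }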

    -- snip --
    Snort is beautifully written to buffer the incoming packets rather efficiently in memory if it can or drop it to disk if absolutely necessary.
    -- snip --
    I've been through the Snort code extensively lately, and I cannot find anywhere that it saves packets to disk when running in NIDS mode. Nor do I see it buffering packets anywhere in memory. From what I have determined, it uses libpcap to grab the packets and processes a single packet all the way through before grabbing another one.
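
    That matches the standard libpcap processing model: you register one callback and pcap_loop() hands it exactly one packet at a time, so nothing is queued at the application layer beyond whatever the kernel/libpcap buffer already holds. Roughly like this (the callback name is made up; only the libpcap calls are real):

    Code:
    #include <pcap.h>
    #include <stdio.h>

    /* Stand-in for a per-packet path (decode -> preprocess -> detect).
     * The function name is invented; only the libpcap API here is real. */
    static void process_packet(u_char *user, const struct pcap_pkthdr *hdr,
                               const u_char *pkt)
    {
        (void)user; (void)pkt;   /* unused in this sketch */
        /* Each invocation sees exactly one packet and must return before
         * libpcap delivers the next one. */
        printf("got %u of %u bytes\n", hdr->caplen, hdr->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *h = pcap_open_live("eth1", 1514, 1, 500, errbuf);  /* placeholder NIC */
        if (h == NULL) {
            fprintf(stderr, "%s\n", errbuf);
            return 1;
        }

        /* Blocks, pulling packets and invoking the callback one at a
         * time until an error occurs or pcap_breakloop() is called. */
        pcap_loop(h, -1, process_packet, NULL);
        pcap_close(h);
        return 0;
    }
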
    That which does not kill me makes me stronger -- Friedrich Nietzsche

  5. #5
    AO Ancient: Team Leader
    Join Date
    Oct 2002
    Posts
    5,197
    Angel: LOL, I think I can see your problem...... 800 Mb/s is going to drown anything that has to assess and act on the data. I don't think you have much choice but to use multiple Snort boxes, appropriately located on the network, with rulesets appropriate to both the subnet they monitor and the expected traffic there. Paring down the rulesets to match the threat at each segment will help throughput.

    Re: Buffering... You are right...... Going back to Caswell's book, Snort 2.0 Intrusion Detection, my memory has played a trick on me.... The packets are placed into structures after they hit the detection engine and are flushed to disk post-detection in logging mode.... Somehow the old brain had turned that into caching unprocessed packets..... Don't ask....
    Don't SYN us.... We'll SYN you.....
    "A nation that draws too broad a difference between its scholars and its warriors will have its thinking done by cowards, and its fighting done by fools." - Thucydides
