By measuring an operating system's TCP retransmission timeout (RTO) behavior, it is possible to distinguish between OSes on a network. Franck Veysset, Olivier Courtay, and Olivier Heen of the Intranode Research Team first published this concept in April 2002, and their paper discusses in appreciable detail this technique, the mechanisms by which TCP retransmission timers are computed, and OS fingerprinting in general.
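The core idea can be sketched in a few lines: record the arrival times of a target's retransmitted SYN/ACKs (sent when the probing host never completes the handshake), compute the inter-arrival intervals, and compare them against known per-OS backoff sequences. The sketch below assumes the timestamps have already been captured; the probing itself (raw sockets, suppressing the final ACK) is omitted, and the signature values are purely illustrative placeholders, not real fingerprints.

```python
# Illustrative sketch of RTO-based fingerprinting (not Snacktime itself).

def rto_intervals(timestamps):
    """Inter-arrival times between successive SYN/ACK retransmissions."""
    return [round(b - a, 1) for a, b in zip(timestamps, timestamps[1:])]

# Hypothetical signatures: each OS retransmits with a characteristic
# backoff sequence. These numbers are made up for illustration only.
SIGNATURES = {
    "os-a": [3.0, 6.0, 12.0, 24.0],   # doubling backoff starting at 3 s
    "os-b": [3.4, 6.5, 13.5, 26.0],
}

def classify(timestamps, tolerance=0.5):
    """Return the signature whose intervals best match the observations."""
    observed = rto_intervals(timestamps)
    best, best_err = None, float("inf")
    for name, sig in SIGNATURES.items():
        pairs = list(zip(observed, sig))
        if not pairs:
            continue
        # Mean absolute difference between observed and expected intervals.
        err = sum(abs(o - s) for o, s in pairs) / len(pairs)
        if err < best_err and err <= tolerance:
            best, best_err = name, err
    return best

# Example: SYN/ACKs observed at t = 0, 3.1, 9.0, 21.1, 45.2 seconds give
# intervals [3.1, 5.9, 12.1, 24.1], closest to the "os-a" signature.
print(classify([0, 3.1, 9.0, 21.1, 45.2]))
```

In practice the intervals vary with load and network jitter, which is why a tolerance band rather than exact matching is used here.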

My May 2002 analysis of the Intranode proof-of-concept code, RING, illustrated the effectiveness of this technique in a controlled setting. Unfortunately, since then the libraries RING depends on have changed, as has my own Linux platform, so I've had a lot of trouble getting RING to compile and run reliably (ring-0.0.1 never claimed to be portable). So, instead of chasing down include dependencies (which I'm not very good at), I wrote Snacktime, a Perl implementation of the concepts published by Intranode.
Find more at http://www.planb-security.net/wp/snacktime.html