Did you know that with a single tool you could have skipped every single IIS patch issued in the last few years? (Depending on your exact needs, of course.) As I am sure you have already guessed, the tool is called URLScan and can be found at:

http://www.microsoft.com/technet/tre...ls/urlscan.asp

What does this tool do and how do we use it?
As you can tell from the above page, the tool is an ISAPI filter that runs pattern matches against all IIS requests. These requests are matched against an admin-defined configuration file (urlscan.ini).

Let's have a look at my urlscan.ini file:

Code:
[options]
UseAllowVerbs=1
UseAllowExtensions=1
NormalizeUrlBeforeScan=1
VerifyNormalization=1
AllowHighBitCharacters=0
AllowDotInPath=0
RemoveServerHeader=1
EnableLogging=1
PerProcessLogging=0
PerDayLogging=0
AllowLateScanning=0
LogLongUrls=0
UseFastPathReject=0
RejectResponseUrl=/documents/urlserr.wtx

[AllowVerbs]
GET
HEAD
POST

[DenyHeaders]
Translate:
If:
Lock-Token:

[AllowExtensions]
.
.asf
.avi
.exc
.gif
.htm
.ico
.jpg
.mov
.mp3
.mpg
.pdf
.png
.wmv
.wtx

[DenyUrlSequences]
..
./
\
:
%
&
cmd

[RequestLimits]
MaxAllowedContentLength=350000000
MaxUrl=200
MaxQueryString=0
Don't worry, all of this is much simpler than it may initially look.

Code:
UseAllowVerbs=1
UseAllowExtensions=1
These two tell URLScan to take a default-deny stance: accept only the verbs and extensions listed, and deny everything else. If these values are set to 0, URLScan takes a default-allow stance and permits everything except the items listed in [DenyVerbs] and [DenyExtensions]. I prefer default deny because it allows exactly what I specify; this way, if I forget something, a legitimate resource is blocked and I can correct it. With default allow, if I forget something the server may be compromised.
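To make the difference concrete, here is a minimal sketch of the two stances (my own illustration, not URLScan's actual code; the [DenyVerbs] entries are hypothetical):

```python
ALLOW_VERBS = {"GET", "HEAD", "POST"}   # from [AllowVerbs]
DENY_VERBS = {"TRACE", "CONNECT"}       # hypothetical [DenyVerbs] entries

def verb_permitted(verb, use_allow_list=True):
    """Return True if the request verb passes the filter."""
    if use_allow_list:                  # UseAllowVerbs=1: default deny
        return verb in ALLOW_VERBS      # anything unlisted is rejected
    return verb not in DENY_VERBS       # UseAllowVerbs=0: anything unlisted slips through

# A verb I never thought about (say, a WebDAV method) is blocked under
# default deny but allowed under default allow:
print(verb_permitted("PROPFIND", use_allow_list=True))   # False
print(verb_permitted("PROPFIND", use_allow_list=False))  # True
```

The forgotten-verb case is exactly the failure mode described above: default deny fails closed, default allow fails open.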

Code:
NormalizeUrlBeforeScan=1
VerifyNormalization=1
Remember all those Unicode issues with the %25%sd%23%fd%fb-looking things in them? In this case they are probably garbage, as I just mashed some keys, but real attacks contained encodings of things like "/" and ".." in order to bypass systems checking for such sequences. These options ensure that each %xx is turned into the character it represents, and the normalized data is then matched. What about the more sophisticated attacks that were double encoded? That is where VerifyNormalization comes in: if after one pass of normalization the request is still encoded, URLScan knows that something is amiss and denies the request.
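A rough sketch of that decode-then-verify idea (my own illustration of the principle, not URLScan's implementation):

```python
from urllib.parse import unquote

def scan_url(raw_url):
    """One decode pass, then verify a second pass changes nothing."""
    decoded = unquote(raw_url)          # NormalizeUrlBeforeScan: %2e -> '.', %2f -> '/'
    if unquote(decoded) != decoded:     # still encoded after one pass: double encoding
        return "rejected: double encoding"
    if ".." in decoded:                 # match against the *decoded* form
        return "rejected: traversal"
    return "allowed"

print(scan_url("/scripts/%2e%2e/%2e%2e/winnt"))   # rejected: traversal
print(scan_url("/scripts/%252e%252e/winnt"))      # rejected: double encoding
print(scan_url("/index.htm"))                     # allowed
```

The second example is the classic double-encoding trick: %25 decodes to "%", so one pass yields %2e%2e, which only a second pass would turn into "..". Verifying that normalization is stable catches it without ever needing to match the final payload.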

Code:
AllowHighBitCharacters=0
No resources on my system require high-bit characters, so I have denied them. Odds are good that if you don't know whether you use high-bit characters, you can safely deny them.

Code:
AllowDotInPath=0
If this is set to zero, the request may contain only one dot, and everything after that dot is later matched as the extension. I felt this was safer because it may catch directory traversal attacks missed by other aspects of this filter, and because I have no resources that fail to follow the (name).(extension) scheme.
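Here is my reading of that rule combined with the extension allow list, as a sketch (not URLScan's exact code; the extension set is a subset of my [AllowExtensions]):

```python
ALLOW_EXTENSIONS = {".", ".htm", ".gif", ".jpg"}   # subset of [AllowExtensions]

def request_allowed(url):
    """AllowDotInPath=0 plus UseAllowExtensions=1, as I understand them."""
    dots = url.count(".")
    if dots > 1:
        return False                       # more than one dot: ambiguous extension, reject
    ext = url[url.rindex("."):] if dots == 1 else "."
    return ext in ALLOW_EXTENSIONS         # dotless requests match the bare '.' entry

print(request_allowed("/index.htm"))       # True
print(request_allowed("/a.b/c.htm"))       # False: two dots
print(request_allowed("/"))                # True: matches the '.' entry
```

A URL like /a.b/c.htm is rejected outright rather than risking the filter mis-identifying which dot starts the extension.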

Code:
RemoveServerHeader=1
Although I do not believe this adds any real security, since it is still possible to reasonably fingerprint the system by other means, I figured that no harm could come from removing the Server header, and any gain at all is worth the cost.

Code:
EnableLogging=1
PerProcessLogging=0
PerDayLogging=0
I wish to have logging, but my server does not get enough traffic to justify additional granularity.

Code:
AllowLateScanning=0
My server runs no other filters that modify the request so I had no need to scan after those filters.

Code:
LogLongUrls=0
I have limited the maximum URL size to 200 bytes; the normal URL log cut-off is at 1024 bytes, so there was no need to enable this.

Code:
UseFastPathReject=0
RejectResponseUrl=/documents/urlserr.wtx
Fast reject uses a predefined standard IIS error; for my purposes I wished to use my own URLScan-specific error page. The first line tells it to do this, and the second points to the new error page. :)

Code:
[AllowVerbs]
GET
HEAD
POST
These are the only verbs I wished to allow. I have no use for things like CONNECT, so I chose not to bother with them.

Code:
[DenyHeaders]
Translate:
If:
Lock-Token:
I do not have WebDAV enabled, so I have no use for these headers.

Code:
[AllowExtensions]
.
.asf
.avi
.exc
.gif
.htm
.ico
.jpg
.mov
.mp3
.mpg
.pdf
.png
.wmv
.wtx
. is allowed because I have both directory browsing and default documents enabled; without . listed, http://www.address.tld/ would give an error, as it does not end in an allowed extension. :) You will notice that I allow primarily image files (gif, ico, jpg, png) and movie/music files (asf, avi, mov, mp3, mpg, wmv). htm and wtx files are also allowed. I bet this raises two questions: no active content? And what the heck are wtx and exc files?

This system does have active content: I have mapped asp files to the htm extension. This lets me serve all dynamic and static markup content under one extension, and I don't need to worry about worms that target asp files (another "just in case"). Script kiddies also don't tend to save my server address for use with 1-day asp exploits, which lowers naive hostile traffic overall, though with no real increase in security. This leads to a minor reduction in speed when delivering static html documents, but my system's traffic is limited enough to make this a non-issue.

So what about those wtx files? wtx = plain text; look in your Windows extension map if you don't believe me! :P I use wtx because it allows me to specify exactly what I do and do not wish to share, as some of my http dirs are also network shares that may contain .txt or .doc files I do not wish to serve via the website. This lets users determine at a glance what is and is not web-available. The same is true for exc: I have some exe files that I do not wish to be run by web users, and although they are protected via ACLs and DACs, this is just an extra precaution in case someone makes a mistake.

Code:
[DenyUrlSequences]
..
./
\
:
%
&
cmd
This should all be self-explanatory. :) I have no content on my site that includes those characters, so if a user is trying to pass them, they are stepping outside the realm of "normal use," and we can't have that. cmd is just obvious. :) Although the web server is denied access to cmd.exe via both ACLs and DACs, one extra precaution doesn't hurt, plus it will make such attempts more obvious in the logs.
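The matching here is just substring scanning against the request URL; a minimal sketch of the idea (my illustration, not URLScan's exact matching rules):

```python
DENY_SEQUENCES = ["..", "./", "\\", ":", "%", "&", "cmd"]   # from [DenyUrlSequences]

def url_clean(url):
    """Reject any URL containing a denied sequence (case-insensitive)."""
    lowered = url.lower()
    return not any(seq in lowered for seq in DENY_SEQUENCES)

print(url_clean("/documents/readme.wtx"))                       # True
print(url_clean("/scripts/..%c1%1c../winnt/system32/cmd.exe"))  # False: '..', '%', 'cmd'
```

Note that the classic Unicode traversal URL trips three sequences at once, which is why even a partial miss elsewhere in the filter still gets caught here.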

Code:
[RequestLimits]
MaxAllowedContentLength=350000000
MaxUrl=200
MaxQueryString=0
Content length is kind of silly when set this high, but I believe the default is much lower, and I have some very large files on there for people who are unable or unwilling to learn the wonders of FTP. Too large for normal practical applications, yes... but nonetheless appropriate for my uses. MaxUrl should be set to whatever the longest path on your server is. I guesstimated 200 bytes, though I am sure it is less. QueryString? I allow no queries: all data is passed via POST, and the server only accepts POST data with itself as the referer, which makes it very difficult for an attacker to send any extra data to the server at all.
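The limits themselves are simple comparisons; a sketch of how the three caps interact (my illustration, using the values from my urlscan.ini):

```python
LIMITS = {
    "MaxAllowedContentLength": 350000000,  # room for very large downloads
    "MaxUrl": 200,                         # guesstimated longest path
    "MaxQueryString": 0,                   # no queries allowed at all
}

def within_limits(url, query, content_length):
    """True only if every measured size is within its cap."""
    return (len(url) <= LIMITS["MaxUrl"]
            and len(query) <= LIMITS["MaxQueryString"]
            and content_length <= LIMITS["MaxAllowedContentLength"])

print(within_limits("/downloads/big.mpg", "", 300_000_000))  # True
print(within_limits("/search.htm", "q=test", 0))             # False: queries denied
```

With MaxQueryString=0, any "?" data at all fails the check, which is what forces everything through POST as described above.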

Although my system has additional protections as well, they have yet to be needed; URLScan has defeated every IIS exploit released in recent memory. And to think, people ask why I never patch my server. ;) An ounce of prevention (well, I also run Argus' PitBull Protector and a few other research security alterations, but you get the idea) = years of laziness, and let's face it, people who run NT tend to be lazy. I know I am. :D

Let me know about any questions,

catch

PS. This file of course will not meet your exact needs, but I thought an example might be useful. Although... IIS 6 has made much of URLScan unneeded.