July 27th, 2004 07:53 PM
The number of NDRs that your system will send back to other systems does not depend on the number of users on your system. It depends entirely on the number of emails that come into your domain addressed to invalid recipients. If you are seeing hundreds, or even a couple of thousand, I wouldn't worry about it, especially if you have a really common domain name or do a lot of web advertising with your email addresses.
July 28th, 2004 02:32 AM
OK, let's look at the problem with this statement in the real world, where someone actually has to do this with an email server:
"Palemoon- If you want to make the store smaller you have to run an offline defrag. This has been available since Exchange 4: "edbutil" for early Exchange, "eseutil" for current Exchange."
OK, take a 200 GB array offline to defrag it? Chances are nil to none, at least in the last couple of places I was at. I'm not saying it doesn't work, because it does, but business owners complain and everyone wants their email. It's a good way to fix the problem, but it isn't workable in the real world: servers are not taken offline for days for a defrag. And in the real world, sometimes you step into a network where you just have to work with what was left, faults and all, and the owners will not listen to you because they believe in M$. Then one day the server dies because you never got to defrag, because the people before you, MCSE certs and all, just kept putting more drives in the array until it became too big to even consider a defrag. Then again, on the last two systems I admined, the MCSE seemed more intent on setting up the next visit than on really fixing the problem.
Fact is, the relay was enabled by accident, because an MCSE cert just means you've earned the right to learn in the real world, as any cert or degree does. An engineer gets out of college, takes his test, and then in the AEC world earns the right and pays the dues to make it to the next level; same in IT.
I believe that one of the characteristics of the human race - possibly the one that is primarily responsible for its course of evolution - is that it has grown by creatively responding to failure. - Glenn Seaborg
July 28th, 2004 06:32 PM
Unless you are running a very small and slow hard drive, it would not take days to defrag a 200 GB database. The slowest I ever saw a database defrag run was something like 45 minutes/GB, and that is when the store is defragging actual data. In the case you're describing, where data has been deleted but the store file hasn't shrunk, the empty space inside the file is called whitespace. When the defrag is processing whitespace, it runs dramatically faster. I have seen a 65 GB database that had 40 GB of whitespace defrag in under 1 hour. On current hardware, say a 2 Gb/sec fiber SAN with 15k rpm drives, I have seen 30-45 GB databases complete in about 1.5 hours.
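Since the defrag only has to rewrite actual data, a rough time estimate scales with the database size minus its whitespace. Here's that arithmetic as a sketch; the rates are placeholders based on the anecdotal numbers above, not benchmarks:

```python
# Back-of-envelope estimate of offline defrag (eseutil /d) duration.
# Time scales with actual data rewritten, not the total file size,
# which is why a database full of whitespace defrags quickly.
# The default rate below is an assumption, not a measured figure.

def defrag_hours(db_size_gb: float, whitespace_gb: float,
                 minutes_per_gb: float = 2.0) -> float:
    """Estimated hours to offline-defrag a database."""
    data_gb = db_size_gb - whitespace_gb
    return data_gb * minutes_per_gb / 60

# A 65 GB database with 40 GB whitespace only has 25 GB of real data:
print(round(defrag_hours(65, 40), 2))
# Worst case quoted above, 45 min/GB on a fully packed 100 GB database:
print(defrag_hours(100, 0, minutes_per_gb=45))
```

The point of the model is just that file size alone tells you very little; whitespace is what makes big stores defrag faster than you'd expect.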
The good thing about Exchange 2000 and higher is that the online defrag process has been improved, and whitespace can be recovered for reuse by the store. So if you have 5 GB of whitespace, the store file will not grow until that 5 GB of space is used up, which makes offline defrags less necessary. Even better, you can create multiple databases on the same system, so you can always create a new database, move the users over, and then defrag the bloated database. So they have tried to find ways to work around bad management.
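The whitespace-reuse behavior described above can be illustrated with a toy model (this is just an illustration of the growth rule, not the actual store engine): new data fills existing whitespace before the file itself grows.

```python
# Toy model of whitespace reuse in Exchange 2000+:
# incoming data consumes whitespace first; only the remainder
# grows the store file on disk.

def store_size_after_write(file_gb, whitespace_gb, new_data_gb):
    """Return (file_gb, whitespace_gb) after writing new_data_gb."""
    reused = min(new_data_gb, whitespace_gb)
    growth = new_data_gb - reused
    return file_gb + growth, whitespace_gb - reused

# 50 GB file with 5 GB whitespace: writing 3 GB reuses whitespace,
# so the file does not grow at all.
print(store_size_after_write(50, 5, 3))   # (50, 2)
# Writing 8 GB consumes all 5 GB of whitespace, then grows the file by 3 GB.
print(store_size_after_write(50, 5, 8))   # (53, 0)
```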
Bad management is definitely to blame for a 200 GB database, though. I'm curious how it was even possible to back that database up several years ago, since tape drives used to be way too slow to handle that amount of data every night. I would never let a database grow over 30-50 GB now, because of the backup/restore time involved.
July 29th, 2004 12:21 AM
I believe that my problem has been resolved.
The main issue was a conflict between my anti-virus software and the IMC and the IS; see Symantec article # 2004052416452048.
Thanks again for all the support.