
Thread: Raid Array Memory

  1. #1
    Senior Member Blunted One's Avatar
    Join Date
    Dec 2005
    Posts
    183

    Question: Raid Array Memory

    Good afternoon to the Anti-Online community...

    I am working on an issue with one of our overworked servers and trying to give it a break. It currently runs an application that every one of our 50+ employees pulls and pushes data from all day long. Some of these files can be as big as 300-400 MB, so I am sure you get the picture: at peak usage this server is shoving a ton of data to and from the hard drives and over our network.

    Aside from upgrading the RAM on the unit, which should help it out somewhat, I was wondering about upgrading another piece of memory: the cache on the RAID controller.

    The Smart Array 5i RAID controller in our HP DL380 server has a cache module that can be upgraded. It currently carries a 64 MB cache chip, and I was thinking of upgrading it to the maximum of 256 MB.

    Would this improve the I/O performance of the hard drives/RAID array? Is it something that would be noticeable, or is it just throwing money down the drain? I know RAID 5 is not optimal for maximum performance, but it is what it is and I can't change it...for now. If you need more information let me know.
    It's not a war on drugs it's a war against personal freedoms!

  2. #2
    Senior Member nihil's Avatar
    Join Date
    Jul 2003
    Location
    United Kingdom: Bridlington
    Posts
    17,188
    OK mate, I don't know your kit, but my suggestions are based on general hardware engineering principles.

    In the Finance Sector, they have this expression "cash is king".............. well, in ours I would say: "cache is king"

    You might want to expand your RAM.......... it can give you instant benefits, but you are faced with a "law of diminishing returns"

    In your position, and assuming that you don't have a ridiculously small amount of RAM, I would go for cache memory.

    Hey! Do you remember the earlier P4s? A PIII Xeon would eat them for breakfast........... cost about 10x the price though...............

    Also, I have a couple of PI/133s............... one has a 256K onboard cache, and the other doesn't............. guess which one can race a PII/266?

    Remember it also depends on your software efficiency. You can put crap software on the finest box on Earth, and it will still be crap........... only you just might notice it a little quicker

    just my thoughts...............

  3. #3
    Senior Member Blunted One's Avatar
    Join Date
    Dec 2005
    Posts
    183
    Thanks for the good info.

    The system itself is well suited for most work: Xeon 3.2 GHz, 4 GB RAM (I am going to bump it to 8 GB), six SCSI hard drives in a RAID 5 config with the 64 MB cache module, accelerator on and set to 100% read cache, which is supposed to be the fastest.

    I also found out that to increase the RAM to 8 GB I have to edit the registry in Windows Server 2003 so it will recognize and support the extra memory. Some tech from India said that making this registry adjustment so you can add more RAM can actually hurt system performance...does that even make sense?

    I have been told that RAID 5 hurts the performance of this software (called Perforce), and that RAID 1+0 is best for this type of software, since it basically just distributes and takes in files all day long, from small pieces of code and images to large animations and huge game level files. So the biggest impact on performance comes from the I/O of the hard drives.

    Anyone have good ideas on how to make this a more efficient I/O setup...or is that not possible without reconfiguring the RAID and upgrading the server?
    It's not a war on drugs it's a war against personal freedoms!

  4. #4
    It's a gas!
    Join Date
    Jul 2002
    Posts
    699
    Why don't you fire up Perfmon on the server and add your usual counters to monitor RAM, hard drives and NICs?
    Also, you could try increasing the size of your page file and running a defrag.

  5. #5
    Senior Member nihil's Avatar
    Join Date
    Jul 2003
    Location
    United Kingdom: Bridlington
    Posts
    17,188
    Hmmmm.................

    I don't know the software first hand, but I would question the 100% read setting........... as you say you are "pushing and pulling" and I would suspect that setting to be more suitable where "pulling" is in the preponderance?
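    To put a rough number on why the read/write mix matters for a read-only cache setting, here is a minimal sketch. The hit rates and access times below are made-up illustrative figures, not measurements from the Smart Array 5i:

```python
# Rough model of effective access time with a controller cache.
# All numbers are illustrative, not measured from any real controller.
def effective_access_ms(hit_rate, cache_ms=0.1, disk_ms=8.0):
    """Average access time given a cache hit rate (0.0 to 1.0)."""
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# A cache tuned 100% for reads only helps if reads dominate the workload:
read_heavy = effective_access_ms(hit_rate=0.6)   # many reads served from cache
write_heavy = effective_access_ms(hit_rate=0.1)  # writes bypass a read-only cache
print(round(read_heavy, 2), round(write_heavy, 2))
```

    The point of the sketch: if half the traffic is "pushing" (writes), a 100% read cache leaves that half at raw disk speed.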

    RAID5 is pretty much a standard these days. Personally, I use RAID1, and have only "worked with" 5, 6 or 10 (I think that is your 1+0?).................... remember that RAID is about robustness, stability and recovery........... it is not about performance.

    Please remember that RAID10 has a 50% hard drive redundancy, as it stripes mirrored pairs. Given that, I do not understand why it would be faster than RAID5, other than that it does not have to calculate and store the parity data?
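    To put numbers on that redundancy point, a quick sketch comparing usable capacity for a six-drive array (the 72 GB drive size is an arbitrary example, not from the thread):

```python
# Usable capacity of RAID 5 vs RAID 10 for n identical drives.
def raid5_usable(n_drives, drive_gb):
    # One drive's worth of space is consumed by distributed parity.
    return (n_drives - 1) * drive_gb

def raid10_usable(n_drives, drive_gb):
    # Striped mirrored pairs: half the raw capacity holds redundant copies.
    return (n_drives // 2) * drive_gb

print(raid5_usable(6, 72))   # 360 GB usable
print(raid10_usable(6, 72))  # 216 GB usable
```

    So RAID 10 pays for its write speed with a third less usable space on the same six drives.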

    I would warn you that I only build stuff for SOHO type clients, so my hands on experience is purely RAID1. In my "other life" I mostly encounter RAID5, although in the finance sector I have worked with 6 and 10.

    My gut feeling is that your RAID will not be a major performance factor. I would suggest that you talk to the software vendor to check this.

    Please bear in mind that nanosecond differences in artificial benchmark software are irrelevant..............

    Cheers

  6. #6
    Senior Member Aardpsymon's Avatar
    Join Date
    Feb 2007
    Location
    St Annes (aaaa!)
    Posts
    434
    Quote Originally Posted by nihil
    Please remember that RAID10 has a 50% hard drive redundancy, as it stripes mirrored pairs. Given that, I do not understand why it would be faster than RAID5, other than that it does not have to calculate and store the parity data?
    Disk access has linear delay: t = kn, where t is time, k is a constant relating to disk speed, and n is the number of write operations.

    So, a few things come into play. The biggest delay is seeking, of course.

    With mirroring, all the mirrored drives work together on a single write operation.
    With all RAID levels you have an additional seek delay if files are written to two drives on a stripe, thus increasing k slightly.

    RAID 5, however, essentially doubles your write operations: one write operation for the data and a second for the parity bits, with an additional delay because the parity bits must be calculated first. Hence, writing to a RAID 5 is half the speed of reading from it. BUT when a single hard drive dies in RAID 5 the array can be rebuilt. A single drive dying on a stripe will lose all your data.

    The MTBF of a stripe alone is the MTBF of the drive type divided by the number of drives.
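    The two claims above can be sketched numerically. The 500,000-hour drive MTBF is an arbitrary illustrative figure:

```python
# RAID 5 write amplification and plain-stripe MTBF, per the post above.
def raid5_write_ops(logical_writes):
    # Each logical write also writes parity, doubling physical writes
    # (ignoring the read-modify-write reads, for simplicity).
    return 2 * logical_writes

def stripe_mtbf_hours(drive_mtbf_hours, n_drives):
    # A plain stripe (RAID 0) fails as soon as any one drive fails.
    return drive_mtbf_hours / n_drives

print(raid5_write_ops(100))            # 200 physical write operations
print(stripe_mtbf_hours(500_000, 6))   # ~83,333 hours for a 6-drive stripe
```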
    If the world doesn't stop annoying me I will name my kids ";DROP DATABASE;" and get revenge.

  7. #7
    Master-Jedi-Pimps0r & Moderator thehorse13's Avatar
    Join Date
    Dec 2002
    Location
    Washington D.C. area
    Posts
    2,885
    Defragging a striped set (RAID 5) won't do anything. At least nothing that you'll notice.

    Are you sure that the problem is your server and not something along the transmit path? Switches, cable speed limits, etc.?

    If you add RAM to the RAID cache, obviously you'll see a performance increase in disk I/O but that doesn't mean that the app itself will run any faster. Follow the logic? Your benefit will be only related to disk performance, not application performance.



    --Th13
    Our scars have the power to remind us that our past was real. -- Hannibal Lecter.
    Talent is God given. Be humble. Fame is man-given. Be grateful. Conceit is self-given. Be careful. -- John Wooden

  8. #8
    Senior Member nihil's Avatar
    Join Date
    Jul 2003
    Location
    United Kingdom: Bridlington
    Posts
    17,188
    Quote Originally Posted by thehorse13
    Are you sure that the problem is your server and not something along the transmit path? Switches, cable speed limits, etc.?

    If you add RAM to the RAID cache, obviously you'll see a performance increase in disk I/O but that doesn't mean that the app itself will run any faster. Follow the logic? Your benefit will be only related to disk performance, not application performance.

    --Th13
    Yes, that is a good point.............. where is the problem? We haven't been told what the precise symptoms are.

    For example is the retrieve significantly faster than the return, or is it that the machine just "peaks" and runs slow at certain times?

    As I understand it, the application is just a repository and management tool for applications development and the like? I would not have thought it was particularly demanding per se, and that the issue would be more on the I/O and transmission front. Naturally, that is not confined to the server alone.

    Aard~ seems to confirm my suggestion that it is the parity side of RAID5 that would make a performance difference, but I wouldn't like to say how influential that would be.


  9. #9
    AOs Resident Troll
    Join Date
    Nov 2003
    Posts
    3,152
    What all is running on this server...

    If my memory serves me correctly...aren't you running MS SBS 2003?

    As stated........and also suggested in another thread...check your networking hardware...

    I think you should try to offload that app to another server.....with a mirror for the system and app/programs...and a RAID 5 disk array for the data...

    because if the files are that big...you may see some improvement by separating the OS and application functions from the data read/write functions. Also if the app server is just serving the app..then your other server can deal with authentication, Exchange, ISA etc.

    What is the app again...and do they have a website or recommended hardware guidelines???

    Most applications do....

    MLF
    How people treat you is their karma- how you react is yours-Wayne Dyer

  10. #10
    Senior Member Blunted One's Avatar
    Join Date
    Dec 2005
    Posts
    183
    Just a little refresher...

    The server is running Windows Server 2003 (Std), and the application running on it is called Perforce, a large database program that contains all our game data. All 50+ people in my office read and write to it all day.

    Example: One person is working on a texture or bug that is wrong in a level. After they finish working on it they submit it back to the server which replaces/updates the file they were working on. Then everyone else in the company must sync with the server to obtain the new fix/file.

    Does this give any insight into where the bottleneck is? Also remember that files are being checked in and out by 50+ people, so it is never just one file/level/bug being sent back and forth to the server. Perhaps I am off the mark, but I would think this is related to HD performance, though I am sure the network interface is also running at maximum. It does seem that ever since we greatly increased the amount of data people have to sync, and added more employees, the server has taken a fall in performance and response times.

    Checking files both in and out is taking extended periods of time, especially when trying to sync the large game level files. Where it used to take just a number of seconds, it can now take up to a minute or two.
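    As a sanity check on whether the network alone could explain those sync times, a back-of-the-envelope sketch. The link speeds are assumptions (the thread never states them), and real throughput will be below line rate:

```python
# Naive best-case transfer time for a large file over a network link.
def transfer_seconds(file_mb, link_mbit_per_s):
    # Megabytes to megabits (x8), divided by the line rate.
    return file_mb * 8 / link_mbit_per_s

big_level = 400  # MB, the thread's largest file size
print(round(transfer_seconds(big_level, 100), 1))   # 32.0 s on 100 Mbit
print(round(transfer_seconds(big_level, 1000), 1))  # 3.2 s on gigabit
```

    And that is for one user with the link to themselves; if several of the 50+ people sync at once, each one's share of the link shrinks accordingly.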
    It's not a war on drugs it's a war against personal freedoms!

