Get the most from your hard drives with hdparm

  1. #1
    str34m3r
    Guest

    Get the most from your hard drives with hdparm

    Most people who convert to Linux from Windows are so happy with the boost in performance that they never even bother to see how they can optimize Linux further. One of the quickest ways to get a performance boost from a Linux machine with IDE drives is hdparm. I looked around and couldn't find any tutorials on hdparm, so I figured I'd share the wealth.

    I'm going to use /dev/hda in all my examples. Although it probably doesn't need to be said, you should change /dev/hda to whatever device represents your hard drive (if you're not sure which is which, there's a quick check right after this list):
    /dev/hda Master on IDE0
    /dev/hdb Slave on IDE0
    /dev/hdc Master on IDE1
    /dev/hdd Slave on IDE1
    /dev/hde Master on IDE2 (You get the idea...)
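
    If you're not sure which device letter belongs to which physical drive, the old /proc/ide interface (present on the 2.4-era kernels this tutorial assumes) will tell you. This is just a quick sanity check, nothing more:
    Code:
    # ls /proc/ide/
    # cat /proc/ide/hda/model
    # cat /proc/ide/hda/media
    ls shows one hdX directory per detected device, model prints the drive's model string, and media tells you whether that device is a disk or a cdrom.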

    Before we begin, we should exit X and unmount as many partitions as possible. Since we're going to be playing with the way that Linux talks to the hard drive, we want to minimize our risk as much as possible. First, we'll check our current settings:
    Code:
    # /sbin/hdparm /dev/hda
    
    /dev/hda:
     multcount    =  8 (on)
     IO_support   =  0 (default 16-bit)
     unmaskirq    =  1 (on)
     using_dma    =  0 (off)
     keepsettings =  0 (off)
     readonly     =  0 (off)
     readahead    =  8 (on)
     geometry     = 2637/240/63, sectors = 39876480, start = 0
    OK, that doesn't mean much to us right now, so let's see how the hard drive is performing with these settings:
    Code:
    # /sbin/hdparm -tT /dev/hda
    
    /dev/hda:
     Timing buffer-cache reads:   128 MB in  1.04 seconds =123.19 MB/sec
     Timing buffered disk reads:  64 MB in 17.45 seconds =  3.67 MB/sec
    Ugh... that's pitiful considering that this drive is less than a year old. Let's see what we can do to fix that. (For repeatable numbers, by the way, the hdparm manpage suggests running the -tT tests a couple of times on an otherwise idle system.) The first thing we'll try is changing the multcount setting, which affects how many sectors can be read with a single I/O interrupt. Changing this setting doesn't always speed up data transfer rates, but at the very least it reduces the overhead incurred by the kernel during I/O. Let's check first to see what this drive can handle:
    Code:
    # /sbin/hdparm -i /dev/hda
    
    /dev/hda:
    
    <snip>
     BuffType=DualPortCache, BuffSize=2000kB, MaxMultSect=16, MultSect=8
    </snip>
    One of the lines of output should look like the one above. MaxMultSect=16 means this particular drive can support reading up to 16 sectors at a time, so let's test it:
    Code:
    # /sbin/hdparm -m16 -tT /dev/hda
    
    /dev/hda:
     setting multcount to 16
     multcount    = 16 (on)
     Timing buffer-cache reads:   128 MB in  0.98 seconds =130.03 MB/sec
     Timing buffered disk reads:  64 MB in 15.91 seconds =  4.02 MB/sec
    A minor improvement, but I know we can do better than that. Since I know my computer has a 32-bit bus, the next thing to do is turn on 32-bit I/O support with the -c option. There are three supported values: 0 for off, 1 for on, and 3 for on (w/sync). The sync incurs a little overhead, but is required by some chipsets if you're going to use 32-bit mode. Try them all and see what works best for you. For my machine, a value of 1 worked best:
    Code:
    # /sbin/hdparm -m16 -c1 -tT /dev/hda
    
    /dev/hda:
     setting 32-bit IO_support flag to 1
     setting multcount to 16
     multcount    = 16 (on)
     IO_support   =  1 (32-bit)
     Timing buffer-cache reads:   128 MB in  0.98 seconds =130.81 MB/sec
     Timing buffered disk reads:  64 MB in  8.41 seconds =  7.61 MB/sec
    Next is the -u option. This flag determines whether the kernel unmasks other interrupts while it services a disk interrupt. I've talked to a few people who claim that the option is useful, but you'll probably want to read the manpage for yourself, since the option can be dangerous, and I personally haven't ever seen any improvement from it on my drives.
    Code:
    # /sbin/hdparm -m16 -c1 -u1 -tT /dev/hda
    
    /dev/hda:
     setting 32-bit IO_support flag to 1
     setting multcount to 16
     setting unmaskirq to 1 (on)
     multcount    = 16 (on)
     IO_support   =  1 (32-bit)
     unmaskirq    =  1 (on)
     Timing buffer-cache reads:   128 MB in  1.00 seconds =128.50 MB/sec
     Timing buffered disk reads:  64 MB in  8.40 seconds =  7.62 MB/sec
    Since this one didn't provide any improvement, I'm going to turn it off for the rest of the example. Why use a possibly dangerous flag if there's no benefit? Anyway, the next flag is DMA access. This is where I usually find the most improvement. First, we'll check to be sure the drive supports it:
    Code:
    # /sbin/hdparm -i /dev/hda
    
    /dev/hda:
    
    <snip>
     DMA modes:  mdma0 mdma1 mdma2
     UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5
    </snip>
    The settings for DMA are kind of unusual. We'll want to use the -d flag with a value of 1 so that DMA is enabled. We'll also want to set the -X flag, which controls the transfer mode.

    DMA settings are calculated as 32 + the DMA mode. Since my particular hard drive supports DMA0, DMA1, and DMA2, this means that -X32, -X33, and -X34 should be supported by my drive. UDMA settings are calculated as 64 + the UDMA mode, which here corresponds to -X64, -X65, -X66, -X67, -X68, and -X69. If UDMA is available, I don't know why you would ever bother with the plain DMA modes, so I won't. Just don't set a mode higher than both your drive and your controller support. Upon testing all of the UDMA modes, you should notice a dramatic speed increase over the previous tests. I tested all of the UDMA modes and found that -X67, -X68, and -X69 all performed similarly for my hard drive:
    Code:
    # /sbin/hdparm -m16 -c1 -u0 -d1 -X67 -tT /dev/hda
    
    /dev/hda:
     setting 32-bit IO_support flag to 1
     setting multcount to 16
     setting unmaskirq to 0 (off)
     setting using_dma to 1 (on)
     setting xfermode to 67 (UltraDMA mode3)
     multcount    = 16 (on)
     IO_support   =  1 (32-bit)
     unmaskirq    =  0 (off)
     using_dma    =  1 (on)
     Timing buffer-cache reads:   128 MB in  1.02 seconds =125.31 MB/sec
     Timing buffered disk reads:  64 MB in  1.95 seconds = 32.90 MB/sec
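    Before moving on, it doesn't hurt to confirm which transfer mode actually took effect. In the hdparm -i output, the currently selected DMA/UDMA mode is marked with an asterisk (at least in the hdparm versions I've used; treat this as a sanity check rather than gospel):
    Code:
    # /sbin/hdparm -i /dev/hda | grep -i dma
    The mode prefixed with '*' should match the -X value you just set (udma3 for -X67).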
    Before we rejoice and think that we're through, there's one more thing to do: make the settings permanent. In the system's current state, if we were to reboot the machine, we would return to the dismal speed of 3.67 MB/sec, and that's no fun now that we know we can do better. We'll open the file /etc/sysconfig/harddisks (that's where Red Hat-style distributions keep these settings) in our favorite editor (vi) and uncomment/edit/add the following lines:
    Code:
    USE_DMA=1
    MULTIPLE_IO=16
    EIDE_32BIT=1
    EXTRA_PARAMS=-X67
    Then, once we save the file, we're done. For those of you who are bad at math, we just achieved a data rate almost 9x faster than the original. Not bad for 15 minutes of work. There are other settings for hdparm, but as best I can tell, these are the most commonly used options. I suggest you read the manpage for hdparm if you're interested in squeezing a little extra performance out of your hard drive.

  2. #2
    Senior Member
    Join Date
    May 2002
    Posts
    143
    Thank you for the nice tutorial. I will be adding a Linux server soon to my system, so this will definitely come in handy.

    . . . . V.
    All truths are easy to understand once they are discovered; the point is to discover them. What lies behind us and what lies before us are tiny matters compared to what lies within us.

  3. #3
    Banned
    Join Date
    Nov 2002
    Posts
    677
    Thank you!!! I was getting errors related to the -u1 flag when the computer first booted up. Now I know which file I need, and that chapter is finally closed.

  4. #4
    AO übergeek phishphreek
    Join Date
    Jan 2002
    Posts
    4,325
    This is a great tutorial.

    I have used hdparm on several of my Linux installs. I have not been able to get it to work on separate HDs using the /etc/sysconfig/harddisks file. Meaning, whatever settings I put in that file get applied to all my hard drives.

    I get better performance using different parameters on each hard drive, so I just added a new script to run at boot that sets the parameters separately for each hard drive (something along the lines of the sketch below).
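
    Such a script doesn't need to be anything fancy. A rough sketch of the idea, dropped into /etc/rc.d/rc.local or an init script of your own (the devices and flag values here are only placeholders; use whatever tested best on your own drives):
    Code:
    #!/bin/sh
    # Reapply hdparm tuning at boot; adjust devices and flags to taste.
    /sbin/hdparm -m16 -c1 -d1 -X67 /dev/hda
    /sbin/hdparm -m16 -c1 -d1 -X69 /dev/hdb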

    I can def. say... it works great for me!
    Quitmzilla is a firefox extension that gives you stats on how long you have quit smoking, how much money you've saved, how much you haven't smoked and recent milestones. Very helpful for people who quit smoking and used to smoke at their computers... Helps out with the urges.

  5. #5
    Member
    Join Date
    Oct 2001
    Posts
    76
    To fix this, you just copy /etc/sysconfig/harddisks to /etc/sysconfig/harddiskhda and change the settings there. If you have several drives, copy the file to /etc/sysconfig/harddiskhdx, where x is the letter of the device you want to change the settings on.

    In my system I have two hard disks, a DVD-ROM, and a CD-RW drive. My hard disks are hda and hdb, my DVD-ROM is hdc, and my CD-RW drive is hdd. I need different settings for each device, so instead of the harddisks file I have four files that set each device up individually: harddiskhda, harddiskhdb, harddiskhdc, and harddiskhdd. I still have the harddisks file, but it has no effect since all my drives have individual configs.

    My hard disks, hda and hdb, are configured just as if I had typed hdparm -c1 -d1 -m16 -X69 /dev/hda or /dev/hdb, depending on the drive. Both drives are identical, so they can use the same settings (the sketch below shows roughly what those files look like).
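
    To spell that out (this is my reading of how the variables from the tutorial above map onto those flags, so double-check against your own setup), harddiskhda and harddiskhdb would each contain something like:
    Code:
    USE_DMA=1
    MULTIPLE_IO=16
    EIDE_32BIT=1
    EXTRA_PARAMS=-X69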

    Another performance boost can be gained for your optical drives by passing kernel parameters at boot time. I use hdc=ide-scsi and hdd=ide-scsi at the moment. I will be experimenting to see if performance is improved on my hard disks as well. I'll let you know if it works as expected...
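
    In case it saves anyone a manpage trip: kernel parameters like hdc=ide-scsi are normally passed from the boot loader. With LILO that means an append line in /etc/lilo.conf, something like the sketch below (your image, label, and root entries will differ, and you have to rerun /sbin/lilo afterwards for the change to take effect):
    Code:
    image=/boot/vmlinuz
      label=linux
      root=/dev/hda1
      append="hdc=ide-scsi hdd=ide-scsi"
    With GRUB you would instead add the same parameters to the end of the kernel line.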
