Growing a RAID5 array – MDADM

With the release of kernel 2.6.17, there’s new functionality to add a device (partition) to a RAID 5 array and make this new device part of the actual array rather than a spare.

I came across this post a while ago on the LKML.

LKML: Sander: RAID5 grow success (was: Re: 2.6.16-rc6-mm2)

So I gave it a go. My HOME directory is mounted on a 3x70GB SCSI RAID5 array, so I tried adding a further drive.

Although with the release of mdadm > 2.4 the only really critical part of the process is safer (it backs up some live data that is being copied), I didn’t fancy risking growing a mounted array. So I did plenty of backups, then switched to single user run level.

Basically the process involves adding a disc to the array as a spare, then growing the array onto this device.

mdadm --add /dev/md1 /dev/sdf1
mdadm --grow /dev/md1 --raid-devices=4
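
While the reshape runs you can keep an eye on progress with something like:

watch cat /proc/mdstat
mdadm --detail /dev/md1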

This then took about 3 hours to reshape the array.

The filesystem then needs to be expanded to fill the new space.

fsck.ext3 /dev/md1
resize2fs /dev/md1
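
If resize2fs complains that the filesystem isn’t clean (a few commenters below ran into this), force the check first with something like:

e2fsck -f /dev/md1
resize2fs /dev/md1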

I then remounted the drive and wahey. Lots of extra space…! Cool or what.

EDIT  It’s over 18 months since I wrote this post, and Linux kernel RAID and mdadm have continued to move on. All the info here is still current, but as extra info check out the Linux RAID Wiki.

EDIT 2  The Linux Raid Wiki has moved

July 3, 2006. Computergeekery. 133 Comments.

133 Comments

  1. Frode replied:

    Excellent stuff. It inspired me to grow my 3x400GB by one drive. Here’s the result:


    [root@coruscant ~]# mdadm --add /dev/md1 /dev/sde1
    [root@coruscant ~]# mdadm --grow /dev/md1 --raid-devices=4
    [root@coruscant ~]# cat /proc/mdstat
    [>....................] reshape = 0.0% (166656/390708736) finish=1347.2min speed=4830K/sec
    :
    [root@coruscant ~]# cat /proc/mdstat
    Personalities : [raid1] [raid5] [raid4]
    md2 : active raid5 sde1[3] sdd1[1] sdf1[2] sdc1[0]
    1172126208 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
    :
    [root@coruscant ~]# pvdisplay
    :
    PV Size 745.21 GB / not usable 0
    :
    [root@coruscant ~]# pvresize /dev/md2
    Physical volume "/dev/md2" changed
    1 physical volume(s) resized / 0 physical volume(s) not resized
    [root@coruscant ~]# pvdisplay
    :
    PV Size 1.09 TB / not usable 0
    :
    [root@coruscant ~]# df -BMB /dev/mapper/VG1-LV0
    Filesystem 1MB-blocks Used Available Use% Mounted on
    /dev/mapper/VG1-LV0 105690MB 7894MB 92428MB 8% /mnt/backup
    [root@coruscant ~]# lvresize -L +10G /dev/VG1/LV0
    Extending logical volume LV0 to 110.00 GB
    Logical volume LV0 successfully resized
    [root@coruscant ~]# ext2online /dev/VG1/LV0
    ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
    [root@coruscant ~]# df -BMB /dev/mapper/VG1-LV0
    Filesystem 1MB-blocks Used Available Use% Mounted on
    /dev/mapper/VG1-LV0 116259MB 7894MB 102460MB 8% /mnt/backup

    Reshaping the raid took almost 24 hours! But in the end everything just worked – online.

    July 15th, 2006 at 9:21 am. Permalink.

  2. Nicholas replied:

    I added another 300GB drive to my 4x300GB RAID5. It worked great!

    Keep in mind you have to enable the experimental kernel option before mdadm will let you grow your RAID5.

    The reason it took almost 24 hours for Frode is probably because of the minimum reconstruction speed configured. Mine was looking like it would take 30+ until I bumped it up by doing “echo 25000 > /proc/sys/dev/raid/speed_limit_min”. After that it ran for about 9 hours.
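
    In other words, something like this (25000 being the value I used, in KB/s – check the current values first):

    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
    echo 25000 > /proc/sys/dev/raid/speed_limit_min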

    Then I used xfs_growfs, worked like a charm!

    I’m really happy Linux is getting mature with great options like this.

    August 2nd, 2006 at 9:07 am. Permalink.

  3. Nicholas replied:

    Oh, I should mention that I also had to do “echo 1024 > /sys/block/md0/md/stripe_cache_size” before mdadm would let me grow the array. The mdadm maintainer said a future version would do this automatically. If you’re getting an error trying to grow it, check dmesg; if it says the stripe cache is too low you need to change the number (mine said it needed at least 1024) and run that command. You should also replace md0 with your device name.
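
    Roughly, the sequence was:

    dmesg | tail
    echo 1024 > /sys/block/md0/md/stripe_cache_size
    mdadm --grow /dev/md0 --raid-devices=5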

    August 2nd, 2006 at 9:09 am. Permalink.

  4. Ferg replied:

    > I’m really happy Linux is getting mature with great options like this.

    I’m pretty pleased too. The Linux kernel is pretty damned good at the moment. For another project I’m setting myself up a MythTV HTPC box. All the hardware had decent drivers in the kernel… well, apart from a crappy Linksys WMP54GS card I should never have bought!!

    August 2nd, 2006 at 6:02 pm. Permalink.

  5. Jack chen replied:

    Dears:
    I couldn’t use pvresize while the RAID is reshaping. Is there any way around this limitation?

    Thanks
    Jack

    August 3rd, 2006 at 7:38 am. Permalink.

  6. bfish.xaedalus.net » Software RAID 5 in Ubuntu with mdadm replied:

    [...] Growing RAID 5 volumes with mdadm posted by Jonny | at 5:48pm on November 23rd 2006 Category: Computing, Linux | | [...]

    November 23rd, 2006 at 6:48 pm. Permalink.

  7. Nicholas replied:

    Well, I copied all my DVDs to the RAID and now it’s full again. So I’m adding another disk, up to 6 now. I’m getting 14MB/s reconstruction and it’s telling me it will take around 6 hours.

    Jack, I’m pretty sure the new space doesn’t become available until the reshape is finished. It would be theoretically possible to provide it, but accessing it would be very slow, and it’s extra complexity the RAID system doesn’t need. Use the method I mention above to speed it up, and just wait until it’s finished.

    November 29th, 2006 at 8:50 pm. Permalink.

  8. Muganor replied:

    Just in case there are Ubuntu readers, they need to know that RAID5 array reshaping is disabled in the stock kernels, giving an “invalid argument” error when trying to grow the array. So they need to compile a custom kernel and enable the option CONFIG_MD_RAID5_RESHAPE=y in the .config file. Note that kernel 2.6.17 or above is required.

    January 16th, 2007 at 2:03 pm. Permalink.

  9. Ferg replied:

    hi Muganor,

    thanks for commenting. Is that just a Ubuntu thing, or a Debian thing in general?

    January 16th, 2007 at 3:05 pm. Permalink.

  10. krampo replied:

    Hello, thanks for the post, I’m really excited and going to try to add a 5th 250GB disk to my RAID5.

    So theoretically everything can be done without losing data on the raid, yes? I was thinking, can I do the reshape working in normal mode, just umount’ing /dev/md0 ?

    Can I add 2 devices at a time?
    mdadm --add /dev/md0 /dev/hdc
    mdadm --add /dev/md0 /dev/hdd
    mdadm --grow --raid-devices=6 (assuming, that before there were 4 of them)
    or should I add one device at a time?

    Are there any requirements for the resize2fs version? I have 1.38 on my system. Looks that there’s newer version available.

    Thanks!

    January 18th, 2007 at 10:10 pm. Permalink.

  11. Ferg replied:

    I just unmounted the RAID device. Apparently this can be done with the device still mounted. But it sounds a bit risky to me!!

    Personally I would add a single device at a time. But the functionality has been improving, and it might be fine to add two at a time. You might want to ask on the linux-Raid list. Or do more searching around.

    January 19th, 2007 at 10:05 am. Permalink.

  12. Eric replied:

    I’m in the process of adding a 5th drive to my 4 drive RAID5 array. My root partition is on the array, so everything’s still mounted. I haven’t seen any problems yet. Disk access is a lot slower than normal, but that makes sense because it’s rewriting the whole drive. It’s only 10% done, so we’ll see if it gets through the whole reshape without problems.

    January 21st, 2007 at 2:24 am. Permalink.

  13. Eric replied:

    It just finished adding the 5th drive. Everything still seems to be running okay.

    January 21st, 2007 at 6:57 am. Permalink.

  14. attic » Blog Archive » RAID5 grow replied:

    [...] Papildus informāciju par šo var smelties: scotgate.org RAID5 grow success [...]

    January 23rd, 2007 at 8:55 am. Permalink.

  15. krampo replied:

    Does it makes any difference to do fsck BEFORE growing to check whether everything is ok with fs?

    March 1st, 2007 at 12:49 pm. Permalink.

  16. krampo replied:

    Does it make any difference to do fsck BEFORE growing to check whether everything is ok with fs?

    March 1st, 2007 at 12:50 pm. Permalink.

  17. Ferg replied:

    Not sure. But it cannot hurt to ensure that the filesystem is good.

    March 1st, 2007 at 1:43 pm. Permalink.

  18. krampo replied:

    Just started to reshape :)

    [>....................] reshape = 1.5% (3676288/244195840) finish=459.6min speed=8719K/sec

    Will let you know tomorrow how it ends. I’m going to add 2 disks to existing 4 drive raid5, but I’m little superstitious and will do it one at a time. Thanks a lot for the post!

    March 1st, 2007 at 7:15 pm. Permalink.

  19. Ferg replied:

    Good luck!!!!

    March 1st, 2007 at 8:34 pm. Permalink.

  20. krampo replied:

    First grow went like a charm, now second is in progress, looks ok.

    March 2nd, 2007 at 10:59 am. Permalink.

  21. krampo replied:

    The second one didn’t go as smoothly as the first, but everything turned out ok. After the reshape had finished, I did fsck /dev/md0 – and it said that the file system is clean, so I executed resize2fs -p /dev/md0, which said that the filesystem actually isn’t clean and I should run fsck -f /dev/md0. I ran it with the force flag and then ran the resize2fs and that’s it – everything looks good to me. So I grew the RAID5 from 4 to 6 disks in 2 steps (took about 8-9 hours for each 250G 7200rpm disk to reshape, ~30min fsck, ~30min resize2fs). NICE!

    March 2nd, 2007 at 9:43 pm. Permalink.

  22. Tommi replied:

    Is it possible to speed up the build process? I have just created a RAID5 of 4x250G.

    March 23rd, 2007 at 11:25 pm. Permalink.

  23. Ferg replied:

    You can change the value that is in the proc filesystem for that raid device:

    e.g.

    echo 400000 >/proc/sys/dev/raid/speed_limit_max

    That will increase the maximum speed at which it will sync. Obviously if you are already running at the maximum, it will not help. Good luck.

    March 24th, 2007 at 9:53 am. Permalink.

  24. Tommi replied:

    What kind of speed should I get with a dual P3 733? At the moment it is going at 890K/sec.

    March 24th, 2007 at 11:21 am. Permalink.

  25. Pie Pants replied:

    Thanks heaps for this guide, you’re a lifesaver! Got my array reshaping right now, adding a 5th disk.

    md1 : active raid5 hda1[4] sdb1[3] hdg1[2] hde1[1] hdc1[0]
    234420096 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
    [>....................] reshape = 3.6% (2837120/78140032) finish=161.9min speed=7748K/sec
    bitmap: 0/150 pages [0KB], 256KB chunk

    April 4th, 2007 at 10:13 am. Permalink.

  26. itspudding.com » Linux as a fileserver…. replied:

    [...] Then came adding another 80gb drive to the array. A bit of Googling found this excellent howto on adding a spare drive to a software RAID 5 setup, extending it to the spare drive, and extending the filesystem to fit. About 2 hours later, it was all done. Couldn’t be easier. [...]

    April 4th, 2007 at 4:14 pm. Permalink.

  27. Richard replied:

    Would it be a similar step for adding drives to a linear RAID you think?

    April 7th, 2007 at 11:56 am. Permalink.

  28. Ferg replied:

    I don’t know, I’m afraid. Probably not. I saw your posts and replies on the LinuxRaid mailing list. A better source of knowledge than me!!!

    Cheers
    Ferg

    April 9th, 2007 at 3:19 pm. Permalink.

  29. SpeedyGonsales replied:

    To 3 x 500GB I added one disk (a second is waiting in line :-) ), reshaping took 9h 47min.
    Speed was between 12000 and 14000KB/s.
    It all depends on the size of the disks and the whole array:
    the bigger the size, the longer it takes.
    Thanks for the info on how to do it, the man pages on mdadm are…. manpages :-)

    April 16th, 2007 at 8:06 am. Permalink.

  30. Imagination « News from Cfleesia replied:

    [...] Expanding RAID? Sure can do, but the ‘reshaping’ can be quite slow =) RedHat.com, Ferg’s gaff. [...]

    May 3rd, 2007 at 1:08 pm. Permalink.

  31. Edward Murphy replied:

    looks like Ubuntu Feisty’s server kernel has the correct options to start re-shaping without re-compiling the kernel.

    root@tank:~# cat /boot/config-2.6.20-15-server |grep CONFIG_MD_RAID5_RESHAPE
    CONFIG_MD_RAID5_RESHAPE=y
    root@tank:~#

    May 12th, 2007 at 9:17 am. Permalink.

  32. Donald Teed replied:

    Thanks for the useful resource/howto.

    I have grown my existing raid 5 array at home from 3 to 4 disks, having several md partitions on each disk for /var, /usr, etc. I’m running Debian 4 with an array that was created several years ago.

    In my experience, running grow on a live system can lead to the file system being partially unavailable. For /home I could stop services using it, umount, fsck, resize2fs and move along. Doing this operation on /usr and /var partitions required going to init 1. Doing it on / caused me to have a system that had no access to commands init, reboot, halt, nor response to Ctrl+Alt+delete. I was at init 1 anyway so I used the power button and it rebooted fine.

    For a production system, I would not use mdadm --grow unless it was on a partition that is not part of the OS and I could turn off the services on it while these steps are taken. I’m sure in years to come this will improve.

    The Software Raid Howto page at the Linux Documentation Project is now very out of date so I expect this page will have more visitors.

    May 18th, 2007 at 11:01 am. Permalink.

  33. Ferg replied:

    Hi Donald,

    thanks for the comments. I only tried this on an unmounted partition, which was fairly easy due to it being my HOME partition.

    There’s a few other useful RAID resources. None more useful than the linux-Raid mailing list.

    http://vger.kernel.org/vger-lists.html#linux-raid

    It’s fairly low traffic. I would guess around about 40 posts a week (that is a very, very rough guess). There may be a digest version.

    May 18th, 2007 at 11:10 am. Permalink.

  34. Tommi replied:

    Hi,

    Something is wrong with my Debian based storage server. I have compiled a new kernel with the raid5 add-disk setting.

    A drive failed after my kernel upgrade. Kernel version 2.6.21.1, Debian 4.0 Etch.

    Still I get this error:
    # mdadm --add /dev/md0 /dev/hdk1
    mdadm: add new device failed for /dev/hdk1 as 3: Invalid argument

    I have a 3 disk array of which one drive has failed.

    May 23rd, 2007 at 1:48 pm. Permalink.

  35. Ferg replied:

    I can only guess that either your MD device or the SCSI device is wrong. If not try removing the failed device and re-adding the new device to the array.

    If upgrading the kernel has caused you grief, I would also suggest you go back to the last working kernel, until all is well again.

    Good luck!

    May 23rd, 2007 at 2:44 pm. Permalink.

  36. Tommi replied:

    I added the failed device back to array, worked just fine. All HD’s are connected to IDE controllers, Promise chipset.

    May 23rd, 2007 at 5:01 pm. Permalink.

  37. Ferg replied:

    Nice one!!

    May 24th, 2007 at 10:17 am. Permalink.

  38. Tommi replied:

    Could someone check these SMART outputs. I used webmin and smartmontool.

    Drive I can’t add to array
    http://pastebin.ca/507188

    Error:
    sudo mdadm --manage /dev/md0 --add /dev/hdk1
    mdadm: add new device failed for /dev/hdk1 as 3: Invalid argument

    Drive that drops from array at boot. http://pastebin.ca/507185

    May 24th, 2007 at 2:04 pm. Permalink.

  39. Bubba replied:

    Heya, thanks a bunch for this howto. I’m running a 6x 500g array (coming from 3×500 in one go!). The only thing I did differently was that I put LVM2 in between the array and the filesystem. This way I can steal some extra space from a (hardware) raid 1 set that’s also in the system (has root, swap, etc. on it but still 250G free space… wouldn’t want to waste it).

    Very neat that raid5 resizing now works with sw raids as well! – saved me a bundle on an hw controller there.

    May 24th, 2007 at 3:54 pm. Permalink.

  40. Ferg replied:

    No problem. Glad it was of some use! Have a look at the Linux-Raid mailing list to see just how well MDADM and Linux kernel software raid is coming on!

    May 24th, 2007 at 4:31 pm. Permalink.

  41. Mike replied:

    I’d been waiting for this functionality for years, and was very psyched when 2.6.17 came out. I just had occasion to make use of it for the first time, went from 4x320GB to 5. Left everything mounted and network services running, all worked okay even with continuous accesses to the array. RIP raidreconf.

    June 8th, 2007 at 2:13 pm. Permalink.

  42. Ferg replied:

    MDADM truly rocks for anything Linux software RAID related. Neil Brown deserves a medal for writing this.

    June 8th, 2007 at 2:54 pm. Permalink.

  43. Chris replied:

    I also have this strange error from mdadm … add new device failed. I copied the partition table with dd and added hdc2 to md2, and now I can’t add hdc3 to md3 ?!?!?! Very strange… Anyone have an idea?

    July 7th, 2007 at 12:12 pm. Permalink.

  44. Jeremy Leigh replied:

    Guys, thought this might help.
    You need to create a blank partition first, and then set the ‘raid’ flag on the drive. You can do this through gparted.
    Then add it with mdadm --add.

    July 23rd, 2007 at 11:48 am. Permalink.

  45. Jeremy Leigh replied:

    Oooh guys. Careful when adding drives like this. I just lost 3 hours trying to find what was going on.
    When you resize your raid, make sure you edit your /etc/mdadm.conf file to update the number of devices in your raid configuration!
    Otherwise you reboot and poof, ur raid is gone :(
    At least this happened to me on Fedora Core 7.
    Stupid thing fails on boot and goes to this “Repair filesystem” thingy because it can’t mount /dev/”insert your lvm or fs here”.
    The annoying thing is, the Repair Filesystem mounts your root read-only!
    To get your / writeable:
    mount -w -o remount /

    Then edit your /etc/mdadm.conf to the new number of drives in your configuration.
    Or alternatively, edit your /etc/fstab and take out the line that mounts your raid (then you can fix it later).
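
    Rather than editing the ARRAY line by hand, you can also regenerate it with something like:

    mdadm --detail --scan

    and paste the output over the old line in /etc/mdadm.conf.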

    July 25th, 2007 at 12:22 pm. Permalink.

  46. Ferg replied:

    As long as each member of the array has the partition type set to FD – Raid autodetect, then you should not have to update mdadm.conf. The MD driver should autodetect, assemble and start each array.
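
    For example, with fdisk that is roughly (assuming the new disc is /dev/sdf and the partition already exists):

    fdisk /dev/sdf
    t    (change the partition type – fdisk asks which partition if there is more than one)
    fd   (Linux raid autodetect)
    w    (write the table and exit)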

    You mention that the ROOT was mounted read only. This cannot have been a RAID5 device, as to the best of my knowledge you can only boot from a RAID1 array. Is that true?

    Cheers

    July 25th, 2007 at 1:30 pm. Permalink.

  47. Jeremy Leigh replied:

    I am not booting from the raid. I have another drive in my pc, separate from the raid, that runs the Linux install.
    Can you explain a little more about this FD partition type?
    I have my raid drives formatted as a blank partition, and then I have set the flags ‘boot’ and ‘raid’ on each partition. This is also in fact how DiskDruid creates ‘software raid’ partitions.
    And yes, mdadm auto-detects the drives, but only up until the number of devices that is specified in /etc/mdadm.conf

    DEVICE partitions
    MAILADDR root
    ARRAY /dev/md0 level=raid5 num-devices=6 uuid=46e9ab9d:c365cd2c:e08c1bc3:73c070ca

    Maybe, if the ‘num-devices=6’ was removed entirely, then it would detect the drives fine. I will try that.
    It is also possible to specify which drives you want in your array specifically i.e.

    ARRAY /dev/md0 level=raid5 num-devices=6 uuid=46e9ab9d:c365cd2c:e08c1bc3:73c070ca devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1

    if you specify like this, then the config will ignore any drives not in the devices list.

    July 29th, 2007 at 3:44 am. Permalink.

  48. Daniel Albuschat replied:

    I’ve just successfully completed the RAID growth of my 3x500GB RAID5. It’s now 4x500GB, and it took something about a full day, including filesystem-check and resize.

    August 8th, 2007 at 6:33 pm. Permalink.

  49. Som replied:

    I’m actually doing the same thing now with Ubuntu 7.04 — expanding a 3x500GB RAID5 to 4x500GB.

    The grow took about 12 hours. I ran fsck.ext3 with no issue, but when I went to run resize2fs it said, “Please run ‘e2fsck -f /dev/md0′ first”. Did you have to do this, Daniel?

    That’s where I am now… the first time I ran it my ssh connection was dropped. Wasn’t listed as a process when I relogged in, so I ran it again. We’ll see how it goes…

    - Som

    August 23rd, 2007 at 1:17 am. Permalink.

  50. Som replied:

    Well, I wasn’t able to run e2fsck from over ssh — the connection kept dropping.

    I went home and ran it directly on the box. Everything worked great.

    So, as a recap:

    Ubuntu 7.04 (Feisty Fawn)

    3x500GB grown to 4x500GB

    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=4
    (~12 hours)
    e2fsck -f /dev/md0
    (~15 minutes)
    resize2fs /dev/md0
    (~10 minutes)

    - Som

    August 23rd, 2007 at 6:59 pm. Permalink.

  51. Ferg replied:

    Hi, you might want to investigate Screen if you have a dodgy connection. You can keep a persistent session going with it. If your connection drops, you can start screen again and pick up the session where it dropped.
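
    Something along these lines:

    screen -S reshape    (start a named session, then run the long job inside it)
    e2fsck -f /dev/md0
    (Ctrl-a d detaches; screen -r reshape picks the session back up after you reconnect)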

    August 23rd, 2007 at 7:10 pm. Permalink.

  52. Som replied:

    I didn’t have putty set to send null packets to maintain connections. Just changed it… will see if it helps. I’ll check out screen, though… thanks!

    - Som

    August 31st, 2007 at 9:05 pm. Permalink.

  53. growing raid5 arrays is a lot of work replied:

    [...] as you maybe know, I do use linux software raid to store all my data. now i bought a new 500gb western digital drive and added it to my existing raid5 array which is made of 3 drives like the new one (information on growing came from this site). I already thought that this might be a longer task, but I didn’t think it would take this long; md3 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0] 976767872 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU] [>………………..] reshape = 0.4% (2237184/488383936) finish=977.7min speed=8286K/sec [...]

    September 9th, 2007 at 12:20 pm. Permalink.

  54. Christophe replied:

    hmm ok, I’m getting a Hardware RAID Controller that can grow the RAID5 Array. Am I right with this:
    I’ll need to use lvm in order to grow the partitions and then expand the filesystem?

    September 20th, 2007 at 12:09 am. Permalink.

  55. Ferg replied:

    You may be able to grow the RAID5 array with your hardware RAID controller, but I know nothing about this. This post was about using the Software MD RAID driver in the Linux kernel, and using the RAID utility MDADM to grow the array.

    September 20th, 2007 at 8:17 am. Permalink.

  56. Expandir RAID5 por software con mdadm | Greentown replied:

    [...] Post de un guiri en su blog con bastantes comentarios interesantes. Hilo en Ubuntu Forums sobre el tema. [...]

    September 27th, 2007 at 12:39 pm. Permalink.

  57. Willow of Oz replied:

    I have just grown a raid 6 array from 6 to 8 disks on an alpha of Ubuntu 7.10, no real problems.
    The kernel is an ubuntued 2.6.22.
    Pretty much as listed here, I added the two drives as spares then grew the raid onto them.
    Also updated the mdadm.conf since I specify the array by devices. I’ve removed the two new drives and re-added them too. Interestingly, although I add them both at once, it rebuilt onto one disk then rebuilt onto the other. I can only assume that this is a holdover from raid 5.
    The LVM had to be resized on top and the JFS filesystem unmounted and then remounted/resized.
    Rebuilding was on the order of 2 hours @ 65 meg / sec with no tweaking.

    September 29th, 2007 at 5:39 pm. Permalink.

  58. Jon H replied:

    Spares, Spares every wherez!

    Quick Q:

    Does a spare need to be configured to be EXACTLY like the other drives in the array? I want 1 drive to serve as a spare for 2 arrays: one is a ~300GB raid 1, the other is a 500GB raid 5… can I just partition the whole drive and it’ll use what it needs? Thanks!

    October 4th, 2007 at 4:00 pm. Permalink.

  59. Ferg replied:

    I’ve added a 147Gb partition to a RAID10 array with 70Gb partition members. This worked fine. However, I’m not sure for RAID5. I would think it would work, but you might be best testing this before relying on it.

    October 4th, 2007 at 6:20 pm. Permalink.

  60. Jon H replied:

    Answered my own Q above. It doesn’t need to be the exact size. You are warned if it is bigger (it won’t allow smaller) and the unused space is wasted, until you can ‘grow’ the array to use it.

    BUT Q2 for you all.

    Can you remove a drive from an active array? Say you have a 4 drive raid 5 array – it gives you lots of space. You decide you won’t need all that space and want a HDD back after all the drives have become active. Is it possible to tell it to stop using the drive? Maybe shrink the size of each partition and remove a device or something?

    October 9th, 2007 at 11:19 am. Permalink.

  61. Ferg replied:

    I don’t think that will work. You’ve two questions. Can you resize the filesystem you’re using, and can MDADM and Grow shrink a raid array. I don’t think either is possible, but I stand to be corrected. Try the LinuxRaid mailing list.

    October 9th, 2007 at 11:32 am. Permalink.

  62. Jon H replied:

    Well you can shrink *some* file systems – like ext3, which is the one I am using. Although I believe you can’t shrink the likes of xfs.

    October 9th, 2007 at 11:49 am. Permalink.

  63. jon H replied:

    Hi, me again!

    Question for you and your viewer/readers..

    I have noticed that 1 of the 2 drives in my mirrored array is always reporting to be a couple of degrees hotter than the other.

    Is this because (possibly) it is the primary drive and is used for all the reads as well as writes?

    Any suggestions?

    November 5th, 2007 at 5:37 pm. Permalink.

  64. SimpleSimon replied:

    Well… I’ve got the itch to add another 500gig drive to my three drive RAID5 array. Problem is that I have no place to back up the ~1TB of data on the RAID. How risky is this process? This is a fresh Debian lenny installation.

    November 19th, 2007 at 10:16 am. Permalink.

  65. Stef replied:

    Thanks for the help, I am adding 2x750GB one by one to my 3x750GB RAID 5 device. It will take more than 24 hours minimum but it works!!!

    November 19th, 2007 at 10:20 am. Permalink.

  66. paaki replied:

    Thank you very much for this. I’m currently putting together a 7*500GB NAS with debian and RAID5. This information came in very handy because five of those disks are formatted as NTFS. Cheers, mate!

    November 25th, 2007 at 12:54 pm. Permalink.

  67. Ferg replied:

    No Problem. Glad it was of use.

    Cheers

    November 25th, 2007 at 5:02 pm. Permalink.

  68. SMINO replied:

    Anyone know what the Software raid 5 limit is size wise and disk wise?
    Ie. Is there a 5TB Limit, 2TB limit?
    Do you need a certain amount of memory to go past 2TB… After X Drives the Array becomes unstable…

    November 30th, 2007 at 9:09 pm. Permalink.

  69. Ian replied:

    Thanks for this! I am about 21% complete on rebuilding from 4 to 6 500Gb SATAII drives in a single step, and your article has been brilliant.

    I’m amazed at how resilient this is – although I probably wouldn’t recommend it, I was watching a movie streamed from this raid array, copying data to it *and* growing at the same time.

    Awesome.

    December 1st, 2007 at 12:45 am. Permalink.

  70. Harry Moyes replied:

    This array originally built using Debian Etch
    OS is not on the array.
    Hardware is/will be 10 half terabyte Maxtor sataII discs
    Configured as raid6
    Mounted two in icybox 5 disc sata disc racks.
    Connected by three no-name sii quad port sata drive controllers.
    5 drives actually configured. Number 6 installed.
    Could not grow the number of devices in the array using Etch (kernel too old)
    OS deleted and replaced by Ubuntu 7.04 (desktop actually)
    Installed mdadm and:

    root@Beijun:/# uname -a
    Linux Beijun 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007 i686 GNU/Linux

    root@Beijun:~# mdadm --assemble /dev/md0
    mdadm: failed to add /dev/sde1 to /dev/md0: Invalid argument
    mdadm: /dev/md0 has been started with 5 drives.

    root@Beijun:/# fdisk /dev/sde

    The number of cylinders for this disk is set to 60801.
    There is nothing wrong with that, but this is larger than 1024,
    and could in certain setups cause problems with:
    1) software that runs at boot time (e.g., old versions of LILO)
    2) booting and partitioning software from other OSs
    (e.g., DOS FDISK, OS/2 FDISK)

    Command (m for help): p

    Disk /dev/sde: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000

    Device Boot Start End Blocks Id System
    /dev/sde1 1 60801 488384001 83 Linux

    Command (m for help): q

    root@Beijun:/# mdadm --add /dev/md0 /dev/sde
    mdadm: added /dev/sde
    root@Beijun:/# mdadm --grow /dev/md0 --raid-devices=6
    mdadm: Need to backup 768K of critical section..
    mdadm: … critical section passed.
    root@Beijun:/#
    /dev/md0: 1397.28GiB raid6 6 devices, 0 spares. Use mdadm --detail for more detail.
    root@Beijun:/# mdadm --query --detail /dev/md0
    /dev/md0:
    Version : 00.91.03
    Creation Time : Tue Nov 20 00:26:12 2007
    Raid Level : raid6
    Array Size : 1465159488 (1397.29 GiB 1500.32 GB)
    Used Dev Size : 488386496 (465.76 GiB 500.11 GB)
    Raid Devices : 6
    Total Devices : 6
    Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Dec 13 23:01:44 2007
    State : clean, recovering
    Active Devices : 6
    Working Devices : 6
    Failed Devices : 0
    Spare Devices : 0

    Chunk Size : 64K

    Reshape Status : 2% complete
    Delta Devices : 1, (5->6)

    UUID : a562da4a:8beaefd3:ed4e5664:d2029d2e
    Events : 0.7860

    Number Major Minor RaidDevice State
    0 8 96 0 active sync /dev/sdg
    1 8 32 1 active sync /dev/sdc
    2 8 16 2 active sync /dev/sdb
    3 8 48 3 active sync /dev/sdd
    4 8 80 4 active sync /dev/sdf
    5 8 64 5 active sync /dev/sde

    Absolutely bl**dy marvelous software.

    Thanks for the hints, the man page lacks somewhat in detail.

    Cheers Harry

    December 14th, 2007 at 12:18 am. Permalink.

  71. Stefan replied:

    hi,

    it’s really a cool thing, but how do you connect the drives?
    I’ve looked for some cheap sata hba’s but I couldn’t find any except the Promise SATA300 TX4 – but it doesn’t let you read out the s.m.a.r.t. info ;(

    greets Stefan

    January 22nd, 2008 at 12:17 am. Permalink.

  72. Ed H replied:

    I made a mistake and ran:
    mdadm -add /dev/md0 /dev/sdd
    instead of :
    mdadm -add /dev/md0 /dev/sdd1

    then I ran the grow command. Is this going to be a problem?

    January 26th, 2008 at 5:26 am. Permalink.

  73. Ferg replied:

    I once did the same and created an entire array from the raw block devices rather than the partitions. It did not cause any problems for me. To be honest I’m not too sure what the real ramifications are. Why not swap out the disc when you have some time, repartition the drive and re-add it as a partition instead?

    January 26th, 2008 at 11:22 am. Permalink.

  74. stile replied:

    Ah, just got done adding FOUR 500 gig drives to bring my total to EIGHT 500 gig drives. Added them one by one without having them mounted. Had to do e2fsck -f before I resized it. The last drive took over 40 hours to add and another 2 hours to resize. Now have 3.36 TB space avail.

    Thanks Ferg!

    -stile

    January 31st, 2008 at 11:11 pm. Permalink.

  75. Ferg replied:

    happy to be of some help. Proper kudos goes to Neil Brown the MDADM developer though! Check out the Linux Raid mailing list for some even better cool stuff.

    February 1st, 2008 at 12:43 am. Permalink.

  76. Will replied:

    Thanks for the great information!

    Question: anyone know what will happen if power is lost or a drive fails during the reshape part of “grow”?

    February 8th, 2008 at 12:49 am. Permalink.

  77. Linux Support Company replied:

    Nice info, thanks. All we need next is an update to mdadm’s handling of bad blocks.

    February 13th, 2008 at 6:42 pm. Permalink.

  78. RAID-5 Online-Expand « senseless brainwanking replied:

    [...] Eine gute Anleitung findet sich auch auf der Seite Ferg’s Gaff. [...]

    February 24th, 2008 at 3:20 pm. Permalink.

  79. Dean replied:

    I would like to create a raid1 volume from an existing single volume that isn’t a lvm volume…

    how can I add the existing disk to be in a lvm disk group with another disk to be a raid1. Basically I’m looking to create a mirror to migrate data to another location.

    February 25th, 2008 at 2:38 am. Permalink.

  80. Blazestorm replied:

    I am a complete noob with Linux… but am using Software Raid5 over Windows Server 2003 just because of how slow and limited it was.

    Took 22 hours to expand my 3 x 750 array to 4 drives… check the file system and expand it…

    Even though this guide was super-simple, it was all I needed. Although I had to do a few things differently (Ubuntu) it worked out in the end.

    Thanks, I’ll probably come back to this site when I have to expand with more 750 drives… I’ve already forgotten the commands :o

    February 28th, 2008 at 1:21 am. Permalink.

  81. Ferg replied:

    Great. Glad it was of use!

    Have a look at the newly revamped Linux RAID wiki for more updated info
    http://linux-raid.osdl.org/index.php/Main_Page

    February 28th, 2008 at 10:03 am. Permalink.

  82. Tout sur le RAID « Libitt’s Weblog replied:

    [...] Growing a RAID 5 array Blog post which describes using mdadm to grow a RAID 5 array to include more disk space. [...]

    March 31st, 2008 at 4:53 pm. Permalink.

  83. Samuel replied:

    Hey guys, I grew my RAID5 from 3x300gb to 4x300gb. This drive (/dev/md2) was mounted as root filesystem, with a small raid0 for the /boot partition and lilo as bootloader. anyway, it won’t boot anymore after the grow…
    Lilo seems to load from the /boot partition, but after some time during the boot process it just says:

    md: md2 stopped.

    Do I have to tell lilo somehow that the md2 disk is bigger now? I think that is the problem, but i don’t know how to do it…

    thanks for any suggestions!
    samuel

    April 16th, 2008 at 9:31 am. Permalink.

  84. Jeremy Leigh replied:

    Hi Guys, I have just grown my raid 5 from 7 to 8 500Gb drives.
    I am now running into this problem.
    When I try to run fsck /dev/md0

    fsck.ext2: Device or resource busy while trying to open /dev/md0
    Filesystem mounted or opened exclusively by another program?

    Or when I try to run resize2fs /dev/md0

    resize2fs: Device or resource busy while trying to open /dev/md0
    Couldn’t find valid filesystem superblock.

    Cat /proc/mdstat outputs:

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb1[0] sdi1[7] sdg1[6] sdh1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1]
    3418686208 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUUUUUUU]

    unused devices:

    However, I can still access and mount my filesystem – I just cannot grow it, which I find very strange.
    I have tried init 1 to go to single user mode, and I have made sure the file system is unmounted, but this ‘busy’ error still occurs.

    Any ideas? I’m kinda stuck. Google is no help…

    June 2nd, 2008 at 2:55 pm. Permalink.

  85. Ferg replied:

    Hi Jeremy, I’m afraid I’ve no idea. I saw your post on the Linux Raid list. I’m sure you’ll get a decent response on there. BTW I replied to your email address but it bounced. Is it not asdf@asdf.com? ;-)

    June 2nd, 2008 at 8:59 pm. Permalink.

  86. Jeremy Leigh replied:

    hehe I feel a bit silly.
    Since I am using LVM, all I had to do was:
    1. pvresize.

    2. Using system-config-lvm, just increase to use the new space :)

    The reason the ‘busy’ message kept appearing is because LVM has the device locked.

    June 6th, 2008 at 1:23 pm. Permalink.

  87. ferg replied:

    I saw the thread. Easy mistake to make. I’m sure I would have done so!

    June 6th, 2008 at 1:33 pm. Permalink.

  88. Daze replied:

    I have 2x500GB in a RAID 0 array. I have a spare 500GB lying around. Is there a way I can add the spare disk and convert it to RAID 5?

    How would I do it?

    Daze

    June 11th, 2008 at 2:46 pm. Permalink.

  89. Keyzer Suze replied:

    I have just expanded a 4 disk raid5 to 6 disk raid5 whilst being online, took roughly 12 hours, worked smoothly

    July 27th, 2008 at 10:12 pm. Permalink.

  90. Cleary replied:

    Daze,

    It depends on how much data you have on your RAID0 and whether it’ll fit on one 500GB disk:
    - add your third disk
    - copy all the raid0 data if it’ll fit to the third disk
    - break the raid0
    - create a mirror/raid1 with the third disk
    - then create a raid5 with only 2 disks
    - then add the last disk
    - then resize the filesystem to use all of the new raid5.

    See this reference;
    http://scott.wallace.sh/node/1521

    September 2nd, 2008 at 2:04 pm. Permalink.

  91. Jennifer Miller replied:

    Hi there, I am using a 4x500GB Raid5 and would like to grow it using the described method. *BUT* I am using dm-crypt and no LVM.
    Q: Is dm-crypt transparent enough to make the growing possible? Any recommendations or experiences yet?

    September 11th, 2008 at 7:33 pm. Permalink.

  92. František Soukup replied:

    Hi, thanks to this howto, I’ve started to reshape my array :)
    md8 : active raid5 sdd8[3] sda8[0] sdc8[2] sdb8[1]
    608702464 blocks super 0.91 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    [>....................] reshape = 2.8% (8538112/304351232) finish=344.6min speed=14301K/sec

    Hope, everything will be OK. :)

    October 15th, 2008 at 1:41 am. Permalink.

  93. Jon H replied:

    Hey,

    I am about to expand my existing 4x500GB raid5 array. It has been a year since I created it and I can’t remember exactly what I did. Do you HAVE to create the Linux Raid partition for mdadm? I have vague memories of that not being necessary? (might have been a pure lvm system I read that for)

    Also I plan to add 2 new 750GB drives, the plan being that as the existing drives are replaced they will be bigger, eventually all of them will be at least 750GB, and I can grow the array to at least the size of the smallest. Does this sound ok?

    Also I had the idea of, before I add to the array, decommissioning one of the existing 500GB drives and adding a 750 in its place, checking it works, and then adding the 500GB back as a spare to keep wear down. Does this sound crazy or sane? :)

    October 17th, 2008 at 10:47 am. Permalink.

  94. Ferg replied:

    Hi, You do have to create the partition in order to add it to the array. You can add plain block devices, but that becomes less flexible in future.

    Check out the Linux raid Wiki
    http://linux-raid.osdl.org/index.php/Main_Page

    October 17th, 2008 at 10:52 am. Permalink.

  95. Jaytag Computer - News » Blog Archive » Creating a RAID 5 array in Ubuntu with MDADM - Walkthrough replied:

    [...] Growing RAID 5 volumes with mdadm [...]

    October 20th, 2008 at 10:51 pm. Permalink.

  96. Jon H replied:

    Anyone here done much with Consistency Checks and software raid?

    October 22nd, 2008 at 3:30 pm. Permalink.

  97. Ferg replied:

    I regularly do the following:

    echo check >> /sys/block/md1/md/sync_action

    That’s about all.
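
    You can watch it run in /proc/mdstat, and afterwards something like this tells you whether any mismatches were found:

    cat /sys/block/md1/md/mismatch_cnt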

    October 22nd, 2008 at 3:37 pm. Permalink.

  98. Jon H replied:

    That is what I have just read… and then you can do a repair. I am surprised that mdadm has no feature to automate this. Also, how regularly is regularly?

    October 22nd, 2008 at 3:46 pm. Permalink.

  99. Mr.Gosh replied:

    Thanks for the info!
    I’m currently growing ;)

    md0 : active raid5 sdd1[4] sda1[0] sdc1[3] sdb1[1]
    976767488 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
    [>....................] reshape = 0.9% (4747520/488383744) finish=1071.8min speed=7517K/sec
    bitmap: 1/466 pages [4KB], 512KB chunk

    unused devices:

    October 27th, 2008 at 12:57 pm. Permalink.

  100. Zion ist größer geworden : Marcs Weblog replied:

    [...] der Platten und GBs mit 4 – 10 Stunden rechnen, bis das sog. “Reshape” fertig ist. Ferg hat dafür einen guten Artikel [...]

    November 7th, 2008 at 5:21 pm. Permalink.

  101. Michael Shigorin replied:

    Returning to this article, thanks for nice info.

    Just in case — those running Linux software RAID5 on partitioned disks might badly need understanding of stripe alignment: http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux

    November 17th, 2008 at 8:16 pm. Permalink.

  102. quazeye replied:

    Jon H-

    Did your test from Oct 17th work? I was thinking about doing something similar. It seems like it should work. The only thing that I question is how happy Linux RAID will be when I add a partition that is bigger than the existing partitions (like adding your 750GB).

    According to the Linux RAID Wiki I should be able to size a partition that has the “amount of space [I] will eventually use on all…disks”. But I want to stop short of growing the RAID and use drives in that state until I am ready to pull the smaller disks (once all the big disks are procured over time), then grow the RAID at that time. I am aware that the RAID would use only a 500GB portion of the 750GB (hypothetically). But the intent is to be ready for future expansions and ever-increasing drive sizes.

    Am I off my rocker? Are we playing with fire?

    December 18th, 2008 at 10:19 pm. Permalink.

  103. Jon H replied:

    CHAPS.

    Just a quick Q: I just did a mdadm --detail md4 and saw a state of ‘active’, not ‘clean’ as per my other arrays. Any clues what this means?

    January 13th, 2009 at 7:22 pm. Permalink.

  104. Jon H replied:

    -quazeye-

    I had no problems adding the 750GBs. I just created a linux raid partition on them that is the size of the WHOLE disk, and set the number of raid devices. It then just expanded to include them. I believe that when I have replaced the original 4 500GBs I can set the size to max and it will grow to the new smallest size (750GB). I am closer to doing this than I imagined I would be 18 months down the line, but hey.

    January 13th, 2009 at 7:25 pm. Permalink.

  105. lilrabbit129 replied:

    Thanks to this awesome entry, I’m currently adding a 4th 750GB hd to my RAID5. With the age of the system and the layout of the drives, it says it’s going to take over a week to resync.

    I’m currently at 25.9% and its been running since Sunday the 25th.

    January 27th, 2009 at 10:01 pm. Permalink.

  106. chad replied:

    Hey, I had a question about the growth speed. I added a 1.5TB drive to a raid6, only I didn’t change /proc/sys/dev/raid/speed_limit_min or speed_limit_max before I issued the grow. Does it change speeds dynamically? I have about 28 hours left if not (12 hours into it)… :(

    April 3rd, 2009 at 8:10 pm. Permalink.

  107. External USB RAID 5 array using mdadm - jacoblarson.com « replied:

    [...] Update: I just added another two 1TB drives, for a total of five!  This article about growing RAID 5 arrays was very helpful. [...]

    April 29th, 2009 at 3:51 am. Permalink.

  108. Jonas replied:

    Growing a raid5(6tb->7.5tb):
    md_d0 : active raid5 sdg1[5] sdb1[0] sdf1[4] sde1[3] sdd1[2] sdc1[1]
    5860543488 blocks super 0.91 level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]
    [====>................] reshape = 22.0% (322644992/1465135872) finish=1875.4min speed=10152K/sec

    May 14th, 2009 at 6:49 am. Permalink.

  109. Jon H replied:

    Hey, anyone got any pointers on how big to make the raid partitions? I have a 6 drive array made up of 4×500 and 2×750, in preparation for all drives being grown when I have phased out all the 500GBs – which is what I am about to do: switch out a 500GB with a 1TB. But I wonder how large the raid partition should be? I read that all drives vary slightly in block sizes, so I should smooth that over?

    August 16th, 2009 at 10:23 am. Permalink.

  110. mehoo replied:

    Jonas, I am going from 5 TB drives to 6 and am very hesitant; it will be maxing out my space on this mb and hopefully will help with my editing space.

    At Jon H: that doesn’t look like a raid 5 set up. I was told you are limited by the smallest partition in raid 5 and you don’t want to mix different ones.
    Hence why my coming server rack is probably going to go 2tb drives right off, so as they come down and I expand I can fill out to the 10 tb mirrored raid I need. Only problem is where to buy WD Black 2tb drives; they seem to only be coming in green where I shop.

    August 20th, 2009 at 5:59 am. Permalink.

  111. Janiek replied:

    Don’t forget to modify mdadm.conf, or the array won’t be detected on the next boot.

    October 18th, 2009 at 1:16 pm. Permalink.

  112. JamieRF » Software RAID-5: using mdadm in Ubuntu 9.10 replied:

    [...] Growing a RAID5 array – MDADM [...]

    November 4th, 2009 at 8:54 pm. Permalink.

  113. Fermulator replied:

    FYI: Updated Linux RAID Wiki Link: http://www.linuxfoundation.org/collaborate/workgroups/linux-raid

    January 6th, 2010 at 12:47 am. Permalink.

  114. stephen replied:

    Looks cool. Instead of growing my one RAID-10 array, I popped in another 4 disks and created a second RAID-10 array, then added that array to the volume group.

    I think this is another way of getting at the same issue, but my new array came online right away.

    February 17th, 2010 at 7:54 pm. Permalink.

  115. rm_you replied:

    Just expanded my array from 5x1Tb to 6x1Tb, and it took around 30 hours to finish growing the array. Then the fsck took quite a bit of time (I decided to sleep sometime after 20 minutes or so of watching stage 1… BTW at least on ubuntu, it has to be “fsck -f /dev/md0”, or else it will just say “clean!” and refuse to resize). Finally, the grow took about 3 seconds. :P

    This went more smoothly than I could possibly have imagined… At every stage I was anticipating disaster, but I was pleasantly surprised. Not one error all the way through! Thanks for the simple walkthrough!

    February 18th, 2010 at 6:12 pm. Permalink.

  116. Mike replied:

    Linux Raid Wiki has moved to http://raid.wiki.kernel.org/

    March 15th, 2010 at 1:45 am. Permalink.

  117. RAID-5 software sous GNU/Linux « desgrange.net replied:

    [...] Si vous avez déjà un volume RAID en place et que vous voulez l’étendre en rajoutant un disque, apparemment c’est possible : http://scotgate.org/?p=107. [...]

    April 28th, 2010 at 7:14 pm. Permalink.

  118. Andey replied:

    Thanks man, I keep coming back to this page every time I grow my raid… haha, I never learn.

    June 5th, 2010 at 3:51 pm. Permalink.

  119. StealthNerd.net » Blog Archive » Growing linux RAID replied:

    [...] gleaned most of this from: Linux RAID Wiki Ferg’s Gaff /var/log/sysnote Tags: initramfs, linux, mdadm, raid | June 26th, 2010 | Posted in Computers, [...]

    June 26th, 2010 at 11:28 pm. Permalink.

  120. phantomd replied:

    Version : 0.91
    Creation Time : Wed Jan 7 12:50:04 2009
    Raid Level : raid5
    Array Size : 7759393792 (7399.93 GiB 7945.62 GB)
    Used Dev Size : 969924224 (924.99 GiB 993.20 GB)
    Raid Devices : 10
    Total Devices : 10
    Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Aug 2 14:44:06 2010
    State : clean, recovering
    Active Devices : 10
    Working Devices : 10
    Failed Devices : 0
    Spare Devices : 0

    Layout : left-symmetric
    Chunk Size : 4K

    Reshape Status : 46% complete
    Delta Devices : 1, (9->10)

    UUID : 03c2f344:f3abb8ba:cffca458:4a18b991
    Events : 0.4210172

    Number Major Minor RaidDevice State
    0 8 1 0 active sync /dev/sda1
    1 8 17 1 active sync /dev/sdb1
    2 8 33 2 active sync /dev/sdc1
    3 8 49 3 active sync /dev/sdd1
    4 8 81 4 active sync /dev/sdf1
    5 8 97 5 active sync /dev/sdg1
    6 8 113 6 active sync /dev/sdh1
    7 8 129 7 active sync /dev/sdi1
    8 8 145 8 active sync /dev/sdj1
    9 8 161 9 active sync /dev/sdk1

    August 2nd, 2010 at 1:44 pm. Permalink.

  121. unkle_george replied:

    To resize ext4 on Ubuntu 10.04:
    sudo resize2fs /dev/mapper/vgdata-lvdata

    August 11th, 2010 at 12:46 am. Permalink.

  122. sujith replied:

    I have tried to add a 4th device. I am using the xfs file system on the raid. Should I use xfs_growfs after the reshape is finished? Is there any risk doing it without unmounting? Is it possible to access data while the reshape is in progress?

    August 28th, 2010 at 10:38 am. Permalink.

  123. Creating a RAID 5 array in Ubuntu with MDADM – Walkthrough | Jaytag Computer replied:

    [...] Growing RAID 5 volumes with mdadm This entry was posted in RAID, mdadm, ubuntu and tagged mdadm, RAID, ubuntu. Bookmark the permalink. Fixing a broken mdadm array – failed drive → [...]

    November 24th, 2010 at 8:04 pm. Permalink.

  124. linux下使用mdadm进行软raid升级 | 风叶 replied:

    [...] 参考:http://scotgate.org/2006/07/03/growing-a-raid5-array-mdadm/ [...]

    January 31st, 2011 at 6:01 am. Permalink.

  125. Zoltan replied:

    Hi everyone,

    having some trouble with my raid6 array. After growing it (as read above) it started to behave strangely, became read-only, and after a reboot it won’t get assembled any more, and the configuration seems to be pretty messed up.

    I am a dummy enduser, so I hope this is the right place to ask for help, if not: would you please be so kind to redirect me? (And also sorry for the spam…)

    This is all the data I have:

    Drives:
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdf: 500.1 GB, 500107862016 bytes (for OS only)
    Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdl: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
    Disk /dev/sdm: 1000.2 GB, 1000204886016 bytes

    Info /proc/mdstat tells me:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[6](S) sdi[2](S) sdd[7](S) sdc[9](S) sdg[0](S) sdh[4](S) sdk[3](S) sda[1](S) sdj[5](S)
    8790862464 blocks

    unused devices:

    This is in the /etc/mdadm/mdadm.conf:

    ARRAY /dev/md0 level=raid6 num-devices=9 UUID=8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    spares=1
    MAILADDR root

    Run --examine on all the drives:
    root@tank:~/mdstuff# mdadm --examine /dev/sd[a-n]
    /dev/sda:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:35:17 2010
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 4
    Spare Devices : 0
    Checksum : 2401fc35 – correct
    Events : 2188228

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 1 8 144 1 active sync /dev/sdj

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 0 0 5 faulty removed
    6 6 0 0 6 faulty removed
    7 7 0 0 7 faulty removed
    8 8 0 0 8 faulty removed
    /dev/sdb:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Tue Jul 27 19:42:00 2010
    State : clean
    Active Devices : 8
    Working Devices : 9
    Failed Devices : 1
    Spare Devices : 1
    Checksum : 23c52dfc – correct
    Events : 1299054

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 6 8 0 6 active sync /dev/sda

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 8 16 5 active sync /dev/sdb
    6 6 8 0 6 active sync /dev/sda
    7 7 8 48 7 active sync /dev/sdd
    8 8 0 0 8 faulty removed
    9 9 8 32 9 spare /dev/sdc
    /dev/sdc:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Tue Jul 27 19:42:00 2010
    State : clean
    Active Devices : 8
    Working Devices : 9
    Failed Devices : 1
    Spare Devices : 1
    Checksum : 23c52e1c – correct
    Events : 1299054

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 9 8 32 9 spare /dev/sdc

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 8 16 5 active sync /dev/sdb
    6 6 8 0 6 active sync /dev/sda
    7 7 8 48 7 active sync /dev/sdd
    8 8 0 0 8 faulty removed
    9 9 8 32 9 spare /dev/sdc
    /dev/sdd:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:33:50 2010
    State : clean
    Active Devices : 7
    Working Devices : 7
    Failed Devices : 2
    Spare Devices : 0
    Checksum : 2401fb4e – correct
    Events : 2188224

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 7 8 48 7 active sync /dev/sdd

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 8 16 5 active sync /dev/sdb
    6 6 0 0 6 faulty removed
    7 7 8 48 7 active sync /dev/sdd
    8 8 0 0 8 faulty removed
    mdadm: No md superblock detected on /dev/sde.
    mdadm: No md superblock detected on /dev/sdf.
    /dev/sdg:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:35:17 2010
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 4
    Spare Devices : 0
    Checksum : 2401fbf3 – correct
    Events : 2188228

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 0 8 80 0 active sync /dev/sdf

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 0 0 5 faulty removed
    6 6 0 0 6 faulty removed
    7 7 0 0 7 faulty removed
    8 8 0 0 8 faulty removed
    /dev/sdh:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:35:17 2010
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 4
    Spare Devices : 0
    Checksum : 2401fc0b – correct
    Events : 2188228

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 4 8 96 4 active sync /dev/sdg

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 0 0 5 faulty removed
    6 6 0 0 6 faulty removed
    7 7 0 0 7 faulty removed
    8 8 0 0 8 faulty removed
    /dev/sdi:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:35:17 2010
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 4
    Spare Devices : 0
    Checksum : 2401fc17 – correct
    Events : 2188228

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 2 8 112 2 active sync /dev/sdh

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 0 0 5 faulty removed
    6 6 0 0 6 faulty removed
    7 7 0 0 7 faulty removed
    8 8 0 0 8 faulty removed
    /dev/sdj:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:33:50 2010
    State : clean
    Active Devices : 7
    Working Devices : 7
    Failed Devices : 2
    Spare Devices : 0
    Checksum : 2401fb2a – correct
    Events : 2188224

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 5 8 16 5 active sync /dev/sdb

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 8 16 5 active sync /dev/sdb
    6 6 0 0 6 faulty removed
    7 7 8 48 7 active sync /dev/sdd
    8 8 0 0 8 faulty removed
    /dev/sdk:
    Magic : a92b4efc
    Version : 00.90.00
    UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
    Creation Time : Thu Jun 4 17:48:40 2009
    Raid Level : raid6
    Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
    Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
    Raid Devices : 9
    Total Devices : 9
    Preferred Minor : 0

    Update Time : Sun Aug 22 08:35:17 2010
    State : clean
    Active Devices : 5
    Working Devices : 5
    Failed Devices : 4
    Spare Devices : 0
    Checksum : 2401fc29 – correct
    Events : 2188228

    Chunk Size : 32K

    Number Major Minor RaidDevice State
    this 3 8 128 3 active sync /dev/sdi

    0 0 8 80 0 active sync /dev/sdf
    1 1 8 144 1 active sync /dev/sdj
    2 2 8 112 2 active sync /dev/sdh
    3 3 8 128 3 active sync /dev/sdi
    4 4 8 96 4 active sync /dev/sdg
    5 5 0 0 5 faulty removed
    6 6 0 0 6 faulty removed
    7 7 0 0 7 faulty removed
    8 8 0 0 8 faulty removed
    mdadm: No md superblock detected on /dev/sdl.
    mdadm: No md superblock detected on /dev/sdm.

    Any ideas what I can do here? I really don’t wish to lose data…

    Thanks for your help in advance!
    Regards,
    Zoltan

    February 19th, 2011 at 9:48 pm. Permalink.

  126. Grow your RAID array with mdadm | Cumptrnrd's Blog replied:

    [...] http://scotgate.org/2006/07/03/growing-a-raid5-array-mdadm/ This entry was posted in Uncategorized. Bookmark the permalink. ← Backup/Image/Restore [...]

    February 19th, 2011 at 10:12 pm. Permalink.

  127. Nikolay replied:

    Just enlarged my RAID5 too.
    I use 500 GB disks with 450 GB RAID partition.
    was 3 disks * 450 GB.
    adding 1 disk.

    mdadm commands returned to the prompt immediately.

    then the reshape to 4 disks took 9h 30min

    fsck.ext3 = 35 min
    resize2fs = 5 min
    second fsck.ext3 = 40 min

    March 9th, 2011 at 10:58 pm. Permalink.

  128. Tariq replied:

    Hi,
    I am using an HP Proliant machine with Linux as the OS. There are 2 RAID controllers in it and each RAID controller has 8 hard drives on it. I made the hardware RAID from the boot menu and I am able to see the RAID drives and partitions from Controller-1, but the partition which was set to auto-grow is only showing the space from the controller-1 disks. I am using RAID-5 on all controllers; what do I need to do?? Shall I create a new VG??

    September 17th, 2011 at 9:53 am. Permalink.

  129. Linux NAS / Media Server « Petteway's Blog replied:

    [...] a raid 5 array http://scotgate.org/2006/07/03/growing-a-raid5-array-mdadm/ Replace Failed drive http://www.kernelhardware.org/replacing-failed-raid-drive/ sources [...]

    October 21st, 2011 at 5:46 am. Permalink.

  130. gh0st replied:

    Hey,

    What’s the risk of data loss? I have a 2.1TB array and need to grow it further – don’t have any place to back it up.

    November 15th, 2011 at 3:55 pm. Permalink.

  131. Ferg replied:

    Not much, but then RAID is not backup. If you cannot afford to lose that data, then you must back it up.

    November 15th, 2011 at 4:17 pm. Permalink.

  132. Richard Gration replied:

    I’ve just used this successfully to add another 2TB to my 2 * 2TB RAID5. Still works :-) No snags at all. And, I even forgot to unmount the filesystem on the raid device until it had reshaped by a few percent.

    @gh0st: Data that isn’t backed up doesn’t exist! Raid is not a backup, raid is about high availability. You can still access your data after a device fails, rather than waiting until *the data is restored from backup*. You still have to keep backups!

    December 21st, 2011 at 8:57 pm. Permalink.

  133. How to Add a New Hard Drive to a Linux Software RAID (CentOS & Fedora) « Lâmôlabs replied:

    [...] Growing a RAID5 array – MDADM – scotgate.org [...]

    March 5th, 2012 at 3:02 pm. Permalink.
