With the release of kernel 2.6.17, there’s new functionality to add a device (partition) to a RAID 5 array and make this new device part of the actual array rather than a spare.
I came across this post a while ago on the LKML:
LKML: Sander: RAID5 grow success (was: Re: 2.6.16-rc6-mm2)
So I gave it a go. My home directory is mounted on a 3×70 GB SCSI RAID5 array, so I tried adding a further drive.
Although with mdadm 2.4 and later the only really critical part of the process is safer (it backs up the live data being copied), I didn't fancy risking growing a mounted array. So I took plenty of backups, then switched to the single-user runlevel.
Basically, the steps are: add a disc to the array as a spare, then grow the array onto this device.
mdadm --add /dev/md1 /dev/sdf1
mdadm --grow /dev/md1 --raid-devices=4
This then took about 3 hours to reshape the array.
The filesystem then needs to be expanded to fill the new space.
fsck.ext3 /dev/md1
resize2fs /dev/md1
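For the record, the space gain is easy to sanity-check: RAID5 usable capacity is (members - 1) × the smallest member size, so going from 3 to 4 disks adds exactly one 70 GB member's worth. A quick back-of-the-envelope check:

```shell
# RAID5 usable capacity = (number of members - 1) * smallest member size
size_gb=70
old_usable=$(( (3 - 1) * size_gb ))   # before the grow: 3 disks
new_usable=$(( (4 - 1) * size_gb ))   # after adding the 4th disk
echo "before: ${old_usable} GB, after: ${new_usable} GB"
```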
I then remounted the drive and, wahey, lots of extra space! Cool or what.
EDIT It’s over 18 months since I wrote this post, and Linux kernel RAID and mdadm have continued to move on. All the info here is still current, but for extra info check out the Linux RAID Wiki.
EDIT 2 The Linux Raid Wiki has moved
Returning to this article: thanks for the nice info.
Just in case: those running Linux software RAID5 on partitioned disks may well need to understand stripe alignment: http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux
Jon H-
Did your test from Oct 17th work? I was thinking about doing something similar. It seems like it should work. The only thing I question is how happy Linux RAID will be when I add a partition that is bigger than the existing partitions (like adding your 750GB).
According to the Linux RAID Wiki I should be able to size a partition that has the “amount of space [I] will eventually use on all…disks”. But I want to stop short of growing the RAID and use the drives in that state until I am ready to pull the smaller disks (once all the big disks are procured over time), then grow the RAID at that time. I am aware that the RAID would use only a 500GB portion of the 750GB (hypothetically). But the intent is to be ready for future expansions and ever-increasing drive sizes.
Am I off my rocker? Are we playing with fire?
CHAPS.
Just a quick Q: I just did mdadm --detail /dev/md4 and saw a state of ‘active’, not ‘clean’ as on my other arrays. Any clues what this means?
-quazeye-
I had no problems adding the 750GBs. I just created a Linux RAID partition on them that is the size of the WHOLE disk, and set the number of RAID devices; it then just expanded to include them. I believe that once I have replaced the original four 500GBs I can set the size to max and it will grow to the new smallest size (750GB). I am closer to doing this than I imagined I would be 18 months down the line, but hey.
Thanks to this awesome entry, I’m currently adding a 4th 750GB HD to my RAID5. With the age of the system and the layout of the drives, it says it’s going to take over a week to resync.
I’m currently at 25.9%, and it’s been running since Sunday the 25th.
Hey, I had a question about the reshape speed. I added a 1.5TB drive to a RAID6, only I didn’t change /proc/sys/dev/raid/speed_limit_min or speed_limit_max before I issued the grow; does it pick up speed changes dynamically? I have about 28 hours left if not (12 hours into it)… 🙁
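For what it's worth, the md sync throttles can be changed on the fly: writes to these files take effect on an in-progress resync/reshape, so you shouldn't need to restart anything. A sketch (values in KiB/s, run as root; pick limits to suit your hardware):

```shell
# Current throttle values (KiB/s)
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# Raise the floor so the reshape isn't starved by normal I/O
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
```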
Growing a raid5 (6TB -> 7.5TB):
md_d0 : active raid5 sdg1[5] sdb1[0] sdf1[4] sde1[3] sdd1[2] sdc1[1]
5860543488 blocks super 0.91 level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]
[====>................]  reshape = 22.0% (322644992/1465135872) finish=1875.4min speed=10152K/sec
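The finish figure in that status line is just the remaining 1K blocks divided by the current speed; you can reproduce the estimate yourself with the numbers from the mdstat line:

```shell
# Reproduce mdstat's finish estimate: remaining 1K blocks / speed (K/sec) / 60
done_blocks=322644992
total_blocks=1465135872
speed_kps=10152
minutes=$(( (total_blocks - done_blocks) / speed_kps / 60 ))
echo "~${minutes} min remaining"
```

Which lines up with the reported finish=1875.4min (modulo rounding).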
Hey, anyone got any pointers on how big to make the RAID partitions? I have a 6-drive array made up of 4×500GB and 2×750GB, in preparation for all drives being grown once I have phased out all the 500GBs, which is what I am about to do: switch out a 500GB for a 1TB. But I wonder how large the RAID partition should be? I read that drives vary slightly in block counts, so should I smooth that over?
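Drives of the same nominal size do vary by a few thousand sectors between makes, so a common rule of thumb (my suggestion, not from the original post) is to leave roughly 1% headroom in the RAID partition so that any replacement disk of the same nominal size will still fit:

```shell
# Hypothetical "1 TB" disk as reported by the kernel, in MiB
disk_mib=953869
part_mib=$(( disk_mib * 99 / 100 ))   # leave ~1% headroom for size variation
echo "partition size: ${part_mib} MiB"
```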
Jonas: I am going from 5TB to 6TB and am very hesitant; it will be maxing out my space on this motherboard, and hopefully it will help with my editing space.
At Jon H: that doesn’t look like a RAID5 setup. I was told you are limited by the smallest partition in RAID5, and you don’t want to mix different ones.
Hence my coming server rack is probably going to go with 2TB drives right off, so that as prices come down and I expand, I can fill out to the 10TB mirrored RAID I need. The only problem is where to buy WD Black 2TB drives; they seem to only come in Green where I shop.
Don’t forget to modify mdadm.conf, or the array won’t be detected on the next boot.
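Good point. One way to do this (a sketch; paths are Debian/Ubuntu-style, adjust for your distro):

```shell
# Append the current array definitions to mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# On initramfs-based distros, rebuild the initramfs so early boot sees the change
update-initramfs -u
```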
FYI: Updated Linux RAID Wiki Link: http://www.linuxfoundation.org/collaborate/workgroups/linux-raid
Looks cool. Instead of growing my one RAID-10 array, I popped in another 4 disks and created a second RAID-10 array, then added that array to the volume group.
I think this another way of getting at the same issue, but my new array came online right away.
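For anyone wanting to reproduce that volume-group route, it goes roughly like this (a sketch; the array device and VG names here are made up):

```shell
# Make the new array an LVM physical volume and add it to the volume group
pvcreate /dev/md1
vgextend vgdata /dev/md1

# The VG now shows the extra free space, available to any LV immediately
vgs vgdata
```

The trade-off versus growing the existing array is that you get the space straight away, at the cost of managing two arrays.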
Just expanded my array from 5×1TB to 6×1TB, and it took around 30 hours to finish growing the array. Then the fsck took quite a bit of time (I decided to sleep sometime after 20 minutes or so of watching stage 1… BTW, at least on Ubuntu it has to be “fsck -f /dev/md0”, or else it will just say “clean!” and refuse to resize). Finally, the resize took about 3 seconds. 😛
This went more smoothly than I could possibly have imagined… At every stage I was anticipating disaster, but I was pleasantly surprised. Not one error all the way through! Thanks for the simple walkthrough!
Linux Raid Wiki has moved to http://raid.wiki.kernel.org/
Thanks man, I keep coming back to this page every time I grow my RAID… haha, I never learn.
Version : 0.91
Creation Time : Wed Jan 7 12:50:04 2009
Raid Level : raid5
Array Size : 7759393792 (7399.93 GiB 7945.62 GB)
Used Dev Size : 969924224 (924.99 GiB 993.20 GB)
Raid Devices : 10
Total Devices : 10
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Aug 2 14:44:06 2010
State : clean, recovering
Active Devices : 10
Working Devices : 10
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 4K
Reshape Status : 46% complete
Delta Devices : 1, (9->10)
UUID : 03c2f344:f3abb8ba:cffca458:4a18b991
Events : 0.4210172
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
To resize ext4 on Ubuntu 10.04:
sudo resize2fs /dev/mapper/vgdata-lvdata
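If the logical volume itself hasn't been grown into the array's new space yet, that needs an lvextend first (a sketch, using the same LV name as above):

```shell
# Grow the LV into the free space in the VG, then grow the filesystem to match
sudo lvextend -l +100%FREE /dev/mapper/vgdata-lvdata
sudo resize2fs /dev/mapper/vgdata-lvdata
```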
I have tried to add a 4th device. I am using the XFS filesystem on the RAID. Should I use xfs_growfs after the reshape is finished? Is there any risk doing it without unmounting? Is it possible to access data while the reshape is in progress?
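On the XFS question: yes, xfs_growfs is the right tool once the reshape finishes, and unlike resize2fs it operates on a mounted filesystem (as far as I know, XFS can only be grown online). A sketch, assuming the array is mounted at /mnt/raid (a made-up mount point):

```shell
# XFS is grown while mounted; point xfs_growfs at the mount point
xfs_growfs /mnt/raid

# Verify the new size
df -h /mnt/raid
```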
Hi everyone,
having some trouble with my RAID6 array. After growing it (as described above) it started to behave strangely, became read-only, and after a reboot it won’t assemble any more; the configuration seems to be pretty messed up.
I am a novice end user, so I hope this is the right place to ask for help; if not, would you please be so kind as to redirect me? (And sorry for the spam…)
This is all the data I have:
Drives:
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdf: 500.1 GB, 500107862016 bytes (for OS only)
Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdl: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdm: 1000.2 GB, 1000204886016 bytes
This is what /proc/mdstat tells me:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[6](S) sdi[2](S) sdd[7](S) sdc[9](S) sdg[0](S) sdh[4](S) sdk[3](S) sda[1](S) sdj[5](S)
8790862464 blocks
unused devices: &lt;none&gt;
This is in the /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 level=raid6 num-devices=9 UUID=8a45a28c:612ad22f:2fe13ab6:8e70c8e3
spares=1
MAILADDR root
Running --examine on all the drives:
root@tank:~/mdstuff# mdadm --examine /dev/sd[a-n]
/dev/sda:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:35:17 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 4
Spare Devices : 0
Checksum : 2401fc35 - correct
Events : 2188228
Chunk Size : 32K
Number Major Minor RaidDevice State
this 1 8 144 1 active sync /dev/sdj
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 0 0 5 faulty removed
6 6 0 0 6 faulty removed
7 7 0 0 7 faulty removed
8 8 0 0 8 faulty removed
/dev/sdb:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Tue Jul 27 19:42:00 2010
State : clean
Active Devices : 8
Working Devices : 9
Failed Devices : 1
Spare Devices : 1
Checksum : 23c52dfc - correct
Events : 1299054
Chunk Size : 32K
Number Major Minor RaidDevice State
this 6 8 0 6 active sync /dev/sda
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 8 16 5 active sync /dev/sdb
6 6 8 0 6 active sync /dev/sda
7 7 8 48 7 active sync /dev/sdd
8 8 0 0 8 faulty removed
9 9 8 32 9 spare /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Tue Jul 27 19:42:00 2010
State : clean
Active Devices : 8
Working Devices : 9
Failed Devices : 1
Spare Devices : 1
Checksum : 23c52e1c - correct
Events : 1299054
Chunk Size : 32K
Number Major Minor RaidDevice State
this 9 8 32 9 spare /dev/sdc
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 8 16 5 active sync /dev/sdb
6 6 8 0 6 active sync /dev/sda
7 7 8 48 7 active sync /dev/sdd
8 8 0 0 8 faulty removed
9 9 8 32 9 spare /dev/sdc
/dev/sdd:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:33:50 2010
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 2
Spare Devices : 0
Checksum : 2401fb4e - correct
Events : 2188224
Chunk Size : 32K
Number Major Minor RaidDevice State
this 7 8 48 7 active sync /dev/sdd
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 8 16 5 active sync /dev/sdb
6 6 0 0 6 faulty removed
7 7 8 48 7 active sync /dev/sdd
8 8 0 0 8 faulty removed
mdadm: No md superblock detected on /dev/sde.
mdadm: No md superblock detected on /dev/sdf.
/dev/sdg:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:35:17 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 4
Spare Devices : 0
Checksum : 2401fbf3 - correct
Events : 2188228
Chunk Size : 32K
Number Major Minor RaidDevice State
this 0 8 80 0 active sync /dev/sdf
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 0 0 5 faulty removed
6 6 0 0 6 faulty removed
7 7 0 0 7 faulty removed
8 8 0 0 8 faulty removed
/dev/sdh:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:35:17 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 4
Spare Devices : 0
Checksum : 2401fc0b - correct
Events : 2188228
Chunk Size : 32K
Number Major Minor RaidDevice State
this 4 8 96 4 active sync /dev/sdg
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 0 0 5 faulty removed
6 6 0 0 6 faulty removed
7 7 0 0 7 faulty removed
8 8 0 0 8 faulty removed
/dev/sdi:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:35:17 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 4
Spare Devices : 0
Checksum : 2401fc17 - correct
Events : 2188228
Chunk Size : 32K
Number Major Minor RaidDevice State
this 2 8 112 2 active sync /dev/sdh
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 0 0 5 faulty removed
6 6 0 0 6 faulty removed
7 7 0 0 7 faulty removed
8 8 0 0 8 faulty removed
/dev/sdj:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:33:50 2010
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 2
Spare Devices : 0
Checksum : 2401fb2a - correct
Events : 2188224
Chunk Size : 32K
Number Major Minor RaidDevice State
this 5 8 16 5 active sync /dev/sdb
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 8 16 5 active sync /dev/sdb
6 6 0 0 6 faulty removed
7 7 8 48 7 active sync /dev/sdd
8 8 0 0 8 faulty removed
/dev/sdk:
Magic : a92b4efc
Version : 00.90.00
UUID : 8a45a28c:612ad22f:2fe13ab6:8e70c8e3
Creation Time : Thu Jun 4 17:48:40 2009
Raid Level : raid6
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Array Size : 6837337472 (6520.59 GiB 7001.43 GB)
Raid Devices : 9
Total Devices : 9
Preferred Minor : 0
Update Time : Sun Aug 22 08:35:17 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 4
Spare Devices : 0
Checksum : 2401fc29 - correct
Events : 2188228
Chunk Size : 32K
Number Major Minor RaidDevice State
this 3 8 128 3 active sync /dev/sdi
0 0 8 80 0 active sync /dev/sdf
1 1 8 144 1 active sync /dev/sdj
2 2 8 112 2 active sync /dev/sdh
3 3 8 128 3 active sync /dev/sdi
4 4 8 96 4 active sync /dev/sdg
5 5 0 0 5 faulty removed
6 6 0 0 6 faulty removed
7 7 0 0 7 faulty removed
8 8 0 0 8 faulty removed
mdadm: No md superblock detected on /dev/sdl.
mdadm: No md superblock detected on /dev/sdm.
Any ideas what I can do here? I really don’t wish to lose data…
Thanks for your help in advance!
Regards,
Zoltan
Just enlarged my RAID5 too.
I use 500 GB disks with 450 GB RAID partition.
was 3 disks × 450 GB, adding 1 disk.
The mdadm commands returned to the prompt immediately.
The reshape to 4 disks took 9 h 30 min.
fsck.ext3: 35 min.
resize2fs: 5 min.
Second fsck.ext3: 40 min.