Disk Management: RAID 10
1. What is RAID 10
RAID 10/01 can be further divided into RAID 1+0 and RAID 0+1.
RAID 1+0 mirrors first and then stripes: all disks are divided into two groups, the two groups together are treated as a minimal RAID 0 set, and each group internally runs as RAID 1.
RAID 0+1 reverses this procedure: data is striped first and then mirrored across two groups of disks. All disks are divided into two groups that together form a minimal RAID 1 set, and each group internally runs as RAID 0.
In terms of performance, RAID 0+1 offers faster read/write speeds than RAID 1+0.
In terms of reliability, when one disk in a RAID 1+0 fails, the remaining three disks keep working. In a RAID 0+1, as soon as one disk fails, the other disk in the same RAID 0 group also drops out, leaving only two working disks, so reliability is lower.
For this reason RAID 10 is far more common than RAID 01; most retail motherboards support RAID 0/1/5/10 but not RAID 01.
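As a side note, mdadm also offers a native raid10 level, so a four-disk RAID 10 does not strictly have to be built as nested RAID 1 and RAID 0 arrays the way the demonstration below does. A minimal sketch (the device names are assumptions and must match your system):
[plain]
# one 4-disk array using mdadm's built-in raid10 level
mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext4 /dev/md10
The nested 1+0 construction is used below because it makes the structure described above explicit.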
2. RAID 10 Demonstration
Step 1: Partition the disks
[plain]
# partition sdb, sdc, sdd and sde
[root@serv01 ~]# fdisk /dev/sdb
[root@serv01 ~]# fdisk /dev/sdc
[root@serv01 ~]# fdisk /dev/sdd
[root@serv01 ~]# fdisk /dev/sde
[root@serv01 ~]# ls /dev/sd*
sda sda1 sda2 sda3 sda4 sda5 sdb sdb1 sdc sdc1 sdd sdd1 sde sde1 sdf sdg
# a premature attempt: the component md devices have not been created yet, so mdadm refuses
[root@serv01 ~]# mdadm -C /dev/md0 -l 1 -n2 /dev/md0 /dev/md1
mdadm: device /dev/md0 not suitable for any style of array
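The fdisk runs above are interactive (their output is omitted). If you prefer to script the partitioning, something like the following should also work (a sketch, assuming parted is installed and the four disks are empty):
[plain]
# create one whole-disk partition per device and flag it for RAID use
for d in sdb sdc sdd sde; do
    parted -s /dev/$d mklabel msdos
    parted -s /dev/$d mkpart primary 1MiB 100%
    parted -s /dev/$d set 1 raid on
done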
Step 2: Create the RAID 10 array
[plain]
# create /dev/md0, a RAID 1 array
[root@serv01 ~]# mdadm -C /dev/md0 -l 1 -n2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# create /dev/md1, a RAID 1 array
[root@serv01 ~]# mdadm -C /dev/md1 -l 1 -n2 /dev/sdd1 /dev/sde1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# create /dev/md10, a RAID 0 array on top of md0 and md1
[root@serv01 ~]# mdadm -C /dev/md10 -l 0 -n2 /dev/md0 /dev/md1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md1[1] md0[0]
4188160 blocks super 1.2 512k chunks
md1 : active raid1 sde1[1] sdd1[0]
2095415 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdc1[1] sdb1[0]
2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
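# (optional check, not part of the original run) inspect the nested array
# before putting a filesystem on it:
#   mdadm --detail /dev/md10
#   mdadm --detail /dev/md0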
[root@serv01 ~]# mkfs.ext4 /dev/md10
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047040 blocks
52352 blocks (5.00%) reserved for the superuser
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@serv01 ~]# mkdir /web
[root@serv01 ~]# mount /dev/md10 /web
[root@serv01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.7G 1.1G 8.1G 12% /
tmpfs 385M 0 385M 0% /dev/shm
/dev/sda1 194M 25M 160M 14% /boot
/dev/sda5 4.0G 137M 3.7G 4% /opt
/dev/sr0 3.4G 3.4G 0 100% /iso
/dev/md10 4.0G 72M 3.7G 2% /web
[root@serv01 ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=serv01.host.com:0 UUID=78656148:8251f76a:a758c84e:f8927ae0
ARRAY /dev/md1 metadata=1.2 name=serv01.host.com:1 UUID=176f932e:a6451cd4:860a9cf7:847b51b7
ARRAY /dev/md10 metadata=1.2 name=serv01.host.com:10 UUID=0428d240:4e9d097a:80bfe439:2802ff3e
[root@serv01 ~]# mdadm --detail --scan > /etc/mdadm.conf
[root@serv01 ~]# vim /etc/fstab
[root@serv01 ~]# echo "/dev/md10 /web ext4 defaults 1 2" >> /etc/fstab
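Before rebooting it is worth confirming that the new fstab entry actually mounts (a quick sanity check; it will not catch the boot-time assembly problem shown in the next step):
[plain]
# unmount, then let mount -a mount everything listed in /etc/fstab
umount /web
mount -a
df -h /web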
Step 3: After rebooting, the system is broken; clearly this RAID 10 setup cannot be used as-is
[plain]
[root@serv01 ~]# reboot
# after the reboot the system cannot start normally; remove the RAID 10 line that was added to /etc/fstab
# only hardware RAID can reliably provide this at boot time; this software RAID setup is only a demonstration
# note: in this state the root partition is mounted read-only by default, so it has to be remounted read-write
[root@serv01 ~]# mount -o remount,rw /
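If you still want to keep an entry for the array in /etc/fstab without risking an unbootable system, one common workaround (a sketch, not verified on every distribution) is to mark the mount noauto so the boot does not block on it, then assemble and mount the array manually, or from a startup script, once the system is up:
[plain]
# fstab entry that is skipped during boot
/dev/md10  /web  ext4  defaults,noauto  0 0
# after boot: assemble the arrays listed in /etc/mdadm.conf, then mount
mdadm --assemble --scan
mount /web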
Step 4: Even with the names pinned in /etc/mdadm.conf, the array does not come up automatically and must be reassembled by hand
[plain]
[root@serv01 ~]# mdadm --assemble /dev/md10 /dev/md0 /dev/md1
mdadm: no correct container type: /dev/md0
mdadm: /dev/md0 has no superblock - assembly aborted
[root@serv01 ~]# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has no superblock - assembly aborted
[root@serv01 ~]# mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdc1
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0
[root@serv01 ~]# mdadm --manage /dev/md1 --stop
mdadm: stopped /dev/md1
[root@serv01 ~]# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mdadm: /dev/md0 has been started with 2 drives.
[root@serv01 ~]# mdadm --assemble /dev/md1 /dev/sdd1 /dev/sde1
mdadm: /dev/md1 has been started with 2 drives.
[root@serv01 ~]# mdadm --assemble /dev/md10 /dev/md0 /dev/md1
mdadm: /dev/md10 has been started with 2 drives.
[root@serv01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md0[0] md1[1]
4188160 blocks super 1.2 512k chunks
md1 : active raid1 sdd1[0] sde1[1]
2095415 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[0] sdc1[1]
2095415 blocks super 1.2 [2/2] [UU]
unused devices: <none>
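The sequence above makes the dependency explicit: the inner RAID 1 arrays md0 and md1 must be running before the outer RAID 0 array md10 can be assembled from them. When /etc/mdadm.conf contains all three ARRAY lines (as written in step 2), the reassembly can usually be done in one step instead (a sketch):
[plain]
# stop everything from the top down, then let mdadm read /etc/mdadm.conf
mdadm --stop /dev/md10
mdadm --stop /dev/md0 /dev/md1
mdadm --assemble --scan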
Step 5: The experiment is over; stop the arrays and wipe the disks
[plain]
# md0 cannot be stopped while md10 is still running on top of it
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?
[root@serv01 ~]# mdadm --manage /dev/md10 --stop
mdadm: stopped /dev/md10
[root@serv01 ~]# mdadm --manage /dev/md0 --stop
mdadm: stopped /dev/md0
[root@serv01 ~]# mdadm --manage /dev/md1 --stop
mdadm: stopped /dev/md1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdb1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdc1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sdd1
[root@serv01 ~]# mdadm --misc --zero-superblock /dev/sde1
[root@serv01 ~]# mdadm -E /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
[root@serv01 ~]# mdadm -E /dev/sdd1
mdadm: No md superblock detected on /dev/sdd1.
[root@serv01 ~]# mdadm -E /dev/sdc1
mdadm: No md superblock detected on /dev/sdc1.
[root@serv01 ~]# mdadm -E /dev/sde1
mdadm: No md superblock detected on /dev/sde1.
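Since the ARRAY lines and the fstab entry were persisted earlier, it is also worth removing them so the next boot does not look for arrays that no longer exist (a sketch, using the same paths as above):
[plain]
# drop the /dev/md10 line from /etc/fstab and remove the mdadm config written in step 2
sed -i '/\/dev\/md10/d' /etc/fstab
rm -f /etc/mdadm.conf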