Update 2022: I’ve added notes on GPT as well as on naming RAID devices.

Years ago, I built a big file server to hold all my important stuff. In modern terms, it doesn’t have that much storage: just shy of 4 TB. An external hard drive would be much cheaper, but a RAID6 array gives me more peace of mind about my data: up to two drives can fail before the array starts to lose data.

Recently, I had a drive fail (but no data loss!) and decided to rebuild my array with new drives. I have six 1 TB drives. My general strategy is to make one large RAID6 device, then use LVM to create, destroy, and resize logical volumes as I need them.

On a fresh install of RHEL 7.1, my RAID drives are sda, sdb, sdc, sdd, sdf, and sdh. For some reason, my two boot drives (which are set up as a RAID1 mirror) get sde and sdg.
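
If you’re not sure which device name belongs to which physical disk, lsblk gives a quick overview; something like:

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

lists every block device with its size and any current mountpoint, which makes it easy to pick the six data drives out from the boot mirror.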

First, each drive must be partitioned; I use the whole disk as a single partition. The partition type will differ depending on whether the disk has a DOS partition table or a GPT partition table.

GPT is the modern standard, and MUST be used for drives larger than 2 TB.

For DOS, the type should be “Linux raid autodetect”, type 0xfd:

# fdisk /dev/sda

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x220c759d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  1953525167   976761560   fd  Linux raid autodetect
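
My disks already had their partitions at this point, so the listing above is all fdisk shows me. If you’re starting from a blank disk, the interactive steps look roughly like this (a sketch, not a capture from my terminal):

Command (m for help): n
(accept the defaults for a single partition spanning the whole disk)
Command (m for help): t
Hex code (type L to list all codes): fd
Command (m for help): w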

For GPT, the type should be “Linux RAID”, type 29:

# fdisk /dev/sdi

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdi: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 72084FC0-D3D7-4209-A363-CF504B52BAF8

Device     Start        End    Sectors  Size Type
/dev/sdi1   2048 7814037134 7814035087  3.7T Linux RAID
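
If you’d rather script the GPT case than step through fdisk, sgdisk (from the gdisk package) can do it in one non-interactive command. This is a sketch I’m including for convenience rather than something from my original session; FD00 is sgdisk’s type code for Linux RAID:

# sgdisk --new=1:0:0 --typecode=1:FD00 /dev/sdi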

After a partition is created on all six disks, use mdadm to create the RAID device. I’m using md200 as my device name, but the number is somewhat arbitrary. It’s also best practice to set a human-readable name with the --name flag; I’m using galadriel to match the hostname of the machine the array lives in.

# mdadm --create /dev/md200 --level=6 --raid-devices=6 --name galadriel /dev/sdc1 /dev/sdf1 /dev/sdh1 /dev/sdd1 /dev/sda1 /dev/sdb1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md200 started.
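
At this point the array begins its initial sync, which you can watch in /proc/mdstat or with mdadm --detail. It’s also worth recording the array definition so it assembles consistently at boot; on RHEL the file is /etc/mdadm.conf:

# cat /proc/mdstat
# mdadm --detail /dev/md200
# mdadm --detail --scan >> /etc/mdadm.conf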

Once the RAID device is started, an LVM physical volume can be created on it:

# pvcreate /dev/md200
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Physical volume "/dev/md200" successfully created
# pvdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  "/dev/md200" is a new physical volume of "3.64 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/md200
  VG Name
  PV Size               3.64 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               HgaGgf-fPd6-lmyp-zIG2-VcYu-84lI-FdmR45

Then a volume group can be created on the physical volume. My host is called galadriel (all my machines are named after Tolkien characters), so in this example I’m naming the volume group after the hostname. The volume group name can be whatever you like, though.

# vgcreate galadriel /dev/md200
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Volume group "galadriel" successfully created
# vgdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Volume group ---
  VG Name               galadriel
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.64 TiB
  PE Size               4.00 MiB
  Total PE              953740
  Alloc PE / Size       0 / 0
  Free PE / Size        953740 / 3.64 TiB
  VG UUID               WkCmXV-ASTz-N8dA-H48O-hwoC-nU4w-LiPJcm

With the volume group created, I can create a logical volume, make a filesystem on it, and mount it:

# lvcreate -L 10G -n test galadriel
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Logical volume "test" created.

# lvdisplay
  /run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  --- Logical volume ---
  LV Path                /dev/galadriel/test
  LV Name                test
  VG Name                galadriel
  LV UUID                Lc9uzd-DJa6-eLFl-5O0N-8CC1-rEPB-1455g6
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2015-09-24 16:27:54 -0400
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

# mkfs.xfs /dev/galadriel/test
meta-data=/dev/galadriel/test    isize=256    agcount=16, agsize=163712 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2619392, imaxpct=25
         =                       sunit=128    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# mount /dev/galadriel/test /mnt
# df -h /mnt
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/galadriel-test   10G   33M   10G   1% /mnt
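
To make the mount survive a reboot, it also needs an entry in /etc/fstab. I’d normally pick a better mountpoint than /mnt, but an entry matching this example would look like:

/dev/galadriel/test  /mnt  xfs  defaults  0 0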

That’s almost all there is to it. You’ll notice the warnings about lvmetad above; it turns out I didn’t have the service enabled or started. That’s an easy fix with systemctl:

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service

The lvmetad service keeps a cache of LVM metadata so the LVM tools don’t have to rescan all the devices every time. It’s not strictly required, but enabling it makes the warnings go away.
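
And since the whole point of layering LVM on top of the array is flexibility, here’s what growing that test volume would look like later on (a sketch; note that XFS can be grown but not shrunk, and xfs_growfs operates on the mounted filesystem):

# lvextend -L 20G /dev/galadriel/test
# xfs_growfs /mnt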