storage¶
add new disk to a volume group¶
which disk will be used, and which volume group will be extended?
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 15G 0 part
|-centos-root 253:0 0 29.4G 0 lvm /
`-centos-swap 253:1 0 1.6G 0 lvm
sdb 8:16 0 16G 0 disk
sr0 11:0 1 1024M 0 rom
in my case, i want to add sdb to / (volume group centos, which holds the centos-root logical volume). run everything as the root user:
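a quick way to spot candidate disks is to look for `disk` rows with no `part` children. the sketch below runs awk over a copy of the lsblk output shown above (flattened to `lsblk -l` list form); the three-character name-prefix match is an assumption that only holds for sdX-style names:

```shell
# sample data: the lsblk output from above, in flat (lsblk -l) form
lsblk_sample='NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
sda1 8:1 0 1G 0 part /boot
sda2 8:2 0 15G 0 part
centos-root 253:0 0 29.4G 0 lvm /
centos-swap 253:1 0 1.6G 0 lvm
sdb 8:16 0 16G 0 disk
sr0 11:0 1 1024M 0 rom'

# a disk is a candidate if no "part" row starts with its name
unpartitioned=$(echo "$lsblk_sample" | awk '
  $6 == "part" { partitioned[substr($1, 1, 3)] = 1 }
  $6 == "disk" { disks[$1] = 1 }
  END { for (d in disks) if (!(d in partitioned)) print d }')
echo "$unpartitioned"
```

here that prints sdb, the empty 16G disk.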
pvcreate /dev/sdb                                  # initialize the disk as an lvm physical volume
lvmdiskscan -l                                     # confirm the new pv is visible
vgextend centos /dev/sdb                           # add the pv to the centos volume group
lvm lvextend -l +100%FREE /dev/mapper/centos-root  # grow the root lv into the new space
xfs_growfs /dev/mapper/centos-root                 # grow the xfs filesystem to match
pvcreate /dev/sdb error: device is partitioned¶
use wipefs first (e.g. wipefs -a /dev/sdb) to get rid of all existing metadata, then retry pvcreate
creating a partition larger than 2tb¶
in this example, /dev/sdb1 is created and formatted with ext4
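the reason a gpt label is needed here: an msdos (mbr) label stores partition sizes as 32-bit sector counts, so with 512-byte sectors it tops out at 2 TiB. a quick arithmetic check, assuming 512-byte logical sectors (the disk size below is an illustrative ~3 TB value, e.g. from blockdev --getsize64):

```shell
# mbr addresses at most 2^32 sectors of 512 bytes; larger disks need gpt
mbr_limit=$((4294967296 * 512))   # 2 TiB in bytes
disk_bytes=3000592982016          # ~3 TB (illustrative value)

if [ "$disk_bytes" -gt "$mbr_limit" ]; then
  verdict="needs gpt"
else
  verdict="mbr is fine"
fi
echo "$verdict"
```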
$ parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)
# creating a primary partition with 3tb? give the unit explicitly
# (plain "mkpart primary 0 3" would be interpreted in megabytes)
(parted) mkpart primary 0TB 3TB
# creating a partition using all the space available? use this
(parted) mkpart primary 0% 100%
# want to record the filesystem type too? note this only sets a type
# hint in the partition table; you still need mkfs afterwards
(parted) mkpart primary ext4 0% 100%
(parted) print
Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 0.00TB 3.00TB 3.00TB ext4 primary
$ mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566272 blocks
36628313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
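the reserved-block count in the output above is mke2fs's default 5% set-aside for the super user (tune2fs -m changes it). the arithmetic checks out against the reported figures:

```shell
# 5% of the total block count, using the numbers from the mke2fs output above
blocks=732566272
reserved=$(( blocks * 5 / 100 ))
echo "$reserved blocks reserved for the super user"
```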
find the partition's UUID, and add it to /etc/fstab
$ blkid /dev/sdb1
/dev/sdb1: UUID="7b7bdd66-218c-4a64-b760-82824f87724b" TYPE="ext4" PARTLABEL="primary" PARTUUID="dc00594a-b233-46c3-8f79-cdc3a5cf28ad"
$ vim /etc/fstab
UUID=7b7bdd66-218c-4a64-b760-82824f87724b /DISK_BACKUP ext4 defaults,noatime 0 0
$ mkdir /DISK_BACKUP
$ mount -a
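building the fstab line from blkid's output can be scripted; a small sketch, using the UUID shown above as sample data (the sed pattern assumes blkid's usual KEY="value" layout):

```shell
# sample data: a blkid output line like the one above
blkid_line='/dev/sdb1: UUID="7b7bdd66-218c-4a64-b760-82824f87724b" PARTLABEL="primary" PARTUUID="dc00594a-b233-46c3-8f79-cdc3a5cf28ad"'

# extract the filesystem UUID (not PARTUUID) and build the fstab entry
uuid=$(echo "$blkid_line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
fstab_entry="UUID=$uuid /DISK_BACKUP ext4 defaults,noatime 0 0"
echo "$fstab_entry"
```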
create software raid 10¶
create some partitions using the raid flag. after that, use mdadm to manage the raid array.
yum install mdadm -y
mdadm --create /dev/md0 --level raid10 --name <RAID_NAME> \
      --raid-devices <NUMBER_OF_DISKS> <LIST_OF_PARTITIONS, like /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1>
echo "MAILADDR root@localhost" >> /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
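raid10 stripes across mirrored pairs, so the usable capacity is half the raw total. a quick sanity check for the 4-disk case (the 16 GiB disk size is illustrative, not from a real array):

```shell
# raid10 usable capacity = (number of disks * disk size) / 2
disks=4
disk_gib=16
usable_gib=$(( disks * disk_gib / 2 ))
echo "${usable_gib} GiB usable"
```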
create a new file system on the new raid device
mkfs.xfs /dev/disk/by-id/md-name-<RAID_NAME>
mkdir /data
mount /dev/disk/by-id/md-name-<RAID_NAME> /data
check the array status
mdadm --detail /dev/disk/by-id/md-name-<RAID_NAME>
cat /proc/mdstat
simulate disk sdb1 failure
mdadm --manage --set-faulty /dev/disk/by-id/md-name-<RAID_NAME> /dev/sdb1
# check syslog for new failure messages
tail /var/log/messages
Oct 3 16:43:42 centos-62-1 kernel: md/raid10:md0: Disk failure on sdb1, disabling device.
Oct 3 16:43:42 centos-62-1 kernel: md/raid10:md0: Operation continuing on 3 devices.
# check array status
mdadm --detail /dev/disk/by-id/md-name-<RAID_NAME>
cat /proc/mdstat
simulate disk sdd1 failure
mdadm --manage --set-faulty /dev/disk/by-id/md-name-<RAID_NAME> /dev/sdd1
# check syslog for new failure messages
tail /var/log/messages
Oct 3 16:45:01 centos-62-1 kernel: md/raid10:md0: Disk failure on sdd1, disabling device.
Oct 3 16:45:01 centos-62-1 kernel: md/raid10:md0: Operation continuing on 2 devices.
# check array status
mdadm --detail /dev/disk/by-id/md-name-<RAID_NAME>
cat /proc/mdstat
remove sdb1 from the array and re-add it
mdadm /dev/disk/by-id/md-name-<RAID_NAME> -r /dev/sdb1
mdadm /dev/disk/by-id/md-name-<RAID_NAME> -a /dev/sdb1
# check array status
mdadm --detail /dev/disk/by-id/md-name-<RAID_NAME>
cat /proc/mdstat
remove sdd1 from the array and re-add it
mdadm /dev/disk/by-id/md-name-<RAID_NAME> -r /dev/sdd1
mdadm /dev/disk/by-id/md-name-<RAID_NAME> -a /dev/sdd1
# check array status
mdadm --detail /dev/disk/by-id/md-name-<RAID_NAME>
cat /proc/mdstat
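in /proc/mdstat, a faulty member is tagged with (F). pulling failed devices out of that line can be scripted; the sample below mimics what mdstat would show after the sdd1 failure above (the exact member ordering is illustrative):

```shell
# sample data: an mdstat member line with sdd1 marked faulty
mdstat_sample='md0 : active raid10 sde1[3] sdd1[2](F) sdc1[1] sdb1[0]'

# grab members tagged (F) and strip the [slot](F) suffix
failed=$(echo "$mdstat_sample" | grep -o '[a-z0-9]*\[[0-9]*\](F)' | sed 's/\[.*//')
echo "failed: $failed"
```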