For this lab, connect to the lab machine as the student user. We will work with two virtual machines (storage1 and storage2). Download and unpack the lab archive in the scgc/ directory:
student@scgc:~$ cd scgc
student@scgc:~/scgc$ wget --user=user-curs --ask-password http://repository.grid.pub.ro/cs/scgc/laboratoare/lab-03.zip
student@scgc:~/scgc$ unzip lab-03.zip
After unpacking, the virtual machine images (in qcow2 format) should be present, as well as two scripts (lab03-start and lab03-stop). Start the virtual machines using the lab03-start script:
student@scgc:~/scgc$ bash lab03-start
You can then connect to the virtual machines over SSH:
student@scgc:~/scgc$ ssh student@192.168.1.X
The password for both the student and root users is student.
Start by connecting to storage1:
student@scgc:~/scgc$ ssh student@192.168.1.1
mdadm is a tool used to manage Linux software RAID arrays.
root@storage1:~# apt-get update
root@storage1:~# apt-get install mdadm
At this point, we can create a new RAID0 array from sdb1, sdc1, and sdd1:
root@storage1:~# mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
We can inspect the current state of the array by checking the contents of /proc/mdstat:
root@storage1:~# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
      3139584 blocks super 1.2 512k chunks
unused devices: <none>
We can inspect the existing RAID array in detail with the following command:
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr 16 22:20:53 2020
        Raid Level : raid0
        Array Size : 3139584 (2.99 GiB 3.21 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Thu Apr 16 22:20:53 2020
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
        Chunk Size : 512K
Consistency Policy : none
              Name : storage1:0  (local to host storage1)
              UUID : 3a32b1a5:d2a40561:cedff3db:fab25f1b
            Events : 0
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
We will attempt to remove one of the disks from the array:
root@storage1:/home/student# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set device faulty failed for /dev/sdd1: Device or resource busy
Since a RAID0 array provides no redundancy, its members cannot be marked as faulty. Let us remove the existing RAID0 array and replace it with a RAID1 array containing the same partitions. We begin by stopping the existing array.
root@storage1:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@storage1:~# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
We proceed to zero the superblocks of the previously used partitions to clean them.
root@storage1:~# mdadm --zero-superblock /dev/sdb1
root@storage1:~# mdadm --zero-superblock /dev/sdc1
root@storage1:~# mdadm --zero-superblock /dev/sdd1
Finally, we can create the new array.
root@storage1:~# mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr 16 22:26:51 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Thu Apr 16 22:27:30 2020
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : resync
              Name : storage1:0  (local to host storage1)
              UUID : e749932d:26646eac:a21d0186:57f4f545
            Events : 17
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
Let us remove a disk from the newly created array.
root@storage1:~# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed
       2       8       49        -      faulty   /dev/sdd1
root@storage1:~# mdadm /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed
Using partitions sdb2, sdc2, and sdd2, create the md1 RAID5 array on storage1.
Do the same setup on storage2. We will use it at a later stage.
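One possible command is sketched below, following the same pattern as the arrays above (run it on both hosts; it assumes the partitions are not part of another array):
root@storage1:~# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2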
The result should look like the output below:
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Thu Apr 16 22:30:56 2020
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 83% complete
              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 14
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      spare rebuilding   /dev/sdd2
Mark sdb2 as faulty and then remove it from the array.
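A sketch of the commands, following the same pattern used for md0:
root@storage1:~# mdadm /dev/md1 --fail /dev/sdb2
root@storage1:~# mdadm /dev/md1 --remove /dev/sdb2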
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Thu Apr 16 22:31:02 2020
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 18
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
We can add a disk in place of the one removed. In this case, we will use the same partition, sdb2.
root@storage1:~# mdadm /dev/md1 --add /dev/sdb2
mdadm: added /dev/sdb2
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Thu Apr 16 22:54:26 2020
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 12% complete
              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 23
    Number   Major   Minor   RaidDevice State
       4       8       18        0      spare rebuilding   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
In order to make our RAID configuration persistent across reboots, we can use the following commands:
root@storage1:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
root@storage1:~# update-initramfs -u
Reboot storage1 and verify that both md0 and md1 are active.
root@storage1:~# reboot
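After the machine boots, the state of the arrays can be checked, for example, with:
root@storage1:~# cat /proc/mdstat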
GlusterFS is a distributed file system used to aggregate storage from multiple sources. We will use it in conjunction with a RAID array to test various forms of redundancy.
XFS is the recommended file system for a GlusterFS configuration. Ext4 is also supported.
Begin by installing xfsprogs:
root@storage1:~# apt install xfsprogs
Proceed by creating an XFS file system. Use the previously created RAID5 array.
root@storage1:~# fdisk /dev/md1

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x629ddc02.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4182015, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4182015, default 4182015):

Created a new partition 1 of type 'Linux' and of size 2 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@storage1:~# mkfs.xfs -i size=512 /dev/md1
mkfs.xfs: /dev/md1 appears to contain a partition table (dos).
mkfs.xfs: Use the -f option to force overwrite.
root@storage1:~# mkfs.xfs -i size=512 -f /dev/md1
meta-data=/dev/md1               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=522752, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the created file system on a local path.
root@storage1:~# mkdir /export
root@storage1:~# echo "/dev/md1 /export xfs defaults 1 2" >> /etc/fstab
root@storage1:~# mount /export
root@storage1:~# df -h | grep export
/dev/md1        2.0G   35M  2.0G   2% /export
Perform the same setup (XFS file system and mount) on storage2.
Install glusterfs-server and enable the gluster daemon on the system.
root@storage1:~# apt install glusterfs-server
root@storage1:~# systemctl enable --now glusterd
Install glusterfs-server and enable the gluster daemon on storage2 as well.
Now, let us connect the two hosts. You must first add hostname-to-IP mappings for the other storage VMs in /etc/hosts on each host.
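For example, on storage1 the mapping might look like the entry below; the 192.168.1.2 address is an assumption, so use the actual address of storage2 in your setup (and add a matching entry for storage1 on storage2):
root@storage1:~# echo "192.168.1.2 storage2" >> /etc/hosts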
root@storage1:~# gluster peer probe storage2
peer probe: success.
root@storage1:~# gluster peer status
Number of Peers: 1
Hostname: storage2
Uuid: 919fb03c-ddc5-4bcc-bdc1-ce8780aaf7c1
State: Peer in Cluster (Connected)
root@storage1:~#
Once the hosts are connected, we can create a GlusterFS volume using bricks (directories on the XFS file systems) on both machines.
root@storage1:~# gluster volume create scgc transport tcp storage1:/export/brick1 storage2:/export/brick1
volume create: scgc: success: please start the volume to access data
root@storage1:~# gluster volume info
Volume Name: scgc
Type: Distribute
Volume ID: e1e4b3b2-6efe-483d-b7aa-6761c5a01853
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/export/brick1
Brick2: storage2:/export/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
We will set up access permissions for our network and start the volume.
root@storage1:~# gluster volume set scgc auth.allow 192.168.1.*
volume set: success
root@storage1:~# gluster volume start scgc
volume start: scgc: success
We will now use the host machine as a GlusterFS client and mount the scgc volume.
root@scgc:~# apt install glusterfs-client
root@scgc:~# mkdir /export
root@scgc:~# mount -t glusterfs storage1:/scgc /export
root@scgc:~# df -h | grep export
storage1:/scgc  4.0G   66M  4.0G   2% /export
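As a quick check of how the Distribute volume spreads data, you can create a few files on the client and list the brick directories on the storage VMs (the file names below are only illustrative):
root@scgc:~# touch /export/file{1..10}
root@storage1:~# ls /export/brick1
root@storage2:~# ls /export/brick1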
Test that removing and adding disks from the RAID arrays does not affect the GlusterFS volume. What limitations can you notice?
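One way to test this, sketched below, is to fail and remove a partition from md1 on one of the hosts, verify from the client that the data in /export is still accessible, and then re-add the partition and watch the array rebuild:
root@storage1:~# mdadm /dev/md1 --fail /dev/sdd2
root@storage1:~# mdadm /dev/md1 --remove /dev/sdd2
root@scgc:~# ls /export
root@storage1:~# mdadm /dev/md1 --add /dev/sdd2
root@storage1:~# cat /proc/mdstat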
In the above setup, we used RAID5 in order to provide redundancy and GlusterFS to aggregate storage.
Now use GlusterFS to replicate data.
Use the same disks on the two hosts as previously (sdb2, sdc2, sdd2) and arrange them in a suitable RAID array.
To remove the existing GlusterFS volume, do the following:
root@scgc:~# umount /export
root@storage1:~# gluster volume stop scgc
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
root@storage1:~# gluster volume delete scgc
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: scgc: success
root@storage1:~# rm -rf /export/brick1/
root@storage2:~# rm -rf /export/brick1/
Use replica <number of replicas> after the name of the volume when creating it in order to create a replicated volume.
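For instance, assuming the bricks have been recreated under /export on both hosts, a replicated volume might be created with a command along these lines:
root@storage1:~# gluster volume create scgc replica 2 transport tcp storage1:/export/brick1 storage2:/export/brick1
root@storage1:~# gluster volume start scgc
With two bricks and replica 2, each file is stored on both hosts, so the usable capacity is that of a single brick.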