For this lab, we will use three virtual machines (storage1, storage2 and storage3). On the host, as the student user, change to the scgc/ directory and download the lab archive:

student@saisp:~$ cd scgc
student@saisp:~/scgc$ wget --user=user-curs --ask-password http://repository.grid.pub.ro/cs/scgc/laboratoare/lab-03.zip
student@saisp:~/scgc$ unzip lab-03.zip

After unpacking, the virtual machine images (qcow2 format) should be present, as well as two scripts (lab03-start and lab03-stop). Start the topology using the lab03-start script:

student@saisp:~/saisp$ ./lab03-start

To connect to a virtual machine, use SSH (replace X with the number of the machine):

student@saisp:~/saisp$ ssh -l root 192.168.1.X

The password for both the student and root users is student.

Start by connecting to storage1:

student@saisp:~/saisp$ ssh -l root 192.168.1.1
mdadm is the tool used to create and manage software RAID arrays in Linux. Install it on storage1:
root@storage1:~# apt-get update
[...]
root@storage1:~# apt-get install mdadm
[...]
[ ok ] Assembling MD array md0...done (started [3/3]).
[ ok ] Generating udev events for MD arrays...done.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
[ ok ] Starting MD monitoring service: mdadm --monitor.
Processing triggers for initramfs-tools (0.115) ...
update-initramfs: Generating /boot/initrd.img-3.12-1-amd64
root@storage1:~#
If the installer asks which arrays are needed by the root file system, leave the field blank (or enter none).
Notice that a RAID0 array has been configured during the installation process. Let us inspect it:
root@storage1:~# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdb1[0] sdd1[2] sdc1[1]
      3139584 blocks super 1.2 512k chunks

unused devices: <none>
We can inspect the existing RAID array in detail with the following command:
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Mar 17 00:49:44 2014
     Raid Level : raid0
     Array Size : 3139584 (2.99 GiB 3.21 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Mar 17 00:49:44 2014
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : raid:0
           UUID : 7c853116:6277002c:1799d9e1:5a0eadcd
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
We will attempt to mark one of the disks in the array as faulty, which is the first step in removing it:
root@storage1:~# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set device faulty failed for /dev/sdd1: Device or resource busy
Since RAID0 provides no redundancy, its member disks cannot be marked as faulty or removed. Let us remove the existing RAID0 array and replace it with a RAID1 array containing the same disks. We begin by stopping the existing array.
root@storage1:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@storage1:~# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
We proceed to zero the superblocks of the previously used partitions to clean them.
root@storage1:~# mdadm --zero-superblock /dev/sdb1
root@storage1:~# mdadm --zero-superblock /dev/sdc1
root@storage1:~# mdadm --zero-superblock /dev/sdd1
Finally, we can create the new array.
root@storage1:~# mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Mar 10 22:14:33 2018
     Raid Level : raid1
     Array Size : 1046976 (1022.61 MiB 1072.10 MB)
  Used Dev Size : 1046976 (1022.61 MiB 1072.10 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Mar 10 22:14:50 2018
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : storage1:0  (local to host storage1)
           UUID : e1b180bf:b5232a89:d0420e5c:f9d74013
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
Let us remove a disk from the newly created array. This time the operation succeeds, since RAID1 stores redundant copies of the data.
root@storage1:~# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed

       2       8       49        -      faulty   /dev/sdd1
root@storage1:~# mdadm /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       0        0        4      removed
Using partitions sdb2, sdc2 and sdd2, create the md1 RAID5 array on storage1.
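A possible sketch of the command is shown below (assuming the partitions are not already part of another array; adjust the device names if your setup differs):

root@storage1:~# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2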
Do the same setup on storage2. We will use it at a later stage.
The result should look like the output below:
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Mar 10 22:31:42 2018
     Raid Level : raid5
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Mar 10 22:31:51 2018
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 59% complete

           Name : storage1:1  (local to host storage1)
           UUID : 129b8d62:eba119a1:172adf60:1adb20ee
         Events : 10

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      spare rebuilding   /dev/sdd2
Mark sdb2 as faulty and then remove it from the array.
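A minimal sketch of the two steps, reusing the commands shown earlier for md0:

root@storage1:~# mdadm /dev/md1 --fail /dev/sdb2
root@storage1:~# mdadm /dev/md1 --remove /dev/sdb2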
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Mar 10 22:31:42 2018
     Raid Level : raid5
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 10 22:36:49 2018
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : storage1:1  (local to host storage1)
           UUID : 129b8d62:eba119a1:172adf60:1adb20ee
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
We can add a disk in place of the one removed. In this case, we will use the same disk, sdb2.
root@storage1:~# mdadm /dev/md1 --add /dev/sdb2
mdadm: added /dev/sdb2
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Mar 10 22:31:42 2018
     Raid Level : raid5
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Mar 10 22:37:09 2018
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 25% complete

           Name : storage1:1  (local to host storage1)
           UUID : 129b8d62:eba119a1:172adf60:1adb20ee
         Events : 27

    Number   Major   Minor   RaidDevice State
       4       8       18        0      spare rebuilding   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
In order to make our RAID configuration persistent across reboots, we can append it to mdadm's configuration file:
root@storage1:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
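On Debian-based systems the initramfs keeps its own copy of mdadm.conf, so it may also need to be refreshed; this extra step is an assumption about the lab image, skip it if the arrays already assemble correctly at boot:

root@storage1:~# update-initramfs -u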
Reboot storage1 and verify that both md0 and md1 are active.
root@storage1:~# reboot
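After the machine comes back up, one way to verify the arrays (a quick sketch) is:

root@storage1:~# cat /proc/mdstat
root@storage1:~# mdadm --detail /dev/md0
root@storage1:~# mdadm --detail /dev/md1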
GlusterFS is a distributed file system used to aggregate storage from multiple sources. We will use it in conjunction with a RAID array to test various forms of redundancy.
XFS is the recommended file system for a GlusterFS configuration. Ext4 is also supported.
Begin by installing xfsprogs:
root@storage1:~# apt-get install xfsprogs
[...]
Setting up xfsprogs (3.2.1) ...
Processing triggers for libc-bin (2.17-97) ...
root@storage1:~#
Proceed by creating an XFS file system on the previously created RAID5 array.
root@storage1:~# fdisk /dev/md1
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x95711919.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-4190207, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4190207, default 4190207):
Using default value 4190207

Command (m for help): q

root@storage1:~# mkfs.xfs -i size=512 /dev/md1
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md1               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the newly created file system on a local path.
root@storage1:~# mkdir /export
root@storage1:~# echo "/dev/md1 /export xfs defaults 1 2" >> /etc/fstab
root@storage1:~# mount /export
root@storage1:~# df -h | grep export
/dev/md1        2.0G   33M  2.0G   2% /export
Perform the same XFS setup (format the RAID5 array and mount it on /export) on storage2.
Install glusterfs-server:
root@storage1:~# apt-get install glusterfs-server
[...]
Setting up glusterfs-common (3.5.2-2+deb8u3) ...
Setting up glusterfs-client (3.5.2-2+deb8u3) ...
Setting up glusterfs-server (3.5.2-2+deb8u3) ...
[ ok ] Starting glusterd service: glusterd.
Setting up dmsetup (2:1.02.90-2.2+deb8u1) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for libc-bin (2.17-97) ...
Processing triggers for initramfs-tools (0.115) ...
update-initramfs: Generating /boot/initrd.img-3.12-1-amd64
root@storage1:~#
Install glusterfs-server on storage2 as well.
Now, let us connect the two hosts into a trusted storage pool.
root@storage1:~# gluster peer probe storage2
peer probe: success.
root@storage1:~# gluster peer status
Number of Peers: 1

Hostname: storage2
Uuid: 7faf0a96-48ea-4c23-91af-80311614fd57
State: Peer in Cluster (Connected)
root@storage1:~#
Once the hosts are connected, we can create a GlusterFS volume using partitions on both machines.
root@storage1:~# gluster volume create scgc transport tcp storage1:/export/brick1 storage2:/export/brick1
volume create: scgc: success: please start the volume to access data
root@storage1:~# gluster volume info

Volume Name: scgc
Type: Distribute
Volume ID: 91f6f6f3-9473-48e2-b49a-e8dcbe5e45e0
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/export/brick1
Brick2: storage2:/export/brick1
We will set up access permissions for our network and start the volume.
root@storage1:~# gluster volume set scgc auth.allow 192.168.1.*
volume set: success
root@storage1:~# gluster volume start scgc
volume start: scgc: success
We will now use storage3 as a GlusterFS client and mount the scgc volume.
root@storage3:~# apt-get install glusterfs-client
[...]
Setting up glusterfs-common (3.5.2-2+deb8u3) ...
Setting up glusterfs-client (3.5.2-2+deb8u3) ...
Setting up dmsetup (2:1.02.90-2.2+deb8u1) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for libc-bin (2.17-97) ...
Processing triggers for initramfs-tools (0.115) ...
update-initramfs: Generating /boot/initrd.img-3.12-1-amd64
root@storage3:~# mkdir /export
root@storage3:~# mount -t glusterfs storage1:/scgc /export
root@storage3:~# df -h | grep export
storage1:/scgc  4.0G   66M  4.0G   2% /export
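To see how the Distribute volume spreads data across bricks, a quick sketch (the file names are only an illustration) is to create a few files from the client and then list each brick on the servers; every file should land on exactly one brick:

root@storage3:~# touch /export/file{1..10}
root@storage1:~# ls /export/brick1
root@storage2:~# ls /export/brick1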
Test that removing and adding disks to and from the RAID arrays does not affect the GlusterFS volume. What limitations can you notice?
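One possible sketch of such a test, assuming the device names from the earlier setup:

root@storage1:~# mdadm /dev/md1 --fail /dev/sdd2
root@storage1:~# mdadm /dev/md1 --remove /dev/sdd2
root@storage3:~# ls /export
root@storage1:~# mdadm /dev/md1 --add /dev/sdd2

While md1 is degraded, the files should remain accessible from storage3, since the RAID5 array can tolerate the loss of a single disk.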
In the above setup, we used RAID5 in order to provide redundancy and GlusterFS to aggregate storage.
Now use GlusterFS to replicate data.
Use the same disks on the two hosts as before (sdb2, sdc2, sdd2) and arrange them in a suitable RAID array.
To remove the existing GlusterFS volume, do the following:
root@storage3:~# umount /export
root@storage1:~# gluster volume stop scgc
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
root@storage1:~# gluster volume delete scgc
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: scgc: success
root@storage1:~# rm -rf /export/brick1/
root@storage2:~# rm -rf /export/brick1/
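With the volume and bricks removed, one possible way to rebuild the underlying storage on each server is sketched below. The choice of RAID0 is an assumption (redundancy will now come from GlusterFS replication); repeat the same steps on storage2:

root@storage1:~# umount /export
root@storage1:~# mdadm --stop /dev/md1
root@storage1:~# mdadm --zero-superblock /dev/sdb2
root@storage1:~# mdadm --zero-superblock /dev/sdc2
root@storage1:~# mdadm --zero-superblock /dev/sdd2
root@storage1:~# mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2
root@storage1:~# mkfs.xfs -f -i size=512 /dev/md1
root@storage1:~# mount /export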
Use replica <number of replicas> after the name of the volume when creating it in order to create a replicated volume.
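For example, a hedged sketch of a two-way replicated volume over the same brick paths as before:

root@storage1:~# gluster volume create scgc replica 2 transport tcp storage1:/export/brick1 storage2:/export/brick1
root@storage1:~# gluster volume start scgc
root@storage3:~# mount -t glusterfs storage1:/scgc /export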