====== Network File Systems ======
===== Lab Setup =====
  * We will be using a virtual machine in the [[http://cloud.grid.pub.ro/|faculty's cloud]].
  * Create a VM
  * When creating a virtual machine in the Launch Instance window:
    * Select **Boot from image** in the **Instance Boot Source** section
    * Select **SCGC Template** in the **Image Name** section
    * Select a flavor that is at least **m1.large**.
  * The username for connecting to the VM is ''student''
  * Within the above virtual machine, we will be running two virtual machines (''storage1'', ''storage2'')
  * First, download the laboratory archive in the ''scgc/'' directory:<code bash>
student@scgc:~$ cd scgc
[...]
</code>
  * After unzipping the archive, several KVM image files (''qcow2'' format) should be present, as well as two scripts (''lab03-start'' and ''lab03-stop'')
  * To run the virtual machines, use the ''lab03-start'' script:

<code bash>
student@scgc:~/scgc$ bash lab03-start
</code>
  * It may take a minute for the virtual machines to start
  * In order to connect to each of the machines, use the following command (substitute X with 1 or 2):

<code bash>
student@scgc:~/scgc$ ssh student@192.168.1.X
</code>
  * The password for both ''student'' and ''root'' users is ''student''
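
When you finish working (or want a clean restart), the companion ''lab03-stop'' script can shut the nested machines down. A minimal usage sketch, assuming it is invoked the same way as ''lab03-start'':

<code bash>
# assumption: lab03-stop takes no arguments, mirroring lab03-start
student@scgc:~/scgc$ bash lab03-stop
</code>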
Start by connecting to ''storage1'':<code bash>
student@scgc:~/scgc$ ssh student@192.168.1.1
</code>

First, update the package index and install the ''mdadm'' utility:
<code bash>
root@storage1:~# apt-get update
root@storage1:~# apt-get install mdadm
</code>
At this point, we can create a new RAID0 array from ''%%sdb1%%'', ''%%sdc1%%'', and ''%%sdd1%%'':
<code bash>
root@storage1:~# mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
</code>
We can inspect the current state of the array by checking the contents of ''%%/proc/mdstat%%'':
<code bash>
root@storage1:~# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0]
      3139584 blocks super 1.2 512k chunks

unused devices: <none>
</code>
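
Rebuilds and resyncs (which appear in later tasks) take a while; rather than re-running ''cat'', the file can be polled continuously. A small sketch, assuming the ''watch'' utility (part of ''procps'') is available:

<code bash>
# re-display /proc/mdstat every second; exit with Ctrl+C
root@storage1:~# watch -n 1 cat /proc/mdstat
</code>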

More details about an array can be obtained with ''mdadm --detail'':
<code bash>
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr 16 22:20:53 2020
        Raid Level : raid0
        Array Size : 3139584 (2.99 GiB 3.21 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr 16 22:20:53 2020
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : storage1:0  (local to host storage1)
              UUID : 3a32b1a5:d2a40561:cedff3db:fab25f1b
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
</code>
We will attempt to remove one of the disks from the array:
<code bash>
root@storage1:/home/student# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set device faulty failed for /dev/sdd1: Device or resource busy
</code>

The operation fails because RAID0 offers no redundancy: the array cannot keep running with a missing member, so the only option is to stop it entirely:
<code bash>
root@storage1:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@storage1:~# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
</code>

We proceed to zero the superblocks of the previously used partitions to clean them, and then create a RAID1 array from the same partitions.
<code bash>
root@storage1:~# mdadm --zero-superblock /dev/sdb1
root@storage1:~# mdadm --zero-superblock /dev/sdc1
root@storage1:~# mdadm --zero-superblock /dev/sdd1
root@storage1:~# mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@storage1:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Apr 16 22:26:51 2020
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr 16 22:27:30 2020
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : storage1:0  (local to host storage1)
              UUID : e749932d:26646eac:a21d0186:57f4f545
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
</code>

Unlike RAID0, RAID1 is redundant, so failing one of the member disks now succeeds:
<code bash>
root@storage1:~# mdadm /dev/md0 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
root@storage1:~# mdadm --detail /dev/md0
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed

       2       8       49        -      faulty   /dev/sdd1
</code>

The faulty disk can then be removed from the array, which leaves its slot empty:
<code bash>
root@storage1:~# mdadm /dev/md0 --remove /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md0
root@storage1:~# mdadm --detail /dev/md0
[...]
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed
</code>
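
If the disk were healthy again (here it was only failed administratively), it could be returned to the array, which would resynchronize it in the background. A sketch of the command:

<code bash>
# re-add the previously removed partition as a new member of the mirror
root@storage1:~# mdadm /dev/md0 --add /dev/sdd1
mdadm: added /dev/sdd1
</code>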

Next, a RAID5 array (''%%md1%%'') is created from the second partition of each disk. Immediately after creation, the array is still being built, which ''mdadm --detail'' reports as a rebuild in progress:
<code bash>
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr 16 22:30:56 2020
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 83% complete

              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 14

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      spare rebuilding   /dev/sdd2
</code>
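
To block until the initial build (or any later recovery) finishes instead of polling, ''mdadm'' can wait on the array. A sketch:

<code bash>
# returns once the resync/recovery of md1 has completed
root@storage1:~# mdadm --wait /dev/md1
</code>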

==== 3. [20p] Restoring RAID5 Array ====

After one of the member devices has been failed and removed, the array stays online, but in a degraded state:
<code bash>
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Apr 16 22:31:02 2020
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 18

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
</code>

After the missing device is added back into the array, recovery starts automatically:
<code bash>
root@storage1:~# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Thu Apr 16 22:30:28 2020
        Raid Level : raid5
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
     Used Dev Size : 1045504 (1021.00 MiB 1070.60 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Apr 16 22:54:26 2020
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : storage1:1  (local to host storage1)
              UUID : f1eaf373:3b6233a9:6e20a22e:0de9d93f
            Events : 23

    Number   Major   Minor   RaidDevice State
[...]
</code>

To make the array configuration persistent across reboots, append it to ''%%/etc/mdadm/mdadm.conf%%'' and regenerate the initramfs:
<code bash>
root@storage1:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
root@storage1:~# update-initramfs -u
</code>
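
To preview what gets appended, the scan can be run on its own first. Illustrative output (a sketch; the exact device paths and UUIDs depend on your arrays):

<code bash>
root@storage1:~# mdadm --detail --scan
ARRAY /dev/md/0 metadata=1.2 name=storage1:0 UUID=e749932d:26646eac:a21d0186:57f4f545
ARRAY /dev/md/1 metadata=1.2 name=storage1:1 UUID=f1eaf373:3b6233a9:6e20a22e:0de9d93f
</code>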

Next, we will create an XFS file system on top of the RAID5 array and mount it under ''%%/export%%''.
Begin by installing ''xfsprogs'':
<code bash>
root@storage1:~# apt install xfsprogs
</code>

We can now create a partition table on ''%%/dev/md1%%'' and format the device with XFS:
<code bash>
root@storage1:~# fdisk /dev/md1

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x629ddc02.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4182015, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4182015, default 4182015):

Created a new partition 1 of type 'Linux' and of size 2 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@storage1:~# mkfs.xfs -i size=512 /dev/md1
mkfs.xfs: /dev/md1 appears to contain a partition table (dos).
mkfs.xfs: Use the -f option to force overwrite.
root@storage1:~# mkfs.xfs -i size=512 -f /dev/md1
meta-data=/dev/md1               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=522752, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
</code>

After adding a corresponding entry to ''%%/etc/fstab%%'' and creating the ''%%/export%%'' mount point, the file system can be mounted:
<code bash>
root@storage1:~# mount /export
root@storage1:~# df -h | grep export
/dev/md1        2.0G   35M  2.0G   2% /export
</code>
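
For reference, the ''%%/etc/fstab%%'' line that makes the short ''mount /export'' form work could look like this (a sketch; the options shipped in the lab image may differ):

<code bash>
# device      mount point   fs type   options    dump  pass
/dev/md1      /export       xfs       defaults   0     0
</code>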
=== 5.2 [10p] GlusterFS setup ===
Install ''glusterfs-server'' and enable the gluster daemon on both storage VMs.
<code bash>
root@storage1:~# apt install glusterfs-server
root@storage1:~# systemctl enable --now glusterd
</code>
Now, let us connect the two hosts. You **must** first add hostname-to-IP mappings for the other storage VMs in ''%%/etc/hosts%%'' on each host.
<code bash>
root@storage1:~# gluster peer probe storage2
peer probe: success
root@storage1:~# gluster peer status
Number of Peers: 1

Hostname: storage2
Uuid: 919fb03c-ddc5-4bcc-bdc1-ce8780aaf7c1
State: Peer in Cluster (Connected)
root@storage1:~#
</code>

Next, create a distributed volume named ''scgc'' using one brick from each host, and inspect its parameters:
<code bash>
root@storage1:~# gluster volume create scgc storage1:/export/brick1 storage2:/export/brick1
volume create: scgc: success: please start the volume to access data
root@storage1:~# gluster volume info
Volume Name: scgc
Type: Distribute
Volume ID: e1e4b3b2-6efe-483d-b7aa-6761c5a01853
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/export/brick1
Brick2: storage2:/export/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
</code>

Before it can be mounted, the volume must be started:
<code bash>
root@storage1:~# gluster volume start scgc
volume start: scgc: success
</code>
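
Once started, the state of the brick processes can be verified. A sketch (output omitted here):

<code bash>
root@storage1:~# gluster volume status scgc
</code>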
=== 5.3 [5p] Mounting a GlusterFS volume ===
We will now use the host as a GlusterFS client and mount the ''scgc'' volume.
<code bash>
root@scgc:~# apt install glusterfs-client
root@scgc:~# mkdir /export
root@scgc:~# mount -t glusterfs storage1:/scgc /export
root@scgc:~# df -h | grep export
storage1:/scgc  4.0G   66M  4.0G   2% /export
</code>
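
Since the volume type is Distribute, each file is stored on exactly one brick. To see this in action, create a few files through the client mount and list each brick directly. A sketch, with hypothetical file names:

<code bash>
# on the client: create some files through the GlusterFS mount
root@scgc:~# touch /export/file{1..4}
# on the servers: each brick should hold only a subset of the files
root@storage1:~# ls /export/brick1
root@storage2:~# ls /export/brick1
</code>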

When you are done, unmount the volume on the client and stop it on the server:
<code bash>
root@scgc:~# umount /export
root@storage1:~# gluster volume stop scgc