For this lab I used VMware. Prepare at least 3 virtual machines: 2 as servers and 1 as a client. Server spec: 1 CPU, 2 GB RAM, at least 2 disks; client spec: 1 CPU, 2 GB RAM. I used my own machine, so adjust the specs to whatever resources you actually have.
First I changed the hostnames of the two server VMs to node1 and node2; with those names in place, the hosts file configured next is easier to remember.
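The article does not show the rename commands themselves; a minimal sketch, assuming CentOS 7's hostnamectl is used, might look like this:

# run on the first server
hostnamectl set-hostname node1
# run on the second server
hostnamectl set-hostname node2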
node1 info
[root@node1 ~]# hostname
node1
[root@node1 ~]# uname -r
3.10.0-957.el7.x86_64
[root@node1 ~]# sestatus    # SELinux has to be disabled here (line 5 of /etc/sysconfig/selinux)
SELinux status:                 disabled
[root@node1 ~]# systemctl status firewalld    # the firewall has to be stopped
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-06-13 19:29:51 CST; 1h 21min ago
     Docs: man:firewalld(1)
[root@node1 ~]# cat >> /etc/hosts <<EOF    # configure the hosts file
> <node1 IP address> node1
> <node2 IP address> node2
> EOF
node2 info
[root@node2 ~]# hostname
node2
[root@node2 ~]# uname -r
3.10.0-957.el7.x86_64
[root@node2 ~]# sestatus    # SELinux has to be disabled here (line 5 of /etc/sysconfig/selinux)
SELinux status:                 disabled
[root@node2 ~]# systemctl status firewalld    # the firewall has to be stopped
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2019-06-13 19:29:51 CST; 1h 21min ago
     Docs: man:firewalld(1)
[root@node2 ~]# cat >> /etc/hosts <<EOF    # configure hosts resolution
> <node1 IP address> node1
> <node2 IP address> node2
> EOF
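The outputs above show SELinux disabled and firewalld stopped, but not how to get there. A hedged sketch of the usual commands on both nodes (setenforce 0 only switches to permissive for the current session; the config edit becomes fully effective after a reboot):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux    # persistent setting, applied at next boot
setenforce 0                                                       # stop enforcing for the running session
systemctl stop firewalld                                           # stop the firewall now
systemctl disable firewalld                                        # keep it from starting at boot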
Mount the disk on node1
[root@node1 ~]# mkfs.xfs /dev/sdb
[root@node1 ~]# mkdir -p /data/brick1
[root@node1 ~]# echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
[root@node1 ~]# mount -a && mount
Mount the disk on node2
[root@node2 ~]# mkfs.xfs /dev/sdb
[root@node2 ~]# mkdir -p /data/brick1
[root@node2 ~]# echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
[root@node2 ~]# mount -a && mount
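If you want to confirm on each node that the brick filesystem is really mounted before continuing, a quick check with plain coreutils could be:

df -hT /data/brick1    # should report /dev/sdb with an xfs filesystem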
Run on both node1 and node2
yum -y install centos-release-gluster
# switch the repo to a faster mirror
sed -i 's#http://mirror.centos.org#https://mirrors.shuosc.org#g' /etc/yum.repos.d/CentOS-Gluster-6.repo
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
rpm -qa glusterfs    # check the installed version
Run on both nodes
systemctl start glusterd.service
systemctl status glusterd.service    # check the state of glusterd.service
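The service is only started here; assuming you also want glusterd to come back after a reboot, one more command on both nodes takes care of that:

systemctl enable glusterd.service    # start glusterd automatically at boot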
Run on node1
gluster peer probe node2
Run on node2
gluster peer probe node1
Note: once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool; it must be probed from the pool.
Run on node1
gluster peer status    # check the status
Run on node2
gluster peer status
Note: the UUIDs of the two nodes are different.
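As an extra sanity check, the whole trusted pool, including each member's UUID, can be listed from either node with the stock pool subcommand:

gluster pool list    # prints UUID, hostname and connection state of every pool member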
Run on both nodes
mkdir -p /data/brick1/gv0
Run on either node
gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0
# Hint from the error: it is recommended to create the volume on a non-root partition, but for convenience we did not attach an extra disk to mount here, so the root partition is used by default; appending a force parameter at the end is enough.
Error message:
volume create: gv0: failed: The brick node1:/data/brick1/gv0 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
Append force to the end of the command above:
[root@node1 ~]# gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0 force
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator Guide/Split brain and ways to deal with it/.
Do you still want to continue? (y/n) y
volume create: gv0: success: please start the volume to access data
Start the volume
gluster volume start gv0
Check the info
[root@node1 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: db2e814d-43bc-4af2-8133-276623668973
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/data/brick1/gv0
Brick2: node2:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
With that, the server-side configuration is complete.
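Before moving on to the client, it can be worth verifying that both bricks and the self-heal daemon are online; a short check from either node:

gluster volume status gv0    # every brick should show Online: Y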
Note: the client machine must have hosts resolution configured as well, otherwise the connection will fail.
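A sketch of that client-side preparation, assuming the client also pulls the FUSE client packages from the same centos-release-gluster repository used on the servers (fill in the IP placeholders):

cat >> /etc/hosts <<EOF
<node1 IP address> node1
<node2 IP address> node2
EOF
yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-fuse    # glusterfs-fuse provides mount.glusterfs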
[root@localhost ~]# mount.glusterfs node1:/gv0 /mnt
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.6G   16G   9% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               1014M  133M  882M  14% /boot
/dev/sr0                 4.3G  4.3G     0 100% /dvd
tmpfs                     98M     0   98M   0% /run/user/0
node1:/gv0                17G  1.7G   16G  10% /mnt
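The article never shows where the 100 copy* files come from; assuming they are created the way the upstream quick-start guide does it, the client-side loop might look like this (copy-test- is just an illustrative prefix):

for i in $(seq -w 1 100); do cp -rp /var/log/messages /mnt/copy-test-$i; done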
Check the files on the client
[root@localhost ~]# ll -A /mnt/copy* | wc -l
100
Check the files on the server
[root@node1 ~]# ls -lA /data/brick1/gv0/copy* | wc -l
100
This completes the basic GlusterFS setup.
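Optionally, if the client should remount the volume automatically after a reboot, the usual approach is a glusterfs entry in /etc/fstab (_netdev delays the mount until the network is up); a sketch assuming the same node1:/gv0 volume and /mnt mount point:

echo 'node1:/gv0 /mnt glusterfs defaults,_netdev 0 0' >> /etc/fstab
mount -a    # re-read fstab and mount anything not yet mounted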
Author: GuHu (孤狐)
Reposted from: https://www.cnblogs.com/Guhu/p/11019762.html