Architecture and Server Provisioning
In this setup, six servers run OpenShift and all the related services. Each server is an HP ProLiant DL320e Gen8 v2 with an Intel(R) Xeon(R) E3-1220 v3 CPU, 32 GB of memory, and 500 GB of disk. Since Heketi supports only bare devices or bare partitions, each server carries a dedicated 400 GB partition for GlusterFS at /glusterfs, with the rest used to run CentOS 7. The layout looks like this:
# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 465.8G  0 disk
├─sda1     8:1    0   500M  0 part /boot
├─sda2     8:2    0   400G  0 part /glusterfs
├─sda3     8:3    0  15.7G  0 part [SWAP]
├─sda4     8:4    0     1K  0 part
└─sda5     8:5    0  49.6G  0 part /
GlusterFS and Heketi
GlusterFS and Heketi are a natural pairing for OpenShift: GlusterFS is a popular distributed storage solution, and Heketi adds a RESTful management API on top of it that OpenShift can use to provision volumes dynamically.
Install GlusterFS and Heketi
GlusterFS can be built from source or installed from the packages provided by the CentOS Storage SIG:
# yum install -y centos-release-gluster
# yum install -y glusterfs-server heketi
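Note that heketi is the server package; heketi-cli, used later in this post, ships separately. Assuming the usual Storage SIG packaging, it comes from the heketi-client package:
# yum install -y heketi-client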
GlusterFS needs 24007-24008/tcp open for the management daemons, plus one port per brick starting at 49152/tcp. The predefined glusterfs firewalld service covers all of these, so on CentOS we can simply add it with the firewall-cmd command:
# systemctl enable firewalld
# systemctl start firewalld
# firewall-cmd --add-service=glusterfs --zone=public --permanent
# firewall-cmd --reload
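To verify, list the services enabled for the zone; glusterfs should appear in the output:
# firewall-cmd --list-services --zone=public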
Configure GlusterFS
First of all, each Gluster node needs to be added to the trusted storage pool. On any one node, start glusterd and probe the other peers:
# systemctl enable glusterd
# systemctl start glusterd
# gluster peer probe storage-04.intra.sakuragawa.cloud
# gluster peer probe storage-06.intra.sakuragawa.cloud
# gluster peer probe storage-11.intra.sakuragawa.cloud
# gluster peer probe storage-14.intra.sakuragawa.cloud
# gluster peer probe storage-15.intra.sakuragawa.cloud
Once all the commands return peer probe: success, the GlusterFS cluster should be ready. Check the peer status:
# gluster peer status
Number of Peers: 5

Hostname: storage-04.intra.sakuragawa.cloud
Uuid: 1db49f8f-b4cf-4339-91f4-34620b4e6c30
State: Accepted peer request (Connected)

Hostname: storage-06.intra.sakuragawa.cloud
Uuid: 15527ee5-cf46-4b49-b763-068e5409e59e
State: Accepted peer request (Connected)

Hostname: storage-11.intra.sakuragawa.cloud
Uuid: 6cd9e255-3f49-43f9-a834-2690951dc92c
State: Accepted peer request (Connected)

....
Before moving on, it is worth exercising the pool a little more, for example by creating and removing a small throwaway volume by hand, as sketched below.
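A minimal sanity check along those lines, with illustrative brick paths under the /glusterfs partition and an arbitrary pair of peers:
# gluster volume create test-vol replica 2 \
    storage-03.intra.sakuragawa.cloud:/glusterfs/test-brick \
    storage-04.intra.sakuragawa.cloud:/glusterfs/test-brick
# gluster volume start test-vol
# gluster volume info test-vol
# gluster volume stop test-vol
# gluster volume delete test-vol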
Set up Heketi
The Heketi server is configured by editing /etc/heketi/heketi.json. A few settings in the file deserve attention: the port, authentication, and the executor.
......
"port": "8080",
......
"use_auth": true,
......
"key": "$(openssl rand -hex 16)"
......
"executor": "ssh",
"sshexec": {
    "keyfile": "/root/.ssh/id_rsa",
    "user": "$sudoers",
    "port": "22",
    "fstab": "/etc/fstab"
},
......
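Two caveats here. heketi.json is plain JSON, so $(openssl rand -hex 16) is not expanded for you: run it once and paste the literal output as the key. Likewise, "$sudoers" stands for a real account with sudo rights on the storage nodes (root is assumed in the sketch below, matching the keyfile above). The ssh executor also needs passwordless SSH from the Heketi host to every Gluster node, and the service must be running before heketi-cli can reach it:
# openssl rand -hex 16
# ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''
# for i in 03 04 06 11 14 15 ; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@storage-$i.intra.sakuragawa.cloud ; done
# systemctl enable heketi
# systemctl start heketi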
Then we move on to heketi-cli to complete the configuration. First, a cluster has to be created with cluster create:
# heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 --user admin --secret "your secret" cluster create
Cluster id: b2ea4cca6d69eced83b5786350033b7a
Then we use node add to register all six nodes in the cluster. In this case, I simply use a for loop to add them all:
# for i in 03 04 06 11 14 15 ; do
    heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 \
        --user admin --secret "bded79b6a0b66b656c00caac96eb050d" \
        node add --cluster=b2ea4cca6d69eced83b5786350033b7a \
        --management-host-name storage-$i.intra.sakuragawa.cloud \
        --storage-host-name storage-$i.intra.sakuragawa.cloud \
        --zone 1
done
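One step is easy to miss: Heketi cannot provision anything until each node has at least one bare device registered with device add. A sketch, assuming the dedicated /dev/sda2 partition from the layout above is handed over to Heketi (it must be unmounted and unformatted, since Heketi runs pvcreate on it; if you used it for the manual test volume earlier, clean it up first) and node IDs are taken from heketi-cli node list:
# heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 \
    --user admin --secret "bded79b6a0b66b656c00caac96eb050d" node list
# heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 \
    --user admin --secret "bded79b6a0b66b656c00caac96eb050d" \
    device add --name /dev/sda2 --node <node-id>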
Then we can create a volume for practical use:
# heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 --user admin --secret "your secret" volume create --size 3 --replica 2
Volume id: ....
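The new volume behaves like any other GlusterFS volume. Assuming a volume name taken from heketi-cli volume list (Heketi names volumes vol_<id> by default), it can be mounted directly:
# heketi-cli --server http://storage-03.intra.sakuragawa.cloud:8080 \
    --user admin --secret "your secret" volume list
# mount -t glusterfs storage-03.intra.sakuragawa.cloud:vol_<id> /mnt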