Bare Metal Installation of GlusterFS and Heketi

GlusterFS is a popular solution for distributed storage. To provide persistent storage to a Kubernetes cluster from a GlusterFS cluster, Heketi is widely used as the management API.


Architecture and Server Provisioning

In this case, six servers are used to run OpenShift and all the related services. Each server is an HP ProLiant DL320e Gen8 v2, featuring an Intel(R) Xeon(R) E3-1220 v3 CPU, 32 GB of memory, and 500 GB of disk space. Since Heketi supports only bare devices or bare partitions, each server carries a 400 GB partition for GlusterFS at /glusterfs, with the rest used for running CentOS 7, very much like this:

# lsblk
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0   500M  0 part /boot
├─sda2   8:2    0   400G  0 part /glusterfs
├─sda3   8:3    0  15.7G  0 part [SWAP]
├─sda4   8:4    0     1K  0 part 
└─sda5   8:5    0  49.6G  0 part /
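To keep the /glusterfs mount across reboots, the partition also needs an /etc/fstab entry; a minimal sketch, assuming /dev/sda2 as shown above and an XFS filesystem (the filesystem type is an assumption):

```
/dev/sda2  /glusterfs  xfs  defaults  0 0
```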

GlusterFS and Heketi

GlusterFS and Heketi are natural partners for OpenShift: while GlusterFS is a popular solution for distributed storage, Heketi provides a clean management API on top of it.

Install GlusterFS and Heketi

GlusterFS can be installed from source, or from the packages provided by the CentOS Storage SIG:

# yum install -y centos-release-gluster
# yum install -y glusterfs-server heketi

GlusterFS requires 24007-24008/tcp open for management traffic, plus one port per brick starting at 49152. On CentOS, we can directly add the predefined glusterfs service with the firewall-cmd command:

# systemctl enable firewalld
# systemctl start firewalld
# firewall-cmd --add-service=glusterfs --zone=public --permanent
# firewall-cmd --reload

Configure Gluster FS

First of all, each Gluster node needs to be added to a trusted pool. On any one of the Gluster nodes, probe the others:

# systemctl enable glusterd
# systemctl start glusterd
# gluster peer probe hp-dl320eg8-04
# gluster peer probe hp-dl320eg8-06
# gluster peer probe hp-dl320eg8-11
# gluster peer probe hp-dl320eg8-14
# gluster peer probe hp-dl320eg8-15

Once all the commands return the peer probe: success prompt, the GlusterFS cluster should be ready. Check the peer status:

# gluster peer status
Number of Peers: 5

Uuid: 1db49f8f-b4cf-4339-91f4-34620b4e6c30
State: Accepted peer request (Connected)

Uuid: 15527ee5-cf46-4b49-b763-068e5409e59e
State: Accepted peer request (Connected)

Uuid: 6cd9e255-3f49-43f9-a834-2690951dc92c
State: Accepted peer request (Connected)


You can run a few more checks, such as gluster pool list, before the next step just in case.

Set up Heketi

The Heketi server is configured by editing /etc/heketi/heketi.json. A few points in the file deserve attention: the port, authentication, and the executor. Note that the admin key must be a literal string, and the SSH user must be root (or a user with passwordless sudo) whose private key is referenced by keyfile:

  "port": "8080",
  "use_auth": true,
  ...
  "jwt": {
    "admin": {
      "key": "bded79b6a0b66b656c00caac96eb050d"
    },
    ...
  },
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/root/.ssh/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    ...
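The admin key is just a random string; one simple way to generate it (16 bytes is a convention here, not a requirement) is with openssl, pasting the literal output into the file:

```shell
# print a random 32-hex-character secret suitable for the heketi admin key
openssl rand -hex 16
```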

Then we move to heketi-cli to complete the configuration process. First, we need to create a cluster using cluster create:

# heketi-cli --server http://localhost:8080 --user admin --secret "your secret" cluster create
Cluster id: b2ea4cca6d69eced83b5786350033b7a

Then we use node add to register all the peers. In this case, I simply use a for loop to add them all:

# for i in 03 04 06 11 14 15 ; do heketi-cli --server http://localhost:8080 --user admin --secret "bded79b6a0b66b656c00caac96eb050d" node add --cluster=b2ea4cca6d69eced83b5786350033b7a --management-host-name hp-dl320eg8-$i --storage-host-name hp-dl320eg8-$i --zone 1 ; done
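Before running the loop for real, it can help to dry-run it with echo in place of heketi-cli to check that the hostname expansion is right (the cluster id is the one returned above):

```shell
# dry run: print the node add command for each host instead of executing it
CLUSTER=b2ea4cca6d69eced83b5786350033b7a
for i in 03 04 06 11 14 15 ; do
  echo heketi-cli node add --cluster=$CLUSTER \
    --management-host-name hp-dl320eg8-$i \
    --storage-host-name hp-dl320eg8-$i --zone 1
done
```

Note that Heketi can only provision volumes from devices it manages, so after node add each node's bare partition still has to be registered with something like heketi-cli device add --name /dev/sda2 --node <node-id>, where the node id comes from the node add output and /dev/sda2 is assumed from the partition layout above.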

Then we can create a volume for practical usage:

# heketi-cli --server http://localhost:8080 --user admin --secret "your secret" volume create --size 3 --replica 2
Volume id: ....
