Kubernetes external LoadBalancer and Nextcloud deployment


Previous Status

Continuing from the previous posts https://blog.sakuragawa.moe/simple-homebrew-kubernetes-deployment/
and https://blog.sakuragawa.moe/deploy-ceph-storage-and-csi-for-pods-consume-in-kubernetes-home-lab/

The Kubernetes cluster bootstrapping is complete, and the Flannel network takes charge of connecting pods to each other and exposing the right services inside the cluster.

Service functionality is implemented by dynamically inserting iptables rules inside the cluster, so for now it is not possible to access a service from the outside world, not even from the gateway (the host machine of all those VMs).

And for the most fun part, after a proper ingress setup I can deploy my own cloud storage server (Nextcloud) to Kubernetes.

What is needed in the next step

For the external LoadBalancer, MetalLB is what I deployed here. It creates a speaker pod on every node, announces every LoadBalancer-type service to the outside gateway, and assigns it an available IP address.
Refer to : https://metallb.universe.tf/
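To get a feel for what this means on the consumer side, a plain Service of type LoadBalancer is all that is required; a minimal sketch (the names here are only for illustration, not anything from my cluster):

apiVersion: v1
kind: Service
metadata:
  name: demo-web            # hypothetical service name
spec:
  type: LoadBalancer        # MetalLB watches for this type and assigns an external IP from its pool
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 8080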

Another experimental thing is istio. It is relatively easy to deploy with helm and works well with MetalLB: the istio ingress gateway acts as a gateway inside the cluster and exposes certain services as virtual services on the edge of the service mesh, and it also handles encryption such as TLS/SSL.
Refer to : https://istio.io/
Refer to : https://helm.sh/

One thing I definitely want to deploy to the home lab is Nextcloud, an open-source private cloud service. Kubernetes is probably overkill for such a service, but thanks to vfreex I can deploy it easily.
Refer to: https://github.com/nextcloud/server
Refer to: https://github.com/vfreex/kube-nextcloud

Helm, the "package manager" of Kubernetes

Long story short, first we must acknowledge that not every Kubernetes cluster is created the same way: we apply custom configurations, different modules, and environment changes, so there is no easy plug'n'play solution for Kubernetes. For example, a cluster running BGP (Border Gateway Protocol) with Calico as its networking solution behaves differently from one running just Flannel and its CNI plugin.
Helm solves this by providing a package-management experience similar to the ones you use every day: it installs workloads like off-the-shelf software, with little Ops effort required on your side.
(By the way, Kustomize seems to be another solution to this, and it is already embedded in Kubernetes. Check out https://kustomize.io/ for more information.)

Helm installs services from charts; a chart contains the environment variables, ConfigMaps, container image information and secrets needed to run a piece of software.
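For example, with Helm V2 the stable chart repository is configured out of the box, so installing packaged software is roughly the following (the chart and release names are just an illustration):

helm search wordpress                               # look for a chart in the configured repos
helm inspect values stable/wordpress                # show the configurable values the chart exposes
helm install stable/wordpress --name my-wordpress   # install it as a named release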

Note that in the current Helm V2, Tiller is the component that interacts with the Kubernetes core components, while Helm is the client that fetches chart files and issues commands to Tiller. Tiller is deployed to the kube-system namespace by default.

Since I installed Helm V2 while V3 is already available, I will just refer to other documentation on how to install it with Tiller: https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/
As of now (Dec. 2019), Kustomize and Helm V3 are better options than this.
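For completeness, the usual Helm V2 bootstrap looks roughly like this; the service account name follows the common convention rather than anything specific to my cluster:

kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller     # deploys Tiller into the kube-system namespace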

With Helm and Tiller up and running, we can check their status with helm version:

[dalamud@dalamud helm]$ helm version
Client: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}

The command reports the client version and the Tiller version fetched from the cluster.
Sometimes Tiller may crash due to an RBAC problem; granting it the cluster-admin role solves this (the istio repo ships a manifest for this, and the bootstrap commands above achieve the same).

Besides the use case here of installing istio, Helm has many other charts for off-the-shelf software; check out https://hub.helm.sh/ for more.

Istio install and ingress config

Continuing with the istio installation: the Helm chart is provided inside the GitHub repo. Use the following command from the istio folder to install the chart, and make sure the istio-system namespace is created before this:

helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
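The namespace has to exist first, and once istio-init has finished registering the CRDs the main chart is installed the same way; roughly, following the upstream Helm instructions (flags and values files omitted here):

kubectl create namespace istio-system                # must exist before the istio-init install above

# after the istio-init jobs have finished creating the CRDs:
helm install install/kubernetes/helm/istio --name istio --namespace istio-system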

This should be relatively painless because helm has already done the environment configuration for us. To see whether the install has completed, use the usual commands:

kubectl get svc -n istio-system
kubectl get po -n istio-system

Now the ingress function should already work. As in the official documentation, the httpbin example is easy to configure and can serve as a reference for other services as well:

export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
echo $INGRESS_PORT
echo $SECURE_INGRESS_PORT

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
EOF

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF

curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT/status/200

Commands used to deploy the httpbin ingress example (cleaned up from my shell history)

The Gateway and VirtualService kinds are CRDs (custom resource definitions) defined and consumed by istio: a Gateway defines an ingress, and a VirtualService defines where an ingress request should land. Similar to an OpenShift Route, the ingress can bind a given hostname to a service; here httpbin.example.com is the configured hostname, so a request arriving at the ingress with that hostname gets routed to the httpbin service. By the way, the wildcard hostname * is also acceptable, in which case the istio gateway will always route traffic to the one bound VirtualService, as sketched below.
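A sketch of that wildcard case, purely illustrative and not something I applied here (the name is hypothetical):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: catch-all-gateway      # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                      # match any hostname; a VirtualService bound to this gateway gets all the traffic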

But istio still has no way to get traffic out of the cluster: with the default helm installation the external IP address stays pending, and the ingress service only binds a NodePort on the node it is assigned to. That is why the first lines above use kubectl to look up which node and which port expose the istio ingress, and why curl has to set the hostname on the request to reach the service.

To solve that problem, we need to assign an IP address to the istio ingress service using an external LoadBalancer; in my case, MetalLB.

Install and config MetalLB

The installation of MetalLB is very simple if we leave Kustomize aside: as in the official documentation, the default manifest contains everything the containers need:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

This command creates the metallb-system namespace along with the service account, role bindings, DaemonSet and so on; listing the resources in that namespace (e.g. with kubectl get all -n metallb-system) shows something like this:

NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-6bcfdfd677-7tmjg   1/1     Running   4          36d
pod/speaker-b6sqr                 1/1     Running   4          36d
pod/speaker-kn59d                 1/1     Running   3          36d
pod/speaker-lg6hs                 1/1     Running   3          36d
pod/speaker-m54lw                 1/1     Running   3          36d
pod/speaker-vd6fz                 1/1     Running   4          36d
pod/speaker-xlpzj                 1/1     Running   3          36d

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/speaker   6         6         6       6            6           beta.kubernetes.io/os=linux   36d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           36d

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-6bcfdfd677   1         1         1       36d

The controller and speaker take care of exposing inside services to the outside: in layer2 mode the speaker answers for the new service address using the host machine's MAC address.
But we still need to configure which IP address pool it should announce from, so a ConfigMap is needed:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.2-192.168.122.255

I set it to use 192.168.122.x because I use the default libvirt configuration to start those virtual machines, and my host machine acts as the gateway at 192.168.122.1.
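Saving the manifest above as, say, metallb-config.yaml (the filename is arbitrary) and applying it is enough:

kubectl apply -f metallb-config.yaml
kubectl -n metallb-system get configmap config -o yaml   # verify the address pool was picked up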
With the ConfigMap applied, the istio ingress should be assigned an address that is accessible from outside the cluster (from the gateway, because of NAT mode).

For example, here the istio ingress gets 192.168.122.2 as its IP address:

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                                                                      AGE
istio-ingressgateway     LoadBalancer   10.100.255.243   192.168.122.2   15020:30255/TCP,80:31692/TCP,443:31810/TCP,15029:30640/TCP,15030:31790/TCP,15031:32538/TCP,15032:30545/TCP,15443:31589/TCP   57d

Create proper virtual service and app

Thanks to vfreex we already have an awesome template for deploying Nextcloud to the Kubernetes cluster:
https://github.com/vfreex/kube-nextcloud
For the installation method please refer to the repo; with just minor changes, like indicating which StorageClass to use (see the sketch below), the installation of the database and app is pretty simple.
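As an illustration of the kind of change involved: the manifests request persistent volumes, and the relevant edit is typically a storageClassName on the PVC. The names below are only assumptions (the StorageClass refers back to the Ceph CSI setup from the earlier post); check the actual manifests in the repo:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data            # hypothetical PVC name
spec:
  storageClassName: csi-rbd-sc    # assumed name of the Ceph RBD StorageClass from the earlier post
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi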

As for ingress, istio uses two custom resources, virtualservices.networking.istio.io and gateways.networking.istio.io, to define which services the user would like to expose outside the cluster.

Referring to https://istio.io/docs/reference/config/networking/virtual-service/ and https://istio.io/docs/reference/config/networking/gateway/, we can write our own virtual service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nextcloud          # added for completeness; the name is arbitrary
spec:
  gateways:
  - nextcloud-gateway
  hosts:
  - cloud.valkure.de
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: kube-nextcloud
        port:
          number: 80

This is a very simple one, binding the given hostname and the root URI to the kube-nextcloud service on port 80; istio can also rewrite URIs or bind the wildcard hostname *.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nextcloud-gateway  # must match the gateways entry in the VirtualService above
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - cloud.valkure.de
    port:
      name: http
      number: 80
      protocol: HTTP

And here is the example Gateway I use; again very simple. The VirtualService defines which Gateway it uses, and the Gateway defines which host, port and protocol it serves.

After everything is up and running, an HTTP request to port 80 of 192.168.122.2 with the hostname cloud.valkure.de will give you the Nextcloud welcome page.
Changing the DNS resolution or adding an /etc/hosts entry works just the same.
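Concretely, either of these works from the gateway host, using the IP and hostname assigned above:

# point the hostname at the MetalLB-assigned address
echo "192.168.122.2 cloud.valkure.de" | sudo tee -a /etc/hosts
curl -I http://cloud.valkure.de/

# or skip name resolution entirely and set the Host header by hand
curl -I -H "Host: cloud.valkure.de" http://192.168.122.2/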

Thanks for reading; the home lab is now complete with a running service, which brings this series of posts to an end.
