Revision history

  • 1.0.0 : 2020/9/25

    • k8s-installer: v1.19.1-1

1. Introduction

1.1. The purpose of this book

This document describes how to install and use Kubernetes in an on-premises environment.

1.2. Glossary

Term / Meaning

Master Node

The master node of Kubernetes. The Kubernetes master components (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) run on this node.

Worker Node

A worker node of Kubernetes. A node that runs workloads.

HA

High Availability. Three master nodes are required for an HA cluster.

2. Requirements

2.1. System Configuration

There are three system configurations for installing Kubernetes. The number of machines required will vary depending on the configuration.

  1. Standalone: a configuration with only one machine.

    • Use only one machine and install both Kubernetes master and worker on the machine.

    • The workload will also run on the same machine.

    • Recommended for use in development environments and for verification and demonstration purposes only.

  2. Single Master: one master and one or more workers

    • A single master can be configured with multiple workers.

    • The workload runs on the workers.

    • The master node is a single point of failure (SPOF), so this configuration is not recommended for production environments.

  3. HA configuration: 3 masters and 1 or more workers.

    • HA is configured with three masters.

    • A separate L4 load balancer is required to distribute the load across the kube-apiserver instances running on the master nodes.

To create an HA configuration, you need at least three machines as master nodes. In addition, you need at least one worker node (three or more are recommended). The number of worker nodes depends on the amount of workload to be executed.

2.2. Required machines

This section describes each of the machines that make up Kubernetes.

The following requirements are almost the same as those described in Installing kubeadm.

2.2.1. OS

A machine with one of the following operating systems installed is required:

  • RedHat Enterprise Linux (RHEL) 7 (7.7 or later), 8

    • You must have completed registration of your RedHat subscription using subscription-manager.

  • CentOS 7 (7.7 or later), 8

  • Ubuntu 18.04, 20.04

Only the Ansible installer is supported for RHEL8 / CentOS8 / Ubuntu.
You cannot use firewalld on RHEL8 / CentOS8.

2.2.2. Hardware spec

The machines must meet the following specifications:

  • 2 GB or more of RAM per machine

    • (any less will leave little room for your apps)

  • 2 CPUs or more

2.2.3. Network

The network must meet the following requirements:

  • Full network connectivity between all machines in the cluster

    • A public or private network is fine.

  • Internet connectivity

    • A connection through a proxy server is fine.

    • For offline installation, no Internet connection is required. However, the default route must still be configured.

2.2.4. Other requirements

  • Unique hostname, MAC address, and product_uuid for every node.

    • The hostname can be checked with the hostname command.

    • You can get the MAC address of the network interfaces using the command ip link or ifconfig -a.

    • The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid.

    • For more information, see the "Verify the MAC address and product_uuid are unique for every node" section in Installing kubeadm.

  • Certain ports must be open on your machines.

    • For more information on the ports to be opened, see "Check required ports" section of Installing kubeadm.

    • If you use the installer, the firewall is automatically configured.


  • Swap disabled.

    • You MUST disable swap in order for the kubelet to work properly.

    • If you use an installer, Swap is automatically turned off.

2.3. Load balancer

For an HA configuration, an L4 (TCP) load balancer is required. An example configuration sketch follows the list below.

  • The load balancer must have a DNS-resolvable FQDN name.

  • Load balancing at the L4 level

    • Load-balance TCP port 6443 to the same port on each of the three master nodes.

    • Perform health checks on TCP connections.
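For reference, the following is a minimal sketch of such a configuration, assuming HAProxy is used as the load balancer (any L4 load balancer with equivalent settings will do; the host names and IP addresses of the masters are placeholders):

cat <<EOF >> /etc/haproxy/haproxy.cfg
# L4 (TCP) load balancing of the Kubernetes API server
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    # "check" in TCP mode performs a plain TCP connect health check
    server master1 10.0.1.10:6443 check
    server master2 10.0.1.11:6443 check
    server master3 10.0.1.12:6443 check
EOF
systemctl restart haproxy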

3. Kubernetes Installation Overview

3.1. Installation Instructions

There are three different installation procedures. We recommend using the Ansible installer.

  1. Manual Installation: Manual installation using kubeadm

    • These instructions are provided as a reference only. Normally, you should use k8s-installer to perform an automated installation.

    • Please see Chapter 4, Manual installation for details.

  2. Script Installer: Automatic installation using the sh script-based installer

    • Standalone and single-master configurations are supported.

    • This installer is a simplified version and only sets up the Kubernetes cluster (no applications are installed).

    • See Chapter 5, Script based installer for details.

  3. Ansible installer.

    • Automatic cluster batch installation using Ansible.

    • Deploys networking, storage, and applications as well as Kubernetes.

    • See Chapter 6, Ansible Installer for more information.

The script installer and the Ansible installer also support offline installation. In an offline installation, all the files required for the installation are obtained on a machine connected to the Internet and then transferred to the target machines using a USB memory stick or hard disk.

Only the Ansible installer is supported for RHEL8 / CentOS8 / Ubuntu.

3.2. The installers

The script installer and the Ansible installer are included with k8s-installer.

4. Manual installation

This chapter describes the steps to install Kubernetes manually.

The procedures in this chapter are based on Installing kubeadm.

This procedure is only supported on RHEL7 / CentOS7.
The Ansible installer is recommended.

Manual installation instructions are shown for reference only.

Normally, you should use an installer (the Ansible installer is recommended) to perform an automated installation. The installers are designed to automatically perform the steps shown here.

4.1. Common procedure

This section describes the common steps that must be performed prior to installing Kubernetes. This procedure must be performed on all machines, no matter which configuration you use.

All of these steps must be performed with root privileges. Run sudo -i first, and then execute the commands.

4.1.1. Proxy configuration

If you need to connect to the Internet via a proxy server, you need to do the following.

4.1.1.1. Yum

Add the following line to /etc/yum.conf:

proxy={PROXY_URL}

Replace {PROXY_URL} with your proxy server URL in the format http://[proxy_hostname]:[proxy_port].

4.1.1.2. Docker

Create /etc/systemd/system/docker.service.d/http-proxy.conf file as:

mkdir -p /etc/systemd/system/docker.service.d

cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY={PROXY_URL}" "HTTPS_PROXY={PROXY_URL}" "NO_PROXY=127.0.0.1,localhost"
EOF
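After creating the file, reload systemd and restart Docker so that the proxy settings take effect:

systemctl daemon-reload
systemctl restart docker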

4.1.2. Firewall

You must disable the firewall, or open all of the Inbound TCP ports listed below. For more information on the ports to be opened, see Check required ports.

  • Master nodes

    • 6443

    • 2379-2380

    • 10250-10252

  • Worker nodes

    • 10250

    • 30000-32767

The steps to open ports are as follows:

# for master node
firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --add-port=2379-2380/tcp --permanent
firewall-cmd --add-port=10250-10252/tcp --permanent
firewall-cmd --reload

# for worker-node
firewall-cmd --add-port=10250/tcp --permanent
firewall-cmd --add-port=30000-32767/tcp --permanent
firewall-cmd --reload

4.1.3. Installing the Docker Container Runtime

Use the RHEL7/CentOS7 standard Docker as the container runtime.

For RHEL7, enable the rhel-7-server-extras-rpms repository.

subscription-manager repos --enable=rhel-7-server-extras-rpms

Follow the steps below to install and start Docker.

yum install -y docker
systemctl enable --now docker

4.1.4. Disable SELinux

Disable SELinux:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

4.1.5. Disable Swap

If swap is enabled, disable it:

sudo swapoff -a

Also, check /etc/fstab and comment out any swap entries. Here’s an example:

sed -i 's/^\([^#].* swap .*\)$/#\1/' /etc/fstab

4.1.6. Change sysctl configuration

Modify the sysctl settings so that traffic crossing Linux bridges is passed to iptables:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
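If sysctl reports that the net.bridge.* keys cannot be found, the br_netfilter kernel module is probably not loaded yet. A quick way to load it now and at every boot (then re-run sysctl --system):

# load the bridge netfilter module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf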

4.1.7. Install kubeadm / kubelet / kubectl

Install kubeadm / kubelet / kubectl and start kubelet.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

Lock the versions of kubeadm, kubelet, and kubectl to avoid unintended upgrades.

yum install yum-plugin-versionlock
yum versionlock add kubeadm kubelet kubectl

4.2. Installing single master configuration

Here are the installation steps for a single master configuration.

4.2.1. Master Node Installation

Install the Kubernetes control plane on the master node.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

The Pod network CIDR is set to 192.168.0.0/16. If this conflicts with an existing address range, change it accordingly.

The installation takes a few minutes. You should see a log on your screen that looks like this.

...
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Save the kubeadm join … command shown at the end of the output above (to a file, for example). This command is required for worker nodes to join the Kubernetes cluster.

Follow the above instructions to create ~/.kube/config.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Make sure you can connect to your Kubernetes cluster successfully by following these steps.

kubectl cluster-info

4.2.2. Network add-on

Install the Pod Network Add-on. This procedure uses Calico.

Please install Calico by following the steps below.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
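Before continuing, you can watch the kube-system namespace and confirm that the Calico and CoreDNS pods reach the Running state (this may take a few minutes):

watch kubectl get pods -n kube-system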

4.2.3. Scheduling Settings

In a standalone configuration only, make the master node schedulable. If you do not, workloads cannot run on the master node.

kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-

4.3. Join worker nodes

Join the worker nodes to your Kubernetes cluster by doing the following on each worker node.

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

The arguments for kubeadm join should be the ones shown during the master node installation.

If you have lost the kubeadm join command, or the token has expired (tokens are valid for 24 hours), regenerate it by running the following on the master node.

kubeadm token create --print-join-command

5. Script based installer

This document explains how to install using the script based installer. The script installer is located in the script directory of the k8s-installer.

If you use the script installer, you will need to deploy the installer on each machine and perform the installation on each machine.

5.1. Configuration

Copy config.sample.sh to config.sh and configure the settings described below (an example sketch follows the list).

  • In case of HA configuration:

    • Set DNS name (FQDN) and port number of the load balancer to LOAD_BALANCER_DNS and LOAD_BALANCER_PORT.

    • The load balancer should be configured to load-balance L4 to all master nodes on the specified ports.

    • For more information, see Creating Highly Available clusters with kubeadm.

  • If your Internet connection needs to go through a proxy, set the values of PROXY_URL and NO_PROXY.

    • NO_PROXY must have the IP address or DNS name of kube-apiserver. Specify the value of the master node in the case of a single host configuration, or the value of the load balancer in the case of HA configuration. If this is not properly configured, the master node installation will fail.

  • If you want to do an offline installation, set OFFLINE_INSTALL to yes. Details are discussed later.
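As a reference, a config.sh for an HA configuration behind a proxy might look like the following sketch (the variable names are the ones described above; all values are placeholders for your environment):

# HA configuration: L4 load balancer in front of the masters
LOAD_BALANCER_DNS=lb.example.com
LOAD_BALANCER_PORT=6443

# Proxy settings; NO_PROXY must include the kube-apiserver address (here, the load balancer)
PROXY_URL=http://proxy.example.com:8080
NO_PROXY=127.0.0.1,localhost,lb.example.com

# Online installation
OFFLINE_INSTALL=no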

5.2. Installing Docker / kubeadm

Install Docker, kubeadm and others.

This procedure must be done on all master and worker nodes.

Log in as a regular user who can use sudo (do not log in directly as root). Follow these steps to perform the installation:

$ sudo ./install-common.sh

5.3. Installing master node

Install Kubernetes master control plane on master node.

Log in to the master node (the first master node in the HA configuration) and perform the following procedures.

# For single master configuration
$ sudo ./install-master-single.sh
# For HA configuration (the first master node)
$ sudo ./install-master-ha.sh

The installation takes a few minutes. Note the kubeadm join command line displayed on the screen; it is needed to join the worker nodes.

Once the installation is complete, follow these steps to install ~/.kube/config

$ ./install-kubeconfig.sh

Execute kubectl cluster-info and verify that the control plane is operating normally.

Finally, install the calico network add-on.

$ ./install-cni.sh

5.4. HA configuration: Installing the rest of the master node

See "Steps for the rest of the control plane nodes" section of Creating Highly Available clusters with kubeadm for installation instructions for the second and subsequent master nodes in an HA configuration.

5.5. Installing worker nodes.

Log in to each worker node and execute the kubeadm join command obtained above with sudo to join the node to the Kubernetes cluster.

5.6. Confirmation after installation

Run kubectl get nodes on the master node to make sure that all nodes are added and ready.

Also, run kubectl get all -n kube-system on the master node and make sure that all the pods are running normally.

6. Ansible Installer

The following are the steps to install using the Ansible based installer.

When using the Ansible installer, the installation process is carried out on any machine where Ansible is installed. We will refer to this machine as "the Ansible node".

You can use any one of the master or worker nodes as the Ansible node.

This install procedure should be done in the ansible directory of k8s-installer.

6.1. Specifications

The specifications for the Kubernetes cluster deployed by the Ansible installer are as follows.

  • The following settings are made on all nodes.

    • Swap is disabled.

    • SELinux is disabled.

    • Firewall is off by default (optionally set to on)

    • The sysctl settings are changed.

    • The container runtime is installed.

      • Docker or containerd is installed, depending on the configuration. The default is Docker.

    • The following packages are installed:

      • kubeadm, kubectl, cfssl, libselinux-python, lvm2, gnupg2, nfs-utils and nfs-common

  • The Kubernetes cluster is deployed using kubeadm.

    • The CA certificate expires in 30 years (kubeadm default is 10 years).

    • The master node will be set to Schedulable (optionally set to NoSchedule)

  • The ~/.kube/config file is installed on the master nodes.

  • The Calico network plugin will be deployed

    • Overlay network (IPIP/VXLAN) is not used. All nodes must be connected to the same L2 network.

  • The following are deployed on the Kubernetes cluster; items marked with (*) are not deployed by default.

    • Nginx ingress controller

    • MetalLB (*)

    • rook-nfs (*)

    • rook-ceph (*)

    • storageclass default configuration (*)

    • metrics-server

    • docker registry (*)

Please refer to the k8s-installer sequence for the entire install sequence.

6.2. Requirements

Ansible 2.9 or higher must be installed on the Ansible node.

You need to be able to log in via ssh from the Ansible node to all master and worker nodes. The following conditions must be met.

  • You must be able to make an ssh connection from the Ansible node to each node with public key authentication.

    • Do not use root as the login user.

    • It is recommended that the login user name be the same on all machines.

    • Password authentication cannot be used. Use only public key authentication.

  • No passphrase input must be required when performing public key authentication.

    • If you have a passphrase set, you must use ssh-agent.

  • You must be able to run sudo as the logged-in user after you ssh login to each node from the Ansible node.

6.2.1. Instructions for setting up ssh public key authentication

If you don’t already have an ssh key pair on the Ansible node, follow these steps to create it. ~/.ssh/id_rsa, ~/.ssh/id_rsa.pub will be generated.

$ ssh-keygen

On the Ansible node, perform the following steps for each master/worker node to enable login with public key authentication. The public key will be added to ~/.ssh/authorized_keys on each node.

$ ssh-copy-id [hostname]

6.2.2. ssh-agent

If you set a passphrase for your ssh key (and you should, for security reasons), use ssh-agent so that ssh can log in without prompting for the passphrase.

There are a number of ways to start ssh-agent, but the easiest way is to run the following on the Ansible node:

$ exec ssh-agent $SHELL

Set your passphrase to ssh-agent as follows.

$ ssh-add

Logging out will cause ssh-agent to exit, so this procedure must be performed each time you log in to the Ansible node.

6.2.3. Ansible installation instructions

Create a Python virtual environment on the Ansible node. Use Python 2 + virtualenv or Python 3 + venv (https://docs.python.org/3/library/venv.html).

Here’s an example of how to create a virtual environment in the case of Python2 + virtualenv:

$ sudo yum install python-virtualenv
$ virtualenv $HOME/venv

Here’s an example of how to create a virtual environment for Python 3 + venv:

$ sudo subscription-manager repos --enable rhel-7-server-optional-rpms  # for RHEL7
$ sudo yum install python3 python3-pip gcc openssl-devel python3-devel
$ python3 -m venv $HOME/venv

Activate the virtual environment as follows:

$ . $HOME/venv/bin/activate

Follow the steps below to install Ansible.

$ pip install -U -r requirements.txt

If you need to install Ansible on an Ansible machine in an offline environment, use python-offline-env.

6.3. Configuration

Log in to the Ansible node as a working user. Extract the installer and perform the following tasks.

6.3.1. Inventory file

Copy the sample/hosts file to the inventory/hosts file and set the information of the node to be installed.

There are three groups; define each machine in the appropriate group.

  • master_first: Specify the first master node.

  • master_secondary: Specify the second or later master nodes in case of HA configuration.

  • worker: Specify the worker nodes.

For a single master configuration, set only master_first and worker (leave master_secondary empty).

In an HA configuration, you need an odd number of master nodes, at least three. Specify the first one as master_first and the rest as master_secondary.

Here is an example of how to specify a machine (a fuller inventory sketch follows the field descriptions below):

master1 ansible_user=johndoe ansible_host=10.0.1.10 ip=10.0.1.10

  • hostname: Specify the hostname at the beginning. This hostname is used as the Kubernetes node name as-is.

  • ansible_user: Specify the username on the target node to be used for ssh login.

    • You can omit this if it is the same username as the user on the Ansible node.

  • ansible_host: The hostname or IP address to use when connecting with ssh.

    • If it is the same as the host name, it can be omitted.

  • ip: Specify the IP address of the node.

    • Specify an IP address that can communicate directly with the other nodes in the cluster. This is the advertised IP address of kube-apiserver and kubelet.

    • If omitted, the IP address of the interface specified as the default gateway for the remote machine will be used.
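Putting this together, an inventory/hosts file for an HA configuration might look like the following sketch (the group names are the ones listed above; host names, users, and IP addresses are placeholders):

[master_first]
master1 ansible_user=johndoe ansible_host=10.0.1.10 ip=10.0.1.10

[master_secondary]
master2 ansible_user=johndoe ansible_host=10.0.1.11 ip=10.0.1.11
master3 ansible_user=johndoe ansible_host=10.0.1.12 ip=10.0.1.12

[worker]
worker1 ansible_user=johndoe ansible_host=10.0.1.20 ip=10.0.1.20
worker2 ansible_user=johndoe ansible_host=10.0.1.21 ip=10.0.1.21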

6.3.2. Define variables

Copy the sample/group_vars/all/*.yml files to the inventory/group_vars/all/ directory and edit them as follows (an example sketch follows at the end of this section):

  • main.yml

    • lb_apiserver_address: In case of HA configuration, set the FQDN name or IP address of the load balancer.

    • pod_subnet: Specify the Pod subnet (CIDR). Normally, no changes are required, but if it conflicts with an existing address range, you need to change it.

  • offline.yml

    • offline_install: Set to yes if you want to do an offline installation. See Chapter 10, Offline Installation for details of the offline installation procedure.

  • proxy.yml

    • If your Internet connection needs to go through a proxy, set the proxy_url and proxy_noproxy.

  • version.yml

    • Specify the version of Kubernetes to install. If not specified, the default value of k8s-installer is used.

  • networking.yml

    • Network settings such as the firewall, Calico, and MetalLB (see Chapter 7, Networking).

  • storage.yml

    • Storage settings for Rook / NFS and Rook / Ceph (see Chapter 8, Storage).

  • registry.yml

    • Private registry settings (see Chapter 9, Application).

If you use a proxy, you must specify the IP address or DNS name of the kube-apiserver in proxy_noproxy. Specify the value of the master node in the case of a single host configuration, or the value of the load balancer in the case of HA configuration.

If this is not set properly, the master node installation will fail.
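For example, main.yml and proxy.yml for an HA configuration behind a proxy might contain entries like the following (the variable names are the ones listed above; the values, and the exact value format expected by k8s-installer, are assumptions):

# inventory/group_vars/all/main.yml
lb_apiserver_address: lb.example.com      # HA configuration only
#pod_subnet: 192.168.0.0/16               # uncomment and change only on address conflict

# inventory/group_vars/all/proxy.yml
proxy_url: http://proxy.example.com:8080
proxy_noproxy: 127.0.0.1,localhost,lb.example.com   # must include the kube-apiserver address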

6.4. Install

6.4.1. Common procedure

Perform the following steps to run the common pre-processing on all nodes. This includes offline repository configuration, proxy configuration, installation of the necessary packages (including Docker/kubeadm), and other common configuration.

$ ansible-playbook -i inventory/hosts common.yml -K

If you don’t need a sudo password on the machine you are logging into, you can omit the -K (--ask-become-pass) option.

6.4.2. Deploying Kubernetes to the first master node

Install the Kubernetes on the first master node by doing the following:

$ ansible-playbook -i inventory/hosts master-first.yml -K

At this point, Kubernetes will be running as a single node configuration. You can verify that some pods are running by logging in to the host and running kubectl get all --all-namespaces.

6.4.3. Deploy to the second and subsequent master nodes

Join the second and subsequent master nodes to the Kubernetes cluster by doing the following:

$ ansible-playbook -i inventory/hosts master-secondary.yml -K

6.4.4. Deploy to the worker nodes

Join all worker nodes to the Kubernetes cluster by doing the following:

$ ansible-playbook -i inventory/hosts worker.yml -K

6.4.5. Deploy network, storage, and applications

Deploy the network, storage, and applications by doing the following:

$ ansible-playbook -i inventory/hosts networking.yml -K
$ ansible-playbook -i inventory/hosts storage.yml -K
$ ansible-playbook -i inventory/hosts apps.yml -K

How to do all steps at once

You can also perform all of the above steps at once by following the steps below. However, it is usually recommended that you go through them step by step.

$ ansible-playbook -i inventory/hosts site.yml -K

6.5. Operation check

Run kubectl get nodes on the master node to make sure that all nodes are added and ready.

Also, run kubectl get all --all-namespaces on the master node and make sure all the pods are running properly.
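As a rough illustration, a healthy three-node cluster (one master, two workers) reports something like the following; node names, ages, and versions will differ in your environment:

$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   20m   v1.19.1
worker1   Ready    <none>   15m   v1.19.1
worker2   Ready    <none>   15m   v1.19.1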

7. Networking

7.1. Firewall

The firewall (firewalld) on each node is disabled by default.

If you want to enable it, set the firwall_enabled variable to yes in the inventory/group_vars/all/networking.yml file.

On RHEL 8 / CentOS 8, the firewall must be disabled. If you enable it, it interferes with the kube-proxy nftables configuration and won’t work properly.

7.2. Calico

Calico is installed as CNI Plugin.

Overlay network (IPIP/VXLAN) is not used by default. All nodes must be connected to the same L2 network.

If you want to change the configuration, edit the inventory/group_vars/all/networking.yml file with your editor and add your own settings, as in the example after the list below.

  • If you want to use IPIP, set calico_ipv4pool_ipip to Always.

  • If you use VXLAN, set calico_ipv4pool_vxlan to Always.
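For example, to switch Calico to an IPIP overlay, inventory/group_vars/all/networking.yml would contain the following (VXLAN is configured the same way using calico_ipv4pool_vxlan instead):

# Use an IPIP overlay so the nodes no longer need to share an L2 network
calico_ipv4pool_ipip: Always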

7.3. MetalLB

You can use MetalLB as an implementation of the LoadBalancer.

See MetalLB for details. There are two operating modes for MetalLB, L2 mode and BGP mode, but this installer supports only L2 mode.

If you want to use MetalLB, edit the inventory/group_vars/all/networking.yml file and add the configuration. An example configuration is shown below.

# Enable MetalLB?
metallb_enabled: yes

# MetalLB IP address pool range
metallb_ip_range: 192.168.1.200-192.168.1.210

  • metallb_enabled: set to yes

  • metallb_ip_range: IP address pool range to be used for LoadBalancer. You must specify the free IP addresses on the subnet to which each node is connected.

8. Storage

8.1. Overview

When you use Chapter 6, Ansible Installer, you can use Rook to set up storage.

The following two types are supported.

8.2. Rook / NFS

You can use Rook + NFS (Network File System).

The NFS server is started as a pod on the Kubernetes cluster. The NFS server uses a local volume on one specific node that you specify, and the NFS server pod is configured to run on that node.

For more information, see NFS of the Rook Documentation.

Because this configuration uses storage on one particular node, it provides no high availability.

8.2.1. Configuration and Deployment

To use NFS, describe the configuration in the inventory/group_vars/all/storage.yml file. For example:

#----------------------------------------------------------
# rook-nfs config
#----------------------------------------------------------

# rook-nfs Enabled?
rook_nfs_enabled: yes

# Hostname hosts NFS PV local volume
rook_nfs_pv_host: "worker-1"

# NFS local volume size
#rook_nfs_pv_size: 10Gi

# NFS local volume dir on host
#rook_nfs_pv_dir: /var/lib/rook-nfs

  • rook_nfs_enabled: To enable NFS, set to yes.

  • rook_nfs_pv_host: The name of the node where Local Volume PV is created.

    • Specify one of the node names listed in Inventory file.

    • The NFS server storage is allocated on the node you specify, and the NFS server pod also runs on this node.

    • The node must be Schedulable.

  • rook_nfs_pv_size: Size of Local Volume (storage size of the NFS server). Default is 10Gi.

  • rook_nfs_pv_dir: Specify the directory where you want to create the Local Volume. The default is /var/lib/rook-nfs.

Follow the steps below to deploy.

$ ansible-playbook -i inventory/hosts apps.yml --tags=rook-nfs

8.2.2. Storage class

A new storage class, rook-nfs-share1, has been added for this NFS server.

Here is an example of a Persistent Volume Claim (PVC) to use this storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-nfs-pv-claim
spec:
  storageClassName: rook-nfs-share1
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi

8.3. Rook / Ceph

You can use Rook + Ceph.

For more information, see Ceph Storage Quickstart of the Rook documentation.

8.3.1. Requirements.

The following conditions are required.

  • A minimum of three worker nodes are required.

  • You must have an unformatted/unpartitioned raw block device connected to your worker nodes.

See also Ceph Prerequisites in the Rook documentation (https://rook.github.io/docs/rook/master/ceph-prerequisites.html).

8.3.2. Configuration and Deployment

If you use Ceph, describe the configuration in inventory/group_vars/all/storage.yml file.

#----------------------------------------------------------
# rook-ceph config
#----------------------------------------------------------
rook_ceph_enabled: yes

Set rook_ceph_enabled to yes.

Deploy the rook-ceph as follows:

$ ansible-playbook -i inventory/hosts apps.yml --tags=rook-ceph

Deploying Rook / Ceph takes about 10 minutes. Check the status with watch kubectl -n rook-ceph get all.

8.3.3. Storage class

The following storage classes will be added.

  • rook-ceph-block : block storage (rbd)

  • rook-cephfs : Filesystem (cephfs)

Here is an example of a Persistent Volume Claim (PVC) to use this storage class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

9. Application

When you use Chapter 6, Ansible Installer, you can deploy some applications automatically.

9.1. Private Registry

You can deploy a private registry.

Docker Registry is used as the private registry.

You need a Persistent Volume to store container images. If you have deployed Rook / NFS or Rook / Ceph, you can use them.

9.1.1. Configuration and Deployment

To deploy the private registry, put the configuration in the inventory/group_vars/all/registry.yml file.

# Registry enabled?
registry_enabled: yes

# Registry type: static-pod or pv
registry_type: pv

# Auth user
registry_user: registry

# Auth password
registry_password: registry

# Registry image version
#registry_version: 2.7.1

# PVC storage class name
#registry_pvc_storage_class: rook-nfs-share1  # Default to use rook-nfs
registry_pvc_storage_class: rook-ceph-block  # Use rook-ceph block storage

# PVC storage size
registry_pvc_size: 10Gi

  • registry_enabled: Enable the registry. Set to yes.

  • registry_type: Specify the registry type, static-pod or pv. Set it to pv here (the default is pv).

  • registry_user: Specify the registry authentication user name. Change it if necessary.

  • registry_password: Specify the registry authentication password. Change it if necessary.

  • registry_pvc_storage_class: Specify the storage class of PV to be used.

  • registry_pvc_size: Specify the size of the PV to be allocated as a registry.

Follow the steps below to deploy:

$ ansible-playbook -i inventory/hosts apps.yml --tags=registry

9.1.2. Using private registry

The registry is exported as a NodePort service. To find out the port number, follow these steps:

$ kubectl -n registry get svc

The URL of the registry is https://[node]:[nodeport], where node is the address/host of one of the nodes, and nodeport is the port number of the NodePort identified above.

You can login by the following procedure.

$ docker login https://[node]:[nodeport]
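Once logged in, images can be pushed to the registry by tagging them with the registry address (a usage sketch; myimage is a placeholder):

$ docker tag myimage:1.0 [node]:[nodeport]/myimage:1.0
$ docker push [node]:[nodeport]/myimage:1.0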

10. Offline Installation

10.1. Overview

In offline installation, all the files required for the installation are obtained on a machine connected to the Internet, and then transferred to the target machine using a USB memory or hard disk.

The procedure for offline installation is as follows:

  1. On a machine connected to the Internet, use the script to acquire the files required for installation.

    • This will retrieve the RPM file of Docker/Kubeadm/Kubelet, container image files, etc.

    • A file named k8s-offline-files.tar.gz, containing all of these files, will be generated.

  2. Transfer this file to the target machine using some means (USB memory, hard disk, VPN, etc.).

  3. Use the installer to run the installation.

10.2. Generating an offline installation file

This section describes the process of generating an offline installation file.

10.2.1. Requirements

  • A machine connected to the Internet is required. The machine must have the same operating system installed as the machines that make up the Kubernetes cluster.

The following steps must be performed on the above machine.

10.2.2. Preparation

If you are using RHEL 7, you must enable the rhel-7-server-extras-rpms repository.

$ subscription-manager repos --enable=rhel-7-server-extras-rpms

10.2.3. Proxy configuration

If your Internet connection needs to go through a proxy server, you need to set up proxy settings beforehand.

You can add the proxy settings to config.sh and execute sudo ./setup-proxy.sh to configure the following settings.

10.2.3.1. yum

Add a line proxy=http://proxy.example.com in /etc/yum.conf to specify your proxy server.

10.2.3.2. Docker

Create /etc/systemd/system/docker.service.d/http-proxy.conf as shown below, and restart Docker.

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080" "HTTPS_PROXY=http://proxy.example.com:8080" "NO_PROXY=localhost,127.0.0.1,..."

10.2.4. Generating an offline installation file

Log in as a user with sudo privileges.

Follow these steps to generate an offline installation file.

$ sudo ./generate-offline.sh

The offline installation file is generated as k8s-offline-files.tar.gz.

10.3. Use with script based installer

Place k8s-offline-files.tar.gz in the script based installer directory.

Set the OFFLINE_INSTALL value to yes in config.sh. When you run the installation in this state, an offline installation will be performed.

10.4. Ansible installer

Extract k8s-offline-files.tar.gz file in the Ansible installer directory.

Change offline_install variable to yes in inventory/group_vars/all/offline.yml file. When you run the installation in this state, an offline installation will be performed.

11. Upgrading cluster

This chapter describes the steps to upgrade your Kubernetes cluster.

11.1. Notes

You can only upgrade your Kubernetes cluster by one minor version (0.1) at a time.

11.2. Ansible installer

If you are using the Ansible installer, the upgrade can be done automatically.

11.2.1. Prepare for offline installation

If you are using an offline installation, you must obtain and extract the offline installation file in advance.

11.2.2. Set up version

Change the following variables in the inventory/group_vars/all/version.yml file (an example follows the list).

  • kube_version: Version of the Kubernetes.

  • kubeadm_version, kubelet_version, kubectl_version: Versions of kubeadm, kubelet, kubectl (RPM version)
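For example, to move to a hypothetical 1.19.2 release, version.yml might look like this (the exact version strings, including the RPM release suffix, are assumptions; use the values supported by your k8s-installer release):

kube_version: 1.19.2
kubeadm_version: 1.19.2-0
kubelet_version: 1.19.2-0
kubectl_version: 1.19.2-0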

11.2.3. Performing Upgrades

Follow the steps below to upgrade the master nodes.

$ ansible-playbook -i inventory/hosts upgrade-master.yml

Follow the steps below to upgrade the worker nodes.

$ ansible-playbook -i inventory/hosts upgrade-worker.yml

12. Operations

12.1. Renewal of certificates

Kubernetes server and client certificates are valid for one year, so you have to renew your certificates periodically.

For more information, see Certificate Management with kubeadm.

12.1.1. Automatic renewal of certificates

When you upgrade Kubernetes, kubeadm will automatically update your certificates. Kubernetes gets a minor upgrade every few months, so if you upgrade regularly in conjunction with this, your certificate will not expire. (It is a best practice to upgrade your cluster frequently in order to stay secure.)

12.1.2. Verification of Certificate Expiration Date

You can check the expiration date by performing the following procedure on each master node.

$ sudo kubeadm alpha certs check-expiration

12.1.3. Manual certificate update

You can manually update your certificate by following the steps below.

12.1.3.1. If you are using the Ansible installer

You can use the renew-server-certs playbook to manually update the certificates on all master nodes. All certificates are regenerated with an expiration date of one year.

$ ansible-playbook -i inventory/hosts renew-server-certs.yml

12.1.3.2. If you are using the script based installer

Perform the following procedures on all master nodes.

$ sudo kubeadm alpha certs renew all
$ sudo /bin/rm /var/lib/kubelet/kubelet.crt
$ sudo /bin/rm /var/lib/kubelet/kubelet.key
$ sudo systemctl restart kubelet