# Requirements

## Operating System

### General Linux Requirements
RKE runs on almost any Linux OS with Docker installed. For details on which OS and Docker versions were tested with each version, refer to the support matrix.
- The SSH user used for node access must be a member of the `docker` group on the node: `usermod -aG docker <user_name>`. Users added to the `docker` group are granted effective root permissions on the host by means of the Docker API. Only choose a user that is intended for this purpose and whose credentials and access are properly secured. See Manage Docker as a non-root user to learn how to configure access to Docker without using the `root` user.
- Swap must be disabled on any worker nodes.
- Check the network plugin documentation for any additional requirements (for example, kernel modules). If you or your cloud provider are using a custom minimal kernel, some required (network) kernel modules might not be present.
- The following sysctl setting must be applied (a combined sketch for these host settings follows this list): `net.bridge.bridge-nf-call-iptables=1`
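A minimal sketch of applying the swap, kernel-module, and sysctl prerequisites on a systemd-based host; the `br_netfilter` module and the `/etc/modules-load.d` and `/etc/sysctl.d` file names are common conventions rather than RKE requirements, so adjust them for your distribution:

```bash
# Disable swap now and comment out swap entries so it stays off after reboot
swapoff -a
sed -ri '/\sswap\s/ s/^/#/' /etc/fstab

# Load the bridge netfilter module (provides the sysctl key below) and
# arrange for it to be loaded on boot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/rke.conf

# Apply the required sysctl setting immediately and persist it
sysctl -w net.bridge.bridge-nf-call-iptables=1
echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/90-rke.conf
```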
### SUSE Linux Enterprise Server (SLES) / openSUSE

If you are using SUSE Linux Enterprise Server or openSUSE, follow the instructions below.
#### Using upstream Docker

If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing:

```
rpm -q docker-ce
```

When using the upstream Docker packages, please follow Manage Docker as a non-root user.
#### Using SUSE/openSUSE packaged Docker

If you are using the Docker package supplied by SUSE/openSUSE, the package name is `docker`. You can check the installed package by executing:

```
rpm -q docker
```
##### Adding the software repository for Docker

In SUSE Linux Enterprise Server 15 SP2, Docker is found in the Containers module, which must be added before installing Docker. To list the available modules and their activation commands, run `SUSEConnect --list-extensions`:
```
node:~ # SUSEConnect --list-extensions
AVAILABLE EXTENSIONS AND MODULES

    Basesystem Module 15 SP2 x86_64 (Activated)
    Deactivate with: SUSEConnect -d -p sle-module-basesystem/15.2/x86_64

    Containers Module 15 SP2 x86_64
    Activate with: SUSEConnect -p sle-module-containers/15.2/x86_64
```
Run the `SUSEConnect` command shown to activate the Containers module:

```
node:~ # SUSEConnect -p sle-module-containers/15.2/x86_64
Registering system to registration proxy https://rmt.seader.us
Updating system details on https://rmt.seader.us ...
Activating sle-module-containers 15.2 x86_64 ...
-> Adding service to system ...
-> Installing release package ...
Successfully registered system
```
To run `docker` CLI commands as your user, add the user to the `docker` group. It is preferred not to use the `root` user for this.

```
usermod -aG docker <user_name>
```
To verify that the user is correctly configured, log out of the node, log back in using SSH or your preferred method, and execute `docker ps`:

```
ssh user@node
user@node:~> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
user@node:~>
```
### openSUSE MicroOS/Kubic (Atomic)

Consult the project pages for openSUSE MicroOS and Kubic for installation instructions.

#### openSUSE MicroOS

Designed to host container workloads with automated administration and patching, openSUSE MicroOS provides a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling-release distribution, its software is always up to date. https://microos.opensuse.org

#### openSUSE Kubic

Based on openSUSE MicroOS and designed with the same goals in mind, but focused on being a certified Kubernetes distribution. https://kubic.opensuse.org Installation instructions: https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/
### Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS

If using Red Hat Enterprise Linux, Oracle Linux or CentOS, you cannot use the `root` user as the SSH user due to Bugzilla 1527565. Do not use the RHEL/CentOS packaged Docker: it actually uses Podman, which breaks the RKE installation. Instead, install Docker from upstream, for example by following this link's instructions. Please follow Manage Docker as a non-root user to complete the installation.
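As one possible approach to installing upstream Docker (a sketch, not the only supported method), Docker's convenience script at https://get.docker.com installs the `docker-ce` packages; review any script before piping it to a shell:

```bash
# Install upstream Docker using Docker's convenience script
curl -fsSL https://get.docker.com | sh

# Start Docker now and enable it on boot
systemctl enable --now docker
```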
In RHEL 8.4, two extra services are included in NetworkManager: `nm-cloud-setup.service` and `nm-cloud-setup.timer`. These services add a routing table that interferes with the CNI plugin's configuration. If these services are enabled, you must disable them using the commands below, and then reboot the node to restore connectivity:

```
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
```
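After the reboot, you can optionally confirm that both units remain disabled (a quick sanity check, not an RKE requirement):

```bash
# Each command should print "disabled"
systemctl is-enabled nm-cloud-setup.service
systemctl is-enabled nm-cloud-setup.timer
```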
In addition, the default firewall settings of RHEL 8.4 prevent RKE1 pods from reaching Rancher to connect to the cluster agent. To allow Docker containers to reach the internet and connect to Rancher, update the firewall settings as follows:

```
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --reload
```
### Red Hat Atomic

Before using RKE with Red Hat Atomic nodes, a couple of OS updates are required to get RKE working.

#### OpenSSH version

By default, Atomic hosts ship with OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement. Upgrading to the latest version of OpenSSH supported by Atomic corrects this issue.
#### Creating a Docker group

By default, Atomic hosts do not come with a `docker` group. Instead, update the ownership of the Docker socket to the specific user so that RKE can be launched:

```
# chown <user> /var/run/docker.sock
```
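To confirm the change took effect (a quick sanity check; `<user>` is the SSH user from the command above):

```bash
# The socket should now be owned by <user>
ls -l /var/run/docker.sock

# The user should be able to talk to the Docker daemon
su - <user> -c "docker ps"
```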
### Flatcar Container Linux

When using Flatcar Container Linux nodes, you must use one of the following configurations in the cluster configuration file, choosing the block that matches your network plugin.

Canal:
```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: canal
    options:
      canal_flex_volume_plugin_dir: /opt/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
      flannel_backend_type: vxlan
  services:
    kube-controller:
      extra_args:
        flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
Calico:

```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: calico
    options:
      calico_flex_volume_plugin_dir: /opt/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
      flannel_backend_type: vxlan
  services:
    kube-controller:
      extra_args:
        flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
You must also enable the Docker service, which you can do with the following command:

```
systemctl enable docker.service
```
## Software

This section describes the requirements for Docker, Kubernetes, and SSH.
### OpenSSH

In order to SSH into each node, OpenSSH 7.0+ must be installed on each node.
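To check the installed OpenSSH version on a node (the client and server are normally packaged together, so the client version is a reasonable proxy):

```bash
# Prints the version, e.g. "OpenSSH_8.9p1 ..."
ssh -V
```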
### Kubernetes

Refer to the RKE release notes for the supported versions of Kubernetes.

### Docker

Each Kubernetes version supports different Docker versions. The Kubernetes release notes contain the current list of validated Docker versions.
#### Installing Docker

Refer to Installing Docker.

#### Checking the Installed Docker Version

Confirm that a version of Docker supported by your Kubernetes version is installed on your machine by running:

```
docker version --format '{{.Server.Version}}'
```
## Hardware

This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters.
### Worker Role

The hardware requirements for nodes with the `worker` role mostly depend on your workloads. The minimum to run the Kubernetes node components is 1 CPU (core) and 1 GB of memory.

Regarding CPU and memory, it is recommended that the different planes of a Kubernetes cluster (etcd, controlplane, and worker) be hosted on different nodes so that they can scale separately from each other.
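For illustration, a minimal sketch of a `cluster.yml` nodes section that keeps each plane on its own host; the addresses and the `rke-user` SSH user are placeholders:

```bash
# Write a minimal cluster.yml with one node per plane
cat > cluster.yml <<'EOF'
nodes:
  - address: 10.0.0.10   # etcd plane
    user: rke-user
    role: [etcd]
  - address: 10.0.0.11   # control plane
    user: rke-user
    role: [controlplane]
  - address: 10.0.0.12   # worker plane
    user: rke-user
    role: [worker]
EOF
```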
### Large Kubernetes Clusters

For hardware recommendations for large Kubernetes clusters, refer to the official Kubernetes documentation on building large clusters.

### Etcd Clusters

For hardware recommendations for etcd clusters in production, refer to the official etcd documentation.
## Ports

### RKE node

Node that runs the `rke` commands.

#### RKE node - Outbound rules

| Protocol | Port | Source | Destination | Description |
|---|---|---|---|---|
| TCP | 22 | RKE node | Any node configured in Cluster Configuration File | SSH provisioning of node by RKE |
| TCP | 6443 | RKE node | controlplane nodes | Kubernetes API server |
### etcd nodes

Nodes with the role `etcd`.

#### etcd nodes - Inbound rules

| Protocol | Port | Source | Description |
|---|---|---|---|
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| TCP | 2379 | etcd nodes, controlplane nodes | etcd client requests |
| TCP | 2380 | etcd nodes, controlplane nodes | etcd peer communication |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | etcd node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |
#### etcd nodes - Outbound rules

| Protocol | Port | Destination | Description |
|---|---|---|---|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 2379 | etcd nodes | etcd client requests |
| TCP | 2380 | etcd nodes | etcd peer communication |
| TCP | 6443 | controlplane nodes | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | etcd node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
### controlplane nodes

Nodes with the role `controlplane`.

#### controlplane nodes - Inbound rules

| Protocol | Port | Source | Description |
|---|---|---|---|
| TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
| TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| TCP | 6443 | etcd nodes, controlplane nodes, worker nodes, any source that needs to use the Kubernetes API | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | controlplane node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |
| TCP | 10254 | controlplane node itself (local traffic, see below) | Ingress controller livenessProbe/readinessProbe |
| TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |
#### controlplane nodes - Outbound rules

| Protocol | Port | Destination | Description |
|---|---|---|---|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 2379 | etcd nodes | etcd client requests |
| TCP | 2380 | etcd nodes | etcd peer communication |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | controlplane node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | etcd nodes, controlplane nodes, worker nodes | kubelet |
### Worker nodes

Nodes with the role `worker`.

#### Worker nodes - Inbound rules

| Protocol | Port | Source | Description |
|---|---|---|---|
| TCP | 22 | | Remote access over SSH |
| TCP | 3389 | | Remote access over RDP |
| TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
| TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | worker node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |
| TCP | 10254 | worker node itself (local traffic, see below) | Ingress controller livenessProbe/readinessProbe |
| TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |
#### Worker nodes - Outbound rules

| Protocol | Port | Destination | Description |
|---|---|---|---|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 6443 | controlplane nodes | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | worker node itself (local traffic, see below) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10254 | worker node itself (local traffic, see below) | Ingress controller livenessProbe/readinessProbe |
### Information on local node traffic

Kubernetes health checks (`livenessProbe` and `readinessProbe`) are executed on the host itself. On most nodes, this is allowed by default. When you have applied strict host firewall (i.e., `iptables`) policies on the node, or when you are using nodes that have multiple interfaces (multi-homed), this traffic gets blocked. In this case, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (i.e., AWS or OpenStack), in your security group configuration. Keep in mind that when using a security group as Source or Destination in your security group, this only applies to the private interface of the nodes/instances.
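For example, explicitly allowing the health-check ports from the node's own address with iptables might look like the following sketch (`your_node_ip_here` is a placeholder for the node's own IP):

```bash
# Allow local health-check traffic to the Canal/Flannel probe port (TCP/9099)
# and the ingress controller probe port (TCP/10254)
iptables -A INPUT -p tcp -s your_node_ip_here --dport 9099 -j ACCEPT
iptables -A INPUT -p tcp -s your_node_ip_here --dport 10254 -j ACCEPT
```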
If you are using an external firewall, make sure you have this port opened between the machine you are using to run `rke` and the nodes that you are going to use in the cluster.

#### Opening port TCP/6443 using iptables

```
# Open TCP/6443 for all
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT

# Open TCP/6443 for one specific IP
iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT
```
#### Opening port TCP/6443 using firewalld

```
# Open TCP/6443 for all
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload

# Open TCP/6443 for one specific IP
firewall-cmd --permanent --zone=public --add-rich-rule='
  rule family="ipv4"
  source address="your_ip_here/32"
  port protocol="tcp" port="6443" accept'
firewall-cmd --reload
```
## SSH Server Configuration

Your SSH server system-wide configuration file, located at `/etc/ssh/sshd_config`, must include this line that allows TCP forwarding:

```
AllowTcpForwarding yes
```
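To verify the effective setting and apply a change (a sketch for systemd-based hosts; `sshd -T` must be run as root, and the service may be named `ssh` instead of `sshd` on some distributions):

```bash
# Print the effective value; it should read "allowtcpforwarding yes"
sshd -T | grep -i allowtcpforwarding

# After editing /etc/ssh/sshd_config, reload the SSH daemon
systemctl reload sshd
```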