
Setting up a Kubernetes cluster with kubeadm

2020/11/15


I hit so many pitfalls along the way that I decided to write down the installation process.

System environment

Don't install this on Arch Linux unless you have a good reason: besides the lack of official support, problems are hard to troubleshoot when they come up.
This is a lesson I learned first-hand.

OS: Ubuntu 20.04.1

IP Address       Hostname     Role    CPU  Memory
140.117.171.171  zephyr-lab   master  4    8G
140.117.171.172  zephyr-lab2  node1   2    2G
140.117.171.173  zephyr-lab3  node2   2    2G

Prerequisites

Update apt

sudo apt update

Install the packages that let apt use repositories over HTTPS

sudo apt install gnupg-agent apt-transport-https ca-certificates curl software-properties-common

Install Docker

Official guide

Add Docker's official GPG key and apt repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Install with apt

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

Add your own user account to the docker group

sudo usermod -aG docker $USER
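
The new group membership only takes effect after logging out and back in. As a quick sketch (assuming you want to stay in the current shell), you can switch to the new group and verify that Docker works without sudo:

newgrp docker
docker run hello-world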

Enable Docker to start on boot and start it now

sudo systemctl enable docker
sudo systemctl start docker

Check the version:

$ docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:02:52 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:01:20 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Install Kubernetes

Official guide

Pre-check: Letting iptables see bridged traffic

Make sure the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter 28672 0
bridge 176128 1 br_netfilter

If it is not loaded, run sudo modprobe br_netfilter to load it.
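
modprobe only loads the module until the next reboot. A small sketch to load it automatically on every boot, assuming a systemd-based distribution such as Ubuntu 20.04 (systemd reads /etc/modules-load.d/ at startup):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF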

Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system
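
To double-check that the setting took effect, the key can be queried directly; it should print 1:

$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1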

Start the installation

Kubernetes v1.8+ requires system swap to be disabled:

  • swapoff -a
  • Comment out the swap mount in /etc/fstab (see the sketch below)
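
A minimal sketch that covers both steps; it assumes the swap entry in /etc/fstab contains the word "swap", so check your own fstab before relying on the sed pattern:

sudo swapoff -a
# keep swap disabled after reboots by commenting out the swap line in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab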

Add the Kubernetes official GPG key and apt repository

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Install the Kubernetes components and tools

sudo apt update
sudo apt install -y kubelet kubeadm kubectl

Prevent the Kubernetes packages from being upgraded unintentionally

sudo apt-mark hold kubelet kubeadm kubectl
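
If you later want to upgrade these packages on purpose (for example during a cluster upgrade), the hold can be released again:

sudo apt-mark unhold kubelet kubeadm kubectl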

Enable kubelet to start on boot and start it now

sudo systemctl enable kubelet
sudo systemctl start kubelet

Add the Master Node

Pre-pull the Docker images

The master and worker node setup that follows still needs to pull the Docker images first, so they can be downloaded ahead of time:

$ kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.2
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0

$ docker images
REPOSITORY                           TAG        IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.19.2    d373dd5a8593   2 weeks ago    118MB
k8s.gcr.io/kube-controller-manager   v1.19.2    8603821e1a7a   2 weeks ago    111MB
k8s.gcr.io/kube-apiserver            v1.19.2    607331163122   2 weeks ago    119MB
k8s.gcr.io/kube-scheduler            v1.19.2    2f32d66b884f   2 weeks ago    45.7MB
k8s.gcr.io/etcd                      3.4.13-0   0369cf4303ff   5 weeks ago    253MB
k8s.gcr.io/coredns                   1.7.0      bfe3a36ebd25   3 months ago   45.2MB
k8s.gcr.io/pause                     3.2        80d28bedfe5d   7 months ago   683kB

Initialize the Master Node

Initialize kubeadm; this has to be run as the root user.

--pod-network-cidr=10.244.0.0/16 is specified because the Flannel network plugin requires this pod CIDR.

sudo su
kubeadm init --pod-network-cidr=10.244.0.0/16

If everything goes well, the process takes about 2-3 minutes and ends with the following message:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 140.117.171.171:6443 --token {token} \
--discovery-token-ca-cert-hash sha256:{ca-hash}

Run the commands shown at the end of that output (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check whether the node was created successfully. It shows NotReady because no CNI has been deployed yet; that is handled in the next section.

$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
zephyr-lab   NotReady   master   70s   v1.19.3

Deploy a CNI

CNI stands for Container Network Interface, the specification Kubernetes uses to define how container networking plugs in.

Kubernetes has many CNI plugins to choose from; Flannel is used here.

# Apply
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Then use kubectl to check that Flannel has been deployed correctly on every node:

$ kubectl -n kube-system get pod -l app=flannel -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
kube-flannel-ds-5mgbg   1/1     Running   0          36m   140.117.171.171   zephyr-lab    <none>           <none>
kube-flannel-ds-ds47t   1/1     Running   0          83s   140.117.171.172   zephyr-lab2   <none>           <none>
kube-flannel-ds-jmt42   1/1     Running   0          85s   140.117.171.173   zephyr-lab3   <none>           <none>

$ ip -4 a show flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever

Add Worker Nodes

To join the cluster, switch to the root user on each worker node and run the following command:

$ kubeadm join 140.117.171.171:6443 --token {token} \
--discovery-token-ca-cert-hash sha256:{ca-hash}

If the token above has expired, generate a new one on the master node:

kubeadm token create --print-join-command
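
To see which tokens already exist on the master node (and when they expire) before creating a new one:

kubeadm token list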

Reset kubeadm

Warning: this removes the node from the cluster, so use it with caution.

If the deployment unfortunately fails, or life just gets hard,
you can run the following as the root user:

$ kubeadm reset

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "zephyr-lab" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Remember to delete $HOME/.kube/config as well:

rm $HOME/.kube/config
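
As the reset output above notes, CNI configuration and iptables rules are not cleaned up automatically. A rough sketch of that manual cleanup; flushing iptables removes every rule on the host, so only do this if nothing else depends on them:

sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X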

Troubleshooting

Docker driver WARNING

If this WARNING shows up after running kubeadm:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Edit /etc/docker/daemon.json and add the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Then run sudo systemctl restart docker to restart Docker and you are done.
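
To confirm the change took effect, the active cgroup driver can be read from docker info; it should now report systemd:

$ docker info | grep -i cgroup
Cgroup Driver: systemd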

