May 07, 2021
 

A friend used to joke that all I ever wrote about was installing OpenStack. Later, while working on PaaS, I hardly ever installed K8s by hand myself. This time, while reading Zhang Lei's excellent book, I decided to install k8s from scratch again to deepen my understanding.

Installing k8s by hand today, if you don't need HA, really does not take long, and the steps keep getting simpler. The operations below basically follow the kuboard documentation.

Environment

Three virtual machines are enough, and nothing large is needed. Each is a CentOS 7.6 VM with 2 cores, 4 GB of memory, and 40 GB of storage:

  • kubeadm-master
  • kubeadm-work1
  • kubeadm-nfs

K8s software and versions

  • Kubernetes v1.21.x
  • calico 3.17.1
  • nginx-ingress 1.9.1
  • containerd.io 1.4.3

Configuration

The following configuration must be performed identically on both the master node and the worker nodes.

hostname

# Set the hostname
hostnamectl set-hostname kubeadm-master
# Verify the change
hostnamectl status
# Add a hosts entry for the hostname
echo "127.0.0.1   $(hostname)" >> /etc/hosts

NFS client

# Install nfs-utils and wget
yum install -y nfs-utils wget

Containerd

# Aliyun mirror for Docker Hub
export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com

# Load the kernel modules required by containerd

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# Set up the required sysctl params; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl params without reboot
sysctl --system
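
To confirm the bridge and forwarding settings are active, they can simply be read back (an optional check, not in the original steps):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward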

# Set up the yum repository (the Docker CE repo provides containerd.io)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install containerd
yum install -y containerd.io-1.4.3

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

sed -i "s#k8s.gcr.io#registry.aliyuncs.com/k8sxio#g"  /etc/containerd/config.toml
sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#${REGISTRY_MIRROR}#g"  /etc/containerd/config.toml

systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
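
To double-check that the three sed edits above actually landed, the config file can be grepped before moving on (an optional sanity check, not in the original steps):

grep -nE "SystemdCgroup|registry.aliyuncs.com/k8sxio|${REGISTRY_MIRROR}" /etc/containerd/config.toml
systemctl status containerd --no-pager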

kubeadm

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm and kubectl; installing kubeadm pulls in kubelet and kubectl as dependencies. Use --nogpgcheck to avoid GPG key verification failures

yum install -y kubeadm-1.21.0 --nogpgcheck

# Point crictl at the containerd socket
crictl config runtime-endpoint /run/containerd/containerd.sock

# Reload systemd, then enable and start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

containerd --version
kubelet --version

Master node initialization

The following operations only need to be performed on the master node.

# Run on the master node only
# The master node's internal IP
export MASTER_IP=10.0.38.147
# The DNS name you want for the API server
export APISERVER_NAME=apiserver.abc.com
# Subnet used for Kubernetes pods; it is created by Kubernetes after installation and does not need to exist in your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

Generate the kubeadm configuration file; the environment variables above must be set first.

cat <<EOF > ./kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
imageRepository: registry.aliyuncs.com/k8sxio
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
dns:
  type: CoreDNS
  imageRepository: swr.cn-east-2.myhuaweicloud.com
  imageTag: 1.8.0

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

Pull images and initialize

# Pre-pull the images needed for the control plane
kubeadm config images pull --config=kubeadm-config.yaml

# Initialize the master node
kubeadm init --config=kubeadm-config.yaml --upload-certs

Configure kubectl

# Configure kubectl for the root user
mkdir -p /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
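
At this point the control plane should be up, although the node will stay NotReady until the network plugin below is installed; a quick look with standard kubectl commands:

kubectl get nodes
kubectl get pods -n kube-system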

Network

# Install the Calico network plugin

kubectl create -f https://kuboard.cn/install-script/v1.21.x/calico-operator.yaml
wget https://kuboard.cn/install-script/v1.21.x/calico-custom-resources.yaml
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico-custom-resources.yaml
kubectl create -f calico-custom-resources.yaml
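
The operator then rolls Calico out over the next few minutes. Assuming the operator manifest creates the calico-system namespace (as current Calico operator releases do), progress can be watched with:

kubectl get pods -n calico-system
kubectl get nodes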

Get the token needed to join worker nodes

kubeadm token create --print-join-command

Worker nodes

There is very little to do on a worker node: it only needs to be able to resolve the API server address, and then it joins the cluster by running the join command with the token.

# Run on the worker nodes only
# Replace x.x.x.x with the master node's internal IP
export MASTER_IP=x.x.x.x
# Use the same APISERVER_NAME that was used when initializing the master node
export APISERVER_NAME=apiserver.abc.com
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts

Add the worker nodes to the cluster

Copy the output of the token command run on the master above and run it on every worker node. By default the token is valid for 24 hours; if it has expired, just run the command above again to generate a new one.
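
The printed join command looks roughly like the following; the token and CA cert hash are placeholders here, so use the values actually printed on the master:

kubeadm join apiserver.abc.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>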

NFS Server
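
The provisioner installed below expects an NFS export to already exist on the kubeadm-nfs node (10.0.38.193 in this setup). That part is not shown in these notes, so here is a minimal sketch, assuming the export path /data/nfs used by the helm values below:

# On the kubeadm-nfs node
yum install -y nfs-utils
mkdir -p /data/nfs
echo "/data/nfs *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -r

On the master node, install helm and the nfs-client-provisioner chart: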

wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
tar zxvf helm-v3.5.4-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

helm version --short
helm repo add stable https://charts.helm.sh/stable
helm repo add supertetelman https://supertetelman.github.io/charts/
helm repo update
helm repo list

helm install nfs-client-provisioner --set nfs.server=10.0.38.193 --set nfs.path=/data/nfs supertetelman/nfs-client-provisioner

kubectl get storageclass

# Make the nfs-client StorageClass the cluster default
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
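
To verify that dynamic provisioning works through the new default StorageClass, a throwaway PVC can be created; the name test-pvc below is arbitrary:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach the Bound state within a few seconds
kubectl get pvc test-pvc
kubectl delete pvc test-pvc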

Notes

containerd operations
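
With containerd instead of Docker there is no docker CLI on the nodes, so day-to-day container debugging goes through crictl; a few example commands:

# List running containers and pulled images
crictl ps
crictl images
# Show logs of a container (use an ID from crictl ps)
crictl logs <container-id>
# List pod sandboxes on this node
crictl pods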

References

  • https://kuboard.cn/install/v3/install.html
