k8s 1.26 Installation and Component Principles
I. Environment preparation for k8s 1.26
Hosts:
k-master 192.168.50.100
k-node1 192.168.50.101
k-node2 192.168.50.102
Quick notes:
- Installing Docker also installs containerd by default; containerd's config.toml must be adjusted (see below).
- The crictl command only becomes available after the Kubernetes packages are installed.
- Install the Calico network plugin (v3.25): download the two manifest files and change the IP pool in the custom resources file to the pod network CIDR.
- The cluster is initialized with kubeadm init (full command and flag explanations in section 3):
kubeadm init --apiserver-advertise-address=192.168.50.100 --kubernetes-version=v1.26.0 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
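It also helps if the three hostnames above resolve on every machine; a minimal sketch, assuming hostnames and /etc/hosts entries have not been set up yet (this step is an assumption, not part of the original notes):
# on each machine, set its own hostname, e.g. on the master:
hostnamectl set-hostname k-master
# then add all three hosts to /etc/hosts on every node
cat >> /etc/hosts << 'EOF'
192.168.50.100 k-master
192.168.50.101 k-node1
192.168.50.102 k-node2
EOF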
1. Preliminary steps
1) Disable the firewall and SELinux
systemctl disable firewalld.service --now
setenforce 0
vim /etc/selinux/config
2) Disable the swap partition
swapoff -a
3) Enable the required kernel parameters
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.d/k8s.conf
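setenforce 0 and swapoff -a only affect the running system; a hedged sketch of making the SELinux and swap changes persistent and loading the br_netfilter module that the bridge sysctls depend on (the sed patterns assume stock CentOS files):
# keep SELinux permissive across reboots (assumes the default /etc/selinux/config format)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# comment out the swap entry so swap stays off after a reboot (pattern is an assumption about /etc/fstab)
sed -ri '/\sswap\s/s/^#*/#/' /etc/fstab
# load br_netfilter now and on every boot so the bridge-nf-call sysctls above take effect
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf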
2. Main steps
1) Configure the Docker and Kubernetes repositories
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# write the Kubernetes repo definition (placing it in /etc/yum.repos.d/kubernetes.repo is the usual convention)
cat > /etc/yum.repos.d/kubernetes.repo << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2) Install docker-ce
yum -y install docker-ce
systemctl enable docker --now
3) Edit the containerd config.toml
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
# the keys below sit under the CRI plugin tables of containerd 1.6 (table headers shown for orientation)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# configure registry mirrors for pulling images
# Docker Hub and registry.k8s.io are not directly reachable from mainland China, so mirrors are needed
# pull registry.k8s.io images through the mirror below
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
  endpoint = ["https://registry-k8s-io.mirrors.sjtug.sjtu.edu.cn"]
# the same applies to docker.io images
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://<your-mirror>"]
systemctl restart containerd
systemctl enable containerd
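After restarting containerd it is worth checking that the edits were actually picked up; a quick sanity check (the grep keys are simply the two values changed above):
# dump the live containerd configuration and confirm the cgroup driver and sandbox image settings
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'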
4) Install kubelet, kubeadm, and kubectl
yum -y install kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
# enable and start kubelet
systemctl enable kubelet --now
5) Point crictl at the container runtime
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
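The two crictl config commands persist their settings to /etc/crictl.yaml; checking that file is a quick way to confirm they worked (the expected contents below are an approximation):
cat /etc/crictl.yaml
# runtime-endpoint: unix:///var/run/containerd/containerd.sock
# image-endpoint: unix:///var/run/containerd/containerd.sock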
3. On the master only
1) Initialize Kubernetes
kubeadm init --apiserver-advertise-address=192.168.50.100 \
--kubernetes-version=v1.26.0 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository=registry.aliyuncs.com/google_containers
- --image-repository: registry to pull the control-plane images (etcd, kube-apiserver, and so on) from
- --apiserver-advertise-address: the address of the master
- --pod-network-cidr: the pod network CIDR
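When kubeadm init succeeds it prints instructions for giving kubectl access to the new cluster; on the master (running as root, as elsewhere in these notes) the usual steps look like this:
# copy the admin kubeconfig so kubectl can reach the cluster
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config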
2) Join the worker nodes
- Run the join command below on both k-node1 and k-node2:
# kubeadm join 192.168.50.100:6443 --token q6tybk.47n9q7zymfpxeufi --discovery-token-ca-cert-hash sha256:d949c3809ba2f36425000119f9e7c7e29f3715aebd568b91eb8af309a86de09a
Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# the same join command is then run on the second node
# kubeadm join 192.168.50.100:6443 --token q6tybk.47n9q7zymfpxeufi --discovery-token-ca-cert-hash sha256:d949c3809ba2f36425000119f9e7c7e29f3715aebd568b91eb8af309a86de09a
- Because no network plugin has been installed yet, pods on different nodes cannot communicate, so the nodes stay in the NotReady state.
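The bootstrap token embedded in the join command expires (24 hours by default); if it has expired, a fresh join command can be generated on the master, roughly like this:
# print a new join command with a freshly created token and the CA certificate hash
kubeadm token create --print-join-command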
3) Install the Calico network plugin
- Download the Calico YAML manifests
- The installed version is v3.25.2
- https://archive-os-3-25.netlify.app/calico/3.25/getting-started/kubernetes/quickstart
- Download the two YAML files: tigera-operator.yaml and custom-resources.yaml
# apply the operator manifest directly, without changes
kubectl create -f tigera-operator.yaml
# change the IP pool address in custom-resources.yaml to the pod network CIDR before applying it (see the sed sketch below)
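custom-resources.yaml ships with a default IPPool CIDR; a hedged one-liner to switch it to the pod CIDR chosen at kubeadm init time (192.168.0.0/16 as the default in the v3.25 manifest is an assumption, so check the file first):
# replace the default Calico IPPool cidr with the cluster's pod network CIDR
sed -i 's#cidr: 192.168.0.0/16#cidr: 10.244.0.0/16#' custom-resources.yaml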
kubectl create -f custom-resources.yaml
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-5948d966b5-c5x9j 1/1 Running 0 70m
calico-apiserver calico-apiserver-5948d966b5-q29qv 1/1 Running 0 70m
calico-system calico-kube-controllers-84dd694985-znfdz 1/1 Running 0 72m
calico-system calico-node-lf6f5 1/1 Running 0 72m
calico-system calico-node-rtbfq 1/1 Running 0 72m
calico-system calico-node-tcz85 1/1 Running 0 72m
calico-system calico-typha-665f4cfb48-4pzz5 1/1 Running 0 72m
calico-system calico-typha-665f4cfb48-q8jnw 1/1 Running 0 72m
calico-system csi-node-driver-b9fps 2/2 Running 0 72m
calico-system csi-node-driver-d4mr9 2/2 Running 0 72m
calico-system csi-node-driver-qzcwr 2/2 Running 0 72m
default centos8-demo 1/1 Running 0 40m
kube-system coredns-5bbd96d687-rsnp6 1/1 Running 0 95m
kube-system coredns-5bbd96d687-svq2d 1/1 Running 0 95m
kube-system etcd-k-master 1/1 Running 0 95m
kube-system kube-apiserver-k-master 1/1 Running 0 95m
kube-system kube-controller-manager-k-master 1/1 Running 0 95m
kube-system kube-proxy-fgct4 1/1 Running 0 93m
kube-system kube-proxy-lfsvb 1/1 Running 0 95m
kube-system kube-proxy-mk56p 1/1 Running 0 94m
kube-system kube-scheduler-k-master 1/1 Running 0 95m
tigera-operator tigera-operator-66654c8696-gxkmg 1/1 Running 0 75m
4) Check node status and create a pod to test the network
# kubectl get node
NAME STATUS ROLES AGE VERSION
k-master Ready control-plane 69m v1.26.0
k-node1 Ready <none> 68m v1.26.0
k-node2 Ready <none> 67m v1.26.0
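The centos8-demo pod used in the checks below is never created explicitly in these notes; one way to create an equivalent long-running test pod (the image and sleep command are assumptions):
# start a CentOS 8 pod that sleeps forever, to test DNS and outbound connectivity from inside the cluster
kubectl run centos8-demo --image=centos:8 -- sleep infinity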
# kubectl get pod
NAME READY STATUS RESTARTS AGE
centos8-demo 1/1 Running 0 14m
# kubectl exec -ti centos8-demo -- /bin/bash
# ping qq.com
PING qq.com (113.108.81.189) 56(84) bytes of data.
64 bytes from 113.108.81.189 (113.108.81.189): icmp_seq=1 ttl=127 time=44.10 ms
5) Enable kubectl command completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
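kubectl's bash completion also depends on the bash-completion package, which none of the steps above install; a minimal sketch for the CentOS hosts used here:
# install the bash-completion helper scripts (assumption: not already present on the host)
yum -y install bash-completion
# reload the shell configuration so the completion line added to ~/.bashrc takes effect
source ~/.bashrc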