Kubernetes 1.9 binary cluster + ipvs + coredns

Sources:
https://blog.csdn.net/newcrane/article/details/78952987
https://k8s-install.opsnull.com/
https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-centos.html

Images and binary packages used in this guide: https://pan.baidu.com/s/1ypgC8MeYc0SUfZr-IdnHeg (password: bb82)
Software environment:
CentOS Linux release 7.4.1708 (Core), kubernetes 1.8.6, etcd 3.2.12, flanneld 0.9.1, docker 17.12.0-ce

To make the installation easier, set up passwordless SSH from the master to the other two machines.
ssh-keygen
ssh-copy-id -i node01
ssh-copy-id -i node02
ssh-copy-id -i node03

Configure /etc/hosts
for i in {100..102};do ssh 192.168.99.$i "
cat <<END>> /etc/hosts
192.168.99.100 node01 etcd01
192.168.99.101 node02 etcd02
192.168.99.102 node03 etcd03
END";done

Disable the firewall
for i in {100..102};do ssh 192.168.99.$i "systemctl stop firewalld;systemctl disable firewalld";done

Configure kernel parameters
Create the /etc/sysctl.d/k8s.conf file
for i in {100..102};do ssh 192.168.99.$i "
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF";done

Load the br_netfilter kernel module
for i in {100..102};do ssh 192.168.99.$i "
modprobe br_netfilter
echo 'modprobe br_netfilter' >> /etc/rc.local";done

Make rc.local executable
for i in {100..102};do ssh 192.168.99.$i "chmod a+x /etc/rc.d/rc.local";done

Run sysctl -p to apply the settings
for i in {100..102};do ssh 192.168.99.$i "sysctl -p /etc/sysctl.d/k8s.conf";done

Disable SELinux
for i in {100..102};do ssh 192.168.99.$i "setenforce 0; sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config";done

Turn off swap
for i in {100..102};do ssh 192.168.99.$i "
sed -i 's/.*swap.*/#&/' /etc/fstab
swapoff -a";done

Set the default iptables FORWARD policy to ACCEPT
for i in {100..102};do ssh 192.168.99.$i "
/sbin/iptables -P FORWARD ACCEPT;
echo 'sleep 60 && /sbin/iptables -P FORWARD ACCEPT' >> /etc/rc.local";done

Install dependency packages
for i in {100..102};do ssh 192.168.99.$i "
yum install -y epel-release;
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget";done

All of the software and related configuration files: https://pan.baidu.com/share/init?surl=AZCx_k1w4g3BBWatlomkyQ (password: ff7y)
The kubernetes components encrypt their communication with TLS certificates. This guide uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key. The CA certificate is self-signed and is used to sign all of the other TLS certificates created later.
The following steps are performed on one master only; the certificates need to be created just once. When new nodes are later added to the cluster, simply copy the certificates under /etc/kubernetes/ to the new node.
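The original post assumes the cfssl and cfssljson binaries are already available (they are part of the download linked above). If they are not, a minimal sketch of fetching them; the download URLs are an assumption, not taken from the original:

# fetch the cfssl toolchain and put it on the PATH (URLs assumed)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
cfssl version   # confirm the tools work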
Create the CA configuration file
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

ca-config.json can define multiple profiles with different expiry times, usages and so on; a specific profile is selected later when signing a certificate.
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE.
server auth: clients may use this CA to verify certificates presented by servers.
client auth: servers may use this CA to verify certificates presented by clients.

Create the CA certificate signing request
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

"CN": Common Name. kube-apiserver extracts this field from the certificate and uses it as the requesting user name; browsers use it to check whether a site is legitimate.
"O": Organization. kube-apiserver extracts this field and uses it as the group the requesting user belongs to.

Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Create the kubernetes certificate signing request file:
cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.99.100",
    "192.168.99.101",
    "192.168.99.102",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

The hosts field may be left empty; if it is, the certificate does not need to be regenerated even when new nodes are added to the cluster later. If hosts is not empty, it must list the IPs or domain names authorized to use the certificate. Since this certificate is used by both the etcd cluster and the kubernetes master, the list above contains the etcd/master host IPs as well as the kubernetes service IP.

Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

Check the result
ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

Create the admin certificate
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

kube-apiserver uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, etc.).
kube-apiserver ships with a number of predefined RoleBindings for RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API.
O sets the certificate's group to system:masters. When this certificate is used to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the group system:masters is pre-authorized, the client is granted access to all APIs.

Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

Create the kube-proxy certificate
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

CN sets the certificate's user to system:kube-proxy.
kube-apiserver's predefined RoleBinding system:node-proxier binds the user system:kube-proxy to the role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.

Generate the kube-proxy client certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine
for i in {100..102};do ssh 192.168.99.$i "mkdir -p /etc/kubernetes/ssl";done
for i in {100..102};do scp /root/ssl/*.pem 192.168.99.$i:/etc/kubernetes/ssl;done

Install etcd on all three nodes; the following steps must be performed on each of the three nodes
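The etcd 3.2.12 binaries themselves come from the download linked at the top of this post; the original does not show how they end up in /usr/local/bin. A minimal sketch, assuming the official release tarball is used instead (the URL is an assumption):

# fetch the etcd 3.2.12 release and distribute the binaries to every node
wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar xf etcd-v3.2.12-linux-amd64.tar.gz
for i in {100..102};do scp etcd-v3.2.12-linux-amd64/etcd* root@192.168.99.$i:/usr/local/bin/;done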
Create the working directory
for i in {100..102};do ssh 192.168.99.$i "mkdir -p /var/lib/etcd";done

Create the systemd unit file on node01
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node01 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.99.100:2380 \\
  --listen-peer-urls https://192.168.99.100:2380 \\
  --listen-client-urls https://192.168.99.100:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.99.100:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster node01=https://192.168.99.100:2380,node02=https://192.168.99.101:2380,node03=https://192.168.99.102:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Create the systemd unit file on node02
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node02 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.99.101:2380 \\
  --listen-peer-urls https://192.168.99.101:2380 \\
  --listen-client-urls https://192.168.99.101:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.99.101:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster node01=https://192.168.99.100:2380,node02=https://192.168.99.101:2380,node03=https://192.168.99.102:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Create the systemd unit file on node03
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name node03 \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://192.168.99.102:2380 \\
  --listen-peer-urls https://192.168.99.102:2380 \\
  --listen-client-urls https://192.168.99.102:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://192.168.99.102:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster node01=https://192.168.99.100:2380,node02=https://192.168.99.101:2380,node03=https://192.168.99.102:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

The working directory and data directory of etcd are both /var/lib/etcd; the directory must exist before the service is started, otherwise startup fails with "Failed at step CHDIR spawning /usr/bin/etcd: No such file or directory".
To secure the communication, etcd is given its own certificate and key (cert-file and key-file), the certificate, key and CA used for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file).
The kubernetes-csr.json used when creating kubernetes.pem must list all etcd node IPs in its hosts field, otherwise certificate verification fails.
When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

Start the service
for i in {100..102};do ssh 192.168.99.$i "
systemctl daemon-reload;
systemctl enable etcd;
systemctl start etcd;
systemctl status etcd";done

The first etcd process to start will appear to hang for a while, waiting for the other members to join the cluster; this is normal.
Verify the etcd cluster; run the following on any etcd node
export ETCD_ENDPOINTS="https://192.168.99.100:2379,https://192.168.99.102:2379"
etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health

## Output
member be2bba99f48e54c is healthy: got healthy result from https://192.168.99.100:2379
member bec754b230c8075e is healthy: got healthy result from https://192.168.99.102:2379
member dfc0880cac2a50c8 is healthy: got healthy result from https://192.168.99.101:2379
cluster is healthy

Install Flannel on all three nodes; the following steps must be performed on each of the three nodes
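The flanneld 0.9.1 binary and its mk-docker-opts.sh helper also come from the download linked above. A sketch of fetching them from the upstream release instead; the URL is an assumption:

# fetch flannel 0.9.1 and distribute it to every node
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
mkdir flannel && tar xf flannel-v0.9.1-linux-amd64.tar.gz -C flannel
for i in {100..102};do scp flannel/flanneld flannel/mk-docker-opts.sh root@192.168.99.$i:/usr/local/bin/;done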
Write the Pod network configuration into etcd
These two commands only need to be executed once, on any one of the nodes.
etcdctl --endpoints="https://192.168.99.100:2379,https://192.168.99.101:2379,https://192.168.99.102:2379" \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kubernetes/network

etcdctl --endpoints="https://192.168.99.100:2379,https://192.168.99.101:2379,https://192.168.99.102:2379" \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

Start flannel
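The flanneld.service unit distributed in the next step is not reproduced in the original post (it is part of the download). A minimal sketch of what it might contain, assuming flanneld lives in /usr/local/bin and uses the /kubernetes/network etcd prefix configured above; the exact flags are an assumption, not the author's file:

cat > /usr/lib/systemd/system/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target network-online.target etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  -etcd-endpoints=https://192.168.99.100:2379,https://192.168.99.101:2379,https://192.168.99.102:2379 \\
  -etcd-prefix=/kubernetes/network
# write the allocated subnet into /run/flannel/docker so docker can pick it up later
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF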
for i in {100..102};do scp /usr/lib/systemd/system/flanneld.service root@192.168.99.$i:/usr/lib/systemd/system/;done
for i in {100..102};do ssh 192.168.99.$i "
systemctl daemon-reload;
systemctl enable flanneld;
systemctl start flanneld;
systemctl status flanneld";done

Check the flannel service status
/usr/local/bin/etcdctl \
  --endpoints=https://192.168.99.100:2379,https://192.168.99.101:2379,https://192.168.99.102:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kubernetes/network/subnets

/kubernetes/network/subnets/172.30.31.0-24
/kubernetes/network/subnets/172.30.51.0-24
/kubernetes/network/subnets/172.30.95.0-24

kubectl is the kubernetes cluster management tool; any node with kubectl configured can manage the whole k8s cluster.
This guide deploys kubectl on the master node. After deployment a /root/.kube/config file is generated; kubectl reads the kube-apiserver address, certificates, user name and so on from this file, so keep it safe.
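The original post does not show how that config file is generated. A sketch of building it manually from the admin certificate created earlier, in case the packaged scripts do not do it for you; the cluster/context names are an assumption:

# set cluster parameters (writes to /root/.kube/config when run as root)
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.99.100:6443
# set client credentials using the admin certificate
kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true
# set and select the default context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
kubectl config use-context kubernetes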
kubelet authenticates to kube-apiserver using bootstrap.kubeconfig.
# generate the bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/

# set cluster parameters; --server is the master node IP
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.99.100:6443 \
  --kubeconfig=bootstrap.kubeconfig

# set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/

Everything above was preparation; the actual kubernetes deployment starts now.
Deploy on the master node
Start kube-apiserver
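The kube-apiserver.service unit (and the kubernetes binaries under /usr/local/bin) come from the download linked above and are not reproduced in the original. A minimal sketch of what such a unit might contain, assuming the certificates, token.csv and service CIDR used elsewhere in this guide; the exact flag set is an assumption, not the author's file:

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
After=network.target etcd.service

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=192.168.99.100 \\
  --bind-address=192.168.99.100 \\
  --insecure-bind-address=127.0.0.1 \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --etcd-servers=https://192.168.99.100:2379,https://192.168.99.101:2379,https://192.168.99.102:2379 \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --enable-bootstrap-token-auth \\
  --authorization-mode=RBAC,Node \\
  --allow-privileged=true \\
  --v=2
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF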
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
systemctl status kube-apiserver

Start kube-controller-manager
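Again, the unit file itself is part of the download. A minimal sketch, assuming the CA and the service/pod CIDRs used above; the flags are an assumption:

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --master=http://127.0.0.1:8080 \\
  --service-cluster-ip-range=10.254.0.0/16 \\
  --cluster-cidr=172.30.0.0/16 \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF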
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
systemctl status kube-controller-manager

Start kube-scheduler
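The scheduler unit is similarly not shown in the original; a minimal sketch, assuming it talks to the apiserver's local insecure port (an assumption):

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --master=http://127.0.0.1:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF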
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

The master node is also used as a worker node, so the installation below must be performed on all three nodes.
Sync the configuration to the other nodes
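The docker.service being distributed below is a customized unit from the download; its key point in this setup is that docker picks up the subnet flannel allocated. A sketch of what such a unit might look like, assuming flanneld writes /run/flannel/docker as in the flanneld.service sketch above (all of this is an assumption, not the author's file):

# the quoted 'EOF' keeps $DOCKER_NETWORK_OPTIONS and $MAINPID literal in the unit file
cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
After=network-online.target flanneld.service
Wants=network-online.target
Requires=flanneld.service

[Service]
Type=notify
# DOCKER_NETWORK_OPTIONS (--bip, --mtu) is generated by flannel's mk-docker-opts.sh
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=warn $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
LimitNOFILE=infinity
LimitNPROC=infinity

[Install]
WantedBy=multi-user.target
EOF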
for i in {100..102};do scp docker.service root@192.168.99.$i:/usr/lib/systemd/system/;done

Start the service
for i in {100..102};do ssh 192.168.99.$i "
systemctl daemon-reload;
systemctl enable docker;
systemctl start docker;
systemctl status docker";done

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.
The following command only needs to be run once, on the master node.
cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

The master already has the two files below, so they only need to be scp'd to the other two nodes.
Create the kubelet working directory
for i in {100..102};do ssh 192.168.99.$i "mkdir /var/lib/kubelet";done

Sync the configuration
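The kubelet.service being synced below is not reproduced in the original (the sed commands that follow suggest it hard-codes the node IP). A minimal sketch, assuming the bootstrap.kubeconfig created earlier and the 10.254.0.2 cluster DNS address used later; note the bootstrap flag is named --experimental-bootstrap-kubeconfig in 1.8 and --bootstrap-kubeconfig in 1.9+, so treat this as an assumption:

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --address=192.168.99.100 \\
  --hostname-override=192.168.99.100 \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=cluster.local \\
  --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF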
for i in {100..102};do scp /usr/lib/systemd/system/kubelet.service root@192.168.99.$i:/usr/lib/systemd/system/;done
ssh 192.168.99.101 "sed -i 's/192.168.99.100/192.168.99.101/g' /usr/lib/systemd/system/kubelet.service"
ssh 192.168.99.102 "sed -i 's/192.168.99.100/192.168.99.102/g' /usr/lib/systemd/system/kubelet.service"
ssh 192.168.99.102 "cat /usr/lib/systemd/system/kubelet.service"

Start kubelet
for i in {100..102};do ssh 192.168.99.$i "
systemctl daemon-reload;
systemctl enable kubelet;
systemctl start kubelet;
systemctl status kubelet";done

Approve the TLS certificate signing requests
When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the node only joins the cluster after the request has been approved. Once kubelet is deployed on all three nodes, perform the approval on the master.
# list the pending requests
kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo   29s       kubelet-bootstrap   Pending
node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0   29s       kubelet-bootstrap   Pending
node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ   28s       kubelet-bootstrap   Pending

# approve them
kubectl certificate approve node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo
kubectl certificate approve node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0
kubectl certificate approve node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ

kubectl get csr
# Output
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr--eUQ3Gbj-kKAFGnvsNcDXaaKSkgOP6qg4yFUzuJcoIo   2m        kubelet-bootstrap   Approved,Issued
node-csr-3Sb1MgoFpVDeI28qKujAkCHTvZELPMKh1QoQtLB1Vv0   2m        kubelet-bootstrap   Approved,Issued
node-csr-4D7r9R2I7XWu1oK0d2HkFb1XggOIJ9XeXaiN_lwb0nQ   1m        kubelet-bootstrap   Approved,Issued

kubectl get nodes
# returns: No resources found

Perform the role binding on the master
kubectl get nodes
kubectl describe clusterrolebindings system:node

kubectl create clusterrolebinding kubelet-node-clusterbinding \
  --clusterrole=system:node --user=system:node:192.168.99.100
kubectl describe clusterrolebindings kubelet-node-clusterbinding

# Alternatively, grant the system:node ClusterRole to the group "system:nodes" cluster-wide:
kubectl create clusterrolebinding kubelet-node-clusterbinding --clusterrole=system:node --group=system:nodes

kubectl get nodes

View the nodes that have joined the cluster
kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
192.168.99.100   Ready     <none>    13s       v1.10.3
192.168.99.101   Ready     <none>    50s       v1.10.3
192.168.99.102   Ready     <none>    26s       v1.10.3

Sync the kube-proxy configuration
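The kube-proxy.service being synced below, and the kube-proxy.kubeconfig it relies on, are not shown in the original either. A sketch of generating the kubeconfig from the kube-proxy certificate created earlier, plus a minimal unit; the flag set, including the ipvs mode hinted at in the title, is an assumption:

# build kube-proxy.kubeconfig from the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.99.100:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy
After=network.target

[Service]
# on 1.9 ipvs mode additionally needs --feature-gates=SupportIPVSProxyMode=true
# and the ip_vs kernel modules loaded
ExecStart=/usr/local/bin/kube-proxy \\
  --bind-address=192.168.99.100 \\
  --hostname-override=192.168.99.100 \\
  --cluster-cidr=172.30.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --proxy-mode=ipvs \\
  --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF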
for i in {100..102};do scp /usr/lib/systemd/system/kube-proxy.service root@192.168.99.$i:/usr/lib/systemd/system/;done
ssh 192.168.99.101 "sed -i 's/192.168.99.100/192.168.99.101/g' /usr/lib/systemd/system/kube-proxy.service"
ssh 192.168.99.102 "sed -i 's/192.168.99.100/192.168.99.102/g' /usr/lib/systemd/system/kube-proxy.service"
ssh 192.168.99.102 "cat /usr/lib/systemd/system/kube-proxy.service"

Start kube-proxy
for i in {100..102};do ssh 192.168.99.$i "
systemctl daemon-reload;
systemctl enable kube-proxy;
systemctl start kube-proxy;
systemctl status kube-proxy";done

Repeat the deployment steps above on the other two nodes, taking care to replace the IP addresses.
Install kube-dns; perform the following on the master node
Replace $DNS_SERVER_IP in the file with 10.254.0.2
sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml
mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml

Replace $DNS_DOMAIN with cluster.local
sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml
ls *.yaml
kubedns-cm.yaml  kubedns-controller.yaml  kubedns-sa.yaml  kubedns-svc.yaml
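The original post does not show applying these manifests; presumably they are created from the directory that contains them and the kube-dns pod is then checked, for example:

kubectl create -f .
# verify that the kube-dns pod comes up
kubectl get pods -n kube-system -o wide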
Deploy the kubernetes-dashboard on the master node

Locate the kind: Service section and add type: NodePort
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 8510
  selector:
    k8s-app: kubernetes-dashboard

Note that 8510 lies outside the default NodePort range (30000-32767); the apiserver must be started with a --service-node-port-range that covers it, otherwise the Service will be rejected.

If the dashboard reports the error "configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"", the ServiceAccount needs to be granted additional permissions.
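The original does not show the exact binding; a common (if broad) fix is to bind the ServiceAccount to cluster-admin. Treat this as one possible sketch rather than the author's solution:

# grant the dashboard ServiceAccount cluster-admin (broad, but unblocks the error above)
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard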
Reposted from: https://www.cnblogs.com/hxltianhen/p/k8s-186-er-jin-zhi-an-zhuang-xiu-zheng.html