Detailed Tutorial on Installing k8s v1.13.1 HA with kubeadm, Part 3: Installing the Masters


1. Install the master (the first master)

1.1 Edit the kubeadm configuration file

```
[root@k8s01 ~]# cat ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.1
controlPlaneEndpoint: k8s-cluster.smile13.com:6443
apiServer:
  certSANs:
  - k8s-cluster.smile13.com
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
```
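Note that `controlPlaneEndpoint` must resolve to the load balancer in front of the masters from every node. As a minimal sketch (the actual VIP comes from the earlier parts of this series; `<VIP>` below is a placeholder):

```bash
# Placeholder sketch: make k8s-cluster.smile13.com resolvable on every node.
# Replace <VIP> with your load balancer / keepalived virtual IP.
echo "<VIP>  k8s-cluster.smile13.com" >> /etc/hosts
```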

1.2 Pull the images ahead of time

```
[root@k8s01 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
```
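Optionally, verify that the images are now cached locally before initializing (assuming Docker as the container runtime, as in the earlier parts of this series):

```bash
# List the control-plane images pulled from the Aliyun mirror.
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers
```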

1.3 Initialize

```
[root@k8s01 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-cluster.smile13.com k8s-cluster.smile13.com] and IPs [10.96.0.1 192.168.158.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.158.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.158.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003045 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s01" as an annotation
[mark-control-plane] Marking the node k8s01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: t1yovr.ag1xbdhfgo36z8f7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join k8s-cluster.smile13.com:6443 --token t1yovr.ag1xbdhfgo36z8f7 --discovery-token-ca-cert-hash sha256:ceaf1b9a9ef558ff8706331cb88e81c28d48528972cee2b92a8416364768e45d

[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
```
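The bootstrap token printed above is only valid for 24 hours by default. If you join more nodes later, you can generate a fresh token and join command on any master:

```bash
# Prints a complete "kubeadm join ..." line with a new bootstrap token
# and the current --discovery-token-ca-cert-hash.
kubeadm token create --print-join-command
```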

1.4 Check the cluster status

```
[root@k8s01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
k8s01   NotReady   master   4m45s   v1.13.1
```

Note: the master shows NotReady because no network plugin has been installed yet.
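To confirm that NotReady is caused only by the missing network plugin, you can inspect the node's Ready condition; before a CNI plugin is installed it typically reports a message like "cni config uninitialized":

```bash
# Show the message attached to the node's Ready condition.
kubectl get node k8s01 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
```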

1.5 Install the Calico network plugin

```
[root@k8s01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
[root@k8s01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
```

Note: if your pod CIDR is not `192.168.0.0/16`, download the manifest first and change the value of CALICO_IPV4POOL_CIDR to your pod CIDR (see the sketch after the status output below).

Check the cluster status again:

```
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   42m   v1.13.1
[root@k8s01 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-node-vf2j5               2/2     Running   0          16m   192.168.158.131   k8s01   <none>           <none>
kube-system   coredns-89cc84847-msbbj         1/1     Running   0          19m   10.244.0.3        k8s01   <none>           <none>
kube-system   coredns-89cc84847-pg8l2         1/1     Running   0          19m   10.244.0.2        k8s01   <none>           <none>
kube-system   etcd-k8s01                      1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-apiserver-k8s01            1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   0          19m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-proxy-x6v57                1/1     Running   0          19m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s01            1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>
```
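Since this cluster's pod CIDR is `10.244.0.0/16`, the download-and-edit flow from the note above looks roughly like this (a sketch using `sed`; downloading and editing the file by hand works just as well):

```bash
# Fetch the manifest, point CALICO_IPV4POOL_CIDR at this cluster's
# podSubnet, then apply the local copy instead of the remote URL.
curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml
kubectl apply -f calico.yaml
```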

1.6 Copy the required files to the other masters

```
[root@k8s01 k8s-install]# cd /etc/kubernetes && tar cvzf k8s-key.tgz pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.* admin.conf
pki/ca.crt
pki/ca.key
pki/sa.key
pki/sa.pub
pki/front-proxy-ca.crt
pki/front-proxy-ca.key
pki/etcd/ca.crt
pki/etcd/ca.key
admin.conf
[root@k8s01 kubernetes]# scp /etc/kubernetes/k8s-key.tgz k8s02:/etc/kubernetes/
k8s-key.tgz                                  100%   11KB   3.9MB/s   00:00
[root@k8s01 kubernetes]# scp /etc/kubernetes/k8s-key.tgz k8s03:/etc/kubernetes/
k8s-key.tgz                                  100%   11KB   3.6MB/s   00:00
```

Extract the k8s-key.tgz archive on each target master (a sketch follows this step).

Copy kubeadm-config.yaml to the other masters (it is used to pull the images from Aliyun; you could skip this and pull the required images directly, but copying the config file makes the pull more convenient):

```
[root@k8s01 ~]# scp k8s-install/kubeadm-config.yaml k8s02:~
kubeadm-config.yaml                          100%  302   415.7KB/s   00:00
[root@k8s01 ~]# scp k8s-install/kubeadm-config.yaml k8s03:~
kubeadm-config.yaml                          100%  302   222.8KB/s   00:00
```
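On k8s02 and k8s03, unpack the archive in place before joining, so the shared CA material, the service-account keypair, and admin.conf end up under /etc/kubernetes:

```bash
# Run on each additional master after the scp above.
cd /etc/kubernetes && tar xvzf k8s-key.tgz
```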

1.7 Install the other masters (k8s02 is shown as an example; k8s03 follows the same steps)

1.7.1 Download the images (the official k8s registry is not reachable, so download from Aliyun)

```
[root@k8s02 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
```

1.7.2 Initialize (join as an additional control plane)

```
[root@k8s02 ~]# kubeadm join k8s-cluster.smile13.com:6443 --token t1yovr.ag1xbdhfgo36z8f7 --discovery-token-ca-cert-hash sha256:ceaf1b9a9ef558ff8706331cb88e81c28d48528972cee2b92a8416364768e45d --experimental-control-plane
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "k8s-cluster.smile13.com:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s-cluster.smile13.com:6443"
[discovery] Requesting info from "https://k8s-cluster.smile13.com:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-cluster.smile13.com:6443"
[discovery] Successfully established connection with API Server "k8s-cluster.smile13.com:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[join] Running pre-flight checks before initializing the new control plane instance
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s02 localhost] and IPs [192.168.158.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s02 localhost] and IPs [192.168.158.132 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-cluster.smile13.com k8s-cluster.smile13.com] and IPs [10.96.0.1 192.168.158.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s02" as an annotation
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s02 ~]# mkdir -p $HOME/.kube
[root@k8s02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
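Optionally, confirm that the stacked etcd cluster has picked up the new member. One way (a sketch; the certificate paths are the kubeadm defaults used above) is to run etcdctl inside the etcd pod on k8s01:

```bash
# List etcd members over the v3 API; expect one entry per joined master.
kubectl exec -n kube-system etcd-k8s01 -- sh -c \
  "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/peer.crt \
     --key=/etc/kubernetes/pki/etcd/peer.key \
     member list"
```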

1.7.3 Masters installed: check the cluster status

```
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   31m     v1.13.1
k8s02   Ready    master   4m51s   v1.13.1
k8s03   Ready    master   3m36s   v1.13.1
[root@k8s01 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-node-62ntc               2/2     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   calico-node-lms7b               2/2     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   calico-node-vf2j5               2/2     Running   0          25m     192.168.158.131   k8s01   <none>           <none>
kube-system   coredns-89cc84847-msbbj         1/1     Running   0          28m     10.244.0.3        k8s01   <none>           <none>
kube-system   coredns-89cc84847-pg8l2         1/1     Running   0          28m     10.244.0.2        k8s01   <none>           <none>
kube-system   etcd-k8s01                      1/1     Running   0          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   etcd-k8s02                      1/1     Running   0          2m53s   192.168.158.132   k8s02   <none>           <none>
kube-system   etcd-k8s03                      1/1     Running   0          97s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-apiserver-k8s01            1/1     Running   0          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-apiserver-k8s02            1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-apiserver-k8s03            1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   1          28m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-controller-manager-k8s02   1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-controller-manager-k8s03   1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-proxy-8r9bq                1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-proxy-bv2bf                1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-proxy-x6v57                1/1     Running   0          28m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s01            1/1     Running   1          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s02            1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-scheduler-k8s03            1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
```
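As a final sanity check, the endpoints of the built-in `kubernetes` Service should list all three API server addresses, confirming that each master is serving behind the shared endpoint:

```bash
# The default/kubernetes Service is backed by every registered kube-apiserver.
kubectl get endpoints kubernetes -n default
```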