All initialization files are kept under /opt/k8s-init; create the directory yourself with `mkdir /opt/k8s-init`.
1. Environment
Node name | Node IP | OS | Container runtime | Domain |
---|---|---|---|---|
master1 | 10.9.0.200 | ubuntu20.04 | containerd | |
master2 | 10.9.0.201 | ubuntu20.04 | containerd | |
master3 | 10.9.0.202 | ubuntu20.04 | containerd | |
vip | 10.9.0.210 | ubuntu20.04 | containerd | k8s.cloudnative.lucs.top |
2. Base configuration (all nodes unless otherwise specified)
1. Configure host name mappings
```bash
root@master1:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 master1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

10.9.0.210 k8s.cloudnative.lucs.top
10.9.0.200 master1
10.9.0.201 master2
10.9.0.202 master3
```
2. Disable the firewall
```bash
# Check the firewall status; "inactive" means it is already disabled
sudo ufw status
# Disable the firewall
sudo ufw disable
```
3. Disable SELinux
```bash
apt install policycoreutils -y
sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/selinux/config
sestatus -v
SELinux status:                 disabled
```
4. Set up time synchronization
Time synchronization is critical. The certificates generated later all depend on correct clocks, so configure it and check it repeatedly.
All nodes:
```bash
apt install -y chrony
systemctl enable --now chrony
```
master1:
```bash
root@master1:~# vim /etc/chrony/chrony.conf

# - 2 sources from 2.ubuntu.pool.ntp.org which is ipv6 enabled as well
# - 1 source from [01].ubuntu.pool.ntp.org each (ipv4 only atm)
# This means by default, up to 6 dual-stack and up to 2 additional IPv4-only
# sources will be used.
# At the same time it retains some protection against one of the entries being
# down (compare to just using one of the lines). See (LP: #1754358) for the
# discussion.
#
# About using servers from the NTP Pool Project in general see (LP: #104525).
# Approved by Ubuntu Technical Board on 2011-02-08.
# See http://www.pool.ntp.org/join.html for more information.
#pool ntp.ubuntu.com        iburst maxsources 4   ## default pools commented out; use the Alibaba Cloud NTP server instead
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2

server ntp1.aliyun.com iburst   ## add the Alibaba Cloud NTP server
allow 10.9.0.0/16               ## allow hosts in 10.9.0.0/16 to use this node as their time source
local stratum 10                ## if no external source is reachable, serve the local clock instead

# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can't be used along with the 'rtcfile' directive.
rtcsync

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3
```
Restart chrony:
```bash
root@master1:~# systemctl restart chrony
```
master2 and master3:
```bash
root@master2:~# vim /etc/chrony/chrony.conf
## disable the pools and use master1 as the time source
server 10.9.0.200 iburst

root@master3:~# vim /etc/chrony/chrony.conf
## disable the pools and use master1 as the time source
server 10.9.0.200 iburst
```
Restart chrony:
```bash
root@master2:~# systemctl restart chrony
root@master3:~# systemctl restart chrony
```
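To verify that synchronization is actually working, chrony's stock status commands can be used as a quick sanity check; on master2 and master3 the 10.9.0.200 source should appear and be selected (marked `^*`):

```bash
# List the configured time sources and which one is currently selected
chronyc sources -v
# Show the current offset, stratum, and sync status of the local clock
chronyc tracking
```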
Explanation of the chrony configuration:
1. `#pool ntp.ubuntu.com iburst maxsources 4` — these lines originally configure Ubuntu's default NTP pools (ntp.ubuntu.com and ubuntu.pool.ntp.org). Each line specifies one pool and caps the number of sources taken from it with `maxsources`. They are commented out here because the Alibaba Cloud time server is used instead.
2. `server ntp1.aliyun.com iburst` — use Alibaba Cloud's ntp1.aliyun.com as the time server. `iburst` sends a burst of requests at startup so the clock synchronizes faster; if the server is unreachable, chrony quickly retries.
3. `allow 10.9.0.0/16` — allow devices in the 10.9.0.0/16 subnet to query this chrony server for time. This is the usual way to provide a time service inside a LAN.
4. `local stratum 10` — mark the local clock as a stratum-10 source. When no external time server is reachable, chrony serves the local system clock instead; the high stratum value signals low accuracy but guarantees a usable reference.
5. `keyfile /etc/chrony/chrony.keys` — location of the file containing the ID/key pairs used for NTP authentication between servers and clients.
6. `driftfile /var/lib/chrony/chrony.drift` — location of the drift file, which records the local clock's frequency offset so chrony can calibrate the clock more accurately after a restart.
7. `#log tracking measurements statistics` — commented out; uncomment it to log tracking, measurement, and statistics information.
8. `logdir /var/log/chrony` — directory where chrony writes its log files.
9. `maxupdateskew 100.0` — caps the maximum allowed skew (in ppm) of an update's estimated error, so bad estimates do not upset the system clock.
10. `rtcsync` — enables kernel synchronization of the hardware real-time clock (RTC) every 11 minutes. Note that it cannot be used together with the `rtcfile` directive.
11. `makestep 1 3` — if an adjustment is larger than 1 second, step the clock instead of slewing it, but only during the first 3 clock updates. This is useful for getting the time right quickly at boot.
5. Configure kernel parameters
Install the bridge utilities:
```bash
apt -y install bridge-utils
```
Load the br_netfilter kernel module, which enables filtering of bridged network traffic:
```bash
modprobe br_netfilter
```
Check that the module is loaded:
```bash
lsmod | grep br_netfilter
```
Modify the kernel parameters:
```bash
## Open the file and append the following settings at the end
vim /etc/sysctl.conf

net.ipv4.ip_forward=1                  # enable IPv4 forwarding
net.ipv6.conf.all.disable_ipv6=1       # disable IPv6
vm.swappiness=0                        # avoid swap; only use it when the system is about to OOM
vm.overcommit_memory=1                 # do not check whether enough physical memory is available
vm.panic_on_oom=0                      # do not panic on OOM
# let bridged traffic pass through iptables/ip6tables/arptables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
```
Modify the rp_filter kernel parameters:
```bash
vim /etc/sysctl.d/10-network-security.conf

# Turn on Source Address Verification in all interfaces to
# prevent some spoofing attacks.
net.ipv4.conf.default.rp_filter=2
net.ipv4.conf.all.rp_filter=1
```
Load and apply the kernel parameters:
```bash
sysctl --system
```
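As a quick sanity check that the settings were applied, the keys set above can be queried directly; each should report 1:

```bash
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```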
6. Configure IPVS
Install the IPVS tools:
```bash
apt -y install ipvsadm ipset
```
Create the module-loading script:
```bash
sudo cat > /opt/k8s-init/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_lc
modprobe -- ip_vs_lblc
modprobe -- ip_vs_lblcr
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- ip_vs_dh
modprobe -- ip_vs_fo
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- ip_vs_ftp
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
modprobe -- xt_set
modprobe -- br_netfilter
modprobe -- nf_conntrack
EOF
```
Load the IPVS modules:
```bash
sudo chmod 755 /opt/k8s-init/ipvs.modules && sudo bash /opt/k8s-init/ipvs.modules
## make sure the modules are loaded again after a reboot
sudo cp /opt/k8s-init/ipvs.modules /etc/profile.d/ipvs.modules.sh
```
Verify that the modules are loaded:
```bash
lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_dh               16384  0
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_lc               16384  0
ip_vs                 180224  22 ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_nat                 49152  4 xt_nat,nft_chain_nat,xt_MASQUERADE,ip_vs_ftp
nf_conntrack_netlink    53248  0
nf_conntrack          180224  6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
nfnetlink              20480  5 nft_compat,nf_conntrack_netlink,nf_tables,ip_set
libcrc32c              16384  6 nf_conntrack,nf_nat,btrfs,nf_tables,raid456,ip_vs
```
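Note that scripts in /etc/profile.d only run when someone opens a login shell, so after a reboot the modules may not be loaded until a user logs in. A minimal sketch of a more robust alternative, assuming systemd-modules-load (present by default on Ubuntu 20.04), is to declare the modules in /etc/modules-load.d instead:

```bash
# Declare the IPVS-related modules so systemd-modules-load loads them at every boot
cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack   # verify
```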
3. Install containerd (all nodes)
- Install containerd; see the "containerd installation and deployment" appendix at the end of this document.
Update the configuration:
```toml
# vim /etc/containerd/config.toml
# change the sandbox image to the Alibaba Cloud mirror
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
```
4. Install kubeadm, kubectl, and kubelet (all nodes)
- Import the Kubernetes apt signing key
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
- Create the repository file (Tsinghua mirror)
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.28/deb/ /" > /etc/apt/sources.list.d/kubernetes.list
- Install the tools
```bash
# refresh the package index after adding the repository
apt update

# install directly
apt install -y kubelet kubeadm kubectl

# or install a specific version
apt-cache madison kubectl | more
export KUBERNETES_VERSION=1.28.1-00
# note: on pkgs.k8s.io-style repositories the package revision is usually of the
# form 1.28.1-1.1; check the output of `apt-cache madison kubectl` for the exact string
apt-get install -y kubelet=${KUBERNETES_VERSION} kubeadm=${KUBERNETES_VERSION} kubectl=${KUBERNETES_VERSION}

# pin the packages so they are not upgraded automatically
apt-mark hold kubelet kubeadm kubectl
```
- Enable kubelet
```bash
systemctl enable --now kubelet
```
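A quick check that all three tools are installed and pinned at the expected version (standard version flags, nothing cluster-specific yet):

```bash
kubeadm version -o short
kubelet --version
kubectl version --client
apt-mark showhold   # should list kubeadm, kubectl, kubelet
```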
5. Deploy kube-vip
Official site | kube-vip
- Generate the manifest (master1)
```bash
root@master1:/opt/k8s-init# export VIP=10.9.0.210
root@master1:/opt/k8s-init# export INTERFACE=ens160
root@master1:/opt/k8s-init# export KVVERSION=v0.8.0
root@master1:/opt/k8s-init# alias kube-vip="ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

# generate the static pod manifest
root@master1:/opt/k8s-init# kube-vip manifest pod \
>     --interface $INTERFACE \
>     --address $VIP \
>     --controlplane \
>     --services \
>     --arp \
>     --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
```
- Inspect the generated ARP manifest (master1)
```yaml
root@master1:/opt/k8s-init# cat /etc/kubernetes/manifests/kube-vip.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: ens160
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: svc_enable
      value: "true"
    - name: svc_leasename
      value: plndr-svcs-lock
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 10.9.0.210
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
```
- Copy the manifest to master2 and master3 (master1)
```bash
root@master1:/opt/k8s-init# scp /etc/kubernetes/manifests/kube-vip.yaml master2:/etc/kubernetes/manifests/kube-vip.yaml
root@master1:/opt/k8s-init# scp /etc/kubernetes/manifests/kube-vip.yaml master3:/etc/kubernetes/manifests/kube-vip.yaml
```
- Pull the image referenced by the manifest (master2 and master3)
```bash
root@master2:/opt/k8s-init# nerdctl pull ghcr.io/kube-vip/kube-vip:v0.8.0
root@master3:/opt/k8s-init# nerdctl pull ghcr.io/kube-vip/kube-vip:v0.8.0
# Tip: `nerdctl --namespace k8s.io pull ...` places the image in the containerd
# namespace the kubelet uses, so the static pod does not have to pull it again.
```
6. Initialize the cluster (master1)
1. Generate the configuration file
```bash
root@master1:/opt/k8s-init# kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm.yaml
```
2. Edit the configuration file
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.9.0.200          # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1                         # change to this node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:                             # sign the API server certificate for these names and IPs
  - k8s.cloudnative.lucs.top
  - master1
  - master2
  - master3
  - 10.9.0.210
  - 10.9.0.200
  - 10.9.0.201
  - 10.9.0.202
controlPlaneEndpoint: k8s.cloudnative.lucs.top:6443   # shared endpoint so additional control-plane nodes can join
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # use the Alibaba Cloud image repository
kind: ClusterConfiguration
kubernetesVersion: 1.28.1               # change to your version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16              # pod CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  localhostNodePorts: null
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
metricsBindAddress: ""
mode: "ipvs"                            # switch kube-proxy to IPVS mode
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""
```
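Optionally, the images referenced by this configuration can be pulled ahead of time so the init step itself is faster; this is the same `kubeadm config images pull` hinted at in the preflight output below:

```bash
kubeadm config images pull --config kubeadm.yaml
```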
3. Initialize the node
```bash
root@master1:/opt/k8s-init# kubeadm init --upload-certs --config kubeadm.yaml
```
The output looks like this:
```
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.200 10.9.0.210 10.9.0.201 10.9.0.202]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.9.0.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.9.0.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.505639 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
        --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965
```
Run:
```bash
root@master1:/opt/k8s-init# mkdir -p $HOME/.kube
root@master1:/opt/k8s-init# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:/opt/k8s-init# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Control-plane nodes can join the cluster with:
```bash
kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
    --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0
```
Worker nodes can join the cluster with:
```bash
kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965
```
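The bootstrap token expires after 24 hours and the uploaded certificates after two hours (as the init output notes), so if nodes are added later the join commands have to be regenerated. Both of the following are standard kubeadm subcommands, run on an existing control-plane node:

```bash
# Print a fresh worker join command (new token + discovery hash)
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs
```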
7. Join the control plane
master2:
```bash
root@master2:~# kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
> --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0
```
The output looks like this:
```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.201 10.9.0.210 10.9.0.200 10.9.0.202]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [10.9.0.201 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [10.9.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```
Run:
```bash
root@master2:~# mkdir -p $HOME/.kube
root@master2:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
master3:
```bash
root@master3:~# kubeadm join k8s.cloudnative.lucs.top:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:79600be0419be6be354927b90146a4e0eb1153daf551da1571c548491cfff965 \
> --control-plane --certificate-key c68c96e5b844f13f0e4aee4508c583195a821c958504a177e4a2deb6485e1ac0
```
The output looks like this:
```
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.cloudnative.lucs.top kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1 master2 master3] and IPs [10.96.0.1 10.9.0.202 10.9.0.210 10.9.0.200 10.9.0.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master3] and IPs [10.9.0.202 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master3] and IPs [10.9.0.202 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
{"level":"warn","ts":"2024-09-05T02:45:15.10704+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.216045+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.380315+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.616802+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:15.958678+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:16.504533+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:17.303892+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
{"level":"warn","ts":"2024-09-05T02:45:18.530751+0800","logger":"etcd-client","caller":"v3@v3.5.9/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00060b880/10.9.0.201:2379","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: unhealthy cluster"}
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
```
Run:
```bash
root@master3:~# mkdir -p $HOME/.kube
root@master3:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
8. Remove the taint
Remove the control-plane NoSchedule taint so workloads can be scheduled on these nodes; run this on any one node:
```bash
root@master1:/opt/k8s-init# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
node/master1 untainted
node/master2 untainted
node/master3 untainted
```
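To confirm the nodes are Ready and the taint is really gone (plain kubectl queries; output omitted):

```bash
kubectl get nodes -o wide
kubectl describe nodes | grep -i taint
```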
9. Install the CNI (Calico) (master1)
Official docs | calico
The CNI plugins for containerd must be installed under /opt/cni/bin, as the Calico documentation states:
> CNI plug-in enabled
> Calico must be installed as a CNI plugin in the container runtime.
> This installation must use the Kubernetes default CNI configuration directory (`/etc/cni/net.d`) and binary directory (`/opt/cni/bin`).
1. Download the Calico manifests
```bash
# This file contains Calico's CRDs (and the tigera-operator)
root@master1:/opt/k8s-init# curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/tigera-operator.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1724k  100 1724k    0     0  48087      0  0:00:36  0:00:36 --:--:-- 23252

# This file creates the custom resources (CRs) based on those CRDs
root@master1:/opt/k8s-init# curl -OL https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/custom-resources.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   777  100   777    0     0    219      0  0:00:03  0:00:03 --:--:--   219
```
2. Deploy the CRDs (make sure to use `kubectl create`)
```bash
root@master1:/opt/k8s-init# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
```
This creates a large number of CRDs, plus a ServiceAccount and RBAC rules for permissions, and a Deployment that runs the operator (controller).
3. Deploy the CRs
Modify the CR file (custom-resources.yaml):
```yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configure the Calico network
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    # Configure the IP address auto-detection method
    nodeAddressAutodetectionV4:
      interface: ens160          # select addresses by interface name
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
```
Create them:
```bash
root@master1:/opt/k8s-init# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
```
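The operator now pulls the Calico images and rolls out the components, which can take a few minutes. Progress can be followed with ordinary kubectl commands; `tigerastatus` is one of the CRDs installed in the previous step:

```bash
watch kubectl get pods -n calico-system
kubectl get tigerastatus
```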
10. All services (wrapping up 😍)
```bash
root@master1:/opt/k8s-init# kubectl get pods -A -owide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS            AGE         IP              NODE      NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-647c596749-9l22f          1/1     Running   0                   4m42s       10.244.137.66   master1   <none>           <none>
calico-apiserver   calico-apiserver-647c596749-fnclw          1/1     Running   0                   4m42s       10.244.136.2    master3   <none>           <none>
calico-system      calico-kube-controllers-6bcd4db96d-g2k9p   1/1     Running   0                   5m57s       10.244.180.1    master2   <none>           <none>
calico-system      calico-node-l5lzs                          1/1     Running   0                   5m58s       10.9.0.201      master2   <none>           <none>
calico-system      calico-node-qxbrk                          1/1     Running   0                   5m58s       10.9.0.200      master1   <none>           <none>
calico-system      calico-node-wnc2c                          1/1     Running   0                   5m58s       10.9.0.202      master3   <none>           <none>
calico-system      calico-typha-cbb7d497f-bkwpp               1/1     Running   0                   5m58s       10.9.0.202      master3   <none>           <none>
calico-system      calico-typha-cbb7d497f-hbv9m               1/1     Running   0                   5m54s       10.9.0.200      master1   <none>           <none>
calico-system      csi-node-driver-57fb5                      2/2     Running   0                   5m57s       10.244.137.65   master1   <none>           <none>
calico-system      csi-node-driver-5f9jf                      2/2     Running   0                   5m57s       10.244.180.2    master2   <none>           <none>
calico-system      csi-node-driver-fzfdj                      2/2     Running   0                   5m57s       10.244.136.1    master3   <none>           <none>
kube-system        coredns-66f779496c-d8b2f                   1/1     Running   0                   4h48m       10.4.0.5        master1   <none>           <none>
kube-system        coredns-66f779496c-xrt24                   1/1     Running   0                   4h48m       10.4.0.4        master1   <none>           <none>
kube-system        etcd-master1                               1/1     Running   2                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        etcd-master2                               1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        etcd-master3                               1/1     Running   0                   <invalid>   10.9.0.202      master3   <none>           <none>
kube-system        kube-apiserver-master1                     1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-apiserver-master2                     1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-apiserver-master3                     1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-controller-manager-master1            1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-controller-manager-master2            1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-controller-manager-master3            1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-proxy-rz59w                           1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-proxy-t8f2z                           1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-proxy-wbpmc                           1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-scheduler-master1                     1/1     Running   0                   4h48m       10.9.0.200      master1   <none>           <none>
kube-system        kube-scheduler-master2                     1/1     Running   0                   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-scheduler-master3                     1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
kube-system        kube-vip-master1                           1/1     Running   0                   4h37m       10.9.0.200      master1   <none>           <none>
kube-system        kube-vip-master2                           1/1     Running   1 (<invalid> ago)   4h43m       10.9.0.201      master2   <none>           <none>
kube-system        kube-vip-master3                           1/1     Running   0                   4h43m       10.9.0.202      master3   <none>           <none>
tigera-operator    tigera-operator-5d56685c77-5lgj4           1/1     Running   0                   6m51s       10.9.0.201      master2   <none>           <none>
```
Let's test it:
Deploy an nginx service:
```bash
kubectl apply -f nginx.yaml
```
The nginx.yaml manifest is as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxapp
spec:
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginxapp
        image: registry.cn-hangzhou.aliyuncs.com/lucs/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxapp
spec:
  selector:
    app: nginxapp
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
```
Accessing the service through our VIP succeeds.
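For example, the assigned NodePort can be looked up and then requested through the VIP; the port number below is only an illustration, use whatever `kubectl get svc` reports:

```bash
kubectl get svc nginxapp
# e.g. if the NodePort shown is 30080:
curl -I http://10.9.0.210:30080
```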
Now test kube-vip's load-balancer (LB) feature; for the configuration details see here | lb
Modify nginx.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxapp
spec:
  selector:
    matchLabels:
      app: nginxapp
  template:
    metadata:
      labels:
        app: nginxapp
    spec:
      containers:
      - name: nginxapp
        image: registry.cn-hangzhou.aliyuncs.com/lucs/nginx
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxapp
spec:
  selector:
    app: nginxapp
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  loadBalancerIP: 10.9.0.233
```
Apply it:
```bash
kubectl apply -f nginx.yaml
kubectl get po,svc
NAME                            READY   STATUS    RESTARTS   AGE
pod/nginxapp-67d9758fbf-ggv9h   1/1     Running   0          10s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        14h
service/nginxapp     LoadBalancer   10.104.121.11   10.9.0.233    80:30166/TCP   10s
```
Access 10.9.0.233.
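For instance, a header-only request to the LoadBalancer IP from any machine on the 10.9.0.0/16 network should return an HTTP 200 from nginx:

```bash
curl -I http://10.9.0.233
```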
containerd installation and deployment
Official containerd installation docs | containerd
Official nerdctl release packages | nerdctl
I. Install the binary packages
There are two installation methods:
1. Install the components separately (recommended):
The official containerd binaries are available for the `amd64` (also known as `x86_64`) and `arm64` (also known as `aarch64`) architectures. Typically, you must also install runc and the CNI plugins from their official release pages.
Step 1: Install containerd
Download the `containerd-<VERSION>-<OS>-<ARCH>.tar.gz` archive from https://github.com/containerd/containerd/releases, verify its sha256sum, and extract it into `/usr/local`:

```bash
$ tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress
```
The `containerd` binary is dynamically built against glibc-based Linux distributions such as Ubuntu and Rocky Linux, and may not work on musl-based distributions such as Alpine Linux. Users of such distributions may have to install containerd from source or from a third-party package.

FAQ: For Kubernetes, do I also need to download `cri-containerd-(cni-)<VERSION>-<OS>-<ARCH>.tar.gz`? Answer: no. The Kubernetes CRI feature is already included in `containerd-<VERSION>-<OS>-<ARCH>.tar.gz`, so you do not need the `cri-containerd-...` archive to use CRI. The `cri-containerd-...` archives are deprecated, do not work on old Linux distributions, and will be removed in containerd 2.0.

systemd

If you intend to start containerd via systemd, also download the `containerd.service` unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into `/etc/systemd/system/containerd.service`, and run:

```bash
systemctl daemon-reload
systemctl enable --now containerd
```
Step 2: Install runc
Download the `runc.<ARCH>` binary from https://github.com/opencontainers/runc/releases, verify its sha256sum, and install it as `/usr/local/sbin/runc`:

```bash
$ install -m 755 runc.amd64 /usr/local/sbin/runc
```
The binary is statically built and should work on any Linux distribution.
Step 3: Install the CNI plugins (do not change the /opt/cni/bin location)
Download the `cni-plugins-<OS>-<ARCH>-<VERSION>.tgz` archive from https://github.com/containernetworking/plugins/releases, verify its sha256sum, and extract it into `/opt/cni/bin`:

```bash
$ mkdir -p /opt/cni/bin
$ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
```
The binaries are statically built and should work on any Linux distribution.
Step 4: Install nerdctl
Download nerdctl from https://github.com/containerd/nerdctl/releases and extract it into /usr/local/bin:
```bash
tar Cxzvvf /usr/local/bin nerdctl-1.7.6-linux-amd64.tar.gz
-rwxr-xr-x root/root  25116672 2024-04-30 06:21 nerdctl
-rwxr-xr-x root/root     21916 2024-04-30 06:20 containerd-rootless-setuptool.sh
-rwxr-xr-x root/root      7187 2024-04-30 06:20 containerd-rootless.sh
```
Step 5: Update the containerd configuration
Generate the default configuration file:
```bash
containerd config default > /etc/containerd/config.toml
```
Change the registry configuration directory:
```toml
# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

  [plugins."io.containerd.grpc.v1.cri".registry.auths]

  [plugins."io.containerd.grpc.v1.cri".registry.configs]

  [plugins."io.containerd.grpc.v1.cri".registry.headers]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
  tls_cert_file = ""
  tls_key_file = ""
```
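After editing config.toml, restart containerd so the changes take effect, and check that the service came back up cleanly:

```bash
systemctl restart containerd
systemctl status containerd --no-pager
```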
2. Install everything in one bundle (the "full" package)
Step 1: Download the package
Open https://github.com/containerd/nerdctl/releases and download the package whose name contains "full".
Download:
```bash
curl -LO https://github.com/containerd/nerdctl/releases/download/v1.7.6/nerdctl-full-1.7.6-linux-amd64.tar.gz
```
Step 2: Extract the archive
```bash
tar Cxzvvf /usr/local nerdctl-full-1.7.6-linux-amd64.tar.gz
drwxr-xr-x 0/0        0 2024-04-30 06:28 bin/
-rwxr-xr-x 0/0 27644700 2015-10-21 00:00 bin/buildctl
-rwxr-xr-x 0/0 23724032 2022-09-05 09:52 bin/buildg
-rwxr-xr-x 0/0 53374823 2015-10-21 00:00 bin/buildkitd
-rwxr-xr-x 0/0  7277848 2024-04-30 06:26 bin/bypass4netns
-rwxr-xr-x 0/0  5308416 2024-04-30 06:26 bin/bypass4netnsd
-rwxr-xr-x 0/0 38946168 2024-04-30 06:27 bin/containerd
-rwxr-xr-x 0/0  9474048 2023-11-02 17:34 bin/containerd-fuse-overlayfs-grpc
-rwxr-xr-x 0/0    21916 2024-04-30 06:26 bin/containerd-rootless-setuptool.sh
-rwxr-xr-x 0/0     7187 2024-04-30 06:26 bin/containerd-rootless.sh
-rwxr-xr-x 0/0 12161024 2024-04-30 06:28 bin/containerd-shim-runc-v2
-rwxr-xr-x 0/0 45903872 2023-10-31 08:57 bin/containerd-stargz-grpc
-rwxr-xr-x 0/0 20630617 2024-04-30 06:28 bin/ctd-decoder
-rwxr-xr-x 0/0 18870272 2024-04-30 06:27 bin/ctr
-rwxr-xr-x 0/0 29671743 2024-04-30 06:28 bin/ctr-enc
-rwxr-xr-x 0/0 19931136 2023-10-31 08:58 bin/ctr-remote
-rwxr-xr-x 0/0  1785448 2024-04-30 06:28 bin/fuse-overlayfs
-rwxr-xr-x 0/0 65589641 2024-04-30 06:27 bin/ipfs
-rwxr-xr-x 0/0 25088000 2024-04-30 06:26 bin/nerdctl
-rwxr-xr-x 0/0 10666181 2024-03-05 22:20 bin/rootlessctl
-rwxr-xr-x 0/0 12358373 2024-03-05 22:20 bin/rootlesskit
-rwxr-xr-x 0/0 15074072 2024-04-30 06:26 bin/runc
-rwxr-xr-x 0/0  2346328 2024-04-30 06:28 bin/slirp4netns
-rwxr-xr-x 0/0   870496 2024-04-30 06:28 bin/tini
drwxr-xr-x 0/0        0 2024-04-30 06:28 lib/
drwxr-xr-x 0/0        0 2024-04-30 06:28 lib/systemd/
drwxr-xr-x 0/0        0 2024-04-30 06:28 lib/systemd/system/
-rw-r--r-- 0/0     1475 2024-04-30 06:28 lib/systemd/system/buildkit.service
-rw-r--r-- 0/0     1414 2024-04-30 06:25 lib/systemd/system/containerd.service
-rw-r--r-- 0/0      312 2024-04-30 06:28 lib/systemd/system/stargz-snapshotter.service
drwxr-xr-x 0/0        0 2024-04-30 06:28 libexec/
drwxr-xr-x 0/0        0 2024-04-30 06:28 libexec/cni/
-rw-r--r-- 0/0    11357 2024-03-12 10:56 libexec/cni/LICENSE
-rw-r--r-- 0/0     2343 2024-03-12 10:56 libexec/cni/README.md
-rwxr-xr-x 0/0  4119661 2024-03-12 10:56 libexec/cni/bandwidth
-rwxr-xr-x 0/0  4662227 2024-03-12 10:56 libexec/cni/bridge
-rwxr-xr-x 0/0 11065251 2024-03-12 10:56 libexec/cni/dhcp
-rwxr-xr-x 0/0  4306546 2024-03-12 10:56 libexec/cni/dummy
-rwxr-xr-x 0/0  4751593 2024-03-12 10:56 libexec/cni/firewall
-rwxr-xr-x 0/0  4198427 2024-03-12 10:56 libexec/cni/host-device
-rwxr-xr-x 0/0  3560496 2024-03-12 10:56 libexec/cni/host-local
-rwxr-xr-x 0/0  4324636 2024-03-12 10:56 libexec/cni/ipvlan
-rwxr-xr-x 0/0  3651038 2024-03-12 10:56 libexec/cni/loopback
-rwxr-xr-x 0/0  4355073 2024-03-12 10:56 libexec/cni/macvlan
-rwxr-xr-x 0/0  4095898 2024-03-12 10:56 libexec/cni/portmap
-rwxr-xr-x 0/0  4476535 2024-03-12 10:56 libexec/cni/ptp
-rwxr-xr-x 0/0  3861176 2024-03-12 10:56 libexec/cni/sbr
-rwxr-xr-x 0/0  3120090 2024-03-12 10:56 libexec/cni/static
-rwxr-xr-x 0/0  4381887 2024-03-12 10:56 libexec/cni/tap
-rwxr-xr-x 0/0  3743844 2024-03-12 10:56 libexec/cni/tuning
-rwxr-xr-x 0/0  4319235 2024-03-12 10:56 libexec/cni/vlan
-rwxr-xr-x 0/0  4008392 2024-03-12 10:56 libexec/cni/vrf
drwxr-xr-x 0/0        0 2024-04-30 06:26 share/
drwxr-xr-x 0/0        0 2024-04-30 06:26 share/doc/
drwxr-xr-x 0/0        0 2024-04-30 06:26 share/doc/nerdctl/
-rw-r--r-- 0/0    12480 2024-04-30 06:20 share/doc/nerdctl/README.md
drwxr-xr-x 0/0        0 2024-04-30 06:20 share/doc/nerdctl/docs/
-rw-r--r-- 0/0     3953 2024-04-30 06:20 share/doc/nerdctl/docs/build.md
-rw-r--r-- 0/0     2570 2024-04-30 06:20 share/doc/nerdctl/docs/builder-debug.md
-rw-r--r-- 0/0     3996 2024-04-30 06:20 share/doc/nerdctl/docs/cni.md
-rw-r--r-- 0/0    74383 2024-04-30 06:20 share/doc/nerdctl/docs/command-reference.md
-rw-r--r-- 0/0     1814 2024-04-30 06:20 share/doc/nerdctl/docs/compose.md
-rw-r--r-- 0/0     5329 2024-04-30 06:20 share/doc/nerdctl/docs/config.md
-rw-r--r-- 0/0     9128 2024-04-30 06:20 share/doc/nerdctl/docs/cosign.md
-rw-r--r-- 0/0     5660 2024-04-30 06:20 share/doc/nerdctl/docs/cvmfs.md
-rw-r--r-- 0/0     2435 2024-04-30 06:20 share/doc/nerdctl/docs/dir.md
-rw-r--r-- 0/0      906 2024-04-30 06:20 share/doc/nerdctl/docs/experimental.md
-rw-r--r-- 0/0    14217 2024-04-30 06:20 share/doc/nerdctl/docs/faq.md
-rw-r--r-- 0/0      884 2024-04-30 06:20 share/doc/nerdctl/docs/freebsd.md
-rw-r--r-- 0/0     3228 2024-04-30 06:20 share/doc/nerdctl/docs/gpu.md
-rw-r--r-- 0/0    14463 2024-04-30 06:20 share/doc/nerdctl/docs/ipfs.md
-rw-r--r-- 0/0     1748 2024-04-30 06:20 share/doc/nerdctl/docs/multi-platform.md
-rw-r--r-- 0/0     2960 2024-04-30 06:20 share/doc/nerdctl/docs/notation.md
-rw-r--r-- 0/0     2596 2024-04-30 06:20 share/doc/nerdctl/docs/nydus.md
-rw-r--r-- 0/0     3277 2024-04-30 06:20 share/doc/nerdctl/docs/ocicrypt.md
-rw-r--r-- 0/0     1876 2024-04-30 06:20 share/doc/nerdctl/docs/overlaybd.md
-rw-r--r-- 0/0    15657 2024-04-30 06:20 share/doc/nerdctl/docs/registry.md
-rw-r--r-- 0/0     5088 2024-04-30 06:20 share/doc/nerdctl/docs/rootless.md
-rw-r--r-- 0/0     2015 2024-04-30 06:20 share/doc/nerdctl/docs/soci.md
-rw-r--r-- 0/0    10312 2024-04-30 06:20 share/doc/nerdctl/docs/stargz.md
drwxr-xr-x 0/0        0 2024-04-30 06:28 share/doc/nerdctl-full/
-rw-r--r-- 0/0     1154 2024-04-30 06:28 share/doc/nerdctl-full/README.md
-rw-r--r-- 0/0     6578 2024-04-30 06:28 share/doc/nerdctl-full/SHA256SUMS
```
The bundle includes the following components:
```markdown
# nerdctl (full distribution)
- nerdctl: v1.7.6
- containerd: v1.7.16
- runc: v1.1.12
- CNI plugins: v1.4.1
- BuildKit: v0.12.5
- Stargz Snapshotter: v0.15.1
- imgcrypt: v1.1.10
- RootlessKit: v2.0.2
- slirp4netns: v1.2.3
- bypass4netns: v0.4.0
- fuse-overlayfs: v1.13
- containerd-fuse-overlayfs: v1.0.8
- Kubo (IPFS): v0.27.0
- Tini: v0.19.0
- buildg: v0.4.1

## License
- bin/slirp4netns:    [GNU GENERAL PUBLIC LICENSE, Version 2](https://github.com/rootless-containers/slirp4netns/blob/v1.2.3/COPYING)
- bin/fuse-overlayfs: [GNU GENERAL PUBLIC LICENSE, Version 2](https://github.com/containers/fuse-overlayfs/blob/v1.13/COPYING)
- bin/ipfs: [Combination of MIT-only license and dual MIT/Apache-2.0 license](https://github.com/ipfs/kubo/blob/v0.27.0/LICENSE)
- bin/{runc,bypass4netns,bypass4netnsd}: Apache License 2.0, statically linked with libseccomp ([LGPL 2.1](https://github.com/seccomp/libseccomp/blob/main/LICENSE), source code available at https://github.com/seccomp/libseccomp/)
- bin/tini: [MIT License](https://github.com/krallin/tini/blob/v0.19.0/LICENSE)
- Other files: [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
```
Step 3: Configure systemd
If you intend to start containerd via systemd, also download the `containerd.service` unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service into `/etc/systemd/system/containerd.service`, and run:

```bash
systemctl daemon-reload
systemctl enable --now containerd
```
Step 4: Update the containerd configuration
Generate the default configuration file:
```bash
containerd config default > /etc/containerd/config.toml
```
Change the CNI directory locations:
```toml
# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/usr/local/libexec/cni"
  conf_dir = "/etc/cni/net.d"
  conf_template = ""
  ip_pref = ""
  max_conf_num = 1
  setup_serially = false
```
Change the registry configuration directory:
```toml
# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

  [plugins."io.containerd.grpc.v1.cri".registry.auths]

  [plugins."io.containerd.grpc.v1.cri".registry.configs]

  [plugins."io.containerd.grpc.v1.cri".registry.headers]

  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
  tls_cert_file = ""
  tls_key_file = ""
```
II. Configure registry mirrors
Registry mirror addresses | registry
1. Understanding config_path
Above, we set `config_path = "/etc/containerd/certs.d"`, so per-registry configuration should be placed under that directory. The directory structure looks like this:

```bash
$ tree /etc/containerd/certs.d
/etc/containerd/certs.d
└── docker.io
    └── hosts.toml

$ cat /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"

[host."https://registry-1.docker.io"]
  capabilities = ["pull", "resolve"]
```
Each subdirectory is named after a registry host, and its hosts.toml specifies the corresponding mirror address.
2. Use the mirrors
Write a shell script:
```bash
root@master1:/opt/k8s-init# cat registrys.sh
#!/bin/bash

dirs=(docker.io gcr.io ghcr.io k8s.gcr.io registry.k8s.io quay.io)
registrys=(docker.m.daocloud.io gcr.m.daocloud.io ghcr.m.daocloud.io k8s-gcr.m.daocloud.io k8s.m.daocloud.io quay.m.daocloud.io)

if [ ! -d "/etc/containerd/certs.d" ]; then
    mkdir -p /etc/containerd/certs.d
fi

for ((i=0; i<${#dirs[@]}; i++)); do
    if [ ! -d "/etc/containerd/certs.d/${dirs[i]}" ]; then
        mkdir -p /etc/containerd/certs.d/${dirs[i]}
    fi
    host="[host.\"https://${registrys[i]}\"]\n  capabilities = [\"pull\", \"resolve\"]"
    echo -e $host > /etc/containerd/certs.d/${dirs[i]}/hosts.toml
done
```
Run the script:
```bash
bash registrys.sh
```
Check the resulting directory structure of `/etc/containerd/certs.d`:

```bash
tree /etc/containerd/certs.d/
/etc/containerd/certs.d/
├── docker.io
│   └── hosts.toml
├── gcr.io
│   └── hosts.toml
├── ghcr.io
│   └── hosts.toml
├── k8s.gcr.io
│   └── hosts.toml
├── quay.io
│   └── hosts.toml
└── registry.k8s.io
    └── hosts.toml
```
III. Testing
Run an nginx container:
```bash
nerdctl --debug=true run -d --name nginx -p 80:80 nginx:alpine
```
Check the container:
```bash
nerdctl ps -a
CONTAINER ID    IMAGE                             COMMAND                   CREATED           STATUS    PORTS                 NAMES
fce1a2183dd7    docker.io/library/nginx:alpine    "/docker-entrypoint.…"    37 minutes ago    Up        0.0.0.0:80->80/tcp    nginx
```
Access the page in a browser to confirm nginx responds.
Remove the container:
```bash
nerdctl rm -f nginx
```