Q's blog

Some personal documentation notes

Things to watch out for, and handy techniques, when running k8s in production


Fluent Bit configuration (Helm chart values)

config:
  service: |
    [SERVICE]
        Daemon Off
        Flush {{ .Values.flush }}
        Log_Level {{ .Values.logLevel }}
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.metricsPort }}
        Health_Check On
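
With HTTP_Server and Health_Check enabled, the pipeline's liveness can be probed over HTTP; a quick hedged check, assuming metricsPort is the chart's default 2020:

# Query Fluent Bit's built-in health endpoint (port 2020 assumed; use your metricsPort value)
curl -s http://127.0.0.1:2020/api/v1/health
# Expect "ok" when healthy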

inputs: |
  [INPUT]
      Name tail
      Path /var/log/containers/*.log
      Exclude_Path /var/log/containers/*kubesphere*.log,/var/log/containers/kube-*.log,/var/log/containers/*cattle*.log,/var/log/containers/*calico*.log
      Docker_Mode On
      Docker_Mode_Flush 5
      # This parser only handles multiline joining; it does not extract the regex's named keys
      Docker_Mode_Parser java_multi_line
      Parser docker
      Tag kube.*
      Mem_Buf_Limit 5MB
      Skip_Long_Lines Off

filters: |
  [FILTER]
      Name kubernetes
      Match kube.*
      # Collect Pod annotations
      Annotations On
      Labels Off
      Merge_Log On
      Keep_Log Off
      K8S-Logging.Parser On
      K8S-Logging.Exclude On
  # Annotation filter (keep only Pods annotated logging=java)
  [FILTER]
      Name grep
      Match kube.*
      Regex $kubernetes['annotations']['logging'] ^java$
  [FILTER]
      Name nest
      Match kube.*
      Operation lift
      Nested_under kubernetes
      Add_prefix kubernetes_
  [FILTER]
      # Lift kubernetes_annotations up to the record root
      Name nest
      Match kube.*
      Operation lift
      Nested_under kubernetes_annotations
      Add_prefix k8s_ann_
  [FILTER]
      Name modify
      Match kube.*
      Remove stream
      Remove kubernetes_pod_id
      Remove kubernetes_docker_id
      Remove kubernetes_host
      Remove kubernetes_container_hash
      Rename kubernetes_namespace_name namespace_name
      Rename kubernetes_pod_name pod_name
      Rename kubernetes_container_name container_name
      Rename kubernetes_container_image container_image
      Rename k8s_ann_app app
      Remove_regex ^k8s_ann

  [FILTER]
      # Use the parser filter to extract the level and time fields while keeping the original content
      Name parser
      Match kube.*
      Key_Name log
      # This pass does extract the named groups in the regex
      Parser java_multi_line
      # Keep the original key in the parsed result
      Preserve_Key true
      # Keep the record's other keys
      Reserve_Data true
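
The grep filter above drops everything except Pods carrying the logging=java annotation. A minimal sketch of tagging a workload so its logs get through (my-app is a hypothetical deployment name):

# Annotate the Pod template so its logs pass the grep filter
kubectl patch deployment my-app --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"logging":"java"}}}}}'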

outputs: |
  [OUTPUT]
      Name http
      Match *
      URI /api/default/ds-log/_json
      Host openobserve.openobserve.svc
      Port 5080
      tls Off
      Format json
      Json_date_key _timestamp
      Json_date_format iso8601
      HTTP_User admin@xxxx.com
      HTTP_Passwd XQHN6gqjQxDAn0HH
      compress gzip
  [OUTPUT]
      Name stdout
      Match *
      Format json

customParsers: |
  [PARSER]
      Name java_multi_line
      Key_name log
      Format regex
      Regex /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{3})\s{1,3}(?<level>\w{3,6})/
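
As a sanity check on the regex, a typical Java log prefix such as `2024-05-01 12:00:00.123 INFO` yields the time and level groups; a hedged grep approximation (named groups dropped for brevity):

echo '2024-05-01 12:00:00.123 INFO  com.example.App - started' | \
  grep -P '\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}.\d{3}\s{1,3}\w{3,6}'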

Install ProxySQL

apt-get install -y wget default-mysql-client inotify-tools procps
dpkg -i proxysql_2.7.1-ubuntu24_amd64.deb
proxysql --reload -c /etc/proxysql.cnf

Once ProxySQL is running, it listens on two ports:

  • Admin interface: 6032 by default, used to manage and configure ProxySQL.
  • SQL traffic interface: 6033 by default, serving application traffic, analogous to MySQL's 3306.
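
A quick check that both listeners are up:

# Confirm the admin (6032) and traffic (6033) ports are listening
ss -lntp | grep -E '6032|6033'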


Connect to the admin interface directly with a MySQL client:

mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='Admin> '

Query the admin user's credentials:

Admin> select @@admin-admin_credentials;
+---------------------------+
| @@admin-admin_credentials |
+---------------------------+
| admin:admin |
+---------------------------+

Change the admin interface password:

set admin-admin_credentials='admin:YourPassword';

-- Add a second admin account, myuser:myuser
set admin-admin_credentials='admin:admin;myuser:myuser';

load admin variables to runtime; -- take effect immediately
save admin variables to disk;    -- persist to disk

List the backend instances:

select * from mysql_servers;

Configure the account ProxySQL's monitor module uses against the backends:

set mysql-monitor_username = 'root_snowpaw';
set mysql-monitor_password = 'cS5!Z3&Qy9Nz!NRI';

-- Apply and persist
load mysql variables to runtime;
save mysql variables to disk;
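
To verify the monitor can actually reach the backends, query its log tables; a hedged sketch against the admin interface (enter the admin password when prompted):

# Recent ping results recorded by ProxySQL's monitor module
mysql -u admin -p -h 127.0.0.1 -P6032 \
  -e "select * from monitor.mysql_server_ping_log order by time_start_us desc limit 3;"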

Register the backend nodes by inserting each node's IP, port, and hostgroup:

insert into mysql_servers (hostgroup_id, hostname, port) values (1, 'rm-bp10q6od28cazp461mo.mysql.rds.aliyuncs.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Configure the access users. Users here play two roles:

  • Frontend users: the accounts clients use to connect to ProxySQL.
  • Backend users: the accounts ProxySQL uses to connect to the backend MySQL nodes.

Record the backend MySQL account in ProxySQL:

-- Insert the user record
insert into mysql_users(username, password, default_hostgroup, comment) values ('root_snowpaw', 'cS5!Z3&Qy9Nz!NRI', 1, 'root user');

-- Load into RUNTIME so it takes effect immediately, then persist to disk
load mysql users to runtime;
save mysql users to disk;

Connect through the proxy port and query the database:

mysql -u root_snowpaw -p'cS5!Z3&Qy9Nz!NRI' -h 127.0.0.1 -P6033 -e'select @@hostname;'
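
Repeating the query a few times confirms traffic is consistently routed to the backend in hostgroup 1; a small sketch:

# Each run should print the backend's hostname
for i in 1 2 3; do
  mysql -u root_snowpaw -p'cS5!Z3&Qy9Nz!NRI' -h 127.0.0.1 -P6033 -N -e 'select @@hostname;'
done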

Ubuntu 23.04 is out of support now, so apt update returns nothing but 404s.

First switch to the official old-releases mirrors:

deb http://old-releases.ubuntu.com/ubuntu/ lunar main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ lunar-security main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ lunar-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ lunar-proposed main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ lunar-backports main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lunar main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lunar-security main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lunar-updates main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lunar-proposed main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ lunar-backports main restricted universe multiverse

Update all packages:

apt-get update
apt-get upgrade -y

Then point the apt sources at 23.10 (mantic):

deb http://old-releases.ubuntu.com/ubuntu/ mantic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ mantic-security main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ mantic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ mantic-proposed main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ mantic-backports main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ mantic main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ mantic-security main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ mantic-updates main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ mantic-proposed main restricted universe multiverse
deb-src http://old-releases.ubuntu.com/ubuntu/ mantic-backports main restricted universe multiverse
apt-get update
apt-get upgrade -y
apt-get dist-upgrade
# Follow the prompts here; some packages will need to be autoremoved (apt autoremove)

# Run the release upgrade
do-release-upgrade

Answer y to every prompt, then reboot when it finishes.
root@v:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble

Note:

Checking for a new Ubuntu release
Please install all available updates for your release before upgrading.

If you hit this message, run apt-get dist-upgrade first.

C:\Program Files\Typora\resources\page-dist

Install caddy and xcaddy

# Download caddy_2.8.4_linux_amd64.deb and xcaddy_0.4.2_linux_amd64.deb from the GitHub release pages
dpkg -i caddy_2.8.4_linux_amd64.deb xcaddy_0.4.2_linux_amd64.deb

Initialize the Go project:

go mod init mymodule
root@v:~/mymodule# tree
.
├── caddy
├── Caddyfile
├── go.mod
├── go.sum
└── mymodule.go

mymodule.go

package mymodule

import (
	"errors"
	"net/http"

	"github.com/caddyserver/caddy/v2"
	"github.com/caddyserver/caddy/v2/caddyconfig/caddyfile"
	"github.com/caddyserver/caddy/v2/caddyconfig/httpcaddyfile"
	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

func init() {
	caddy.RegisterModule(&HelloWorld{})
	httpcaddyfile.RegisterHandlerDirective("hello_world", parseCaddyfile)
}

type HelloWorld struct {
	Text string `json:"text,omitempty"`
}

// UnmarshalCaddyfile reads exactly one argument into h.Text.
func (h *HelloWorld) UnmarshalCaddyfile(d *caddyfile.Dispenser) error {
	for d.Next() {
		if !d.Args(&h.Text) {
			// not enough args
			return d.ArgErr()
		}
		if d.NextArg() {
			// too many args
			return d.ArgErr()
		}
	}
	return nil
}

func parseCaddyfile(h httpcaddyfile.Helper) (caddyhttp.MiddlewareHandler, error) {
	hw := new(HelloWorld)
	err := hw.UnmarshalCaddyfile(h.Dispenser)
	return hw, err
}

// ServeHTTP runs the rest of the handler chain first, then appends h.Text to the response.
func (h *HelloWorld) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {
	err := next.ServeHTTP(w, r)
	if err != nil {
		return err
	}
	w.Write([]byte(h.Text))
	return nil
}

func (h HelloWorld) Validate() error {
	if h.Text == "" {
		return errors.New("the text is required")
	}
	return nil
}

func (h *HelloWorld) Provision(ctx caddy.Context) error {
	//h.Text = "Hello 世界"
	return nil
}

// CaddyModule returns the Caddy module information.
func (h HelloWorld) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "http.handlers.hello_world",
		New: func() caddy.Module { return new(HelloWorld) },
	}
}

// Interface guards
var (
	_ caddy.Provisioner           = (*HelloWorld)(nil)
	_ caddy.Validator             = (*HelloWorld)(nil)
	_ caddyhttp.MiddlewareHandler = (*HelloWorld)(nil)
	_ caddyfile.Unmarshaler       = (*HelloWorld)(nil)
)

Caddyfile

{
  order hello_world last
  debug
}
:80 {
  hello_world "text to print"
}


# or
{
  order hello_world last
}
:80 {
  reverse_proxy "127.0.0.1:8080" {
    handle_response {
      hello_world "test"
    }
  }
}

List the modules:

xcaddy list-modules

Run it directly:

xcaddy run --config Caddyfile

Build a binary:

GOOS=linux GOARCH=amd64 CGO_ENABLED=0 xcaddy build --with mymodule=.   # "." means the current directory
./caddy run --config Caddyfile
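
A quick hedged check that the handler fires, assuming the first Caddyfile above is in use:

# The configured text should come back in the response body
curl -s http://127.0.0.1/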

1. Current situation

For various reasons, one master node in a RocketMQ cluster could no longer serve traffic properly, and its traffic had to be migrated off in a way that guarantees no message loss. The broker is configured with:

    • asynchronous disk flush
    • asynchronous master-slave replication

Taking the master offline directly is not viable: with asynchronous master-slave replication, some messages may not yet have been copied to the slave and would be lost. The alternative: revoke the broker's write permission, wait until it no longer sees any writes or consumption, then take the node offline.

2. Revoke the broker's write permission

brokerPermission values: 2 = write-only, 4 = read-only, 6 = read-write.

./mqadmin updateBrokerConfig -b broker:port -n nameserver:port -k brokerPermission -v 4

3. Watch the node's traffic

./mqadmin clusterList -n nameserver:port

Watch InTPS and OutTPS. Ideally both drop to zero and stop changing, at which point the node can be taken offline.
In practice they never reached zero: InTPS and OutTPS always showed something, sometimes single digits, sometimes double, mostly hovering around 20. At that point, look into the broker's consumption state.

4. Inspect the broker's consumption state

./mqadmin brokerConsumeStats -b broker:port -n nameserver:port >> brokerConsumeStats.tmp

In brokerConsumeStats.tmp, look mainly at #LastTime and #Diff. Only the %RETRY% retry queues showed tiny #Diff values (1 or 3); every other topic was at 0, and the newest LastTime entries were all in %RETRY% queues as well. At this point the node can be taken offline.
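
A hedged way to surface the residual retry-queue rows in the dump (exact column layout varies by mqadmin version):

# Show only the retry-queue rows, where the leftover diffs were observed
grep '%RETRY%' brokerConsumeStats.tmp | head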

5. Restore the broker's read-write permission

./mqadmin updateBrokerConfig -b broker:port -n nameserver:port -k brokerPermission -v 6

Check that traffic on each node is back to normal:

./mqadmin clusterList -n nameserver:port

The EPEL repo is required (for the CentOS/RHEL installation below).

Install the QEMU binaries:

# On Ubuntu/Debian
sudo apt-get update
sudo apt-get install qemu binfmt-support qemu-user-static

# On CentOS/RHEL
sudo yum install qemu binfmt-support qemu-user-static

Register the QEMU binaries with Docker:

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

In the Dockerfile, specify the base image architecture with FROM:

FROM --platform=linux/arm64 arm64v8/ubuntu:latest  # plain ubuntu:latest also works
# or
FROM --platform=linux/arm/v7 arm32v7/ubuntu:latest

The platform is resolved automatically from --platform=.
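
A minimal end-to-end sketch (the image name demo:arm64 is made up):

# Build an arm64 image on an x86_64 host and confirm the emulated architecture
docker build --platform linux/arm64 -t demo:arm64 .
docker run --rm demo:arm64 uname -m   # expect: aarch64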

Initialize the environment

# Set the hostname on each node
hostnamectl set-hostname master
hostnamectl set-hostname node01
# hosts entries
cat >> /etc/hosts << EOF
172.12.1.105 master
172.12.1.106 node01
EOF
# Disable SELinux, the firewall, and swap
sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

systemctl stop firewalld.service ; systemctl disable firewalld

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a


# apt sources (Aliyun mirror)
cat >> /etc/apt/sources.list << EOF
# aliyun
deb http://mirrors.aliyun.com/ubuntu/ noble main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ noble main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ noble-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ noble-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ noble-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ noble-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ noble-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ noble-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ noble-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ noble-backports main restricted universe multiverse
EOF

# Kubernetes apt repo
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add
echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

# Time sync
timedatectl set-timezone Asia/Shanghai
apt install ntpdate -y
ntpdate ntp1.aliyun.com


# Kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

# Reload so the settings take effect
sysctl -p /etc/sysctl.d/k8s.conf

sysctl --system

modprobe br_netfilter

# The steps above apply the kernel parameters from k8s.conf and let iptables see bridged traffic:
# net.bridge.bridge-nf-call-iptables = 1 passes bridged IPv4 traffic through iptables.
# net.bridge.bridge-nf-call-ip6tables = 1 does the same for IPv6.
# net.ipv4.ip_forward = 1 enables IP forwarding.


# Install ipvsadm
apt-get install -y ipvsadm ipset
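
A quick check that the module and key sysctls actually took effect:

# br_netfilter must be loaded for the bridge sysctls to exist
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward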

Install and configure containerd or docker (one of the two is enough)

containerd
apt install -y containerd
# Create the containerd config directory
mkdir /etc/containerd -p
# Generate the default config
containerd config default > /etc/containerd/config.toml
# Edit the config: around line 61 the sandbox image points at an unreachable
# registry, e.g. sandbox_image = "k8s.gcr.io/pause:3.8"; switch it to a domestic
# mirror and enable the systemd cgroup driver:
sed -i 's/registry.k8s.io/registry.aliyuncs.com\/google_containers/' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Copy the finished config to the node machines as well
# Restart the service
systemctl daemon-reload
systemctl enable containerd; systemctl start containerd
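
A quick sanity check on the edited config:

# Both sed edits should be visible in the final config
grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml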

docker

yum install container-selinux -y
yum install -y yum-utils device-mapper-persistent-data lvm2
yum install -y docker-ce containerd.io
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://reg-mirror.qiniu.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF

# Use the systemd cgroup driver: compared with cgroupfs, systemd's handling of CPU,
# memory and other resource limits is simpler and more mature.
# Container logs use the json-file driver, capped at 500 MB per file with 3 files kept,
# which makes them easy for ELK-style log pipelines to collect.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service

docker info | grep "Cgroup Driver"
cd /opt
# Download the latest cri-dockerd from: https://github.com/Mirantis/cri-dockerd/tags
rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm

vim /lib/systemd/system/cri-docker.service
# Change the ExecStart line to:
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8

systemctl daemon-reload
systemctl enable --now cri-docker

Install kubeadm, kubelet, and kubectl

# Install a specific version
echo "---------------- available kubelet versions ------------------------"
apt list kubelet -a
echo "---------------- installing kubeadm/kubelet/kubectl (master node) --------------"
# Omit the version to get the latest
k8s_version=1.28.0-00
sudo apt install -y kubeadm=${k8s_version} kubelet=${k8s_version} kubectl=${k8s_version}


# List the images kubeadm init needs
kubeadm config images list --kubernetes-version 1.28.0

# Generate a starter kubeadm config file
kubeadm config print init-defaults > kubeadm-config.yaml

# Edited kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.12.1.105
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master # must match the node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # switch to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

# Initialize the master
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
mkdir -p $HOME/.kube
sudo cp -fi /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> /etc/profile
source /etc/profile
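
After init, the control plane should answer; the node will report NotReady until the CNI is installed in the next step:

# Expect the master node in NotReady state at this point
kubectl get nodes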

Install the network plugin (Calico)

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
# Edit custom-resources.yaml: change cidr: 192.168.0.0/16 to 10.244.0.0/16,
# matching the podSubnet used at init time
vi custom-resources.yaml
kubectl create -f custom-resources.yaml
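
Watch the operator roll Calico out (the tigera operator creates the calico-system namespace):

# All pods should eventually be Running; the node then flips to Ready
kubectl get pods -n calico-system -w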

Join the nodes

# On each worker: run the same environment init, install a container runtime,
# and install kubeadm/kubelet/kubectl as above, then restart the services
systemctl daemon-reload
systemctl restart containerd
kubeadm join 172.12.1.105:6443 --token xxx
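
If the original join token has expired or was lost, kubeadm can mint a fresh join command on the master:

# Prints a ready-to-run kubeadm join command with a new token
kubeadm token create --print-join-command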

Upgrade

master

# Install the new versions
k8s_version=1.28.1-00
sudo apt install -y kubeadm=${k8s_version} kubelet=${k8s_version} kubectl=${k8s_version}
# Mark the node unschedulable first
kubectl cordon master
# Then drain it
kubectl drain master --ignore-daemonsets
# Check the upgrade plan
kubeadm upgrade plan
kubeadm upgrade apply v1.28.1 --etcd-upgrade
systemctl restart kubelet
# Restore scheduling
kubectl uncordon master

node

k8s_version=1.28.1-00
sudo apt install -y kubelet=${k8s_version}
systemctl restart kubelet.service
kubelet --version
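
Confirm the versions from the control plane:

# The VERSION column should show v1.28.1 on upgraded nodes
kubectl get nodes -o wide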

PS:

https://blog.csdn.net/stanluoyuxin/article/details/137070061

https://blog.csdn.net/lhq1363511234/article/details/132379069
