# The HA of Redis Sentinel
Now you have a Sentinel cluster: 1 master + 6 slaves + 7 sentinels.
To simulate a disaster in which all the servers of the third district go down, we stop 1 master, 3 slaves, and 4 sentinels.
In this case we have to fix things manually. The following covers how to recover Redis step by step.
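Before breaking anything, it helps to confirm what is running. A quick check (assuming the container naming used in this article, i.e. redis_master, redis_slave*, redis_sentinel*):

```bash
# List the redis containers together with their status and ports.
docker ps --filter "name=redis" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```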
1. Stop the master, 3 slaves, and 4 sentinels:
docker stop redis_master redis_slave6 redis_slave5 redis_slave4 redis_sentinel7 redis_sentinel6 redis_sentinel5 redis_sentinel4
After this, only 3 slaves and 3 sentinels are left.
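Each surviving sentinel should still answer a PING (shown here for redis_sentinel, using the password from this article):

```bash
# A living sentinel replies with PONG.
docker exec redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d ping
```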
Connect to one of the surviving sentinels and run info:
docker exec -it redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d
info
The output shows the state of the Sentinel cluster:
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=default,status=odown,address=192.168.11.141:6379,slaves=6,sentinels=7
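The same information can also be pulled non-interactively, which is handy for scripting (same container, port, and password as above):

```bash
# "sentinel masters" lists every monitored master with its flags (o_down here),
# its known slaves, and its known sentinels; "info sentinel" prints the summary above.
docker exec redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d sentinel masters
docker exec redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d info sentinel
```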
The 3 surviving sentinels cannot elect a failover leader on their own: leader election requires votes from a majority of all known sentinels, which is 4 out of 7, and only 3 are alive. So we edit the redis.conf of each surviving sentinel and comment out 2 of the 4 down sentinels, leaving 5 known sentinels in total; 3 out of 5 is a majority, so the survivors can vote in a new master. (A scripted version of this edit is sketched after the sample config below.)
vim redis_sentinel/redis.conf
vim redis_sentinel2/redis.conf
vim redis_sentinel3/redis.conf
The redis.conf of a sentinel may look like this:
port 26379
sentinel myid 5b488b5fab7db426b3d58604704c05df5e2e38ab
sentinel deny-scripts-reconfig yes
sentinel monitor default 192.168.11.141 16382 2
sentinel down-after-milliseconds default 60000
sentinel auth-pass default ee06167b10a177f60766d35baa81955d
bind 192.168.11.141 127.0.0.1
# Generated by CONFIG REWRITE
dir "/data"
sentinel config-epoch default 2
sentinel leader-epoch default 2
sentinel known-replica default 192.168.11.141 16385
sentinel known-replica default 192.168.11.141 16384
sentinel known-replica default 192.168.11.141 16383
sentinel known-replica default 192.168.11.141 6379
sentinel known-replica default 192.168.11.141 16386
sentinel known-replica default 192.168.11.141 16379
sentinel known-sentinel default 192.168.11.141 26383 4af30eead07c1baab7676f6099a565ca01404546
sentinel known-sentinel default 192.168.11.141 26385 55c654fce27a2282f683f9e413f72f12bc34b101
#sentinel known-sentinel default 192.168.11.141 26386 8061c64d8b8b66d77f2054cf7f1b7cf50104b60b
#sentinel known-sentinel default 192.168.11.141 26387 9868b0c5dd86f06baeda60f61dea4911cc9099c9
sentinel known-sentinel default 192.168.11.141 26384 cc0aee23f7544e645a6aa6a9b3413be9368726f8
sentinel known-sentinel default 192.168.11.141 26382 ffab2ba8161fe5066d6b3c0355a47a313467edd7
sentinel current-epoch 2
Your own redis.conf will be slightly different (different myid, ports, and epochs), but the idea is the same: comment out two of the down sentinels so that only 5 remain known.
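If you prefer not to edit the three files by hand, the same change can be scripted. This is only a sketch: the file paths, IP address, and the two dead sentinel ports (26386 and 26387) are the ones from this example and will differ in your setup.

```bash
# Comment out the two dead sentinels in each surviving sentinel's config,
# then count the known-sentinel lines that are still uncommented.
# The expected count is 4: a sentinel does not list itself,
# so 4 known remotes + itself = 5 sentinels in total.
for conf in redis_sentinel/redis.conf redis_sentinel2/redis.conf redis_sentinel3/redis.conf; do
  sed -i -E 's/^(sentinel known-sentinel default 192\.168\.11\.141 2638[67] .*)$/#\1/' "$conf"
  grep -c '^sentinel known-sentinel' "$conf"
done
```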
After saving the redis.conf of the 3 surviving sentinels, restart them:
docker restart redis_sentinel redis_sentinel2 redis_sentinel3
Now we can follow the sentinel log:
docker logs -f redis_sentinel
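In the log you are looking for the failover events. Something like the following filters them out (these event names are the standard Sentinel log markers):

```bash
# +odown                           : the master is confirmed objectively down
# +vote-for-leader/+elected-leader : the surviving sentinels reach a majority
# +switch-master                   : a slave is promoted to be the new master
docker logs redis_sentinel 2>&1 | grep -E '\+(odown|vote-for-leader|elected-leader|failover-state|switch-master)'
```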
or check the sentinel info again:
docker exec -it redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d
info
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=default,status=ok,address=192.168.11.141:16382,slaves=6,sentinels=5
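A quicker way to ask a sentinel who the current master is (same connection details as above):

```bash
# Prints the address promoted by the failover, here 192.168.11.141 16382.
docker exec redis_sentinel redis-cli -p 26379 -a ee06167b10a177f60766d35baa81955d sentinel get-master-addr-by-name default
```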
We can also connect to the new master (now on port 16382) and check its replication info:
docker exec -it redis_sentinel redis-cli -p 16382 -a ee06167b10a177f60766d35baa81955d
info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.11.141,port=16379,state=online,offset=24089918,lag=1
slave1:ip=192.168.11.141,port=16383,state=online,offset=24090204,lag=1
master_replid:b0b95c63725305795f2f2008968e38b94a7603c5
master_replid2:2754e76811a0bbf3ab3a14707058ac73f8419128
master_repl_offset:24090347
second_repl_offset:23860873
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:23041772
repl_backlog_histlen:1048576
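As a final smoke test, you can write through the new master and read the key back from one of the replicas it reports. This sketch assumes the host IP, ports, and password from this article and that they are reachable from the redis_sentinel container:

```bash
# Write a key to the new master (port 16382) ...
docker exec redis_sentinel redis-cli -h 192.168.11.141 -p 16382 -a ee06167b10a177f60766d35baa81955d set ha:check ok
# ... and read it back from slave0 (port 16379) once replication has caught up.
docker exec redis_sentinel redis-cli -h 192.168.11.141 -p 16379 -a ee06167b10a177f60766d35baa81955d get ha:check
```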