
Docker Networking: Setting Up a Redis Cluster

2023-06-13 09:15:11

Redis Cluster Setup

# Create a dedicated network for Redis; only the Redis containers will run on it
[root]# docker network create redis --subnet 172.38.0.0/16 --driver bridge
84cd07182d37dd7d792cf9b7996e5edc46805de849ceaca6234c4f63d22f5c9d
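If you want to double-check the subnet before going on, `docker network inspect` prints the network's IPAM settings (standard Docker CLI; the `--format` Go template just filters the JSON output):

[root]# docker network inspect redis --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
172.38.0.0/16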

# Generate a configuration file for each of the six Redis nodes
for port in $(seq 1 6); do
  # Create the per-node config directory and file
  mkdir -p /mydata/redis/node-${port}/conf
  touch /mydata/redis/node-${port}/conf/redis.conf

  # Write the node's configuration. Note that redis.conf only accepts
  # full-line comments; a comment after a directive is a syntax error.
  cat << EOF > /mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
# enable cluster mode
cluster-enabled yes
cluster-config-file nodes.conf
# node timeout in milliseconds
cluster-node-timeout 5000
# the IP this node advertises to the rest of the cluster
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
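Before starting any containers, it can help to preview the port and IP scheme that the `${port}` variable produces. This plain-shell loop (no Docker needed) prints the mapping each node will use:

```shell
# Print the host port and cluster-announce IP derived from ${port},
# exactly as the loops in this tutorial expand them
for port in $(seq 1 6); do
  echo "redis-${port}: host port 637${port}, announce IP 172.38.0.1${port}"
done
# → redis-1: host port 6371, announce IP 172.38.0.11
#   ... up to redis-6: host port 6376, announce IP 172.38.0.16
```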

# Start the first Redis container (redis-1)
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

# ... repeat for redis-2 through redis-5, and finally the last node (redis-6)
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf


# Or start all six Redis containers in a single loop
for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
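Once the loop finishes, all six containers should show up as running; `docker ps` with a name filter confirms the names and port mappings set above:

[root]# docker ps --filter "name=redis-" --format "{{.Names}}: {{.Ports}}"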

All six Redis services are now running:

# Open a shell in one of the containers; here we use redis-1
[root]# docker exec -it redis-1 /bin/sh  # the alpine image ships /bin/sh, not /bin/bash

# Once inside, create the cluster
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 50d5736ceac77467a429af63d5341aa978541a9b 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 680dd64e42a7480f1a635191e569b0141f967606 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: d54f24032cfc73d262c199f6b7872d5f14b87dcd 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: a7424534332bf2702a0ff164aa4e375b00c707ff 172.38.0.14:6379
   replicates d54f24032cfc73d262c199f6b7872d5f14b87dcd
S: 92297537e29795e48073261aed424b8cda53a387 172.38.0.15:6379
   replicates 50d5736ceac77467a429af63d5341aa978541a9b
S: f17683eb9d29f2326d0d2b02947f4651fa8ad9d3 172.38.0.16:6379
   replicates 680dd64e42a7480f1a635191e569b0141f967606
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 50d5736ceac77467a429af63d5341aa978541a9b 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 92297537e29795e48073261aed424b8cda53a387 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 50d5736ceac77467a429af63d5341aa978541a9b
S: f17683eb9d29f2326d0d2b02947f4651fa8ad9d3 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 680dd64e42a7480f1a635191e569b0141f967606
S: a7424534332bf2702a0ff164aa4e375b00c707ff 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d54f24032cfc73d262c199f6b7872d5f14b87dcd
M: d54f24032cfc73d262c199f6b7872d5f14b87dcd 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 680dd64e42a7480f1a635191e569b0141f967606 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
# At this point the cluster configuration is complete

Testing the cluster:

# Check the cluster info (-c puts redis-cli in cluster mode)
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:441
cluster_stats_messages_pong_sent:448
cluster_stats_messages_sent:889
cluster_stats_messages_ping_received:443
cluster_stats_messages_pong_received:441
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:889
127.0.0.1:6379> 

# List the cluster nodes
127.0.0.1:6379> cluster nodes
92297537e29795e48073261aed424b8cda53a387 172.38.0.15:6379@16379 slave 50d5736ceac77467a429af63d5341aa978541a9b 0 1625802216341 5 connected
f17683eb9d29f2326d0d2b02947f4651fa8ad9d3 172.38.0.16:6379@16379 slave 680dd64e42a7480f1a635191e569b0141f967606 0 1625802217342 6 connected
a7424534332bf2702a0ff164aa4e375b00c707ff 172.38.0.14:6379@16379 slave d54f24032cfc73d262c199f6b7872d5f14b87dcd 0 1625802216000 4 connected
d54f24032cfc73d262c199f6b7872d5f14b87dcd 172.38.0.13:6379@16379 master - 0 1625802217844 3 connected 10923-16383
680dd64e42a7480f1a635191e569b0141f967606 172.38.0.12:6379@16379 master - 0 1625802216541 2 connected 5461-10922
50d5736ceac77467a429af63d5341aa978541a9b 172.38.0.11:6379@16379 myself,master - 0 1625802216000 1 connected 0-5460
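A common way to exercise the cluster is to write a key and watch redis-cli follow the slot redirection, then stop the master that owns the key and confirm its replica takes over. The sketch below assumes the node layout shown above (redis-3 at 172.38.0.13 is a master, with 172.38.0.14 as its replica); your slot assignments and pairings may differ:

# Inside redis-1, write and read a key in cluster mode;
# -c makes redis-cli follow MOVED redirections automatically
/data # redis-cli -c
127.0.0.1:6379> set a b
127.0.0.1:6379> get a

# From the host, simulate a master failure
[root]# docker stop redis-3

# Back inside the cluster, the key is still served: after the
# node timeout elapses, the replica is promoted to master
127.0.0.1:6379> get a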

The Redis cluster on Docker is up and running! If a sapling refuses pruning for fear of pain, it will never grow into timber.