Redis learning (9): Redis cluster expansion and shrinking

1. Preparation
Recap: in the previous post, node 6381 was marked as failed and 6386 took over as master.
Restore: stop and then start redis-node-6 so that 6381 is promoted back to master and 6386 rejoins as its slave:
docker stop redis-node-6
docker start redis-node-6
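After stopping redis-node-6, the failover back to 6381 takes a moment (governed by cluster-node-timeout). To confirm the roles before moving on, the node table can be inspected from any cluster container; redis-node-1 is assumed here to be the container hosting 6381:

docker exec -it redis-node-1 redis-cli -p 6381 cluster nodes

6381 should now carry the master flag and 6386 the slave flag.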

2. Check the cluster information
Log into the container:
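A possible way to get a shell in one of the cluster containers (the container name redis-node-1 is an assumption, following the redis-node-N naming used in this series):

docker exec -it redis-node-1 /bin/bash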
redis-cli --cluster check 192.168.1.138:6381

3. Add two Redis nodes
mkdir -p /data/redis/share/redis-node-7
mkdir -p /data/redis/share/redis-node-8
docker create --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:5.0.7 --cluster-enabled yes --appendonly yes --port 6387
docker create --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:5.0.7 --cluster-enabled yes --appendonly yes --port 6388
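As a side note, the create and start steps can also be combined into a single docker run -d; for example, node 7 could equally be brought up with (same arguments as above, just started immediately):

docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:5.0.7 --cluster-enabled yes --appendonly yes --port 6387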

4. Start the containers
docker start redis-node-7
docker start redis-node-8
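To make sure both new containers are actually running before they are added to the cluster, a quick check such as the following should do; redis-node-7 and redis-node-8 should both show an Up status:

docker ps | grep redis-node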

5. Add master nodes
Log into the container.
Add the new node to the existing cluster:
redis-cli --cluster add-node 192.168.1.138:6387 192.168.1.138:6381
Parameter notes:
The first address is the node being added.
The second address is any node already in the cluster.

>>> Adding node 192.168.1.138:6387 to cluster 192.168.1.138:6381   # the new node is added to the cluster
>>> Performing Cluster Check (using node 192.168.1.138:6381)
M: c790b5f9584777ba8d9f96be5fe42b2efe646635 192.168.1.138:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b264b25c716863096d0c2fe39483130cbfeba894 192.168.1.138:6385
   slots: (0 slots) slave
   replicates f9200d36046d8325817c8014155e8f9950760da4
S: 33add5fdf6d9cc3eb8656c865403a34b237b4b73 192.168.1.138:6384
   slots: (0 slots) slave
   replicates 691e5855e8f274a6ea61dc95dbed89da87ed6053
S: d7bbc7bb0b501b421d527f6c302a8abe00aa535b 192.168.1.138:6386
   slots: (0 slots) slave
   replicates c790b5f9584777ba8d9f96be5fe42b2efe646635
M: f9200d36046d8325817c8014155e8f9950760da4 192.168.1.138:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 691e5855e8f274a6ea61dc95dbed89da87ed6053 192.168.1.138:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.138:6387 to make it join the cluster.   # CLUSTER MEET is sent so that the node joins the cluster
[OK] New node added correctly.   # the new node was added successfully

redis-cli --cluster add-node 192.168.1.138:6388 192.168.1.138:6381

>>> Adding node 192.168.1.138:6388 to cluster 192.168.1.138:6381
>>> Performing Cluster Check (using node 192.168.1.138:6381)
M: c790b5f9584777ba8d9f96be5fe42b2efe646635 192.168.1.138:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b264b25c716863096d0c2fe39483130cbfeba894 192.168.1.138:6385
   slots: (0 slots) slave
   replicates f9200d36046d8325817c8014155e8f9950760da4
M: 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61 192.168.1.138:6387
   slots: (0 slots) master
S: 33add5fdf6d9cc3eb8656c865403a34b237b4b73 192.168.1.138:6384
   slots: (0 slots) slave
   replicates 691e5855e8f274a6ea61dc95dbed89da87ed6053
S: d7bbc7bb0b501b421d527f6c302a8abe00aa535b 192.168.1.138:6386
   slots: (0 slots) slave
   replicates c790b5f9584777ba8d9f96be5fe42b2efe646635
M: f9200d36046d8325817c8014155e8f9950760da4 192.168.1.138:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 691e5855e8f274a6ea61dc95dbed89da87ed6053 192.168.1.138:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.138:6388 to make it join the cluster.
[OK] New node added correctly.

6. Check the node information again
redis-cli --cluster check 192.168.1.138:6381

192.168.1.138:6381 (c790b5f9...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.138:6387 (448feaa7...) -> 0 keys | 0 slots | 0 slaves.   # already joined the cluster, but owns no slots
192.168.1.138:6383 (f9200d36...) -> 1 keys | 5461 slots | 1 slaves.
192.168.1.138:6388 (863bb164...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.138:6382 (691e5855...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 2 keys in 5 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.138:6381)
M: c790b5f9584777ba8d9f96be5fe42b2efe646635 192.168.1.138:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b264b25c716863096d0c2fe39483130cbfeba894 192.168.1.138:6385
   slots: (0 slots) slave
   replicates f9200d36046d8325817c8014155e8f9950760da4
M: 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61 192.168.1.138:6387
   slots: (0 slots) master
S: 33add5fdf6d9cc3eb8656c865403a34b237b4b73 192.168.1.138:6384
   slots: (0 slots) slave
   replicates 691e5855e8f274a6ea61dc95dbed89da87ed6053
S: d7bbc7bb0b501b421d527f6c302a8abe00aa535b 192.168.1.138:6386
   slots: (0 slots) slave
   replicates c790b5f9584777ba8d9f96be5fe42b2efe646635
M: f9200d36046d8325817c8014155e8f9950760da4 192.168.1.138:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 863bb16418e5460979cff11bb173950747e0e076 192.168.1.138:6388
   slots: (0 slots) master
M: 691e5855e8f274a6ea61dc95dbed89da87ed6053 192.168.1.138:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6387 and 6388 have joined the cluster. As the check output above shows, the nodes were merely added to the cluster; no slots were assigned to them, so they are not yet taking on any share of the cluster's work.
Newly added nodes join as masters by default.
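For 6387 to actually share the load, some hash slots would have to be moved onto it with a reshard. A minimal sketch, using the node ID of 6387 from the check output above and an arbitrary slot count of 4096:

redis-cli --cluster reshard 192.168.1.138:6381 --cluster-from all --cluster-to 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61 --cluster-slots 4096 --cluster-yes

Alternatively, redis-cli --cluster rebalance 192.168.1.138:6381 --cluster-use-empty-masters spreads slots evenly across all masters, including empty ones. Resharding is skipped in this post, which is why the new nodes can be removed again so easily in the next step.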

7. Delete nodes
redis-cli --cluster del-node 192.168.1.138:6387 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61

>>> Removing node 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61 from cluster 192.168.1.138:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

redis-cli --cluster del-node 192.168.1.138:6388 863bb16418e5460979cff11bb173950747e0e076

>>> Removing node 863bb16418e5460979cff11bb173950747e0e076 from cluster 192.168.1.138:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
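Note that del-node only works this smoothly because 6387 and 6388 own no slots. If a master still held slots, they would first have to be moved back to the remaining masters; a hypothetical sketch, assuming 6387 had previously received 4096 slots via the reshard shown above:

redis-cli --cluster reshard 192.168.1.138:6381 --cluster-from 448feaa7255da9d32ff1dd4b599f2cc6ff6f1b61 --cluster-to c790b5f9584777ba8d9f96be5fe42b2efe646635 --cluster-slots 4096 --cluster-yes

Only once the node is empty will del-node remove it from the cluster.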

8. Summary
In this post two nodes were added: 6387 and 6388 joined the cluster, but no slots were assigned to them.

9. How to add a slave node to the cluster
1) Add the master node
redis-cli --cluster add-node 192.168.1.138:6387 192.168.1.138:6381
If the following error is reported:
[ERR] Node 192.168.1.138:6387 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
Fix:
Delete the contents under /data/redis/share/redis-node-7 and redis-node-8, restart the Redis containers so that a fresh default configuration is generated, and then add the nodes to the cluster again, as sketched below.
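A minimal sketch of that cleanup, using the container and directory names from earlier in this post:

docker stop redis-node-7 redis-node-8
rm -rf /data/redis/share/redis-node-7/* /data/redis/share/redis-node-8/*
docker start redis-node-7 redis-node-8
redis-cli --cluster add-node 192.168.1.138:6387 192.168.1.138:6381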
2) Add 6388 to the cluster as a slave of 6387
redis-cli --cluster add-node 192.168.1.138:6388 192.168.1.138:6381 --cluster-slave --cluster-master-id ca77bc2a223c64a7af727b24172ae8f43d3e9b5a
Notes:
--cluster-slave: the node is added as a slave.
--cluster-master-id: the node ID of the master that the new slave will replicate.
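The result can be verified with the check command used earlier; the 6387 line should now report 1 slaves:

redis-cli --cluster check 192.168.1.138:6381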