What is a Redis cluster
Redis Cluster is the distributed database solution provided by Redis. The cluster shares data across nodes through sharding, and it provides replication and failover.
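Sharding works by hashing every key to one of 16384 hash slots (CRC16 of the key, modulo 16384) and assigning slot ranges to the master nodes. Once a cluster-enabled node is running, you can ask it which slot a key maps to; a small illustration, assuming a node is reachable on port 6379 (the key name foo is arbitrary):

redis-cli -p 6379 cluster keyslot foo
# (integer) 12182   <- the slot for "foo" under the standard CRC16 mapping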
Nodes
A Redis cluster is normally made up of multiple nodes. At the start, every node is independent and sits in a cluster that contains only itself. To build a cluster that can actually work, the independent nodes must be connected together into a single cluster containing all of them.
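Nodes are joined with the CLUSTER MEET command (the redis-cli --cluster create helper used later sends it for you). A minimal sketch, assuming two cluster-enabled nodes running locally on ports 6379 and 6380:

# Ask the node on 6379 to handshake with the node on 6380
redis-cli -p 6379 cluster meet 127.0.0.1 6380
# Both nodes should now appear in each other's view of the cluster
redis-cli -p 6379 cluster nodes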
Cluster configuration
Configuration file
Download the configuration file: https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf
Adjust the CLUSTER node settings
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file
cluster-config-file nodes-6379.conf
# Cluster node timeout (milliseconds)
cluster-node-timeout 15000
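After restarting a node with this file, you can confirm that cluster mode is actually on. A quick check, assuming the node listens on the default port 6379:

redis-cli -p 6379 info cluster
# # Cluster
# cluster_enabled:1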
Quickly building a Redis cluster with Docker
Installing Redis
Reference article: https://www.jb51.net/article/150054.htm
Preparation
├── conf
│   ├── redis.conf
│   └── sentinel.conf
├── redis
│   ├── data_6379
│   ├── data_6380
│   ├── data_6381
│   ├── data_6382
│   ├── data_6383
│   └── data_6384
└── scripts
    ├── cluster.sh
    ├── run.sh
    └── sentinel.sh
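Only conf and scripts need to exist up front; the data_* directories are created on demand by run.sh. A rough preparation sketch, assuming the project root shown above:

mkdir -p conf scripts
# Download the base configuration, then apply the CLUSTER settings from the previous section
curl -fsSL -o conf/redis.conf https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf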
The run.sh script
#!/usr/bin/env bash
set -e

# Directory containing this script
cPath=$(cd $(dirname "$0") || exit; pwd)
# Project root directory
dirPath=$(dirname "$cPath")

# Port to expose (defaults to 6379)
port="$1"
if [[ ! "$port" ]]; then
    port=6379
fi

# Create the data directory
mkdir -p "$dirPath"/redis/data_"$port"

# Remove a previously started container for this port, if any
containerId=$(docker ps -a | grep "redis_$port" | awk -F' ' '{print $1}')
if [[ "$containerId" ]]; then
    docker rm -f ${containerId} > /dev/null
fi

# Start the container
containerName=redis_"$port"
docker run -itd --privileged=true -p "$port":6379 --name ${containerName} \
    -v "$dirPath"/conf/redis.conf:/etc/redis/redis.conf \
    -v "$dirPath"/redis/data_"$port":/data \
    redis redis-server /etc/redis/redis.conf > /dev/null

# Get the container IP address
dockerIp=$(docker inspect -f "{{.NetworkSettings.IPAddress}}" "$containerName")
# Get the container running state
isRunning=$(docker inspect -f "{{.State.Running}}" "$containerName")
if [[ "$isRunning" == "true" ]]; then
    echo "Container: $containerName - IP: $dockerIp - started successfully"
fi
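run.sh can also be called on its own to (re)start a single node on a given host port; for example, to bring up an extra node later (the port 6385 here is just an example):

sh scripts/run.sh 6385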
The cluster.sh script
#!/usr/bin/env bash
set -e

# Directory containing this script
cPath=$(cd $(dirname "$0") || exit; pwd)

# Number of nodes to start (defaults to 6)
num="$1"
if [[ ! "$num" ]]; then
    num=6
fi

sPort=6378
for ((i=1; i<=$num; i++)); do
    sh ${cPath}/run.sh $(($sPort+$i))
done
Starting the services
Run the script file; by default it creates 6 nodes:
sh scripts/cluster.sh
Script output:
Container: redis_6379 - IP: 172.17.0.2 - started successfully
Container: redis_6380 - IP: 172.17.0.3 - started successfully
Container: redis_6381 - IP: 172.17.0.4 - started successfully
Container: redis_6382 - IP: 172.17.0.5 - started successfully
Container: redis_6383 - IP: 172.17.0.6 - started successfully
Container: redis_6384 - IP: 172.17.0.7 - started successfully
Run docker ps to confirm that the containers started successfully:
root@DESKTOP-Q13EI52:~/docker-config/redis# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
c0601df1a456   redis   "docker-entrypoint.s…"   27 seconds ago   Up 26 seconds   0.0.0.0:6384->6379/tcp   redis_6384
6fecf70465b8   redis   "docker-entrypoint.s…"   27 seconds ago   Up 26 seconds   0.0.0.0:6383->6379/tcp   redis_6383
1af15e90b7a0   redis   "docker-entrypoint.s…"   28 seconds ago   Up 27 seconds   0.0.0.0:6382->6379/tcp   redis_6382
6c495f31a5df   redis   "docker-entrypoint.s…"   28 seconds ago   Up 28 seconds   0.0.0.0:6381->6379/tcp   redis_6381
e54fd9fd0550   redis   "docker-entrypoint.s…"   29 seconds ago   Up 28 seconds   0.0.0.0:6380->6379/tcp   redis_6380
be92ad2f7046   redis   "docker-entrypoint.s…"   29 seconds ago   Up 29 seconds   0.0.0.0:6379->6379/tcp   redis_6379
At this point the 6 independent cluster nodes have been created, but they cannot yet work together as a cluster.
Creating the cluster
This step can be skipped; I only did it to save myself some typing.
Get the IP addresses of all containers whose names start with redis_:
docker inspect -f "{{.NetworkSettings.IPAddress}}:6379" `docker ps | grep redis_ | awk -F' ' '{print $1}'` | sort | xargs | sed 's/ /, /g'
# Output
# 172.17.0.2:6379, 172.17.0.3:6379, 172.17.0.4:6379, 172.17.0.5:6379, 172.17.0.6:6379, 172.17.0.7:6379
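If you would rather not copy and paste at all, the same address list can be fed straight into the create command. A sketch, assuming redis-cli on the host can reach the container IPs directly (as on a Linux Docker host):

nodes=$(docker inspect -f "{{.NetworkSettings.IPAddress}}:6379" $(docker ps -q --filter "name=redis_") | sort | xargs)
redis-cli --cluster create $nodes --cluster-replicas 1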
When creating the cluster for the first time, run:
./redis-cli --cluster create 172.17.0.2:6379, 172.17.0.3:6379, 172.17.0.4:6379, 172.17.0.5:6379, 172.17.0.6:6379, 172.17.0.7:6379 --cluster-replicas 1
Output:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.6:6379 to 172.17.0.2:6379
Adding replica 172.17.0.7:6379 to 172.17.0.3:6379
Adding replica 172.17.0.5:6379 to 172.17.0.4:6379
M: e8da1fef656984de3ec2a677edc8d9c48d01cd95 172.17.0.2:6379
   slots:[0-5460] (5461 slots) master
M: 68b925ab0fbbc1a632c1754587fb6dad3fa14c91 172.17.0.3:6379
   slots:[5461-10922] (5462 slots) master
M: 0a46ab2f6d176738b55fe699c2df1c34f8200d06 172.17.0.4:6379
   slots:[10923-16383] (5461 slots) master
S: bd3064ad5297dfc258e9236943455c589be8b2a3 172.17.0.5:6379
   replicates 0a46ab2f6d176738b55fe699c2df1c34f8200d06
S: f1d8c897882d29e6538b1158525493b3b782289a 172.17.0.6:6379
   replicates e8da1fef656984de3ec2a677edc8d9c48d01cd95
S: 619e1cb52f39e07b321719b77fc3631fa6293cef 172.17.0.7:6379
   replicates 68b925ab0fbbc1a632c1754587fb6dad3fa14c91
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 172.17.0.2:6379)
M: e8da1fef656984de3ec2a677edc8d9c48d01cd95 172.17.0.2:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: f1d8c897882d29e6538b1158525493b3b782289a 172.17.0.6:6379
   slots: (0 slots) slave
   replicates e8da1fef656984de3ec2a677edc8d9c48d01cd95
S: bd3064ad5297dfc258e9236943455c589be8b2a3 172.17.0.5:6379
   slots: (0 slots) slave
   replicates 0a46ab2f6d176738b55fe699c2df1c34f8200d06
M: 0a46ab2f6d176738b55fe699c2df1c34f8200d06 172.17.0.4:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 619e1cb52f39e07b321719b77fc3631fa6293cef 172.17.0.7:6379
   slots: (0 slots) slave
   replicates 68b925ab0fbbc1a632c1754587fb6dad3fa14c91
M: 68b925ab0fbbc1a632c1754587fb6dad3fa14c91 172.17.0.3:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connecting to the cluster
Connecting via the client
redis-cli -c -p <port>
Run the command:
cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:104
cluster_stats_messages_pong_sent:120
cluster_stats_messages_sent:224
cluster_stats_messages_ping_received:115
cluster_stats_messages_pong_received:104
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:224
When you see:
cluster_state:ok
it means the cluster is working properly.
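Because the client was started with -c, it follows MOVED redirections automatically, so a key whose slot lives on another master can be written and read transparently. A small illustration, assuming the slot layout shown earlier (the key foo hashes to slot 12182, which belongs to 172.17.0.4 in this cluster; your key names and nodes may differ):

127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 172.17.0.4:6379
OK
172.17.0.4:6379> get foo
"bar"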
In the client console, run:
cluster help
127.0.0.1:6379> cluster help
 1) CLUSTER <subcommand> arg arg ... arg. Subcommands are:
 2) ADDSLOTS <slot> [slot ...] -- Assign slots to current node.
 3) BUMPEPOCH -- Advance the cluster config epoch.
 4) COUNT-failure-reports <node-id> -- Return number of failure reports for <node-id>.
 5) COUNTKEYSINSLOT <slot> - Return the number of keys in <slot>.
 6) DELSLOTS <slot> [slot ...] -- Delete slots information from current node.
 7) FAILOVER [force|takeover] -- Promote current replica node to being a master.
 8) FORGET <node-id> -- Remove a node from the cluster.
 9) GETKEYSINSLOT <slot> <count> -- Return key names stored by current node in a slot.
10) FLUSHSLOTS -- Delete current node own slots information.
11) INFO - Return information about the cluster.
12) KEYSLOT <key> -- Return the hash slot for <key>.
13) MEET <ip> <port> [bus-port] -- Connect nodes into a working cluster.
14) MYID -- Return the node id.
15) NODES -- Return cluster configuration seen by node. Output format:
16)     <id> <ip:port> <flags> <master> <pings> <pongs> <epoch> <link> <slot> ... <slot>
17) REPLICATE <node-id> -- Configure current node as replica to <node-id>.
18) RESET [hard|soft] -- Reset current node (default: soft).
19) SET-config-epoch <epoch> - Set config epoch of current node.
20) SETSLOT <slot> (importing|migrating|stable|node <node-id>) -- Set slot state.
21) REPLICAS <node-id> -- Return <node-id> replicas.
22) SAVECONFIG - Force saving cluster configuration on disk.
23) SLOTS -- Return information about slots range mappings. Each range is made of:
24)     start, end, master and replicas IP addresses, ports and ids
To view the cluster-related commands provided by the redis-cli client itself:
redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
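For day-to-day maintenance, check and info are the most useful subcommands; pointed at any reachable node (172.17.0.2:6379 from the example above), they report slot coverage and the master/replica layout:

redis-cli --cluster check 172.17.0.2:6379
redis-cli --cluster info 172.17.0.2:6379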