
Getting Started with Redis (CentOS 7 + Redis 3.2.1)

1. Build and Install

1.1 Download Redis

 

# cd /tmp/
# wget http://download.redis.io/releases/redis-3.2.1.tar.gz
# tar zxvf redis-3.2.1.tar.gz
# cd redis-3.2.1/

 

1.2 Build Redis

 

# make

 

Error:

Files cannot be found (typically jemalloc headers left inconsistent by an earlier, interrupted build).

Solution:

make distclean

Then run make again; the Redis source tarball bundles its own copy of jemalloc.
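If the build still fails, Redis's Makefile also accepts a MALLOC variable, which lets you link against libc's allocator instead of the bundled jemalloc; a hedged sketch:

# make distclean                 <---- clean up artifacts of the interrupted build
# make                           <---- default build, uses the bundled jemalloc
# make MALLOC=libc               <---- fallback: build against libc malloc

(MALLOC=libc trades jemalloc's lower fragmentation for maximum portability.)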

1.3 Run the Tests

 

 

# yum install tcl

 

# make test

Error 1:

 

You need tcl 8.5 or newer in order to run the Redis test

Solution:

# yum install tcl.x86_64

Error 2:

[exception]: Executing test client: NOREPLICAS Not enough good slaves to write..
NOREPLICAS Not enough good slaves to write.

......

Killing still running Redis server 63439
Killing still running Redis server 63486
Killing still running Redis server 63519
Killing still running Redis server 63546
Killing still running Redis server 63574
Killing still running Redis server 63591
I/O error reading reply

......

Solution: lengthen the wait in the test from 1000 ms to 10000 ms:

vim tests/integration/replication-2.tcl

- after 1000

+ after 10000

Error 3:

[err]: Slave should be able to synchronize with the master in tests/integration/replication-psync.tcl
Replication not started.

Solution:

Encountered this once; simply re-running make test fixed it.

1.4 Install Redis

 

# make install
# cp redis.conf /usr/local/etc/
# cp src/redis-trib.rb /usr/local/bin/

 

 

2. Standalone Mode

2.1 Configure Redis

 

# vim /usr/local/etc/redis.conf
daemonize yes
logfile "/var/run/redis/log/redis.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis.rdb
dir /var/run/redis/rdb/

 

2.2 Start Redis

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# /usr/local/bin/redis-server /usr/local/etc/redis.conf
# ps -ef | grep redis
root      71021      1  0 15:46 ?        00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379

 

2.3 Test Redis

 

# /usr/local/bin/redis-cli
127.0.0.1:6379> set country china
OK
127.0.0.1:6379> get country
"china"
127.0.0.1:6379> set country america
OK
127.0.0.1:6379> get country
"america"
127.0.0.1:6379> exists country
(integer) 1
127.0.0.1:6379> del country
(integer) 1
127.0.0.1:6379> exists country
(integer) 0
127.0.0.1:6379> exit

 

2.4 Stop Redis

 

# /usr/local/bin/redis-cli shutdown  

 

3. Master-Slave Mode

3.1 Configure Redis

To test master-slave mode, I need to start two Redis instances on one host (with more machines available, you could of course use several hosts, one Redis instance each). For that, make multiple copies of redis.conf:

 

# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6379.conf
# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6389.conf

 

Configure instance 6379:

 


# vim /usr/local/etc/redis_6379.conf
daemonize yes
port 6379
logfile "/var/run/redis/log/redis_6379.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis_6379.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 1
min-slaves-max-lag 10

 

The last two options mean: within the last 10 seconds, at least 1 slave must have pinged the master (otherwise writes are refused).

 

Configure instance 6389:

 


# vim /usr/local/etc/redis_6389.conf
daemonize yes
port 6389
slaveof 127.0.0.1 6379
logfile "/var/run/redis/log/redis_6389.log"
pidfile /var/run/redis/pid/redis_6389.pid
dbfilename redis_6389.rdb
dir /var/run/redis/rdb/
repl-ping-slave-period 10

 

As you can see, I am going to start two Redis instances: one on port 6379 (the default) and one on port 6389; the former is the master and the latter the slave.

repl-ping-slave-period is how often the slave sends PING to the master, in seconds.

 

In addition, the 6389 configuration file has the following options you may adjust (usually there is no need to):

 

slave-read-only yes 
slave-serve-stale-data yes

 

The first means the slave is read-only.

The second means that while the slave is synchronizing fresh data from the master, it keeps serving clients with its old data set. This keeps the slave non-blocking.

The 6379 configuration file has the following options you may adjust (usually there is no need to):

 

# repl-backlog-size 1mb
repl-diskless-sync no
repl-diskless-sync-delay 5

 

The story behind these options:

1. A slave that stays connected to its master reaches consistency through incremental replication.

2. A slave that disconnects and reconnects may reach consistency through a partial resync (added in Redis 2.8; before that, it had to do a full resync just like a brand-new slave). The mechanism: the master keeps a replication backlog in memory, and on reconnect the slave and master negotiate over the replication offset and the master run id. If the master run id is unchanged (i.e. the master did not restart) and the requested replication offset is still inside the backlog, replication resumes from that offset as a partial resync; if either condition fails, a full resync is required. repl-backlog-size 1mb configures the size of that replication backlog.
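Both sides of this negotiation can be observed on a running instance: run_id is reported in the INFO server section, and master_repl_offset together with the repl_backlog_* fields in INFO replication (field names as in Redis 3.2):

# /usr/local/bin/redis-cli -p 6379 info server | grep run_id
# /usr/local/bin/redis-cli -p 6379 info replication | grep -E 'repl_offset|backlog'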

3. A new slave, or a reconnected slave that cannot do a partial resync, must perform a full resync, i.e. receive an RDB file. The master has two ways to deliver that RDB file:

 

disk-backed: generate the RDB file on disk, then send it to the slave;
diskless: do not touch the disk; generate the RDB data and write it straight to the socket.

 

repl-diskless-sync chooses between the two strategies. With the former, an RDB file generated once on disk can serve several slaves; with the latter, once a transfer has started, newly arriving slaves must queue (waiting for the current transfer to finish). So before starting a diskless transfer, the master may want to wait a moment in the hope that more slaves arrive, so that it can stream the generated data to all of them in parallel. repl-diskless-sync-delay configures that delay, in seconds.

A master with slow disks may want to consider diskless transfer.
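As an illustrative sketch (values are examples, not tuned recommendations), such a master could be configured with:

repl-diskless-sync yes
repl-diskless-sync-delay 10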

3.2 Start the Master

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6379.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
(error) NOREPLICAS Not enough good slaves to write.
127.0.0.1:6379>

 

As expected: since no slave is running, the condition "at least 1 slave pinged the master within 10 seconds" is not met, so the write fails.
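The master's own statistics confirm this: the connected_slaves field of INFO replication is 0 while no slave is running:

# /usr/local/bin/redis-cli -p 6379 info replication | grep connected_slaves
connected_slaves:0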

3.3 Start the Slave

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6389.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
OK

 

3.4 Stop Redis

 

# /usr/local/bin/redis-cli -p 6389 shutdown
# /usr/local/bin/redis-cli -p 6379 shutdown

4. Cluster + Master-Slave Mode

What a cluster gives you: data is automatically distributed across nodes, and the cluster keeps serving when a subset of nodes fails.

Data distribution: Redis does not use consistent hashing, but another form of sharding: there are 16384 hash slots in total, distributed across the nodes (for example, node A holds 0-4999, node B holds 5000-9999, node C holds 10000-16383). Every key maps to a hash slot. For instance, if key_foo maps to slot 1000 and slot 1000 lives on node A, then key_foo (and its value) is stored on node A. You can also force different keys onto the same slot by wrapping part of the key in {}: only the part inside {} is considered when mapping the key to a slot. For example, for this{foo}key and another{foo}key only "foo" is hashed, so they are guaranteed to map to the same slot.
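Once the cluster is up (sections 4.2-4.3 below), this mapping can be checked directly with the CLUSTER KEYSLOT command; the two keys below should return the same slot number, since only "foo" is hashed:

# /usr/local/bin/redis-cli -p 7000 cluster keyslot 'this{foo}key'
# /usr/local/bin/redis-cli -p 7000 cluster keyslot 'another{foo}key'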
TCP ports: each node uses two ports: the client-serving port and the cluster bus port. The cluster bus port = client port + 10000. In the configuration you only specify the client port; Redis derives the bus port by this rule. The cluster bus port is used for failure detection, configuration updates, fail over authorization, and so on; the client port, besides serving clients, is also used for migrating data between nodes. The client port must be reachable by clients and by all other nodes; the cluster bus port must be reachable by all other nodes.
Consistency guarantees: a Redis cluster cannot guarantee strong consistency. That is, under certain conditions Redis may lose writes it has already acknowledged to the client. Loss through asynchronous replication: 1. a client writes to master B; 2. master B replies OK to the client; 3. master B replicates the write to its slaves. Because B does not wait for slave acknowledgement before replying OK, the write is lost if B crashes after step 2. Loss through a network split: suppose there are three masters A, B, C with slaves A1, B1, C1, and a split puts B on the same side as a client. Within cluster-node-timeout, the client can keep writing to B; once cluster-node-timeout is exceeded, the other side of the split fails over and B1 is elected master, so everything the client wrote to B is lost. Redis itself (non-cluster mode) can lose data too: 1. RDB snapshots are periodic, so data written within a snapshot interval can be lost; 2. with AOF, every write is logged, but the log is synced periodically, so data within a sync interval can also be lost.
In what follows, "node" and "instance" are used interchangeably (I am testing the cluster on a single host, so each node is represented by one instance).

I will experiment with Redis cluster mode on a single machine. For that I need to create 6 Redis instances: 3 masters and 3 slaves, on ports 7000-7005.

4.1 Configure the Redis Cluster

 


# cp /usr/local/etc/redis.conf /usr/local/etc/redis_7000.conf
# vim /usr/local/etc/redis_7000.conf
daemonize yes
port 7000
pidfile /var/run/redis/pid/redis_7000.pid
logfile "/var/run/redis/log/redis_7000.log"
dbfilename redis_7000.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 0
cluster-enabled yes
cluster-config-file /var/run/redis/nodes/nodes-7000.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 10
repl-ping-slave-period 10

Here I changed min-slaves-to-write to 0, so that after the fail over demonstrated later the cluster can still serve reads and writes (otherwise, once a master crashes and its slave takes over as master, the new master has no slave of its own and would refuse writes).

 

 

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7001.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7002.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7003.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7004.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7005.conf
# sed -i -e 's/7000/7001/' /usr/local/etc/redis_7001.conf
# sed -i -e 's/7000/7002/' /usr/local/etc/redis_7002.conf
# sed -i -e 's/7000/7003/' /usr/local/etc/redis_7003.conf
# sed -i -e 's/7000/7004/' /usr/local/etc/redis_7004.conf
# sed -i -e 's/7000/7005/' /usr/local/etc/redis_7005.conf
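Equivalently, the five cp/sed pairs above can be generated with one small loop:

# for port in 7001 7002 7003 7004 7005; do
>     cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_${port}.conf
>     sed -i -e "s/7000/${port}/" /usr/local/etc/redis_${port}.conf
> done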

 

4.2 Start the Redis Instances

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# mkdir -p /var/run/redis/nodes


 

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7002.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf


 

Now each instance's log contains a line like the following (the hex string differs from instance to instance):

 

3125:M 12 Jul 15:24:16.937 * No cluster configuration found, I'm b6be6eb409d0207e698997d79bab9adaa90348f0

 

That hex string is in fact the ID of the Redis instance. In a cluster it uniquely identifies one instance. Each instance remembers the others by this ID rather than by IP and port (which can change). An instance here is a cluster node, so this ID is also the Node ID.
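The Node IDs known to an instance (including its own, flagged "myself") can be listed at any time with the CLUSTER NODES command, run against any instance:

# /usr/local/bin/redis-cli -p 7000 cluster nodes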

 

4.3 Create the Redis Cluster

We create the cluster with redis-trib.rb, copied earlier from the Redis source tree. It is a Ruby script; before it can run, a little preparation is needed:

 

# yum install gem
# gem install redis


 

Now the cluster can be created:

 

# /usr/local/bin/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) master
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) master
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) master
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The last line:

[OK] All 16384 slots covered.

indicates that every one of the 16384 slots is served by at least one master, so the cluster can be considered successfully created. From the command output we can read off:

Instance 7000 ID: b6be6eb409d0207e698997d79bab9adaa90348f0

Instance 7001 ID: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9

Instance 7002 ID: 6b92f63f64d9683e2090a28ebe9eac60d05dc756

Instance 7003 ID: ebfa6b5ab54e1794df5786694fcabca6f9a37f42

Instance 7004 ID: 026e747386106ad2f68e1c89543b506d5d96c79e

Instance 7005 ID: 441896afc76d8bc06eba1800117fa97d59453564

Of these:

7000, 7001 and 7002 are masters;

7000 holds slots 0-5460, its slave is 7003;

7001 holds slots 5461-10922, its slave is 7004;

7002 holds slots 10923-16383, its slave is 7005.

If you do not want slaves, i.e. all 6 instances as masters (no replication), just drop "--replicas 1".

Before going further, let's look at what these configuration options mean:

cluster-enabled: enables cluster mode.

cluster-config-file: the file in which each cluster instance persists the cluster configuration (the other instances, their state, and so on). It is not meant to be edited by hand; the instance rewrites it whenever it receives messages, for example about another instance changing state.

cluster-node-timeout: when a node has been unreachable for longer than this value (in milliseconds), it is considered failing. This has two important consequences: 1. a master unreachable for longer than this may be failed over to one of its slaves; 2. a node that cannot reach a majority of masters for longer than this stops accepting requests (for example, a master cut off by a network split stops working once this time elapses, because the other side of the split may already have failed over to its slave).

cluster-slave-validity-factor: this takes some explaining. A slave that considers its data too old will not attempt a fail over. How does it judge the age of its data? There are two checks. Check 1: when several slaves are eligible to fail over, they exchange information and derive a rank from their replication offsets (which reflect how much data each has received from the master), then delay their fail over attempts according to rank. (Yuanguo: presumably it works like this: the replication offsets show whose data is newer, i.e. who received more from the master, and whose is older, yielding a rank; a slave with newer data fails over sooner, one with older data delays longer, the older the longer.) Check 1 does not involve this option. Check 2: every slave records the time elapsed since its last interaction with the master (a PING or a command); if that time is too large, its data is considered old. How is "too large" decided? That is exactly what this option controls: the elapsed time is too large if it exceeds

(node-timeout * slave-validity-factor) + repl-ping-slave-period

in which case the data is considered old. With the settings used here (cluster-node-timeout 5000 ms, factor 10, repl-ping-slave-period 10 s), this works out to 5000*10 + 10*1000 = 60000 ms, i.e. one minute (Redis evaluates the sum in milliseconds).

repl-ping-slave-period: how often the slave sends PING to the master, in seconds.

The following two options we did not set:

cluster-migration-barrier: suppose one master has 3 slaves and another master has none; a slave should then be migrated from the first master to the second. But when a master gives a slave away, it must keep a certain number of slaves for itself: that number is cluster-migration-barrier. For example, with the value set to 3, no migration happens in the scenario above, because it would leave fewer than 3 slaves behind. So if you want to forbid slave migration entirely, just set this very high.

cluster-require-full-coverage: if set to yes, the cluster stops accepting requests as soon as any hash slot is uncovered; so if part of the cluster goes down and some slots lose coverage, the whole cluster becomes unavailable. If you want the slots that are still covered to keep serving while some nodes are down, set it to no.

4.4 Test the Cluster

4.4.1 redis-cli in Cluster Mode

 

# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set country China
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> get country
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> exit

# /usr/local/bin/redis-cli -p 7002
127.0.0.1:7002> set country China
OK
127.0.0.1:7002> get country
"China"
127.0.0.1:7002> set testKey testValue
(error) MOVED 5203 127.0.0.1:7000
127.0.0.1:7002> exit

# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set testKey testValue
OK
127.0.0.1:7000> exit

 

So can a given key only be served by one particular master?

No. It turns out redis-cli needs the -c flag to enable cluster mode. In cluster mode you can read and write data on any node (master or slave):

 

# /usr/local/bin/redis-cli -c -p 7002
127.0.0.1:7002> set country America
OK
127.0.0.1:7002> set testKey testValue
-> Redirected to slot [5203] located at 127.0.0.1:7000
OK
127.0.0.1:7000> exit

# /usr/local/bin/redis-cli -c -p 7005
127.0.0.1:7005> get country
-> Redirected to slot [12695] located at 127.0.0.1:7002
"America"
127.0.0.1:7002> get testKey
-> Redirected to slot [5203] located at 127.0.0.1:7000
"testValue"
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
127.0.0.1:7002> exit

In fact, redis-cli's cluster support is fairly basic: it merely relies on the nodes' ability to redirect by slot. In the example above, when testKey is set on instance 7002, the node computes that the corresponding slot is 5203, which is owned by instance 7000, so it redirects there.

 

A better client would cache the hash slot → node address mapping and talk to the correct node directly, avoiding redirections. The map only needs refreshing when the cluster configuration changes: after a fail over (a slave replaces its master, so the node address changes), or when an administrator adds or removes nodes (the hash slot distribution changes).
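The mapping such a client would cache is exactly what the CLUSTER SLOTS command returns: every slot range together with the address of the master (and its replicas) serving it:

# /usr/local/bin/redis-cli -p 7000 cluster slots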

4.4.2 Ruby Client: redis-rb-cluster

Installation

 

# cd /usr/local/
# wget https://github.com/antirez/redis-rb-cluster/archive/master.zip
# unzip master.zip
# cd redis-rb-cluster-master

 

Testing

After installation, the redis-rb-cluster-master directory contains an example.rb; run it:

 

# ruby example.rb 127.0.0.1 7000
1
2
3
4
5
6
^C


 

It loops, setting key-value pairs like these into the cluster:

foo1 => 1

foo2 => 2

......

The result can be verified with redis-cli.

4.4.3 Resharding Hash Slots Between Nodes

To show that IO continues uninterrupted while resharding, open another terminal and run

 

# ruby example.rb 127.0.0.1 7000

 

and meanwhile test resharding in the original terminal:

 

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?2000                    <----  how many hash slots to migrate?
What is the receiving node ID? 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9      <----  destination: instance 7001
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all                                                           <----  source: all
Do you want to proceed with the proposed reshard plan (yes/no)? yes          <----  confirm

 

During the migration, IO in the other terminal continued without interruption. When the migration finishes, check the new hash slot distribution:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The slot distribution has changed:

7000: 4461 slots

1000-5460

7001: 7462 slots

0-999

5461-11922

7002: 4461 slots

11923-16383

4.4.4 Fail Over

Before testing fail over, look at the other tool included in redis-rb-cluster: consistency-test.rb. It is a consistency checker: it increments counters, then verifies that their values are correct.

 

# ruby consistency-test.rb 127.0.0.1 7000
198 R (0 err) | 198 W (0 err) |
685 R (0 err) | 685 W (0 err) |
1174 R (0 err) | 1174 W (0 err) |
1675 R (0 err) | 1675 W (0 err) |
2514 R (0 err) | 2514 W (0 err) |
3506 R (0 err) | 3506 W (0 err) |
4501 R (0 err) | 4501 W (0 err) |

 

The two (N err) counters in parentheses are IO error counts, not inconsistencies; inconsistencies are printed in the last column (none occurred in the run above). To demonstrate an inconsistency, I modified the consistency-test.rb script to print the keys it operates on, and then changed one key's value from another terminal via redis-cli.

 

# vim consistency-test.rb
            # Report
            sleep @delay
            if Time.now.to_i != last_report
                report = "#{@reads} R (#{@failed_reads} err) | " +
                         "#{@writes} W (#{@failed_writes} err) | "
                report += "#{@lost_writes} lost | " if @lost_writes > 0
                report += "#{@not_ack_writes} noack | " if @not_ack_writes > 0
                last_report = Time.now.to_i
+               puts key
                puts report
            end

 

Running the script, we can now see the key of each counter it touches:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
3441 R (0 err) | 3441 W (0 err) |

 

In another terminal, change the value of key 81728|502047|15681480|key_8715:

 

127.0.0.1:7001> set 81728|502047|15681480|key_8715 0
-> Redirected to slot [12146] located at 127.0.0.1:7002
OK
127.0.0.1:7002>

 

Then consistency-test.rb detects the inconsistency:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
......
81728|502047|15681480|key_2841
7884 R (0 err) | 7884 W (0 err) |
81728|502047|15681480|key_308
8869 R (0 err) | 8869 W (0 err) | 2 lost |
81728|502047|15681480|key_6771
9856 R (0 err) | 9856 W (0 err) | 2 lost |
The report shows 2 lost writes: the value the checker expected was gone (I had overwritten it).

4.4.4.1 Automatic Fail Over

When a master crashes, after a while (the 5 seconds configured earlier via cluster-node-timeout) it is automatically failed over to its slave.

Run the consistency checker consistency-test.rb in one terminal (with the key-printing line removed again), and simulate a master crash in another:

 

# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001        <---- 7001 is a master
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


# /usr/local/bin/redis-cli -p 7001 debug segfault                 <---- simulate a crash of 7001
Error: Server closed the connection


# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004        <---- 7001 failed over to 7004; 7004 is now a master, with no slave
   slots:0-999,5461-11922 (7462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


On the consistency-checker side, some IO errors appeared during the fail over:

 

 

7379 R (0 err) | 7379 W (0 err) |
8499 R (0 err) | 8499 W (0 err) |
9586 R (0 err) | 9586 W (0 err) |
10736 R (0 err) | 10736 W (0 err) |
12416 R (0 err) | 12416 W (0 err) |
Reading: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
13426 R (1 err) | 13426 W (1 err) |
Reading: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
13426 R (2 err) | 13426 W (2 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
13427 R (3 err) | 13427 W (3 err) |
Reading: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
13427 R (4 err) | 13427 W (4 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
13427 R (5 err) | 13427 W (5 err) |
Reading: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
13428 R (6 err) | 13428 W (6 err) |
Reading: CLUSTERDOWN The cluster is down
Writing: CLUSTERDOWN The cluster is down
13432 R (661 err) | 13432 W (661 err) |
14786 R (661 err) | 14786 W (661 err) |
15987 R (661 err) | 15987 W (661 err) |
17217 R (661 err) | 17217 W (661 err) |
18320 R (661 err) | 18320 W (661 err) |
18737 R (661 err) | 18737 W (661 err) |
18882 R (661 err) | 18882 W (661 err) |
19284 R (661 err) | 19284 W (661 err) |
20121 R (661 err) | 20121 W (661 err) |
21433 R (661 err) | 21433 W (661 err) |
22998 R (661 err) | 22998 W (661 err) |
24805 R (661 err) | 24805 W (661 err) |

 

Note two things:

Once the fail over completed, the IO error count stopped growing and the cluster resumed normal service.

No inconsistency errors appeared. A master crash can cause inconsistency (the slave lags behind the master; after the crash the slave takes over with slightly stale data), but this is not very likely, because the master syncs a completed write to its slaves almost at the same moment it replies to the client. It is not impossible, though.

 

Restart 7001; it comes back as a slave of 7004:

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf

# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001             <---- 7001 has become a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.4.2 Manual Fail Over

Sometimes you may want to fail over deliberately. For example, to upgrade a master it is best to turn it into a slave first, which minimizes the impact on cluster availability. This calls for a manual fail over.

A manual fail over must be executed on a slave:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001                <---- 7001 is a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER                       <---- run the fail over on slave 7001
OK

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004               <---- 7004 has become a slave of 7001
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001               <---- 7001 has become a master
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The manual fail over procedure:

 

1. The slave tells the master to stop processing client requests.
2. The master stops processing client requests and replies to the slave with its replication offset.
3. The slave waits until its own replication offset matches the master's, i.e. until it has received all outstanding data. At this point master and slave hold identical data, and the master accepts no new writes.
4. The slave starts the fail over: it obtains a new configuration epoch from a majority of masters (a version number for configuration changes, presumably versioning the information kept in cluster-config-file) and broadcasts the new configuration, in which the slave has become a master.
5. The old master receives the new configuration and resumes processing client requests: it redirects them to the new master, having itself become a slave.

 

The fail over command takes two options:

 

FORCE: the procedure above requires the master's cooperation. If the master is unreachable (a network failure, or it crashed without automatic fail over completing), adding FORCE makes the fail over skip the handshake with the master and start directly at step 4.

TAKEOVER: the procedure above requires authorization from a majority of masters, which also generate the new configuration epoch. Sometimes we want to fail over immediately without reaching agreement with the other masters; that is what TAKEOVER is for. A real use case: all masters are in one data center and all slaves in another; when all masters are down or the network is split, promoting all slaves in the second data center at once achieves a data-center switchover.
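Both options are passed directly to CLUSTER FAILOVER, still executed on the slave; for example:

# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER FORCE
# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER TAKEOVER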

 

4.4.5 Adding Nodes

4.4.5.1 Adding a Master

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7006.conf
# sed -i -e 's/7000/7006/' /usr/local/etc/redis_7006.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf             <---- 1. copy and edit the conf, start a Redis instance
# /usr/local/bin/redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000    <---- 2. add the instance to the cluster
>>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.
[OK] New node added correctly.                                           <---- new node added successfully

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                      <---- 3. check
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006               <---- the new node owns no slots, so a manual reshard is needed
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                    <---- 4. reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000                  <---- migrate 3000 slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6     <---- destination: the newly added node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.                    <---- source: 7001; the last reshard left it with the most slots, so move 3000 away
Source node #1:23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Source node #2:done
......
    Moving slot 7456 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7457 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7458 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7459 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7460 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Do you want to proceed with the proposed reshard plan (yes/no)? yes         <---- confirm

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                         <---- 5. check again
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006                   <---- the new node now owns 3000 slots
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.5.2 Adding a Slave

 



1. Copy and edit the conf, and start a Redis instance:

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7007.conf
# sed -i -e 's/7000/7007/' /usr/local/etc/redis_7007.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7007.conf

2. Add it as a slave, specifying its master:

[root@localhost ~]# /usr/local/bin/redis-trib.rb add-node --slave --master-id 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7007 127.0.0.1:7000
>>> Adding node 127.0.0.1:7007 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7007 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:7006.
[OK] New node added correctly.

3. Check:

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007           <---- the new node joined and became a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 




Above, the slave was added with its master specified. You can also omit the master, in which case the new node becomes the slave of a random master; afterwards you can move it to the intended master with the CLUSTER REPLICATE command. Alternatively, add it as an empty master first and then turn it into a slave with CLUSTER REPLICATE.
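For example, to re-attach 7007 to a specific master after the fact, run CLUSTER REPLICATE on the slave itself, passing the target master's Node ID (here, 7006's ID from this walkthrough):

# /usr/local/bin/redis-cli -p 7007 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6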

 

4.4.6 Removing Nodes

Before removing anything, look at the current layout:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
So:

master slave

7000 7003

7001 7004

7002 7005

7006 7007

We will remove 7007 (a slave) and 7002 (a master).

4.4.6.1 Removing a Slave Node

Removing a slave (7007) is straightforward with del-node:

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6d8675118da6b492c28844395ee6915506c73b3a
>>> Removing node 6d8675118da6b492c28844395ee6915506c73b3a from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006       <---- 7006 no longer has a slave
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.6.2 Removing a Master Node

Before a master node can be removed, it must be empty (own no slots), which is achieved with a reshard; only then can the master be deleted.

 


# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                        <---- reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4461                   <---- we plan to empty 7002, so migrate all of its 4461 slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6      <---- destination: 7006
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6b92f63f64d9683e2090a28ebe9eac60d05dc756                      <---- source: 7002
Source node #2:done
......
    Moving slot 16382 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
    Moving slot 16383 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Do you want to proceed with the proposed reshard plan (yes/no)? yes

 


Check that 7002 has been emptied:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7002's slave now belongs to 7006 (what would happen if cluster-migration-barrier were set?)
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002           <---- 7002 has been emptied
   slots: (0 slots) master
   0 additional replica(s)                                           <---- and its slave is gone too (a slave would be wasted on a master with no data)!
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Now 7002 can be removed:

 

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6b92f63f64d9683e2090a28ebe9eac60d05dc756
>>> Removing node 6b92f63f64d9683e2090a28ebe9eac60d05dc756 from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Look at the layout now:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
So:

master slave

7000 7003

7001 7004

7006 7005

4.4.7 Slave Migration

The current topology is:

 

 

master slave

7000 7003

7001 7004

7006 7005

We can assign a slave to a different master by command:

 

# /usr/local/bin/redis-cli -p 7003 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6    <---- make 7003 a slave of 7006
OK
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000           <---- 7000 has no slave
   slots:1000-5460 (4461 slots) master
   0 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7005 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003           <---- 7003 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006           <---- 7006 has two slaves
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   2 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Besides manual migration, Redis also migrates slaves automatically; this was briefly described earlier with the cluster-migration-barrier option:

 

At certain moments, Redis tries to migrate a slave away from the master with the most slaves to a master that has none. Thanks to this mechanism, you can simply add spare slaves to the system without assigning them a master; when some master loses all of its slaves (they fail one by one), the system automatically migrates one to it.

cluster-migration-barrier: the minimum number of slaves a master must keep for automatic migration to happen. For example, with the value set to 2: if I have 3 slaves and you have none, I give you one; if I have only 2 and you have none, I do not.

 

4.4.8 Upgrading Nodes

4.4.8.1 Upgrading a Slave

 

Stop it; start it again on the new Redis version.

 

4.4.8.2 Upgrading a Master

 

Manually fail over to one of its slaves; wait for the master to become a slave; then upgrade it as a slave (stop it, start it on the new Redis version); optionally fail over back again. A sketch of the whole sequence is shown below.
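Put together, with the current layout (master 7001, slave 7004) and assuming the upgraded redis-server binary has replaced the old one at the same path, the sequence might look like:

# /usr/local/bin/redis-cli -p 7004 CLUSTER FAILOVER              <---- on the slave: promote 7004, demote 7001
# /usr/local/bin/redis-cli -p 7001 shutdown                      <---- 7001 is now a slave; stop it
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf     <---- restart it on the new version
# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER              <---- optional: fail back to the original roles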

 

4.4.9 Cluster Migration

Not needed for now.

4.4.10 Stopping/Starting the Cluster

Stopping the cluster: just stop the instances one by one

 

# /usr/local/bin/redis-cli -p 7000 shutdown
# /usr/local/bin/redis-cli -p 7001 shutdown
# /usr/local/bin/redis-cli -p 7003 shutdown
# /usr/local/bin/redis-cli -p 7004 shutdown
# /usr/local/bin/redis-cli -p 7005 shutdown
# /usr/local/bin/redis-cli -p 7006 shutdown
# ps -ef | grep redis
root      26266  23339  0 17:24 pts/2    00:00:00 grep --color=auto redis
[root@localhost ~]#

 

Starting the cluster: just start the instances one by one (there is no need to run redis-trib.rb create again)

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
S: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) slave
   replicates ebfa6b5ab54e1794df5786694fcabca6f9a37f42
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Test that IO still works:

 

# ruby consistency-test.rb 127.0.0.1 7000
109 R (0 err) | 109 W (0 err) |
661 R (0 err) | 661 W (0 err) |
1420 R (0 err) | 1420 W (0 err) |
2321 R (0 err) | 2321 W (0 err) |
……

 

5. Summary

This article recorded the steps to set up Redis (standalone mode, master-slave mode, and cluster mode), trying along the way to explain how the system works, even if not exhaustively. I hope it can serve as introductory material.
