This article is a collection of the common operations on a MongoDB replica set, interspersed with the principles behind those operations and the points to watch out for.

Read together with the earlier article "MongoDB副本集的搭建" (setting up a MongoDB replica set), it should let you become familiar with building and managing a MongoDB replica set in a fairly short time.

下边包车型大巴操作重要分为三个部分:

上边包车型客车操作首要分为多个部分:

  1. Modify node state

    This mainly covers:

    1> Demoting the Primary node to a Secondary node

    2> Freezing a Secondary node

    3> Forcing a Secondary node into maintenance mode

2. Modify the replica set configuration

    1> Adding a node

    2> Removing a node

    3> Setting a Secondary node as a delayed backup member

    4> Setting a Secondary node as a hidden member

    5> Replacing a current replica set member

    6> Setting the priority of replica set members

    7> Preventing a Secondary node from being promoted to Primary

    8> Configuring a Secondary node without voting rights

    9> Disabling chainingAllowed

   10> Explicitly specifying a sync source for a Secondary node

   11> Preventing a Secondary node from building indexes

 

 

First, take a look at all the operations that the MongoDB replica set helpers support:

> rs.help()
    rs.status()                                { replSetGetStatus : 1 } checks repl set status
    rs.initiate()                              { replSetInitiate : null } initiates set with default settings
    rs.initiate(cfg)                           { replSetInitiate : cfg } initiates set with configuration cfg
    rs.conf()                                  get the current configuration object from local.system.replset
    rs.reconfig(cfg)                           updates the configuration of a running replica set with cfg (disconnects)
    rs.add(hostportstr)                        add a new member to the set with default attributes (disconnects)
    rs.add(membercfgobj)                       add a new member to the set with extra attributes (disconnects)
    rs.addArb(hostportstr)                     add a new member which is arbiterOnly:true (disconnects)
    rs.stepDown([stepdownSecs, catchUpSecs])   step down as primary (disconnects)
    rs.syncFrom(hostportstr)                   make a secondary sync from the given member
    rs.freeze(secs)                            make a node ineligible to become primary for the time specified
    rs.remove(hostportstr)                     remove a host from the replica set (disconnects)
    rs.slaveOk()                               allow queries on secondary nodes

    rs.printReplicationInfo()                  check oplog size and time range
    rs.printSlaveReplicationInfo()             check replica set members and replication lag
    db.isMaster()                              check who is primary

    reconfiguration helpers disconnect from the database so the shell will display
    an error, even if the command succeeds.

 

 

Modify node state

Demote the Primary node to a Secondary node

myapp:PRIMARY> rs.stepDown()

This command demotes the Primary to a Secondary node and keeps it ineligible to become Primary for 60 seconds. If no new Primary has been elected within that window, the node may stand for election again.

The duration can also be specified explicitly:

myapp:PRIMARY> rs.stepDown(30)
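As the rs.help() output above shows, rs.stepDown() also accepts an optional second argument, the catch-up period, which bounds how long the primary waits for an electable secondary to catch up before actually stepping down. A hedged sketch with illustrative values:

// Step down for 120 seconds, waiting at most 10 seconds for a
// secondary to catch up to the primary's latest optime first.
myapp:PRIMARY> rs.stepDown(120, 10)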

After the stepDown command was executed, the former Secondary node3:27017 was promoted to Primary.

The new Primary's log output:


2017-05-03T22:24:21.009+0800 I COMMAND  [conn8] Attempting to step down in response to replSetStepDown command
2017-05-03T22:24:25.967+0800 I -        [conn8] end connection 127.0.0.1:45976 (3 connections now open)
2017-05-03T22:24:37.643+0800 I REPL     [ReplicationExecutor] Member node3:27018 is now in state SECONDARY
2017-05-03T22:24:41.123+0800 I REPL     [replication-40] Restarting oplog query due to error: InterruptedDueToReplStateChange: operation was interrupted. Last fetched optime (with hash): { ts: Timestamp 1493821475000|1, t: 2 }[-6379771952742605801]. Restarts remaining: 3
2017-05-03T22:24:41.167+0800 I REPL     [replication-40] Scheduled new oplog query Fetcher source: node3:27018 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1493821475000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } query metadata: { $replData: 1, $ssm: { $secondaryOk: true } } active: 1 timeout: 10000ms shutting down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 11695 -- target:node3:27018 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1493821475000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
2017-05-03T22:24:41.265+0800 I REPL     [replication-39] Choosing new sync source because our current sync source, node3:27018, has an OpTime ({ ts: Timestamp 1493821475000|1, t: 2 }) which is not ahead of ours ({ ts: Timestamp 1493821475000|1, t: 2 }), it does not have a sync source, and it's not the primary (sync source does not know the primary)
2017-05-03T22:24:41.266+0800 I REPL     [replication-39] Canceling oplog query because we have to choose a new sync source. Current source: node3:27018, OpTime { ts: Timestamp 0|0, t: -1 }, its sync source index:-1
2017-05-03T22:24:41.266+0800 W REPL     [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source node3:27018 (last visible optime: { ts: Timestamp 0|0, t: -1 }; config version: 1; sync source index: -1; primary index: -1) is no longer valid
2017-05-03T22:24:41.266+0800 I REPL     [rsBackgroundSync] could not find member to sync from
2017-05-03T22:24:46.021+0800 I REPL     [SyncSourceFeedback] SyncSourceFeedback error sending update to node3:27018: InvalidSyncSource: Sync source was cleared. Was node3:27018
2017-05-03T22:24:46.775+0800 I REPL     [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
2017-05-03T22:24:46.775+0800 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
2017-05-03T22:24:46.857+0800 I REPL     [ReplicationExecutor] VoteRequester(term 2 dry run) received a yes vote from node3:27019; response message: { term: 2, voteGranted: true, reason: "", ok: 1.0 }
2017-05-03T22:24:46.858+0800 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
2017-05-03T22:24:46.891+0800 I REPL     [ReplicationExecutor] VoteRequester(term 3) received a yes vote from node3:27018; response message: { term: 3, voteGranted: true, reason: "", ok: 1.0 }
2017-05-03T22:24:46.891+0800 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 3
2017-05-03T22:24:46.891+0800 I REPL     [ReplicationExecutor] transition to PRIMARY
2017-05-03T22:24:46.892+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to node3:27019
2017-05-03T22:24:46.894+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to node3:27019
2017-05-03T22:24:46.894+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to node3:27019
2017-05-03T22:24:46.895+0800 I REPL     [ReplicationExecutor] My optime is most up-to-date, skipping catch-up and completing transition to primary.
2017-05-03T22:24:46.895+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to node3:27019
2017-05-03T22:24:47.348+0800 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2017-05-03T22:24:49.231+0800 I NETWORK  [thread1] connection accepted from 192.168.244.30:35837 #9 (3 connections now open)
2017-05-03T22:24:49.236+0800 I NETWORK  [conn9] received client metadata from 192.168.244.30:35837 conn9: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.4.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 6.7 (Santiago)", architecture: "x86_64", version: "Kernel 2.6.32-573.el6.x86_64" } }
2017-05-03T22:24:49.317+0800 I NETWORK  [thread1] connection accepted from 192.168.244.30:35838 #10 (4 connections now open)
2017-05-03T22:24:49.318+0800 I NETWORK  [conn10] received client metadata from 192.168.244.30:35838 conn10: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.4.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 6.7 (Santiago)", architecture: "x86_64", version: "Kernel 2.6.32-573.el6.x86_64" } }

The former Primary, node3:27018, stepped down to Secondary.


2017-05-03T22:24:36.262+0800 I COMMAND  [conn7] Attempting to step down in response to replSetStepDown command
2017-05-03T22:24:36.303+0800 I REPL     [conn7] transition to SECONDARY
2017-05-03T22:24:36.315+0800 I NETWORK  [conn7] legacy transport layer closing all connections
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 5
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 4
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 4
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 3
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 1
2017-05-03T22:24:36.316+0800 I NETWORK  [conn7] Skip closing connection for connection # 1
2017-05-03T22:24:36.382+0800 I NETWORK  [thread1] connection accepted from 127.0.0.1:43359 #8 (5 connections now open)
2017-05-03T22:24:36.383+0800 I NETWORK  [conn8] received client metadata from 127.0.0.1:43359 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 6.7 (Santiago)", architecture: "x86_64", version: "Kernel 2.6.32-573.el6.x86_64" } }
2017-05-03T22:24:36.408+0800 I -        [conn7] AssertionException handling request, closing client connection: 172 Operation attempted on a closed transport Session.
2017-05-03T22:24:36.408+0800 I -        [conn7] end connection 127.0.0.1:43355 (6 connections now open)
2017-05-03T22:24:41.262+0800 I COMMAND  [conn5] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1493821475000|1 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 2 } planSummary: COLLSCAN cursorid:12906944372 keysExamined:0 docsExamined:1 writeConflicts:1 numYields:1 nreturned:1 reslen:392 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 100ms
2017-05-03T22:24:48.311+0800 I REPL     [ReplicationExecutor] Member node3:27017 is now in state PRIMARY
2017-05-03T22:24:49.163+0800 I REPL     [rsBackgroundSync] sync source candidate: node3:27017
2017-05-03T22:24:49.164+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to node3:27017
2017-05-03T22:24:49.236+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to node3:27017
2017-05-03T22:24:49.316+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to node3:27017
2017-05-03T22:24:49.318+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to node3:27017
2017-05-03T22:25:41.020+0800 I -        [conn4] end connection 192.168.244.30:36940 (5 connections now open)
2017-05-03T22:29:02.653+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to node3:27017
2017-05-03T22:29:02.669+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to node3:27017
2017-05-03T22:29:41.442+0800 I -        [conn5] end connection 192.168.244.30:36941 (4 connections now open)

 

 

Freeze a Secondary node

If you need to perform maintenance on the Primary but do not want any of the Secondary nodes to be elected Primary during that period, you can run the freeze command on each Secondary node to force them to stay in the SECONDARY state.

myapp:SECONDARY> rs.freeze(100)

Note: this can only be executed on a Secondary node.

myapp:PRIMARY> rs.freeze(100)
{
    "ok" : 0,
    "errmsg" : "cannot freeze node when primary or running for election. state: Primary",
    "code" : 95,
    "codeName" : "NotSecondary"
}

To unfreeze a Secondary node, simply run:

myapp:SECONDARY> rs.freeze(0)

 

 

Force a Secondary node into maintenance mode

Once a Secondary node enters maintenance mode, its state changes to RECOVERING. Clients do not send read requests to a node in this state, and it cannot serve as a sync source either.

Maintenance mode can be triggered in two ways:

  1. Automatically

    For example, when compaction is run on the Secondary.

  2. Manually

    myapp:SECONDARY> db.adminCommand({"replSetMaintenance": true})
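To bring the node back out of maintenance mode once the work is finished, clear the flag on the same Secondary (a minimal sketch):

    // Leaving maintenance mode returns the node from RECOVERING to SECONDARY.
    myapp:SECONDARY> db.adminCommand({"replSetMaintenance": false})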

 

 

Modify the replica set configuration

Add a node

myapp:PRIMARY> rs.add("node3:27017")

myapp:PRIMARY> rs.add({_id: 3, host: "node3:27017", priority: 0, hidden: true})

This can also be done by passing a full member configuration document:

> cfg={
    "_id" : 3,
    "host" : "node3:27017",
    "arbiterOnly" : false,
    "buildIndexes" : true,
    "hidden" : true,
    "priority" : 0,
    "tags" : {

    },
    "slaveDelay" : NumberLong(0),
    "votes" : 1
}
> rs.add(cfg)
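To confirm that the new member is present, you can list the members in the current configuration; a small illustrative check in the shell:

// Print the _id and host of every member in the current configuration.
myapp:PRIMARY> rs.conf().members.forEach(function(m) { print(m._id + "  " + m.host) })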

 

 

Remove a node

The first way:

myapp:PRIMARY> rs.remove("node3:27017")

The second way:

myapp:PRIMARY> cfg = rs.conf()
myapp:PRIMARY> cfg.members.splice(2,1)
myapp:PRIMARY> rs.reconfig(cfg)
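Note that splice(2,1) removes the element at array index 2 of cfg.members, which is not necessarily the member whose _id is 2. If in doubt, look the member up by host before splicing; an illustrative sketch:

// Locate the member by host name, then remove it from the configuration.
cfg = rs.conf()
var idx = -1
for (var i = 0; i < cfg.members.length; i++) {
    if (cfg.members[i].host == "node3:27017") { idx = i }
}
if (idx >= 0) {
    cfg.members.splice(idx, 1)
    rs.reconfig(cfg)
}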

Note: running rs.reconfig does not necessarily trigger a new election in the replica set, and the same holds when the force parameter is used.

The rs.reconfig() shell method can trigger the current primary to step down in some situations.

 

 

Modify a node's configuration

Set a Secondary node as a delayed backup member

cfg = rs.conf()
cfg.members[1].priority = 0
cfg.members[1].hidden = true
cfg.members[1].slaveDelay = 3600
rs.reconfig(cfg)

 

 

Set a Secondary node as a hidden member

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
rs.reconfig(cfg)

 

 

Replace a current replica set member

cfg = rs.conf()
cfg.members[0].host = "mongo2.example.net"
rs.reconfig(cfg)

 

 

Set the priority of replica set members

cfg = rs.conf()
cfg.members[0].priority = 0.5
cfg.members[1].priority = 2
cfg.members[2].priority = 2
rs.reconfig(cfg)

Valid priority values range from 0 to 1000 and may be fractional; the default is 1.

Starting with MongoDB 3.2:

Non-voting members must have priority of 0.
Members with priority greater than 0 cannot have 0 votes.

Note: if you set a Secondary node's priority higher than the current Primary's priority, the current Primary will step down.

 

 

Prevent a Secondary node from being promoted to Primary

Simply set its priority to 0:

cfg = rs.conf()
cfg.members[2].priority = 0
rs.reconfig(cfg)

 

 

How to configure a Secondary node without voting rights

MongoDB limits a replica set to at most 50 member nodes, of which at most 7 can have voting rights.

This limit exists mainly because of the network traffic generated by heartbeats (every member sends heartbeat requests to every other member) and because of the time elections take.

Starting with MongoDB 3.2, any node whose priority is greater than 0 cannot have votes set to 0.

Therefore, for a Secondary node without voting rights, votes and priority must both be set to 0.

cfg = rs.conf() 
cfg.members[3].votes = 0 
cfg.members[3].priority = 0 
cfg.members[4].votes = 0
cfg.members[4].priority = 0 
rs.reconfig(cfg) 

 

 

Disable chainingAllowed

By default, chained replication is allowed.

That is, when a new node is added to the replica set, it may well replicate from one of the Secondary nodes rather than from the Primary.

MongoDB chooses a sync source based on ping time: when one node sends a heartbeat request to another, it learns how long that request took. MongoDB keeps an average of the heartbeat round-trip times between members, and when selecting a sync source it picks a member that is close to it and whose data is newer than its own.

How do you find out which node a member is replicating from?

myapp:PRIMARY> rs.status().members[1].syncingTo
node3:27018

Of course, chained replication has an obvious drawback: the longer the replication chain, the longer it takes for a write to be replicated to all Secondary nodes.

It can be disabled as follows:

cfg=rs.conf()
cfg.settings.chainingAllowed=false
rs.reconfig(cfg)

Once chainingAllowed is set to false, all Secondary nodes replicate data directly from the Primary.
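The setting can be verified afterwards (illustrative):

myapp:PRIMARY> rs.conf().settings.chainingAllowed
false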

 

 

为Secondary节点显式钦点复制源

为Secondary节点显式钦赐复制源

rs.syncFrom("node3:27019")
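The effect can be checked with the syncingTo field shown earlier; the member index below is illustrative and depends on your configuration, and the output assumes the member accepts the requested source:

myapp:SECONDARY> rs.status().members[1].syncingTo
node3:27019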

 

 

Prevent a Secondary node from building indexes

Sometimes a Secondary node does not need the same indexes as the Primary, for example when it is only used for data backups or for offline batch jobs. In such cases you can prevent the Secondary node from building indexes.

In MongoDB 3.4 this setting cannot be changed on an existing member; it can only be specified explicitly when the node is added.

myapp:PRIMARY> cfg=rs.conf()
myapp:PRIMARY> cfg.members[2].buildIndexes=false
false
myapp:PRIMARY> rs.reconfig(cfg)
{
    "ok" : 0,
    "errmsg" : "priority must be 0 when buildIndexes=false",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible"
}
myapp:PRIMARY> cfg.members[2].buildIndexes=false
false
myapp:PRIMARY> cfg.members[2].priority=0
0
myapp:PRIMARY> rs.reconfig(cfg)
{
    "ok" : 0,
    "errmsg" : "New and old configurations differ in the setting of the buildIndexes field for member node3:27017; to make this change, remove then re-add the member",
    "code" : 103,
    "codeName" : "NewReplicaSetConfigurationIncompatible"
}
myapp:PRIMARY> rs.remove("node3:27017")
{ "ok" : 1 }
myapp:PRIMARY> rs.add({_id: 2, host: "node3:27017", priority: 0, buildIndexes:false})
{ "ok" : 1 }

As the test above shows, if you want to set buildIndexes to false on a member, you must also set its priority to 0.

 

 

References

  1. 《MongoDB权威指南》 (MongoDB: The Definitive Guide)
  2. The MongoDB official documentation

 

 
