PXC, MGR, and MGC Clusters: Setup, Backup, and Recovery Procedures
Introduction
Author: 罗小波 (Luo Xiaobo), WOQU (沃趣)
Summary: The original article covers setup, backup, and recovery procedures for PXC, MGR, and MGC clusters in detail. Because of its length, this post excerpts only the MGR portion.
The full article is available via the link at the end of this post.
I. Original Article Outline
II. Background
Server environment
KVM IPs: 10.10.30.162/163/164
CPU: 8 vCPUs
Memory: 16 GB
Disk: 100 GB flash card for data, 100 GB flash card for binlogs
Database versions
MGC: MariaDB 10.2.12
MGR: MySQL 5.7.21
PXC: Percona XtraDB Cluster 5.7.21
sysbench version
sysbench 1.0.7
Test data volume: 8 tables, 5 million rows per table
Note
PXC: short for Percona XtraDB Cluster
MGR: short for MySQL Group Replication
MGC: short for MariaDB Galera Cluster
III. Excerpt from the Original Article
3. MGR
3.1. Cluster Initialization
3.1.1. First node (init)
server_id=3306162
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
binlog-checksum=NONE
innodb_support_xa=OFF
auto_increment_increment=3
auto_increment_offset=1
binlog_row_image=full
transaction-write-set-extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=on
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.10.30.162:24901'
loose-group_replication_group_seeds='10.10.30.162:24901,10.10.30.163:24901,10.10.30.164:24901'
loose-group_replication_bootstrap_group=OFF
report_host='node1'
[root@localhost ~]# cat /etc/hosts
......
10.10.30.162 node1 mysql1
10.10.30.163 node2 mysql2
10.10.30.164 node3 mysql3
mysqld_safe --defaults-file=/etc/my.cnf &
SET SQL_LOG_BIN=0;
CREATE USER 'xtrabackup'@'localhost' IDENTIFIED BY 'xtrabackuppass';
GRANT SELECT, LOCK TABLES, SUPER, REPLICATION CLIENT, REPLICATION SLAVE, RELOAD, CREATE TABLESPACE, PROCESS ON *.* TO 'xtrabackup'@'localhost';
GRANT DROP, CREATE, INSERT ON mysql.ibbackup_binlog_marker TO 'xtrabackup'@'localhost';
CREATE USER repl@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO repl@'%';
SET SQL_LOG_BIN=1;
FLUSH PRIVILEGES;
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
FLUSH PRIVILEGES;
set global group_replication_bootstrap_group=ON;
start group_replication;
set global group_replication_bootstrap_group=OFF;
# Check that the member state is ONLINE
root@localhost : (none) 03:39:20> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
3.1.2. Second node (joining via full replication)
server_id=3306163
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
innodb_support_xa=OFF
binlog-checksum=NONE
auto_increment_increment=3
auto_increment_offset=2
binlog_row_image=full
transaction-write-set-extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=on
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.10.30.163:24901'
loose-group_replication_group_seeds='10.10.30.162:24901,10.10.30.163:24901,10.10.30.164:24901'
loose-group_replication_bootstrap_group=OFF
report_host='node2'
[root@localhost ~]# cat /etc/hosts
......
10.10.30.162 node1 mysql1
10.10.30.163 node2 mysql2
10.10.30.164 node3 mysql3
mysqld_safe --defaults-file=/etc/my.cnf &
SET SQL_LOG_BIN=0;
CREATE USER 'xtrabackup'@'localhost' IDENTIFIED BY 'xtrabackuppass';
GRANT SELECT, LOCK TABLES, SUPER, REPLICATION CLIENT, REPLICATION SLAVE, RELOAD, CREATE TABLESPACE, PROCESS ON *.* TO 'xtrabackup'@'localhost';
GRANT DROP, CREATE, INSERT ON mysql.ibbackup_binlog_marker TO 'xtrabackup'@'localhost';
CREATE USER repl@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO repl@'%';
SET SQL_LOG_BIN=1;
FLUSH PRIVILEGES;
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
FLUSH PRIVILEGES;
reset master;
start group_replication;
# Check that the member state is ONLINE
root@localhost : (none) 03:45:24> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | ONLINE       |
| group_replication_applier | f64f9fb6-2da4-11e8-96bd-525400c33752 | node2       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)
root@localhost : (none) 06:48:32> select * from performance_schema.replication_group_member_stats where MEMBER_ID=@@server_uuid\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 15218000786938271:11
                         MEMBER_ID: 0a1e8349-2e87-11e8-8c9f-525400bdd1f2
       COUNT_TRANSACTIONS_IN_QUEUE: 287640  # the transaction gap between the relay log received so far and the relay log applied so far
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 2d623f55-2111-11e8-9cc3-0025905b06da:1-2,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-13779  # the GTIDs of the log applied on this member
    LAST_CONFLICT_FREE_TRANSACTION:
1 row in set (0.02 sec)
3.1.3. Third node (backup recovery)
# Create the database
root@localhost : (none) 01:44:33> create database sbtest;
Query OK, 1 row affected (0.00 sec)

# Generate test data with sysbench
sysbench --db-driver=mysql --time=180 --threads=8 --report-interval=1 --mysql-socket=/home/mysql/data/mysqldata1/sock/mysql.sock --mysql-port=3306 --mysql-user=qbench --mysql-password=qbench --mysql-db=sbtest --tables=8 --table-size=5000000 oltp_read_write --db-ps-mode=disable prepare

# Apply load with sysbench
sysbench --db-driver=mysql --time=99999 --threads=8 --report-interval=1 --mysql-socket=/home/mysql/data/mysqldata1/sock/mysql.sock --mysql-port=3306 --mysql-user=qbench --mysql-password=qbench --mysql-db=sbtest --tables=8 --table-size=5000000 oltp_read_write --db-ps-mode=disable run
server_id=3306164
sync_binlog=10000
innodb_flush_log_at_trx_commit = 2
innodb_support_xa=OFF
binlog-checksum=NONE
auto_increment_increment=3
auto_increment_offset=3
binlog_row_image=full
transaction-write-set-extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_single_primary_mode=OFF
loose-group_replication_enforce_update_everywhere_checks=ON
loose-group_replication_start_on_boot=on
loose-group_replication_ip_whitelist='0.0.0.0/0'
loose-group_replication_local_address='10.10.30.164:24901'
loose-group_replication_group_seeds='10.10.30.162:24901,10.10.30.163:24901,10.10.30.164:24901'
loose-group_replication_bootstrap_group=OFF
report_host='node3'
[root@localhost ~]# cat /etc/hosts
......
10.10.30.162 node1 mysql1
10.10.30.163 node2 mysql2
10.10.30.164 node3 mysql3
innobackupex --defaults-file=/etc/my.cnf --slave-info \
  --user=xtrabackup --password=xtrabackuppass --no-timestamp \
  --stream=tar ./ | ssh root@10.10.30.164 "cat - > /archive/backup/backup_`date +%Y%m%d`.tar"
# Run innobackupex apply-log, then move-back the backup into the datadir
[root@localhost backup]# tar xvf backup_20180322.tar
[root@localhost backup]# innobackupex --apply-log ./
[root@localhost backup]# rm -rf /data/mysqldata1/{undo,innodb_ts,innodb_log,mydata,binlog,relaylog,slowlog,tmpdir}/*
[root@localhost backup]# innobackupex --defaults-file=/etc/my.cnf --move-back ./
......
chown mysql.mysql /data -R
mysqld_safe --defaults-file=/etc/my.cnf --loose-group_replication_start_on_boot=off &
[root@localhost backup]# cat xtrabackup_binlog_info
mysql-bin.000147	103011527	2d623f55-2111-11e8-9cc3-0025905b06da:1-3,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-87512845
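Rather than copying the GTID set by hand into the next step, it can be pulled out of xtrabackup_binlog_info with a small helper (a sketch only; it assumes the three-field layout shown above):

```shell
# Print the GTID set (third field onward) from an xtrabackup_binlog_info
# file whose layout is: <binlog file> <position> <gtid set>.
extract_gtid_set() {
    awk 'NF >= 3 { $1 = ""; $2 = ""; sub(/^ +/, ""); print; exit }' "$1"
}
```

The printed value can then be pasted into the `set global gtid_purged='...'` statement used in the next step.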
root@localhost : (none) 01:38:35> reset master;
Query OK, 0 rows affected (0.02 sec)
root@localhost : (none) 01:39:25> set global gtid_purged='2d623f55-2111-11e8-9cc3-0025905b06da:1-3,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-87512845';
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 01:39:53> set global group_replication_start_on_boot=on;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 01:40:59> start group_replication;
Query OK, 0 rows affected (2.66 sec)
# Check that the member state becomes ONLINE
root@localhost : (none) 11:16:52> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | ONLINE       |
| group_replication_applier | 859b114c-2e48-11e8-9eac-525400bdd1f2 | node3       | 3306        | RECOVERING   |
| group_replication_applier | f64f9fb6-2da4-11e8-96bd-525400c33752 | node2       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
......
root@localhost : (none) 01:43:23> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | ONLINE       |
| group_replication_applier | 62392979-2e5c-11e8-a389-525400bdd1f2 | node3       | 3306        | ONLINE       |
| group_replication_applier | f64f9fb6-2da4-11e8-96bd-525400c33752 | node2       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
3.2. Adding a Node to the Cluster
3.2.1. Restoring from a backup set
[Figure: QPS curve with no backup in progress]
[Figure: QPS curve while a backup is in progress]
Note:
With multi-threaded replication, taking an xtrabackup backup on a read node has a high probability of deadlocking (100% reproducible with 128 sysbench insert threads), so in an MGR topology backups should only be taken on the write node. Two fallbacks:
Option 1: attach an asynchronous replica to the cluster and back it up with xtrabackup's --safe-slave-backup option, which stops the SQL thread before taking locks and so avoids the deadlock.
Option 2: assuming write pressure is low at night, poll the thread information; when the backup thread's lock request is found to be stuck, log in to the database, kill the session holding the lock, and retry the backup.
The deadlock manifests as follows:
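As a sketch of option 2 above (illustrative only; the state strings are the usual MySQL 5.7 ones, and the query feeding the helper is an assumption, not from the original article):

```shell
# Input: "id<TAB>state" lines, e.g. produced by
#   mysql -N -e "SELECT id, state FROM information_schema.processlist"
# Output: the ids of sessions stuck on a lock request (the symptom that the
# backup is wedged), so the operator can decide what to KILL before retrying.
find_lock_waiters() {
    printf '%s\n' "$1" |
        awk -F'\t' '$2 ~ /Waiting for (global read lock|table flush)/ { print $1 }'
}
```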
3.2.2. Joining the cluster via full replication
See section 3.1.2.
[Figure: QPS curve with no new node joining via full replication]
[Figure: QPS curve while a new node joins via full replication]
Note: a new node joining via full replication triggers flow control, causing a sharp performance drop. Disabling flow control restores normal performance, but it only takes effect if it is disabled on all nodes at the same time.
root@localhost : (none) 06:52:16> set global group_replication_flow_control_mode=DISABLED;
Query OK, 0 rows affected (0.01 sec)
3.3. Recovering from a Single-Node Crash
After performing the necessary checks, first try to bring the node back up the normal way. If it cannot be started normally, or the crashed node's data has been lost, restore it from a backup instead (see section 3.1.3 for the detailed steps).
3.4. Recovering from a Majority-of-Nodes Crash
Two approaches (for the case where at least one instance is still reachable but the cluster is unavailable):
If the crashed nodes can be recovered in place (restarted with the data they had before the crash), the cluster can return to a normal readable/writable state (i.e. the number of live nodes recovers to >= N/2+1 and they all rejoin the cluster correctly).
If the crashed nodes' data is lost, they cannot rejoin the cluster. To recover the cluster in that case, stop the group replication plugin on a surviving node so that it leaves the cluster and becomes writable, then rebuild the other nodes (restoring them from an xtrabackup backup of the surviving node).
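The decision between these cases can be sketched as a tiny helper (illustrative only; the names and wording are ours, not the article's):

```shell
# Given the cluster size and the number of members still ONLINE,
# pick the recovery path described above.
majority_recovery_path() {
    total=$1
    online=$2
    if [ "$online" -ge $(( total / 2 + 1 )) ]; then
        # A majority (N/2+1) survives: the group can still make progress.
        echo "cluster already has quorum"
    elif [ "$online" -ge 1 ]; then
        # Someone is alive but quorum is lost: take a survivor out of the
        # group, re-bootstrap, then rebuild the others from its backup.
        echo "stop group replication on a survivor, re-bootstrap, then rebuild the others"
    else
        echo "full-cluster recovery (see section 3.5)"
    fi
}
```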
3.4.1. Crashed nodes' data intact and recoverable in place
# Kill mysqld on nodes 10.10.30.162 and 10.10.30.163
[root@localhost ~]# ssh 10.10.30.162 "killall -9 mysqld mysqld_safe 2> /dev/null &"; ssh 10.10.30.163 "killall -9 mysqld mysqld_safe &> /dev/null &"
# Check cluster member states
root@localhost : (none) 10:15:09> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0a1e8349-2e87-11e8-8c9f-525400bdd1f2 | node3       | 3306        | ONLINE       |
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | UNREACHABLE  |
| group_replication_applier | 506759a9-2e83-11e8-b454-525400c33752 | node2       | 3306        | UNREACHABLE  |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
# Try reads and writes
## Read: still works
root@localhost : (none) 10:15:10> use sbtest
Database changed
root@localhost : sbtest 10:15:52> select * from sbtest1 limit 1;
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
| id | k       | c                                                                                                                       | pad                                                         |
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
|  1 | 2507307 | 68487932199-96439406143-93774651418-41631865787-96406072701-20604855487-25459966574-28203206787-41238978918-19503783441 | 22195207048-70116052123-74140395089-76317954521-98694025897 |
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
1 row in set (0.01 sec)
## Write: the statement blocks -- it hangs rather than returning an error
root@localhost : sbtest 10:25:43> delete from sbtest1 where id=1;
......
# Stop the group replication plugin on the surviving node
root@localhost : (none) 11:51:15> stop group_replication;
Query OK, 0 rows affected (1 min 27.08 sec)
root@localhost : sbtest 11:52:10> set global read_only=1;set global super_read_only=1;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# The uncommitted transaction is rolled back
root@localhost : sbtest 11:29:02> delete from sbtest1 where id=1;
......
ERROR 3101 (HY000): Plugin instructed the server to rollback the current transaction.
# Re-bootstrap the group from the surviving node
root@localhost : (none) 12:30:20> set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 12:33:25> start group_replication;
Query OK, 0 rows affected (2.04 sec)
root@localhost : (none) 12:33:29> set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
root@localhost : sbtest 12:28:31> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# Restart the crashed nodes
mysqld_safe --defaults-file=/etc/my.cnf &
# The other two nodes are back ONLINE
root@localhost : (none) 12:35:07> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0a1e8349-2e87-11e8-8c9f-525400bdd1f2 | node3       | 3306        | ONLINE       |
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | ONLINE       |
| group_replication_applier | 506759a9-2e83-11e8-b454-525400c33752 | node2       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
3.4.2. Crashed nodes' data lost
# Kill mysqld on nodes 10.10.30.162 and 10.10.30.163
[root@localhost ~]# ssh 10.10.30.162 "killall -9 mysqld mysqld_safe 2> /dev/null &"; ssh 10.10.30.163 "killall -9 mysqld mysqld_safe &> /dev/null &"
# Wipe the data directories on the crashed nodes (simulating data loss)
rm -rf /data/mysqldata1/{undo,innodb_ts,innodb_log,mydata,binlog,relaylog,slowlog,tmpdir}/*
# Check cluster member states
root@localhost : (none) 10:15:09> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0a1e8349-2e87-11e8-8c9f-525400bdd1f2 | node3       | 3306        | ONLINE       |
| group_replication_applier | 2d623f55-2111-11e8-9cc3-0025905b06da | node1       | 3306        | UNREACHABLE  |
| group_replication_applier | 506759a9-2e83-11e8-b454-525400c33752 | node2       | 3306        | UNREACHABLE  |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
# Try reads and writes
## Read: still works
root@localhost : (none) 10:15:10> use sbtest
Database changed
root@localhost : sbtest 10:15:52> select * from sbtest1 limit 1;
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
| id | k       | c                                                                                                                       | pad                                                         |
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
|  1 | 2507307 | 68487932199-96439406143-93774651418-41631865787-96406072701-20604855487-25459966574-28203206787-41238978918-19503783441 | 22195207048-70116052123-74140395089-76317954521-98694025897 |
+----+---------+-------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------+
1 row in set (0.01 sec)
## Write: the statement blocks -- it hangs rather than returning an error
root@localhost : sbtest 10:25:43> delete from sbtest1 where id=1;
......
# Stop the group replication plugin on the surviving node
root@localhost : (none) 11:51:15> stop group_replication;
Query OK, 0 rows affected (1 min 27.08 sec)
root@localhost : sbtest 11:52:10> set global read_only=1;set global super_read_only=1;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# Re-bootstrap the group from the surviving node
root@localhost : (none) 12:30:20> set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 12:33:25> start group_replication;
Query OK, 0 rows affected (2.04 sec)
root@localhost : (none) 12:33:29> set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
root@localhost : sbtest 12:28:31> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# Then rebuild the other nodes from a backup of this surviving node, as in section 3.1.3
3.5. Recovering from a Whole-Cluster Crash
3.5.1. Restoring from a backup
3.5.2. Recovering from the crashed nodes' datadir volumes
# Apply insert-only load with sysbench
sysbench --db-driver=mysql --time=3600 --threads=128 --report-interval=1 --mysql-socket=/home/mysql/data/mysqldata1/sock/mysql.sock --mysql-port=3306 --mysql-user=qbench --mysql-password=qbench --mysql-db=sbtest --tables=8 --table-size=5000000 oltp_insert --db-ps-mode=disable run
# Kill mysqld on all three nodes
[root@localhost ~]# ssh 10.10.30.162 "killall -9 mysqld mysqld_safe 2> /dev/null &"; ssh 10.10.30.163 "killall -9 mysqld mysqld_safe &> /dev/null &"; ssh 10.10.30.164 "killall -9 mysqld mysqld_safe &> /dev/null &"
# First node
[root@localhost binlog]# mysqld_safe --defaults-file=/etc/my.cnf --loose-group_replication_start_on_boot=off --super-read-only --read-only &
root@localhost : (none) 05:18:00> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000124
         Position: 270
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 0a1e8349-2e87-11e8-8c9f-525400bdd1f2:1-148826,2d623f55-2111-11e8-9cc3-0025905b06da:1-2,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-64879507
1 row in set (0.00 sec)

# Second node
mysqld_safe --defaults-file=/etc/my.cnf --loose-group_replication_start_on_boot=off --super-read-only --read-only &
root@localhost : (none) 05:21:04> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000002
         Position: 190
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 0a1e8349-2e87-11e8-8c9f-525400bdd1f2:1-148826,2d623f55-2111-11e8-9cc3-0025905b06da:1-2,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-64864065
1 row in set (0.00 sec)
......

# Third node
mysqld_safe --defaults-file=/etc/my.cnf --loose-group_replication_start_on_boot=off --super-read-only --read-only &
root@localhost : (none) 05:21:43> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000004
         Position: 190
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 0a1e8349-2e87-11e8-8c9f-525400bdd1f2:1-148826,2d623f55-2111-11e8-9cc3-0025905b06da:1-2,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-64869915
1 row in set (0.00 sec)
......
# Bootstrap the group from the node with the most advanced GTID set (node1 here)
root@localhost : (none) 12:30:20> set global group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 12:33:25> start group_replication;
Query OK, 0 rows affected (2.04 sec)
root@localhost : (none) 12:33:29> set global group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)
root@localhost : (none) 05:27:03> select * from performance_schema.replication_group_members where MEMBER_ID=@@server_uuid;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 0a1e8349-2e87-11e8-8c9f-525400bdd1f2 | node1       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
root@localhost : sbtest 12:28:31> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
# Start group replication on the remaining nodes
root@localhost : (none) 12:33:25> start group_replication;
Query OK, 0 rows affected (2.04 sec)
root@localhost : (none) 05:29:06> select * from performance_schema.replication_group_members where MEMBER_ID=@@server_uuid;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5d78a458-30d2-11e8-a66f-5254002a54f2 | node2       | 3306        | RECOVERING   |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
......
root@localhost : (none) 05:30:12> select * from performance_schema.replication_group_members where MEMBER_ID=@@server_uuid;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 19f71b6a-30d3-11e8-a724-525400c33752 | node2       | 3306        | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
# Once the cluster members are ONLINE, open the nodes for external reads and writes
root@localhost : sbtest 12:28:31> set global read_only=0;set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Source: https://blog.csdn.net/n88Lpo/article/details/100680243