Removing Nodes from a TiDB Distributed Database Cluster Deployed Online with TiUP

Check the current status of the cluster nodes:

tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type:       tidb
Cluster name:       hdcluster
Cluster version:    v4.0.8
SSH type:           builtin
Dashboard URL:      
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                           Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                           ----------
172.16.254.91:9093    alertmanager  172.16.254.91   9093/9094                        linux/x86_64  Up      /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000    grafana       172.16.254.91   3000                             linux/x86_64  Up      -                                  /tidb/tidb-deploy/grafana-3000
172.16.254.101:2379   pd            172.16.254.101  2379/2380                        linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.102:2379   pd            172.16.254.102  2379/2380                        linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.92:2379    pd            172.16.254.92   2379/2380                        linux/x86_64  Up|L    /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.93:2379    pd            172.16.254.93   2379/2380                        linux/x86_64  Up|UI   /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.94:2379    pd            172.16.254.94   2379/2380                        linux/x86_64  Up      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.91:9090    prometheus    172.16.254.91   9090                             linux/x86_64  Up      /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
172.16.254.103:4000   tidb          172.16.254.103  4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.95:4000    tidb          172.16.254.95   4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000    tidb          172.16.254.96   4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000    tidb          172.16.254.97   4000/10080                       linux/x86_64  Up      -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000    tiflash       172.16.254.91   9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160  tikv          172.16.254.100  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160  tikv          172.16.254.104  20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160   tikv          172.16.254.98   20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160   tikv          172.16.254.99   20160/20180                      linux/x86_64  Up      /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 17


Now remove, one at a time, the nodes that were added in the previous article: two pd_servers, one tidb_server, and one tikv_server.
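Every removal below follows the same pattern: scale in one node with `tiup cluster scale-in`, then re-check the topology before touching the next one. A minimal sketch of that loop, where `<cluster-name>` and `<host:port>` are placeholders to substitute at each step:

# General pattern: scale in one node at a time, verifying in between.
tiup cluster scale-in <cluster-name> --node <host:port>
tiup cluster display <cluster-name>    # the node should be gone, or Tombstone for TiKV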

Remove the tikv_server:

tiup cluster scale-in hdcluster --node 172.16.254.104:20160
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.104:20160
This operation will delete the 172.16.254.104:20160 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.104:20160] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
The component `tikv` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config pd -> 172.16.254.101:2379 ... Done
  - Regenerate config pd -> 172.16.254.102:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
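Note the tombstone message above: unlike stateless components, a TiKV store is retired asynchronously. PD first migrates the store's Region replicas away, the store passes through the Offline state, and only then does it become Tombstone and eligible for pruning. Progress can be watched with pd-ctl; the sketch below assumes this TiUP version exposes it as `tiup ctl pd` (newer releases use `tiup ctl:v4.0.8 pd`) and uses one of the surviving PD endpoints:

# List all stores with their states (Up / Offline / Tombstone).
tiup ctl pd -u http://172.16.254.92:2379 store
# Once store 172.16.254.104:20160 reports Tombstone, it is safe to prune.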


Remove the pd_servers, starting with 172.16.254.101:

tiup cluster scale-in hdcluster --node 172.16.254.101:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.101:2379
This operation will delete the 172.16.254.101:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.101:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
    Stopping instance 172.16.254.101
    Stop pd 172.16.254.101:2379 success
Destroying component pd
Destroying instance 172.16.254.101
Destroy 172.16.254.101 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.101
    Destroying instance 172.16.254.101
Destroy monitored on 172.16.254.101 success
Delete public key 172.16.254.101
Delete public key 172.16.254.101 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.101:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config pd -> 172.16.254.102:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully

Then remove the second pd_server, 172.16.254.102:

tiup cluster scale-in hdcluster --node 172.16.254.102:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.102:2379
This operation will delete the 172.16.254.102:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.102:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
    Stopping instance 172.16.254.102
    Stop pd 172.16.254.102:2379 success
Destroying component pd
Destroying instance 172.16.254.102
Destroy 172.16.254.102 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.102
    Destroying instance 172.16.254.102
Destroy monitored on 172.16.254.102 success
Delete public key 172.16.254.102
Delete public key 172.16.254.102 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.102:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tidb -> 172.16.254.103:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
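Both extra PD nodes are now gone, leaving the original three-member PD quorum. As a cross-check, pd-ctl's `member` subcommand lists the remaining members directly (same invocation assumptions as above):

# Expect only 172.16.254.92, .93 and .94 in the member list.
tiup ctl pd -u http://172.16.254.92:2379 member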


Remove the tidb_server:

tiup cluster scale-in hdcluster --node 172.16.254.103:4000
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.103:4000
This operation will delete the 172.16.254.103:4000 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.103:4000] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component tidb
    Stopping instance 172.16.254.103
    Stop tidb 172.16.254.103:4000 success
Destroying component tidb
Destroying instance 172.16.254.103
Destroy 172.16.254.103 success
- Destroy tidb paths: [/tidb/tidb-deploy/tidb-4000 /etc/systemd/system/tidb-4000.service /tidb/tidb-deploy/tidb-4000/log]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.103
    Destroying instance 172.16.254.103
Destroy monitored on 172.16.254.103 success
Delete public key 172.16.254.103
Delete public key 172.16.254.103 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.103:4000'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
  - Regenerate config pd -> 172.16.254.92:2379 ... Done
  - Regenerate config pd -> 172.16.254.93:2379 ... Done
  - Regenerate config pd -> 172.16.254.94:2379 ... Done
  - Regenerate config tikv -> 172.16.254.98:20160 ... Done
  - Regenerate config tikv -> 172.16.254.99:20160 ... Done
  - Regenerate config tikv -> 172.16.254.100:20160 ... Done
  - Regenerate config tikv -> 172.16.254.104:20160 ... Done
  - Regenerate config tidb -> 172.16.254.95:4000 ... Done
  - Regenerate config tidb -> 172.16.254.96:4000 ... Done
  - Regenerate config tidb -> 172.16.254.97:4000 ... Done
  - Regenerate config tiflash -> 172.16.254.91:9000 ... Done
  - Regenerate config prometheus -> 172.16.254.91:9090 ... Done
  - Regenerate config grafana -> 172.16.254.91:3000 ... Done
  - Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
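tidb-server is stateless, so its removal completes immediately, as the log shows. If applications reach the cluster through a load balancer or proxy, 172.16.254.103:4000 should also be dropped from its backend list. A surviving endpoint can be spot-checked with any MySQL client (the root account and password prompt here are placeholders):

# Verify a remaining TiDB endpoint still accepts connections.
mysql -h 172.16.254.95 -P 4000 -u root -p -e "SELECT VERSION();"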


Check the cluster node status again:

tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type:       tidb
Cluster name:       hdcluster
Cluster version:    v4.0.8
SSH type:           builtin
Dashboard URL:      
ID                    Role          Host            Ports                            OS/Arch       Status     Data Dir                           Deploy Dir
--                    ----          ----            -----                            -------       ------     --------                           ----------
172.16.254.91:9093    alertmanager  172.16.254.91   9093/9094                        linux/x86_64  Up         /tidb/tidb-data/alertmanager-9093  /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000    grafana       172.16.254.91   3000                             linux/x86_64  Up         -                                  /tidb/tidb-deploy/grafana-3000
172.16.254.92:2379    pd            172.16.254.92   2379/2380                        linux/x86_64  Up|L       /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.93:2379    pd            172.16.254.93   2379/2380                        linux/x86_64  Up|UI      /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.94:2379    pd            172.16.254.94   2379/2380                        linux/x86_64  Up         /tidb/tidb-data/pd-2379            /tidb/tidb-deploy/pd-2379
172.16.254.91:9090    prometheus    172.16.254.91   9090                             linux/x86_64  Up         /tidb/tidb-data/prometheus-9090    /tidb/tidb-deploy/prometheus-9090
172.16.254.95:4000    tidb          172.16.254.95   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000    tidb          172.16.254.96   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000    tidb          172.16.254.97   4000/10080                       linux/x86_64  Up         -                                  /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000    tiflash       172.16.254.91   9000/8123/3930/20170/20292/8234  linux/x86_64  Up         /tidb/tidb-data/tiflash-9000       /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160  tikv          172.16.254.100  20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160  tikv          172.16.254.104  20160/20180                      linux/x86_64  Tombstone  /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160   tikv          172.16.254.98   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160   tikv          172.16.254.99   20160/20180                      linux/x86_64  Up         /tidb/tidb-data/tikv-20160         /tidb/tidb-deploy/tikv-20160
Total nodes: 14
There are some nodes can be pruned: 
    Nodes: [172.16.254.104:20160]
    You can destroy them with the command: `tiup cluster prune hdcluster`

The scale-in is complete.
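As the display output notes, the TiKV node stays registered as Tombstone until its metadata is cleaned up. Once Region migration has finished, run the prune command suggested by TiUP to destroy it for good:

tiup cluster prune hdcluster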






Source: ITPUB blog, http://blog.itpub.net/30135314/viewspace-2758372/

