
k8s Deployment from Binaries, Part 01

Building a k8s cluster from binaries

Cluster setup:


1. Production k8s platform planning

Course notes (老男孩 training): https://www.cnblogs.com/yanyanqaq/p/12607713.html

Instructor's blog: https://blog.stanley.wang/categories/Kubernetes%E5%AE%B9%E5%99%A8%E4%BA%91%E6%8A%80%E6%9C%AF%E4%B8%93%E9%A2%98/

https://blog.stanley.wang/

https://www.cnblogs.com/yanyanqaq/p/12607713.html#242%E9%85%8D%E7%BD%AE


Production environment planning:

    master: 3 nodes recommended

    etcd: at least 3 nodes, always an odd count (3, 5, 7) so that a majority quorum can always be elected and tied (1:1) votes are avoided


1. Lab environment planning and cluster node initialization

    Cluster plan:

        3 machines: 1 master, 2 nodes

        master: k8s-master1 192.168.31.63

        worker: k8s-node1 192.168.31.65

        worker: k8s-node2 192.168.31.66

        k8s version: 1.16

        OS version: CentOS 7.7

        Install method: binary install

        This plan follows the 老男孩 course; the actual lab below uses the 192.168.208.0/24 hosts listed next.

Host configuration:

192.168.208.200 cahost
192.168.208.11  lb1
192.168.208.12  lb2
192.168.208.21  node1
192.168.208.22  node2

Basic initialization on all nodes:

    1. Disable the firewall

        systemctl stop firewalld

        systemctl disable firewalld

    2. Disable SELinux

        setenforce 0

        vim /etc/selinux/config, set SELINUX=disabled

    3. Set hostnames

    4. Name resolution

        /etc/hosts

    5. Time synchronization

        Pick one node as the NTP server; the others sync from it as clients.

        master as the server (NTP uses UDP port 123):

        # yum install chrony -y
        # vim /etc/chrony.conf
            server 127.127.1.0 iburst    # upstream server (local clock)
            allow 192.168.31.0/24        # which clients are allowed to sync
            local stratum 10
        # systemctl start chronyd
        # systemctl enable chronyd

        node (client) configuration:

        # yum install chrony -y
        # vim /etc/chrony.conf
            server 192.168.31.63 iburst
        # systemctl start chronyd
        # systemctl enable chronyd
        # chronyc sources    # verify

6. Disable swap

    (an enabled swap partition can prevent kubelet from starting)

    swapoff -a

    vim /etc/fstab, comment out the swap line at the end

    Verify with free -m

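All of the above can be scripted; a rough consolidated sketch (adjust the hostname, IPs and /etc/hosts entries to the plan above):

# node-init sketch: firewall, SELinux, hosts, chrony, swap
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
hostnamectl set-hostname node1                     # per host
cat >> /etc/hosts <<'EOF'
192.168.208.200 cahost
192.168.208.11  lb1
192.168.208.12  lb2
192.168.208.21  node1
192.168.208.22  node2
EOF
yum install -y chrony && systemctl enable --now chronyd
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
free -m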
The lab below uses the 老男孩 course environment:

Management host: 192.168.208.200

LB1: 192.168.208.11

LB2: 192.168.208.12

node1: 192.168.208.21

node2: 192.168.208.22

CA (certificate authority)

CA certificates (a common interview topic)

    Encryption types:

        Symmetric encryption: the same key encrypts and decrypts

        Asymmetric encryption: the sender encrypts with the public key, the matching private key decrypts; i.e. a key pair is used

        One-way hashing: can only go one way, cannot be reversed, e.g. md5

    A PKI consists of the following parts:

        End entities

        Registration authority (RA)

        Certification authority (CA)

        Certificate revocation list (CRL)

        Certificate repository

    Where SSL certificates come from:

        Purchased from a third-party CA

        Self-signed: build your own CA with openssl or cfssl and issue certificates from it


2. Prepare the certificate tooling

Download on 192.168.208.200 from https://pkg.cfssl.org/

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo

chmod a+x /usr/bin/cfssl*
mkdir -p /opt/certs

Create the JSON config for the CA certificate signing request (CSR)

vim /opt/certs/ca-csr.json
{
    "CN":"Tjcom", #域名
    "hosts":[
    
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",    #国家
            "ST": "guangdong", #州 省
            "L": "shengzheng",  # 市
            "O": "tj",            # 组织
            "OU": "ops"             #单位
        }
    ],
    "ca":{
        "expiry": "175200h"
    }
}

Generate the root certificate

[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
[root@CA-Host certs]# ll
total 16
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
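The cfssl-certinfo tool downloaded earlier can be used to sanity-check the new root certificate (optional):

[root@CA-Host certs]# cfssl-certinfo -cert ca.pem    # shows subject (Tjcom/tj/ops) and the 20-year expiry from "expiry": "175200h"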

3. Prepare the Docker environment (install on all service nodes)

curl -fsSL https://get.docker.com |bash -s docker --mirror Aliyun
mkdir -p /data/docker  /etc/docker

node01: 192.168.208.21

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

node02: 192.168.208.22

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.22.1/24",  #这里地址不一样
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

Management node: 192.168.208.200

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.200.1/24",  #这里地址不一样
  "exec-opts": ["vative.cgroupdriver=systemd"],
  "live-restore": true
}
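After writing daemon.json on each machine, docker has to be restarted for it to take effect; checking the cgroup driver at this point would also catch the "vative" typo above (a quick sketch):

systemctl restart docker
docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd
docker info | grep -iA2 'insecure'       # 192.168.208.200 should be listed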

4. Prepare the harbor private registry on the management host

    Version 1.7.6 or newer is recommended: https://github.com/goharbor/harbor/releases

https://github.com/goharbor/harbor/releases/download/v1.10.0/harbor-offline-installer-v1.10.0.tgz

    # yum install docker-compose

Unpack it, then in the harbor config (harbor.yml) change the port to 180 and set the hostname, set the admin password to 123456, comment out the https/443 section, and start it.

After it is up, put an nginx reverse proxy in front of it:

/etc/nginx/conf.d/harbor.conf
server {
  listen      80;
  server_name localhost;
  client_max_body_size 1000m;
  location / {
     proxy_pass http://127.0.0.1:180;
  }

}

After harbor starts, log in to the web UI and create a public project named "public".

docker pull nginx:1.7.9
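To confirm the registry and the public project work end to end, one option is to retag the pulled nginx image and push it back through the proxy (a sketch; uses the admin/123456 account set above):

docker login 192.168.208.200                                   # admin / 123456
docker tag nginx:1.7.9 192.168.208.200/public/nginx:v1.7.9
docker push 192.168.208.200/public/nginx:v1.7.9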

Start installing k8s

1. Deploy the etcd cluster

Hostname  Role      IP
LB2       leader    192.168.208.12
node01    follower  192.168.208.21
node02    follower  192.168.208.22

etcd download: https://github.com/etcd-io/etcd/releases (stay at 3.3 or lower; newer versions are not recommended here)

https://github.com/etcd-io/etcd/releases/download/v3.2.30/etcd-v3.2.30-linux-amd64.tar.gz

Step 1: create the etcd certificates on the .200 node
[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# vim ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer":{
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
         }
    }

}
[root@CA-Host certs]# vim etcd-peer-csr.json
{
    "CN":"k8s-etcd",
    "hosts": [
        "192.168.208.11",
        "192.168.208.12",
        "192.168.208.21",
        "192.168.208.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "OPS"
        }
    ]
}

Generate the etcd certificates

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json|cfssl-json -bare etcd-peer
[root@CA-Host certs]# ll
total 36
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1070 May  8 15:43 etcd-peer.csr
-rw-r--r--. 1 root root  266 May  8 15:38 etcd-peer-csr.json
-rw-------. 1 root root 1675 May  8 15:43 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 15:43 etcd-peer.pem
Step 2: install etcd on the first machine, 192.168.208.12
Unpack it into /opt
[root@k8s-L2 ~]# useradd -s /sbin/nologin -M etcd
[root@k8s-L2 tool]# tar -xzvf etcd-v3.2.30-linux-amd64.tar.gz -C /opt/
[root@k8s-L2 opt]# mv etcd-v3.2.30-linux-amd64 etcd-v3.2.30
[root@k8s-L2 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@k8s-L2 opt]# ll
total 0
lrwxrwxrwx. 1 root      root       18 May  8 16:03 etcd -> /opt/etcd-v3.2.30/
drwxr-xr-x. 3 630384594 600260513 123 Apr  2 03:01 etcd-v3.2.30
Create three directories
[root@k8s-L2 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@k8s-L2 certs]# pwd
/opt/etcd/certs
Copy the certificates
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/ca.pem ./
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/etcd-peer.pem ./
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/etcd-peer-key.pem ./
[root@k8s-L2 certs]# ll
total 12
-rw-r--r--. 1 root root 1346 May  8 16:06 ca.pem
-rw-------. 1 root root 1675 May  8 16:10 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 16:08 etcd-peer.pem

Create the startup script
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.12:2380 \
       --listen-client-urls https://192.168.208.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.12:2380 \
       --advertise-client-urls https://192.168.208.12:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
 
 Change ownership
[root@k8s-L2 etcd]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@k8s-L2 etcd]# chown -R etcd.etcd /data/etcd/
[root@k8s-L2 etcd]# chown -R etcd.etcd /data/logs/
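Since supervisord (next step) executes the script directly, it also needs the execute bit, which the steps above do not set; adding it here likely avoids the startup failure mentioned below:

[root@k8s-L2 etcd]# chmod +x /opt/etcd/etcd-server-startup.sh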

Use supervisor to run the script
[root@k8s-L2 etcd]# yum install supervisor -y
[root@k8s-L2 etcd]# systemctl start supervisord
[root@k8s-L2 etcd]# systemctl enable supervisord
Create the supervisord ini config file

[root@k8s-L2 etcd]# vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-208-12]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

[root@k8s-L2 etcd]# supervisorctl update

This may fail here; you can also just run the script directly. Whenever a new ini file is added, make sure to reload/restart supervisord.


An alternative way to manage the startup script

Service-management script locations:
centos7: systemd units
/usr/lib/systemd/system

centos6:
/etc/rc.d/rcN.d

Step 3: copy the prepared etcd to the other nodes

To avoid repeating the setup, copy /opt/etcd as deployed on 192.168.208.12 to 192.168.208.21 and 192.168.208.22

[root@k8s-L2 opt]# scp -r etcd-v3.2.30 192.168.208.21:/opt/
[root@k8s-L2 opt]# scp -r etcd-v3.2.30 192.168.208.22:/opt/

Additional steps on 192.168.208.21:

[root@node01 ~]# useradd -s /sbin/nologin -M etcd
[root@node01 ~]# cd /opt
[root@node01 opt]# ls
containerd  etcd-v3.2.30  rh
[root@node01 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@node01 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@node01 opt]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@node01 opt]# chown -R etcd.etcd /data/etcd/
[root@node01 opt]# chown -R etcd.etcd /data/logs/
Edit the startup script
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-21 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.21:2380 \
       --listen-client-urls https://192.168.208.21:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.21:2380 \
       --advertise-client-urls https://192.168.208.21:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

Steps on the third etcd node, 192.168.208.22

[root@node02 ~]# useradd -s /sbin/nologin -M etcd
[root@node02 ~]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@node02 ~]# 
[root@node02 ~]# cd /opt
[root@node02 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@node02 opt]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@node02 opt]# chown -R etcd.etcd /data/etcd/
[root@node02 opt]# chown -R etcd.etcd /data/logs/
[root@node02 opt]# 

Edit the startup script
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-22 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.22:2380 \
       --listen-client-urls https://192.168.208.22:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.22:2380 \
       --advertise-client-urls https://192.168.208.22:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

Now run sh /opt/etcd/etcd-server-startup.sh on every node

[root@k8s-L2 opt]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.12:2379     0.0.0.0:*               LISTEN      13592/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      13592/./etcd        
tcp        0      0 192.168.208.12:2380     0.0.0.0:*               LISTEN      13592/./etcd  
[root@node01 ~]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.21:2379     0.0.0.0:*               LISTEN      13732/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      13732/./etcd        
tcp        0      0 192.168.208.21:2380     0.0.0.0:*               LISTEN      13732/./etcd   

[root@node02 ~]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.22:2379     0.0.0.0:*               LISTEN      14118/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      14118/./etcd        
tcp        0      0 192.168.208.22:2380     0.0.0.0:*               LISTEN      14118/./etcd 

Check etcd cluster health (can be run on any member):

[root@k8s-L2 etcd]# pwd
/opt/etcd
[root@k8s-L2 etcd]# ./etcdctl cluster-health
member 27335ed5e116ecf is healthy: got healthy result from http://127.0.0.1:2379
member 9fa9f37eb6f9bb63 is healthy: got healthy result from http://127.0.0.1:2379
member e00eea0c411d3da4 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

All three members are healthy; isLeader=true marks the current leader

[root@k8s-L2 etcd]# ./etcdctl member list
27335ed5e116ecf: name=etcd-server-208-22 peerURLs=https://192.168.208.22:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.22:2379 isLeader=false
9fa9f37eb6f9bb63: name=etcd-server-208-12 peerURLs=https://192.168.208.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.12:2379 isLeader=true
e00eea0c411d3da4: name=etcd-server-208-21 peerURLs=https://192.168.208.21:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.21:2379 isLeader=false
ln -s /opt/etcd/etcdctl /usr/sbin/
Debugging supervisor errors
[root@node001 etcd]# supervisorctl tail -f etcd-server-208-21

2. Deploy the k8s api-server

Download (kubernetes release binaries; can be hard to download):

https://dl.k8s.io/v1.15.11/kubernetes-server-linux-amd64.tar.gz

Step 1:

    Issue a client certificate for api-server-to-etcd traffic: the etcd cluster is the server side, the api-server is the client.

[root@CA-Host certs]# vim /opt/certs/client-csr.json
{
        "CN":"k8s-node",
        "hosts":[
        
        ],
        "key":{
                "algo":"rsa",
                "size":2048
        },
        "names":[
                {
                        "C":"CN",
                        "ST": "guangdong",
                        "L": "shengzheng",
                        "O": "tj",
                        "OU": "ops"
                }
        ]
}

Generate the client certificate

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client

Check the client certificate; this is the cert the api-server uses when talking to etcd

[root@CA-Host certs]# ll
total 52
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1001 May  8 20:40 client.csr
-rw-r--r--. 1 root root  190 May  8 20:37 client-csr.json
-rw-------. 1 root root 1679 May  8 20:40 client-key.pem
-rw-r--r--. 1 root root 1371 May  8 20:40 client.pem

Step 2:

    Generate the certificate for the api-server's own TLS endpoint

[root@CA-Host certs]# vim apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.208.10",      #api-server 可能存在的地址, 也可以没有, 可以写上去,有是一定要写的
        "192.168.208.21",
        "192.168.208.22",
        "192.168.208.23"        
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "ops"
        }
    ]

}

Generate the api-server certificate; every node talking to the server uses this set of certificates

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver

[root@CA-Host certs]# ll
total 68
-rw-r--r--. 1 root root 1257 May  8 20:54 apiserver.csr
-rw-r--r--. 1 root root  488 May  8 20:52 apiserver-csr.json
-rw-------. 1 root root 1679 May  8 20:54 apiserver-key.pem
-rw-r--r--. 1 root root 1606 May  8 20:54 apiserver.pem
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
Step 3:

    Unpack kubernetes on node01

[root@node01 tool]# tar -xzvf kubernetes-server-linux-amd64.tar.gz -C /opt   
[root@node01 tool]# cd /opt/
[root@node01 opt]# ls
containerd  etcd  etcd-v3.2.30  kubernetes  rh
[root@node01 opt]# mv kubernetes kubernetes.v1.15.11
[root@node01 opt]# ln -s /opt/kubernetes.v1.15.11/ /opt/kubernetes
[root@node01 opt]# cd /opt/kubernetes/server/bin
[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# mkdir cert
[root@node01 bin]# cd cert
# copy the certificates
[root@node01 cert]# scp 192.168.208.200:/opt/certs/ca.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/ca-key.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/client.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/client-key.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/apiserver.pem ./ 
[root@node01 cert]# scp 192.168.208.200:/opt/certs/apiserver-key.pem ./

# create the apiserver config files
[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf
vim audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
------------------------------------------------
Create the api-server startup script
vim /opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://192.168.208.12:2379,https://192.168.208.21:2379,https://192.168.208.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
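The script also needs the execute bit and the log directories it references, which are not created above; a sketch before the first start:

[root@node01 bin]# chmod a+x kube-apiserver.sh
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver
[root@node01 bin]# sh kube-apiserver.sh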

Step 4:

    Copy the configured kubernetes directory from node01 to node02

[root@node01 opt]# scp -r kubernetes.v1.15.11 192.168.208.22:/opt

Configuration on node02

[root@node02 opt]# ln -s /opt/kubernetes.v1.15.11/ /opt/kubernetes
[root@node02 opt]# cd /opt/kubernetes/server/bin

# start the apiserver
[root@node02 bin]# sh kube-apiserver.sh

Check that the service is listening

node01

[root@node01 cfg]# netstat -luntp |grep kube-apiser
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      18688/./kube-apiser 
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser
[root@node02 bin]# netstat -luntp |grep kube-apiser
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      18688/./kube-apiser 
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser
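A quick sanity check against the local insecure port (8080 serves plain HTTP on localhost in this version):

[root@node01 ~]# curl http://127.0.0.1:8080/healthz    # expect: ok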

3. Set up the reverse proxy (layer 4 and layer 7)


etcd uses 2379/2380; the LB nodes are 192.168.208.11 and 192.168.208.12

The VIP 192.168.208.10 is used to reach the apiserver on port 6443 (6443 is the kube-apiserver port)

[root@k8s_lb1]# netstat -luntp |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser 
Step 1:
[root@k8s_lb1]# yum install nginx
[root@k8s_lb2]# yum install nginx
Configure the L4 reverse proxy
[root@k8s_lb1]# vim /etc/nginx/nginx.conf
# append at the end of nginx.conf
stream {
    upstream kube-apiserver {
        server 192.168.208.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.208.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
[root@k8s_lb2]# vim /etc/nginx/nginx.conf
# append at the end of nginx.conf
stream {
    upstream kube-apiserver {
        server 192.168.208.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.208.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

[root@k8s_lb1]# systemctl restart nginx
[root@k8s_lb2]# systemctl restart nginx
[root@k8s_lb1]# netstat -luntp |grep nginx
tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      19031/nginx: master
[root@k8s_lb2]# netstat -luntp |grep nginx
tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      19031/nginx: master
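To confirm the stream proxy really reaches an apiserver, curl port 7443; the client presents no certificate, so either an "ok" or a 401/403 JSON answer from kube-apiserver already proves the layer-4 path works (sketch):

[root@k8s_lb1]# curl -k https://127.0.0.1:7443/healthz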
Step 2:

Install keepalived

[root@k8s_lb1]#  yum install keepalived
[root@k8s_lb2]# yum install keepalived

Create the port-check script

[root@k8s_lb1]# vi /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# usage: in the keepalived config, define a vrrp_script block, e.g.
# vrrp_script check_port {
#     script "/etc/keepalived/check_port.sh 6379"   # port to check
#     interval 2                                    # how often to run the check, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi

[root@k8s_lb1]# chmod a+x /etc/keepalived/check_port.sh
-------------------------------------------------------------------
[root@k8s_lb2]# vi /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# usage: in the keepalived config, define a vrrp_script block, e.g.
# vrrp_script check_port {
#     script "/etc/keepalived/check_port.sh 6379"   # port to check
#     interval 2                                    # how often to run the check, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi
[root@k8s_lb2]# chmod a+x /etc/keepalived/check_port.sh
Step 3:

    Configure the keepalived high-availability instances

[root@k8s_lb1]#  vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    mcast_src_ip 192.168.208.21
    nopreempt   # non-preemptive: the VIP does not automatically move back
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.208.10

    }
}

-------------------------------------------------
[root@k8s_lb2]# vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 1
    mcast_src_ip 192.168.208.22
    nopreempt  # non-preemptive mode
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.208.10

    }
}

[root@k8s_lb1]#  systemctl restart keepalived
[root@k8s_lb2]# systemctl restart keepalived

Check whether the virtual IP is up
[root@k8s_lb1]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.208.21/24 brd 192.168.208.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.208.10/32 scope global ens33
       valid_lft forever preferred_lft forever

192.168.208.10 is up, so high availability is in place

SELinux must be disabled for this to work
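A simple failover test (a sketch based on the config above): stop nginx on the node holding the VIP so chk_nginx fails, watch the VIP move, and remember that with nopreempt it will not move back until keepalived is restarted on the node now holding it.

[root@k8s_lb1]# systemctl stop nginx        # check_port 7443 fails, priority drops by 20
[root@k8s_lb2]# ip addr show ens33          # 192.168.208.10 should now appear here
[root@k8s_lb1]# systemctl start nginx       # the VIP stays on lb2 because of nopreempt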

4. Deploy kube-controller-manager

Cluster plan (note: these two components do not need their own certificates; they talk to the local api-server)

Hostname  Role                IP
node01    controller-manager  192.168.208.21
node02    controller-manager  192.168.208.22

Deployment is shown for node01; node02 is done the same way

Create the startup script on node01

[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# ll
total 885348
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter


[root@node01 bin]# mkdir -p /data/logs/kubernetes/
[root@node01 bin]# vim kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
  
[root@node01 bin]# chmod a+x kube-controller-manager.sh
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
# copy the same script to node02
[root@node01 bin]# scp kube-controller-manager.sh 192.168.208.22:/opt/kubernetes/server/bin

Steps on node02

[root@node02 /]# cd /opt/kubernetes/server/bin/
[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 May  9 01:16 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  9 01:17 cert
-rwxr-xr-x. 1 root root 100655136 May  9 01:17 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:21 conf
-rwxr-xr-x. 1 root root 200816272 May  9 01:17 hyperkube
-rwxr-xr-x. 1 root root  40198592 May  9 01:17 kubeadm
-rwxr-xr-x. 1 root root 164616608 May  9 01:17 kube-apiserver
-rwxr-xr-x. 1 root root      1093 May  9 01:17 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 May  9 01:17 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:14 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 May  9 01:17 kubectl
-rwxr-xr-x. 1 root root 119755824 May  9 01:17 kubelet
-rwxr-xr-x. 1 root root  36995680 May  9 01:17 kube-proxy
-rwxr-xr-x. 1 root root  38794336 May  9 01:17 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 May  9 01:17 mounter
[root@node02 bin]# mkdir -p /data/logs/kubernetes/
[root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

Then start the script on both nodes.

Run it under supervisorctl:

[program:kube-controller-manager-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
[root@lb002 bin]# supervisorctl update
kube-controller-manager-208-22: added process group
[root@lb002 bin]# 
[root@lb002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 3:23:13
kube-apiserver-7-21              RUNNING   pid 13998, uptime 2:44:22
kube-controller-manager-208-22   STARTING

5. Deploy kube-scheduler

Like controller-manager, it needs no certificate of its own because it talks to the api-server on the local machine

Hostname  Role            IP
node01    kube-scheduler  192.168.208.21
node02    kube-scheduler  192.168.208.22

Deployment is shown for node01; node02 is done the same way

[root@node01 ~]# cd /opt/kubernetes/server/bin/
[root@node01 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:11 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter

# create the startup script
[root@node01 bin]# vim kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
  
[root@node01 bin]#mkdir -p  /data/logs/kubernetes/kube-scheduler
[root@node01 bin]# 
[root@node01 bin]# chmod a+x kube-scheduler.sh 

# copy to node02
[root@node01 bin]# scp kube-scheduler.sh 192.168.208.22:/opt/kubernetes/server/bin/

Steps on node02

[root@node02 ~]# cd /opt/kubernetes/server/bin/
[root@node02 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:11 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root       143 May  9 12:30 kube-scheduler.sh
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter

[root@node02 bin]#mkdir -p  /data/logs/kubernetes/kube-scheduler

Start it with sh kube-scheduler.sh

Here supervisorctl is used instead

Start:

[root@lb002 bin]# vi /etc/supervisord.d/kube-scheduler.ini      

[program:scheduler-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; retstart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 3:37:12
kube-apiserver-208-21            RUNNING   pid 16089, uptime 2:52:09
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 0:07:01
scheduler-208-21                 STARTING

Now the cluster state can be checked.

Create a symlink for the kubectl command

[root@node01 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@node02 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

Check cluster component health

[root@node01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"} 
[root@node02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok 

At this point the control plane is deployed.

Setting up the compute (worker) node services

1. Deploy kubelet on the nodes

Cluster plan

Hostname  Role     IP
node01    kubelet  192.168.208.21
node02    kubelet  192.168.208.22

Deploy the kubelet service on node01 first; it needs ca.pem, server.pem and kubelet.pem

Step 1: sign the certificate on the CA server

Generate the certificate:

vi kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "192.168.208.21",
    "192.168.208.22",
    "192.168.208.23",
    "192.168.208.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[ certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

1. Generate the kubelet.kubeconfig config file

    First copy the certificates

[root@CA-Host certs]# scp kubelet.pem kubelet-key.pem node001:/opt/kubernetes/server/bin/cert
root@node001's password: 
kubelet.pem                                                                                                                 100% 1476    68.0KB/s   00:00    
kubelet-key.pem                                                                                                             100% 1679   211.0KB/s   00:00    
[root@CA-Host certs]# scp kubelet.pem kubelet-key.pem node002:/opt/kubernetes/server/bin/cert
root@node002's password: 
kubelet.pem                                                                                                                 100% 1476     1.6MB/s   00:00    
kubelet-key.pem                                                                                                             100% 1679     1.4MB/s   00:00

    Go into the conf directory and generate the kubeconfig in 4 steps

#1、set-cluster
[root@lb002 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@lb002 conf]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
   --embed-certs=true \
   --server=https://192.168.208.10:7443 \
   --kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
[root@lb002 conf]# ls
audit.yaml  kubelet.kubeconfig
#2 、set-credentials

[root@lb002 conf]# kubectl config set-credentials k8s-node --client-certificate=/opt/kubernetes/server/bin/cert/client.pem --client-key=/opt/kubernetes/server/bin/cert/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.
#3、set-context
 
[root@lb002 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
#4、use-context

[root@lb002 conf]#kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
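The result of the four steps can be inspected before distributing it (optional):

[root@lb002 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig    # cluster myk8s, user k8s-node, context myk8s-context; embedded certs show as DATA+OMITTED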

2. Create the k8s-node.yaml resource config

[root@lb002 conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

Create the binding for the k8s-node user so it has cluster node permissions (clusterrolebinding)
[root@lb002 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

Query the binding
[root@lb002 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   93s

......

6. Copy the generated kubelet.kubeconfig to the same directory on node02

[root@node01 conf]# scp kubelet.kubeconfig 192.168.208.22:/opt/kubernetes/server/bin/conf

7. Check on node02

[root@node02 conf]# ll
total 12
-rw-r--r--. 1 root root 2289 May  9 01:17 audit.yaml
-rw-------. 1 root root 6212 May  9 13:38 kubelet.kubeconfig

There is no need to create the k8s-node resource again here; it already exists.

8. Create the kubelet startup script

    1.1: Prepare the pause base image

    Done on the main ops host, CA-Host 192.168.208.200

    Pull the pause base image, retag it and push it to harbor

[root@CA-Host certs]# docker pull kubernetes/pause
[root@CA-Host certs]# docker images |grep kubernetes/pause
[root@CA-Host certs]# docker tag kubernetes/pause:latest 192.168.208.200/public/pause:latest
[root@CA-Host certs]# docker login 192.168.208.200   # admin / 123456
[root@CA-Host certs]# docker push 192.168.208.200/public/pause:latest
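Before pointing kubelet at it, it is worth confirming a node can pull the image back through the insecure-registry entry in its daemon.json (a sketch):

[root@node01 ~]# docker pull 192.168.208.200/public/pause:latest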
    The pause image initializes the shared namespaces (network, IPC, UTS) that the business containers in a pod join.

    1.2 Create the kubelet startup script on node01

[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# vim kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override node01 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image 192.168.208.200/public/pause:latest \
  --root-dir /data/kubelet
# note: --hostname-override is this node's name; --pod-infra-container-image points at the pause image in the local harbor (comments must not follow a trailing backslash)

    Fix permissions and create the directories the script needs

[root@node01 bin]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet   /data/kubelet

    1.3 Create the kubelet startup script on node02

[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# vim kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override node02 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image 192.168.208.200/public/pause:latest \
  --root-dir /data/kubelet

Fix permissions and create the directories the script needs

[root@node02 bin]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 
[root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet   /data/kubelet

    1.4 Start kubelet.sh

Here it is started with supervisorctl:

[root@node002 bin]# cat /etc/supervisord.d/kube-kubelet.ini 
[program:kube-kubelet-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                                  ; retstart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

Start it:

[root@node002 bin]# supervisorctl update
[root@node002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 4:20:24
kube-apiserver-7-21              RUNNING   pid 13998, uptime 3:41:33
kube-controller-manager-208-22   RUNNING   pid 15506, uptime 0:57:15
kube-kubelet-208-22              STARTING  
scheduler-208-22                 RUNNING   pid 15626, uptime 0:49:14

The other node is done the same way: copy the startup script and ini over and start it

[root@node002 bin]# scp kubelet.sh node001:/opt/kubernetes/server/bin/
root@node001's password: 
kubelet.sh         

Edit kubelet.sh and adjust the host-specific settings (hostname-override)

[root@node002 bin]# scp -r /etc/supervisord.d/kube-kubelet.ini node001:/etc/supervisord.d/ 
root@node001's password: 
kube-kubelet.ini    

Edit the ini and change the program name:
[root@node001 bin]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-208-21]

Start it:
[root@node001 bin]# supervisorctl update
kube-kubelet-208-21: added process group
[root@node001 bin]#
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 4:30:39
kube-apiserver-208-21            RUNNING   pid 16089, uptime 3:45:36
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 1:00:28
kube-kubelet-208-21              STARTING
scheduler-208-21                 RUNNING   pid 17743, uptime 0:53:34

    Verify the cluster nodes:

[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node01   Ready    <none>   10m     v1.15.11
node02   Ready    <none>   9m22s   v1.15.11
[root@node02 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node01   Ready    <none>   10m     v1.15.11
node02   Ready    <none>   9m22s   v1.15.11

    Change the ROLES column; it is just a label and can be set freely

[root@node01 ~]# kubectl label node node01 node-role.kubernetes.io/master=
node/node01 labeled
[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    master   15m   v1.15.11
node02   Ready    <none>   15m   v1.15.11

[root@node01 ~]# kubectl label node node01 node-role.kubernetes.io/node=  
node/node01 labeled
[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES         AGE   VERSION
node01   Ready    master,node   16m   v1.15.11
node02   Ready    <none>        15m   v1.15.11

Startup error seen:

failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Fix:
Check /etc/docker/daemon.json
The native.cgroupdriver=systemd option was misspelled as "vative", so docker fell back to cgroupfs; correct it:
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
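After correcting daemon.json, both docker and the kubelet process under supervisor need a restart to pick up the change (sketch; the program name matches the ini created above):

[root@node001 ~]# systemctl restart docker
[root@node001 ~]# supervisorctl restart kube-kubelet-208-21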

2. Deploy kube-proxy on the nodes

    kube-proxy connects pods to the cluster (service) network

Hostname  Role        IP
node01    kube-proxy  192.168.208.21
node02    kube-proxy  192.168.208.22

    Its communication also needs a certificate

    Done on ca-host, 192.168.208.200

1. Create the CSR file

[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# vi kube-proxy-csr.json
{
    "CN": "system:kube-proxy",    #  这里就是这样写的, 不能改
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "ops"
        }
    ]
}

2. Generate the certificate

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
[root@CA-Host certs]# ll
total 100
-rw-r--r--. 1 root root 1257 May  8 20:54 apiserver.csr
-rw-r--r--. 1 root root  488 May  8 20:52 apiserver-csr.json
-rw-------. 1 root root 1679 May  8 20:54 apiserver-key.pem
-rw-r--r--. 1 root root 1606 May  8 20:54 apiserver.pem
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1001 May  8 20:40 client.csr
-rw-r--r--. 1 root root  190 May  8 20:37 client-csr.json
-rw-------. 1 root root 1679 May  8 20:40 client-key.pem
-rw-r--r--. 1 root root 1371 May  8 20:40 client.pem
-rw-r--r--. 1 root root 1070 May  8 15:43 etcd-peer.csr
-rw-r--r--. 1 root root  266 May  8 15:38 etcd-peer-csr.json
-rw-------. 1 root root 1675 May  8 15:43 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 15:43 etcd-peer.pem
-rw-r--r--. 1 root root 1123 May  9 12:59 kubelet.csr
-rw-r--r--. 1 root root  502 May  9 12:57 kubelet-csr.json
-rw-------. 1 root root 1679 May  9 12:59 kubelet-key.pem
-rw-r--r--. 1 root root 1476 May  9 12:59 kubelet.pem
-rw-r--r--. 1 root root 1013 May  9 15:28 kube-proxy-client.csr
-rw-------. 1 root root 1675 May  9 15:28 kube-proxy-client-key.pem
-rw-r--r--. 1 root root 1383 May  9 15:28 kube-proxy-client.pem
-rw-r--r--. 1 root root  272 May  9 15:26 kube-proxy-csr.json

3. Distribute the certificates

Copy kube-proxy-client.pem and kube-proxy-client-key.pem to /opt/kubernetes/server/bin/cert on both nodes

[root@CA-Host certs]# scp kube-proxy-client.pem 192.168.208.21:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client-key.pem 192.168.208.21:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client.pem 192.168.208.22:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client-key.pem 192.168.208.22:/opt/kubernetes/server/bin/cert

4. Create the kubeconfig on node01 and node02

node01

[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf

#set-cluster
[root@node01 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.208.10:7443 \
--kubeconfig=kube-proxy.kubeconfig

#set-credentials
[root@node01 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

#set-context
[root@node01 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

#use-context
[root@node01 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

Check the generated kube-proxy.kubeconfig
[root@node01 conf]# ll
total 24
-rw-r--r--. 1 root root 2289 May  9 01:11 audit.yaml
-rw-r--r--. 1 root root  258 May  9 13:31 k8s-node.yaml
-rw-------. 1 root root 6212 May  9 13:28 kubelet.kubeconfig
-rw-------. 1 root root 6228 May  9 15:45 kube-proxy.kubeconfig

5. Copy the generated kube-proxy.kubeconfig to node02

[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@node01 conf]# 
scp kube-proxy.kubeconfig 192.168.208.22:/opt/kubernetes/server/bin/conf

6. Use ipvs to schedule service traffic (load the kernel modules)

[root@node01 ~]# vim ipvs.sh

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
[root@node01 ~]# chmod a+x ipvs.sh 
[root@node01 ~]# sh ipvs.sh 
[root@node01 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
ip_vs_lblcr            12922  0

7. Copy the script to node02 (192.168.208.22)

[root@node01 ~]# scp ipvs.sh 192.168.208.22:/root/

node02
[root@node02 ~]# sh ipvs.sh 
[root@node02 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0     # weighted round robin
ip_vs_wlc              12519  0     # weighted least-connection
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 

8. Finally, create the kube-proxy startup scripts

node01:
[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# vim kube-proxy.sh

#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override node01 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@node01 bin]#chmod a+x kube-proxy.sh
[root@node01 bin]#mkdir -p /data/logs/kubernetes/kube-proxy
node02:
[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# vim kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override node02 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@node02 bin]#chmod a+x kube-proxy.sh
[root@node02 bin]#mkdir -p /data/logs/kubernetes/kube-proxy

Start it by running the script:

sh kube-proxy.sh

or start it with supervisorctl:

[root@node002 bin]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; retstart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)

Apply the new program to supervisord and start it:

[root@node002 bin]# supervisorctl update
kube-proxy-208-22: added process group
[root@node002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 4:41:52
kube-apiserver-7-21              RUNNING   pid 13998, uptime 4:03:01
kube-controller-manager-208-22   RUNNING   pid 15506, uptime 1:18:43
kube-kubelet-208-22              RUNNING   pid 16520, uptime 0:21:31
kube-proxy-208-22                STARTING  
scheduler-208-22                 RUNNING   pid 15626, uptime 1:10:42
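
If the program stays in STARTING or keeps restarting, the stdout log configured above is the first place to look, for example:

supervisorctl tail -f kube-proxy-208-22
# or read the log file directly
tail -n 50 /data/logs/kubernetes/kube-proxy/proxy.stdout.log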

Copy the supervisord startup config (kube-proxy.ini) to the other node,

then change the program name in it for this node:

[root@node001 bin]# vim /etc/supervisord.d/kube-proxy.ini
[root@node001 bin]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-208-21]

Start it on the second node:

[root@node001 bin]# supervisorctl update
kube-proxy-208-21: added process group
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 4:48:58
kube-apiserver-208-21            RUNNING   pid 16089, uptime 4:03:55
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 1:18:47
kube-kubelet-208-21              RUNNING   pid 20696, uptime 0:08:32
kube-proxy-208-21                RUNNING   pid 22309, uptime 0:01:29
scheduler-208-21                 RUNNING   pid 17743, uptime 1:11:53

9. Install ipvsadm and check the ipvs traffic scheduling

[root@node02 ~]# yum install ipvsadm
[root@node02 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.208.21:6443          Masq    1      0          0         
  -> 192.168.208.22:6443          Masq    1      0          0         

[root@node01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.208.21:6443          Masq    1      0          0         
  -> 192.168.208.22:6443          Masq    1      0          0         
[root@node01 ~]#

Requests to the VIP are now being load-balanced to port 6443 on nodes .21 and .22.
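
As an extra check (a sketch beyond the original steps), any HTTP response from the cluster VIP proves that ipvs really forwards to the apiservers, because the reply can only come from kube-apiserver itself:

curl -k https://192.168.0.1:443/version
# even an "Unauthorized" JSON reply confirms that 192.168.0.1:443 reaches port 6443 on the nodes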

10. Verify the cluster is running

[root@node002 bin]# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.208.21   Ready    <none>   10m   v1.15.11
192.168.208.22   Ready    <none>   25m   v1.15.11
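
Besides the node status, the control-plane components can be checked as well (componentstatuses is still served by this 1.15 build):

kubectl get cs
# scheduler, controller-manager and the etcd members should all report Healthy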

Cluster nodes need to log in to harbor first: docker login 192.168.208.200/public  # admin/123456

1. Pull the nginx image and push it to harbor
[root@CA-Host certs]# docker pull nginx
[root@CA-Host certs]# docker tag 602e111c06b6 192.168.208.200/public/nginx:latest
[root@CA-Host certs]# docker push 192.168.208.200/public/nginx:latest

2. Write the resource manifest
[root@node01 ~]# vim nginx-ds.yml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: 192.168.208.200/public/nginx:latest
        ports:
        - containerPort: 80
        
[root@node01 ~]# kubectl create -f nginx-ds.yml 
daemonset.extensions/nginx-ds created        
[root@node01 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-zf7hv   1/1     Running   0          11m
nginx-ds-ztcgn   1/1     Running   0          11m  
[root@node01 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-ds-zf7hv   1/1     Running   0          12m   172.7.21.2   node01   <none>           <none>
nginx-ds-ztcgn   1/1     Running   0          12m   172.7.22.2   node02   <none>           <none>
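
As a final smoke test (a sketch; the IP below is the pod IP shown above and will differ in other clusters), curl the pod that landed on the local node:

# on node01, request the pod running there
curl -I 172.7.21.2
# an HTTP/1.1 200 OK header from nginx confirms the pod is serving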

Cluster verification succeeded: nginx is up and running.

Summary:

Base environment: CentOS 7.6, kernel 3.8 or later

Disable selinux

Time synchronization

Configure the epel repo

Kernel tuning (file descriptor limits, kernel forwarding)

Install bind DNS

Install docker and harbor

k8s components:

    etcd cluster

    apiserver

    controller-manager

    scheduler

    kubelet

    kube-proxy

Certificates: how to check certificate expiry

[root@CA-Host certs]# cfssl-certinfo -cert apiserver.pem
{
  "subject": {
    "common_name": "k8s-apiserver",
    "country": "CN",
    "organization": "tj",
    "organizational_unit": "ops",
    "locality": "shengzheng",
    "province": "guangdong",
    "names": [
      "CN",
      "guangdong",
      "shengzheng",
      "tj",
      "ops",
      "k8s-apiserver"
    ]
  },
  "issuer": {
    "common_name": "Tjcom",
    "country": "CN",
    "organization": "tj",
    "organizational_unit": "ops",
    "locality": "shengzheng",
    "province": "guangdong",
    "names": [
      "CN",
      "guangdong",
      "shengzheng",
      "tj",
      "ops",
      "Tjcom"
    ]
  },
  "serial_number": "406292899824335029439592092319769160797618648263",
  "sans": [
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "192.168.0.1",
    "192.168.208.10",
    "192.168.208.21",
    "192.168.208.22",
    "192.168.208.23"
  ],
  "not_before": "2020-05-08T12:50:00Z",      #签发时间   
  "not_after": "2040-05-03T12:50:00Z",       #证书过期时间
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "FC:F5:C0:6E:ED:50:8F:51:FF:93:FB:8D:29:C2:AD:D7:8E:78:1B:43",
  "subject_key_id": "D4:8E:CA:0:58:1E:5D:F5:D4:6D:1B:68:C9:2F:A4:31:B2:75:7E:F6",

}
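
To review the expiry of every certificate issued by this CA in one pass, a small loop over /opt/certs works (a sketch, assuming cfssl-certinfo is installed in /usr/bin as set up earlier):

cd /opt/certs
for c in *.pem; do
  case "$c" in *-key.pem) continue;; esac   # skip private keys
  echo -n "$c  "
  cfssl-certinfo -cert "$c" | grep '"not_after"'
done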

Inspect another site's certificate by its domain:
[root@CA-Host certs]# cfssl-certinfo -domain www.baidu.com

Reverse-extract a certificate from a kubeconfig file

/opt/kubernetes/server/bin/conf
[root@node01 conf]# ll
total 24
-rw-r--r--. 1 root root 2289 May  9 01:11 audit.yaml
-rw-r--r--. 1 root root  258 May  9 13:31 k8s-node.yaml
-rw-------. 1 root root 6212 May  9 13:28 kubelet.kubeconfig
-rw-------. 1 root root 6228 May  9 15:45 kube-proxy.kubeconfig
cat kubelet.kubeconfig
Take the base64 value of client-certificate-data out of the file and decode it:
echo <base64 value of client-certificate-data> | base64 -d > 123.pem
Copy 123.pem to the 200 host (CA-Host) to inspect it:
[root@CA-Host tool]# cfssl-certinfo -cert 123.pem  
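
The manual copy-and-paste above can be scripted; a minimal sketch, assuming the usual kubeconfig layout where the base64 value follows the key on the same line:

# pull the client certificate out of kubelet.kubeconfig and inspect it
grep 'client-certificate-data' /opt/kubernetes/server/bin/conf/kubelet.kubeconfig \
  | awk '{print $2}' | base64 -d > 123.pem
cfssl-certinfo -cert 123.pem | grep -E '"not_before"|"not_after"'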


When the certificates expire, the kubeconfig and kube-proxy configuration files must be regenerated and the certificates replaced.

Author: 呆呆了

Original link: https://www.jianshu.com/p/1fbcc15376d1
