
KubeSphere 3.0 Installation and Deployment: Pitfall Notes

Part 1: System Environment

1.1 System Environment Initialization

OS: CentOS 7.8 x64

cat /etc/hosts
-----
192.168.100.11  node01.flyfish.cn
192.168.100.12  node02.flyfish.cn
192.168.100.13  node03.flyfish.cn
192.168.100.14  node04.flyfish.cn
192.168.100.15  node05.flyfish.cn
192.168.100.16  node06.flyfish.cn
192.168.100.17  node07.flyfish.cn
192.168.100.18  node08.flyfish.cn
-----
This walkthrough deploys on the first three nodes.
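The same host entries must exist on every node. A minimal sketch for pushing the file out from node01 (assuming root SSH access to the other two deployment nodes):

for ip in 192.168.100.12 192.168.100.13; do
    scp /etc/hosts root@$ip:/etc/hosts    # copy the shared hosts file to each remaining node
done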
Kubernetes deployment notes



1.2 System Configuration Initialization

Install the basic tools:

  yum install -y wget vim lsof net-tools


Disable the firewall (or, on Alibaba Cloud, open the required ports in the security group instead):

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service    # confirm it is stopped and disabled

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config

setenforce 0

cat /etc/selinux/config



Disable swap:

swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
free -l -h                            # verify swap is off



Pass bridged IPv4 traffic to the iptables chains.
If /etc/sysctl.conf does not exist yet, just run:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf
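The echo commands above only edit /etc/sysctl.conf; the kernel does not pick the settings up until they are loaded. A small follow-up, assuming the standard CentOS 7 br_netfilter module is available:

modprobe br_netfilter    # load the bridge netfilter module so the bridge-nf-call-* keys exist
sysctl -p                # apply the settings from /etc/sysctl.conf immediately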


1.3 Deploying Docker

Download URL: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Run the following on all nodes. A binary install is used here; installing with yum works just as well.
Install on the node01.flyfish, node02.flyfish and node03.flyfish nodes.

3.1 Unpack the binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin


3.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF


3.3 Create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

registry-mirrors: the Alibaba Cloud registry mirror (image pull accelerator).

3.4 Start Docker and enable it at boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
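As a quick sanity check (not part of the original steps), confirm the daemon is running and that the mirror configured above is active:

docker version                               # both the client and server sections should print
docker info | grep -A1 'Registry Mirrors'    # should list https://b9pmyelo.mirror.aliyuncs.com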


Part 2: Installing the Kubernetes Cluster

Install kubelet, kubeadm and kubectl (on all nodes).
Configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm and kubectl:

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3


systemctl enable kubelet && systemctl start kubelet


Initialize all nodes. Download the required images with this script:

vim image.sh
----
#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
----



Initialize the master node.
Note: this step is performed on the master node only, to build the control plane.

kubeadm init \
  --apiserver-advertise-address=192.168.100.11 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16



mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



Deploy the network plugin (Calico):
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
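Once Calico is applied, the pods and the node status can be watched until everything settles (an extra check, not shown in the original screenshots):

kubectl get pods -n kube-system    # wait for the calico-* and coredns pods to reach Running
kubectl get nodes                  # the master node should report Ready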


Join the other nodes to the cluster:

kubeadm join 192.168.100.11:6443 --token y28jw9.gxstbcar3m4n5p1a \
    --discovery-token-ca-cert-hash sha256:769528577607a4024ead671ae01b694744dba16e0806e57ed1b099eb6c6c9350
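The bootstrap token embedded in the join command expires after 24 hours by default. If the workers join later than that, a fresh join command can be printed on the master:

kubeadm token create --print-join-command    # generates a new token and prints the full kubeadm join line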



Part 3: Deploying the NFS Server

yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


mkdir -p /nfs/data

systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

exportfs -r
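Before involving any Pods it is worth confirming the export on the server itself; these are standard nfs-utils commands:

exportfs -v               # lists the active exports and their options
showmount -e localhost    # should show /nfs/data/ exported to *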



Test a Pod mounting NFS directly (run on the master node).

Create a file named nginx.yaml under the /opt directory:

vim nginx.yaml

----
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data        # 1000G
      server: 192.168.100.11 # your own NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
----
kubectl apply -f nginx.yaml

cd /nfs/data/

echo " 11111" >> index.html
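To check the mount end to end (an extra step, not in the original notes), fetch the page directly from the Pod IP; <pod-ip> below is whatever address kubectl reports:

kubectl get pod vol-nfs -o wide    # note the Pod IP in the output
curl http://<pod-ip>/              # should return the 11111 just written into index.html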



Install the client tools (run on a worker node, node02.flyfish.cn):

showmount -e 192.168.100.11



Create a directory for the mount:

mkdir /root/nfsmount

Mount the server's /nfs/data/ onto the client's /root/nfsmount (run on the worker node):

mount -t nfs 192.168.100.11:/nfs/data/ /root/nfsmount



Part 4: Setting Up Dynamic Provisioning with a StorageClass


vim nfs-rbac.yaml
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
   name: nfs-client-provisioner
spec:
   replicas: 1
   strategy:
     type: Recreate
   selector:
     matchLabels:
        app: nfs-client-provisioner
   template:
      metadata:
         labels:
            app: nfs-client-provisioner
      spec:
         serviceAccount: nfs-provisioner
         containers:
            -  name: nfs-client-provisioner
               image: lizhenliang/nfs-client-provisioner
               volumeMounts:
                 -  name: nfs-client-root
                    mountPath:  /persistentvolumes
               env:
                 -  name: PROVISIONER_NAME
                    value: storage.pri/nfs
                 -  name: NFS_SERVER
                    value: 192.168.100.11
                 -  name: NFS_PATH
                    value: /nfs/data
         volumes:
           - name: nfs-client-root
             nfs:
               server: 192.168.100.11
               path: /nfs/data
----
kubectl apply -f nfs-rbac.yaml
kubectl get pod


Create the StorageClass:

vi storageclass-nfs.yaml
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
----
kubectl apply -f storageclass-nfs.yaml


Note: a PV's "reclaim policy" can be one of three values: Retain, Recycle or Delete.

Retain
Protects a PV released by its PVC, along with the data on it. The PV moves to the "Released" state and will not be bound by another PVC. The cluster administrator frees the storage manually: delete the PV (the backend storage resource, such as an AWS EBS, GCE PD, Azure Disk or Cinder volume, still exists), manually wipe the data on the backend volume, then either delete the backend volume or reuse it by creating a new PV for it.

Delete
Deletes the PV released by its PVC together with the backend storage volume. A dynamically provisioned PV inherits its reclaim policy from its StorageClass, which defaults to Delete. The cluster administrator should set the StorageClass's reclaim policy to whatever users expect; otherwise users have to edit the reclaim policy of each dynamically created PV by hand.

Recycle
Keeps the PV but wipes the data on it. This policy is deprecated.

kubectl get storageclass



Change the default StorageClass
(see https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class)

kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
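After the patch, the class should be marked as the cluster default:

kubectl get storageclass    # storage-nfs should now be suffixed with (default)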

Verify NFS dynamic provisioning.

Create a PVC:

vim pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  #annotations:
  #   volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs  # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
-----

kubectl apply -f pvc.yaml
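A bound claim and an automatically created PV confirm that the provisioner is working:

kubectl get pvc pvc-claim-01    # STATUS should be Bound
kubectl get pv                  # a PV provisioned by storage.pri/nfs should have appeared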


Use the PVC:

vi testpod.yaml
----
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
-----
kubectl apply -f testpod.yaml
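The pod only touches a file and exits, so success looks like a Completed pod plus a SUCCESS file on the NFS server (assuming the provisioner's default behaviour of creating a per-PVC subdirectory under /nfs/data, so the exact directory name will differ):

kubectl get pod test-pod    # should end up in the Completed state
ls /nfs/data/*/             # run on the NFS server; the PVC's subdirectory should contain SUCCESS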


Part 5: Installing metrics-server

1. First install metrics-server (the YAML below already has the image and configuration adjusted, so it can be used as-is). This makes Pod and node resource usage visible (by default only CPU and memory metrics; for more advanced monitoring we hook up Prometheus later).

vim 2222.yaml
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
----
kubectl apply -f 2222.yaml



kubectl top nodes
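If node metrics work, pod metrics and the aggregated API should answer as well:

kubectl top pods -A                                # per-pod CPU and memory usage
kubectl get apiservices v1beta1.metrics.k8s.io     # AVAILABLE should be True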



Part 6: Installing KubeSphere

https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/

wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml



vim cluster-configuration.yaml
----
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""        # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrive the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true       # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.100.11  # etcd cluster EndpointIps, it can be a bunch of IPs here.
    port: 2379              # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi # MySQL PVC size.
    minioVolumeSize: 20Gi # Minio PVC size.
    etcdVolumeSize: 20Gi  # etcd PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # total number of master nodes, it's not allowed to use even number
      # elasticsearchDataReplicas: 1     # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true  # enable/disable multiple sign-on; it allows an account to be used by different users at the same time.
    port: 30880
  alerting:                # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:                # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
    enabled: true
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the role of host or member cluster.
  networkpolicy:       # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
    enabled: true
  notification:        # Email Notification support for the legacy alerting system, should be enabled/disabled together with the above alerting option.
    enabled: true
  openpitrix:          # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management.
    enabled: true
  servicemesh:         # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology.
    enabled: true
----

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml



Check the installation progress:
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


kubectl get pod -A


kubesphere-monitoring-system   prometheus-k8s-0                                    0/3     ContainerCreating   0          7m20s
kubesphere-monitoring-system   prometheus-k8s-1                                    0/3     ContainerCreating   0          7m20s

prometheus-k8s-1 stays stuck in the ContainerCreating state.



kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system

The events show that the secret kube-etcd-client-certs cannot be found:


kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

 kubectl get secret -A |grep etcd


kubectl get pod -n kubesphere-monitoring-system

The prometheus-k8s-1 pod now changes to the Running state.



Finally, open the KubeSphere web console at the address shown in the installer log:
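Per the KubeSphere 3.0 documentation, the console listens on the NodePort configured above (30880) and the initial account is admin with password P@88w0rd, which you are asked to change on first login:

http://192.168.100.11:30880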

