# View the top-level fields that can be configured for a resource type
kubectl explain <resource-type>
# View the top-level fields of a Pod
kubectl explain pod
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata
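The pod_base.yaml manifest applied below is not reproduced in this section. Judging from the events later on (images nginx:1.17.1 and busybox:1.30, with busybox crash-looping because it has nothing long-running to execute), a plausible sketch of the manifest is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-base
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  - name: busybox
    image: busybox:1.30   # no long-lived command, so the container exits right away and crash-loops
```

All field values other than the two image tags are assumptions.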
# Create the Pod
kubectl apply -f pod_base.yaml
# View the Pod
kubectl get pods pod-base -n dev
NAME READY STATUS RESTARTS AGE
pod-base 1/2 ImagePullBackOff 1 2m2s
# Note that the Pod has two containers, but only one of them is ready.
# Check again: the container has already been restarted 3 times
kubectl get pods pod-base -n dev
NAME READY STATUS RESTARTS AGE
pod-base 1/2 CrashLoopBackOff 3 3m2s
# View the Pod's details
kubectl describe pods pod-base -n dev
# We only care about the Events section:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled default-scheduler Successfully assigned dev/pod-base to node2
Normal Pulled 75s kubelet, node2 Container image "nginx:1.17.1" already present on machine
Normal Created 73s kubelet, node2 Created container nginx
Normal Started 73s kubelet, node2 Started container nginx
Normal Pulling 73s kubelet, node2 Pulling image "busybox:1.30"
Normal Pulled 49s kubelet, node2 Successfully pulled image "busybox:1.30"
Normal Pulled 25s (x2 over 47s) kubelet, node2 Container image "busybox:1.30" already present on machine
Normal Created 24s (x3 over 49s) kubelet, node2 Created container busybox
Normal Started 24s (x3 over 48s) kubelet, node2 Started container busybox
Warning BackOff 11s (x4 over 41s) kubelet, node2 Back-off restarting failed container
# The busybox container failed to start.
kubectl exec -it pod-env -n dev -c busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
/ # echo $username
admin
/ # echo $password
123456
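The pod-env manifest being exec'd into is not shown either; given that the shell session above finds $username and $password set, its env section presumably looks something like this sketch (the busybox command is an assumption, added to keep the container alive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-env
  namespace: dev
spec:
  containers:
  - name: busybox
    image: busybox:1.30
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]  # keep the container running
    env:                    # environment variables injected into the container
    - name: username
      value: "admin"
    - name: password
      value: "123456"
```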
2.5 Port settings
View the sub-options supported under ports:
kubectl explain pod.spec.containers.ports
KIND: Pod
VERSION: v1
RESOURCE: ports <[]Object>
FIELDS:
name # Port name; if specified, it must be unique within the Pod
containerPort # Port the container listens on (0 < x < 65536)
hostPort # Port to expose on the host; if set, only one replica of the container can run on that host (usually omitted)
hostIP # Host IP to bind the external port to (usually omitted)
protocol # Port protocol. Must be UDP, TCP, or SCTP. Defaults to "TCP"
#----------------------------------------------------------------
KIND: Pod
VERSION: v1
RESOURCE: ports <[]Object>
DESCRIPTION:
List of ports to expose from the container. Exposing a port here gives the
system additional information about the network connections a container
uses, but is primarily informational. Not specifying a port here DOES NOT
prevent that port from being exposed. Any port which is listening on the
default "0.0.0.0" address inside a container will be accessible from the
network. Cannot be updated.
ContainerPort represents a network port in a single container.
FIELDS:
containerPort -required-
Number of port to expose on the pod's IP address. This must be a valid port
number, 0 < x < 65536.
hostIP
What host IP to bind the external port to.
hostPort
Number of port to expose on the host. If specified, this must be a valid
port number, 0 < x < 65536. If HostNetwork is specified, this must match
ContainerPort. Most containers do not need this.
name
If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
named port in a pod must have a unique name. Name for the port that can be
referred to by services.
protocol
Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
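Putting the fields above together, a minimal sketch of a Pod that declares a named port might look like this (the Pod and port names are illustrative, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-ports        # illustrative name
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    ports:
    - name: nginx-port   # must be unique within the Pod
      containerPort: 80  # port the container listens on
      protocol: TCP      # UDP, TCP, or SCTP; defaults to TCP
```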
kubectl create -f pod_resoures.yaml
> pod/pod-resoures created
kubectl get pods pod-resoures -n dev
NAME READY STATUS RESTARTS AGE
pod-resoures 1/1 Running 0 20s
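pod_resoures.yaml itself is not shown; judging by the name, it demonstrates container resource requests and limits. A typical sketch, with all values assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-resoures
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:
      requests:        # minimum resources the scheduler reserves for the container
        cpu: "1"
        memory: "10Mi"
      limits:          # hard upper bounds the container may not exceed
        cpu: "2"
        memory: "10Gi"
```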
# Start
kubectl create -f pod_initcontainers.yaml
# Check the status: the Pod is not ready yet
kubectl get pods pod-initcontainers -n dev
NAME READY STATUS RESTARTS AGE
pod-initcontainers 0/1 Init:0/2 0 74s
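pod_initcontainers.yaml is not reproduced here. From the Init:0/2 status and the two IPs added below (192.168.209.120 and 192.168.209.121), a plausible sketch is a Pod whose two init containers each wait for one of those addresses to become reachable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-initcontainers
  namespace: dev
spec:
  containers:
  - name: main-container
    image: nginx:1.17.1
  initContainers:   # run one at a time to completion before the main container starts
  - name: wait-for-120
    image: busybox:1.30
    command: ["sh", "-c", "until ping 192.168.209.120 -c 1; do echo waiting for 120...; sleep 2; done;"]
  - name: wait-for-121
    image: busybox:1.30
    command: ["sh", "-c", "until ping 192.168.209.121 -c 1; do echo waiting for 121...; sleep 2; done;"]
```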
Watch the status live, then add the IPs to the network interface:
# Watch the Pod continuously
kubectl get pod pod-initcontainers -n dev -w
NAME READY STATUS RESTARTS AGE
pod-initcontainers 0/1 Init:0/2 0 4m30s
# In a new shell window, add the IPs on the master host
ifconfig ens32:1 192.168.209.120 netmask 255.255.255.0 up
ifconfig ens32:2 192.168.209.121 netmask 255.255.255.0 up
# Switch back to the original window: the Pod becomes ready
NAME READY STATUS RESTARTS AGE
pod-initcontainers 0/1 Init:0/2 0 4m30s
pod-initcontainers 0/1 Init:1/2 0 6m40s
pod-initcontainers 0/1 Init:1/2 0 6m41s
pod-initcontainers 0/1 PodInitializing 0 6m53s
pod-initcontainers 1/1 Running 0 6m54s
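pod_liveness_exec.yaml is not shown, but the probe failure message below (`/bin/cat: /tmp/hello.txt: No such file or directory`) implies an exec liveness probe that runs /bin/cat /tmp/hello.txt. A sketch consistent with that:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      exec:
        command: ["/bin/cat", "/tmp/hello.txt"]  # fails because the file does not exist
```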
# Create the Pod
kubectl create -f pod_liveness_exec.yaml
# View the Pod
kubectl get pods pod-liveness-exec -n dev
# Notice that RESTARTS is not 0
NAME READY STATUS RESTARTS AGE
pod-liveness-exec 1/1 Running 2 67s
# View the Pod's details
kubectl describe pods pod-liveness-exec -n dev
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled default-scheduler Successfully assigned dev/pod-liveness-exec to node1
Normal Killing 28s (x3 over 88s) kubelet, node1 Container nginx failed liveness probe, will be restarted
Normal Pulled 27s (x4 over 115s) kubelet, node1 Container image "nginx:1.17.1" already present on machine
Normal Created 27s (x4 over 114s) kubelet, node1 Created container nginx
Normal Started 27s (x4 over 114s) kubelet, node1 Started container nginx
Warning Unhealthy 18s (x10 over 108s) kubelet, node1 Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
From the events above you can see that the nginx container was health-checked as soon as it started.
When the check fails, the container is killed and then restarted; this is the restart policy at work.
Wait a while and check the Pod again: RESTARTS is no longer 0 and keeps growing.
Delete the Pod created above and change the probed command so that it succeeds.
# Delete
kubectl delete -f pod_liveness_exec.yaml
# Modify the yaml file
command: ["/bin/ls","/tmp"] # list the /tmp directory
# Create
kubectl create -f pod_liveness_exec.yaml
# View
kubectl get pods pod-liveness-exec -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-exec 1/1 Running 0 16s
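pod_liveness_tcpsocket.yaml is likewise not shown. The CrashLoopBackOff below suggests the TCP probe targets a port nginx is not listening on; the port 8080 in this sketch is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tcpsocket
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      tcpSocket:
        port: 8080   # assumed; nginx listens on 80, so the probe keeps failing
```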
# Create
kubectl create -f pod_liveness_tcpsocket.yaml
# View
kubectl get pods pod-liveness-tcpsocket -n dev
NAME READY STATUS RESTARTS AGE
pod-liveness-tcpsocket 0/1 CrashLoopBackOff 4 2m39s
kubectl explain pod.spec.containers.livenessProbe
KIND: Pod
VERSION: v1
RESOURCE: livenessProbe
DESCRIPTION:
Periodic probe of container liveness. Container will be restarted if the
probe fails. Cannot be updated. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
Probe describes a health check to be performed against a container to
determine whether it is alive or ready to receive traffic.
FIELDS:
exec
One and only one of the following should be specified. Exec specifies the
action to take.
failureThreshold
# Number of consecutive probe failures before the probe is considered failed. Defaults to 3; minimum value is 1
Minimum consecutive failures for the probe to be considered failed after
having succeeded. Defaults to 3. Minimum value is 1.
httpGet
HTTPGet specifies the http request to perform.
initialDelaySeconds
Number of seconds after the container has started before liveness probes
are initiated. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
periodSeconds
How often (in seconds) to perform the probe. Default to 10 seconds. Minimum
value is 1.
successThreshold
Minimum consecutive successes for the probe to be considered successful
after having failed. Defaults to 1. Must be 1 for liveness and startup.
Minimum value is 1.
tcpSocket
TCPSocket specifies an action involving a TCP port. TCP hooks not yet
supported
timeoutSeconds
Number of seconds after which the probe times out. Defaults to 1 second.
Minimum value is 1. More info:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds # Seconds to wait after the container starts before running the first probe
timeoutSeconds # Probe timeout. Defaults to 1 second; minimum 1 second
periodSeconds # How often to run the probe. Defaults to 10 seconds; minimum 1 second
failureThreshold # Consecutive failures before the probe is considered failed. Defaults to 3; minimum 1
successThreshold # Consecutive successes before the probe is considered successful. Defaults to 1
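The events for pod_restart_policy.yaml below show an HTTP liveness probe on http://127.0.0.1:80/hello failing and the container being stopped without a restart, which points to restartPolicy: Never. A sketch consistent with that (the probe fields come from the URL in the events; the rest is assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-restart-policy
  namespace: dev
spec:
  restartPolicy: Never   # Always (default) | OnFailure | Never
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      httpGet:           # probes GET http://127.0.0.1:80/hello
        scheme: HTTP
        host: 127.0.0.1
        port: 80
        path: /hello
```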
kubectl create -f pod_restart_policy.yaml
kubectl describe pod pod-restart-policy -n dev
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled default-scheduler Successfully assigned dev/pod-restart-policy to node2
Normal Pulled 73s kubelet, node2 Container image "nginx:1.17.1" already present on machine
Normal Created 73s kubelet, node2 Created container nginx
Normal Started 73s kubelet, node2 Started container nginx
Warning Unhealthy 50s (x3 over 70s) kubelet, node2 Liveness probe failed: Get http://127.0.0.1:80/hello: dial tcp 127.0.0.1:80: connect: connection refused
Normal Killing 50s kubelet, node2 Stopping container nginx
# After the liveness probe fails, the container is stopped outright; it is not restarted
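pod_nodename.yaml is not reproduced; nodeName bypasses the scheduler entirely and pins the Pod to a node by name. From the later instruction to change nodeName: node1 to node3, the manifest presumably looks like this sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  nodeName: node1   # skip the scheduler and bind the Pod directly to node1
  containers:
  - name: nginx
    image: nginx:1.17.1
```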
# Create the Pod
kubectl create -f pod_nodename.yaml
# View
kubectl get pod pod-nodename -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 1/1 Running 0 27s 10.244.1.18 node1
Of course, we can also schedule the Pod onto a node that does not exist; the Pod then cannot run.
# Delete the Pod
kubectl delete -f pod_nodename.yaml
# Modify pod_nodename.yaml
# Change nodeName: node1 to node3 (node3 does not exist)
# Create and check the status
kubectl create -f pod_nodename.yaml
kubectl get pods pod-nodename -n dev -o wide
# The Pod is stuck in the Pending state
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodename 0/1 Pending 0 6s node3
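pod_nodeselector.yaml is not shown. nodeSelector schedules the Pod only onto nodes carrying the given labels; from the later instruction to change debug to debug1, the label value is debug, while the key name nodeenv here is an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  nodeSelector:
    nodeenv: debug   # key name assumed; node2 must carry this label for scheduling to succeed
  containers:
  - name: nginx
    image: nginx:1.17.1
```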
kubectl get pod pod-nodeselector -n dev -o wide
# The Pod was successfully scheduled onto node2
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 1/1 Running 0 21s 10.244.2.20 node2
Likewise, what happens if we try to schedule onto a label that does not exist:
# Delete the Pod
kubectl delete -f pod_nodeselector.yaml
# Modify pod_nodeselector.yaml
# Change debug to debug1
# Create and view
kubectl create -f pod_nodeselector.yaml
kubectl get pod pod-nodeselector -n dev -o wide
# The Pod is in the Pending state
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeselector 0/1 Pending 0 24s
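pod_nodeaffinity_required.yaml is not shown. From the explanation below (the label env has no value xxx or yyy), a plausible sketch of its hard node-affinity rule is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: env
            operator: In
            values: ["xxx", "yyy"]   # no node carries env=xxx or env=yyy, so the Pod stays Pending
  containers:
  - name: nginx
    image: nginx:1.17.1
```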
kubectl get pods pod-nodeaffinity-required -n dev -o wide
# STATUS is Pending, which is easy to explain: no node carries the label env with value xxx or yyy
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 0/1 Pending 0 24s
Make it schedule normally:
# Delete the Pod
kubectl delete -f pod_nodeaffinity_required.yaml
# Modify the yaml file: add "pro" to the values list
# Create
kubectl create -f pod_nodeaffinity_required.yaml
# View
kubectl get pods pod-nodeaffinity-required -n dev -o wide
# The Pod was scheduled successfully
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-required 1/1 Running 0 12s 10.244.1.19 node1
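The preferred variant is not shown either; a sketch with the same (unsatisfiable) match expression, demoted to a soft preference, which is why the Pod below still gets scheduled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference, not a requirement
      - weight: 1
        preference:
          matchExpressions:
          - key: env
            operator: In
            values: ["xxx", "yyy"]   # unsatisfied, but scheduling proceeds anyway
  containers:
  - name: nginx
    image: nginx:1.17.1
```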
kubectl get pods pod-nodeaffinity-preferred -n dev -o wide
# Even though the preference is not satisfied, the Pod is still scheduled normally.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-nodeaffinity-preferred 1/1 Running 0 57s 10.244.2.21 node2
kubectl get pods pod-podaffinity-target -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-target 1/1 Running 0 2m5s 10.244.1.20 node1
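The manifest for pod-podaffinity-requred is not shown. Pod affinity co-locates it with pod-podaffinity-target (running on node1 above); a sketch, with the label key and value on the target pod assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-requred
  namespace: dev
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv            # assumed label carried by pod-podaffinity-target
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname   # land on the same node as the matching pod
  containers:
  - name: nginx
    image: nginx:1.17.1
```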
kubectl get pods pod-podaffinity-requred -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podaffinity-requred 1/1 ContainerCreating 0 5s node1
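Pod anti-affinity is the mirror image: with the same (assumed) selector but under podAntiAffinity, the Pod is pushed away from pod-podaffinity-target's node, which matches it landing on node2 below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-requred
  namespace: dev
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv            # assumed label on pod-podaffinity-target
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname   # avoid the node running the matching pod
  containers:
  - name: nginx
    image: nginx:1.17.1
```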
kubectl get pods pod-podantiaffinity-requred -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-podantiaffinity-requred 1/1 Running 0 26s 10.244.2.23 node2