
Kubernetes namespace deletion fails

Failed to delete a namespace

 

While doing some testing today I was cleaning up the cluster, removing everything that was no longer needed, including namespaces. One deletion failed: the namespace got stuck in the Terminating state.

 

There are generally two reasons a namespace deletion gets stuck like this:

 

1. There are still resources in the namespace. Delete them and the namespace disappears on its own.

 

2. The namespace has nothing left in it, yet it still refuses to go away.

 

Mine was the second case, so here is a walkthrough of how I dealt with it.
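As a side check (not part of the original steps), you can list every namespaced resource type and see whether anything is actually left in the namespace; app-team1 is simply the namespace from this post:

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n app-team1

If this prints nothing, the namespace is empty and you are in the second case.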

 

[root@ECS1 ~]# kubectl get ns
NAME              STATUS        AGE
app-team1         Terminating   3d7h
default           Active        3d19h
internal          Active        2d23h
kube-node-lease   Active        3d19h
kube-public       Active        3d19h
kube-system       Active        3d19h


[root@ECS1 ~]# kubectl delete ns/app-team1
namespace "app-team1" deleted
^C
[root@ECS1 ~]# 

 

I had no choice but to Ctrl-C it and look for another way; otherwise it would sit there until the end of time.
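Side note: before forcing anything, the namespace itself usually says why it is stuck. A rough check along these lines dumps the status conditions recorded by the namespace controller:

kubectl get namespace app-team1 \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

Any condition with status True points at what is blocking the deletion.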

 

Eventually I stumbled onto a magical place with this passage:

 

There's one situation that may require forcing finalization for a namespace. If you've deleted a namespace and you've cleaned out all of the objects under it, but the namespace still exists, deletion can be forced by updating the namespace subresource, finalize. This informs the namespace controller that it needs to remove the finalizer from the namespace and perform any cleanup:

 

Roughly: if you have deleted a namespace and cleaned out all of the objects under it, but the namespace still exists, you can force the deletion by updating the namespace's finalize subresource. This tells the namespace controller to remove the finalizer from the namespace and perform any remaining cleanup.

 

This is done with a plain REST request. Since the insecure port is blocked on my cluster, I started a kubectl proxy instead (you could also talk to the secure port directly with client certificates).

 

[root@ECS1 ~]# kubectl proxy --port=8081
Starting to serve on 127.0.0.1:8081
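Quick sanity check (not from the original post): the proxy listens on 127.0.0.1 only, so from the same host you can confirm it answers before sending the real request, for example:

curl http://127.0.0.1:8081/version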

 

Start the deletion:

 

cat <<EOF | curl -X PUT \
  localhost:8081/api/v1/namespaces/app-team1/finalize \
  -H "Content-Type: application/json" \
  --data-binary @-
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "app-team1"
  },
  "spec": {
    "finalizers": null
  }
}
EOF

 

Check the result:

 

[root@ECS1 ~]# cat <<EOF | curl -X PUT \
>   localhost:8081/api/v1/namespaces/app-team1/finalize \
>   -H "Content-Type: application/json" \
>   --data-binary @-
> {
>   "kind": "Namespace",
>   "apiVersion": "v1",
>   "metadata": {
>     "name": "app-team1"
>   },
>   "spec": {
>     "finalizers": null
>   }
> }
> EOF
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "app-team1",
    "uid": "108e6665-9b70-422c-8f94-783347101836",
    "resourceVersion": "533794",
    "creationTimestamp": "2021-06-08T23:46:24Z",
    "deletionTimestamp": "2021-06-12T06:27:33Z",
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-06-12T06:58:32Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:status":{"f:phase":{}}}
      }
    ]
  },
  "spec": {
    
  },
  "status": {
    "phase": "Terminating",
    "conditions": [
      {
        "type": "NamespaceDeletionDiscoveryFailure",
        "status": "True",
        "lastTransitionTime": "2021-06-12T06:27:38Z",
        "reason": "DiscoveryFailed",
        "message": "Discovery failed for some groups, 2 failing: unable to retrieve the complete list of server APIs: discovery.k8s.io/v1: the server could not find the requested resource, policy/v1: the server could not find the requested resource"
      },
      {
        "type": "NamespaceDeletionGroupVersionParsingFailure",
        "status": "False",
        "lastTransitionTime": "2021-06-12T06:27:38Z",
        "reason": "ParsedGroupVersions",
        "message": "All legacy kube types successfully parsed"
      },
      {
        "type": "NamespaceDeletionContentFailure",
        "status": "False",
        "lastTransitionTime": "2021-06-12T06:27:38Z",
        "reason": "ContentDeleted",
        "message": "All content successfully deleted, may be waiting on finalization"
      },
      {
        "type": "NamespaceContentRemaining",
        "status": "False",
        "lastTransitionTime": "2021-06-12T06:27:38Z",
        "reason": "ContentRemoved",
        "message": "All content successfully removed"
      },
      {
        "type": "NamespaceFinalizersRemaining",
        "status": "False",
        "lastTransitionTime": "2021-06-12T06:27:38Z",
        "reason": "ContentHasNoFinalizers",
        "message": "All content-preserving finalizers finished"
      }
    ]
  }
}

[root@ECS1 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   3d19h
internal          Active   2d23h
kube-node-lease   Active   3d19h
kube-public       Active   3d19h
kube-system       Active   3d19h
[root@ECS1 ~]# 

 

The deletion succeeded.
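For reference, an alternative that skips the proxy entirely: a reasonably recent kubectl can send the same PUT to the finalize subresource itself via --raw. A rough sketch (tmp.json is just a scratch file name):

kubectl get namespace app-team1 -o json > tmp.json
# edit tmp.json and empty out spec.finalizers, then:
kubectl replace --raw "/api/v1/namespaces/app-team1/finalize" -f ./tmp.json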

 

This approach does get the namespace deleted, but it carries some risk (make sure nothing at all is left in the namespace before deleting it this way).

 

This should be done with caution as it may delete the namespace only and leave orphan objects within the, now non-existing, namespace - a confusing state for Kubernetes. If this happens, the namespace can be re-created manually and sometimes the orphaned objects will re-appear under the just-created namespace which will allow manual cleanup and recovery.

 

In plain terms: be careful, because this may delete only the namespace and orphan whatever was still inside it, which is a confusing state for Kubernetes. If that happens, re-creating the namespace by hand sometimes makes the orphaned objects reappear under it, so they can be cleaned up and recovered manually.

 

Why this happens:

A passage from the official docs explains it:

After the delete is issued, Kubernetes reports the object as deleted. It has not, however, been deleted in the traditional sense; rather, it is in the process of being deleted. When we fetch the object again, we discover that it has been modified to include a deletion timestamp.

What actually happened is that the object was updated, not deleted. Kubernetes saw that the object contains finalizers and put it into a read-only state. The deletion timestamp signals that the object can only be read, with one exception: updates that remove its finalizer keys. In other words, the deletion will not be complete until we edit the object and remove the finalizer.
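The same mechanism is easy to watch on any small object. A throwaway sketch (the configmap name demo-finalizer and the finalizer example.com/block-delete are made up for illustration):

kubectl create configmap demo-finalizer -n default
kubectl patch configmap demo-finalizer -n default --type=merge \
  -p '{"metadata":{"finalizers":["example.com/block-delete"]}}'
kubectl delete configmap demo-finalizer -n default --wait=false   # reported as deleted...
kubectl get configmap demo-finalizer -n default \
  -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'               # ...but only marked: a deletionTimestamp appears
kubectl patch configmap demo-finalizer -n default --type=merge \
  -p '{"metadata":{"finalizers":[]}}'                             # emptying the finalizer list lets it actually disappear

The delete call only stamps the deletionTimestamp; the object is removed for real once its finalizer list is empty.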

 

 

 

 

 

 

 

Original post: https://www.cnblogs.com/determined-K/p/14878369.html
