
Kubernetes pod termination pending

After upgrading k8s from 1.7.9 to 1.10.2, we found that deleted pods would stay in the Terminating state forever. Investigation showed that every pod that could not be deleted had one thing in common: the command section of its pod YAML was wrong, as shown below:

apiVersion: v1
kind: Pod
metadata:
  name: bad-pod-termation-test
spec:
  containers:
    - image: nginx
      command:
      - xxxx
      name: pad-pod-test

Note that the pod's command refers to an executable that does not exist. After creating this YAML, the pod reports the following status:

% kubectl get pods 
NAME                     READY     STATUS              RESTARTS   AGE
bad-pod-termation-test   0/1       RunContainerError   0          20s

Running docker ps -a on the host shows the corresponding containers stuck in the Created state (i.e. they never manage to start). Because the pod keeps retrying, there are multiple container instances:

CONTAINER ID        IMAGE                              COMMAND                  CREATED              STATUS              PORTS               NAMES
b66c1a3de3ae        nginx                              "xxxx"                   9 seconds ago        Created                                 k8s_pad-pod-test_bad-pod-termation-test_default_7786ffea-7de9-11e8-9754-509a4c2d27d1_3
148a312b89cf        nginx                              "xxxx"                   43 seconds ago       Created                                 k8s_pad-pod-test_bad-pod-termation-test_default_7786ffea-7de9-11e8-9754-509a4c2d27d1_2
6414f874ffe0        k8s.gcr.io/pause-amd64:3.1         "/pause"                 About a minute ago   Up About a minute                       k8s_POD_bad-pod-termation-test_default_7786ffea-7de9-11e8-9754-509a4c2d27d1_0

At this point, deleting the pod leaves it stuck in the Terminating state; the only way to get rid of it is kubectl delete pods bad-pod-termation-test --grace-period=0 --force. But force deletion is discouraged by the official docs because it may leak resources, so it is clearly not a long-term solution.
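For completeness, this is the forced-deletion workaround referred to above; it simply skips the graceful-termination path, which is exactly why it can leak resources:

# Force-delete the stuck pod, bypassing the grace period (workaround only, not a fix)
kubectl delete pods bad-pod-termation-test --grace-period=0 --force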
Raising the kubelet's log verbosity and reading carefully, one suspicious log line kept being printed:

I0702 19:26:43.712496   26521 kubelet_pods.go:942] Pod "bad-pod-termation-test_default(9eae939b-7dea-11e8-9754-509a4c2d27d1)" is terminated, but some containers have not been cleaned up: [0xc4218d1260 0xc4228ae540]

In other words, the containers had not been fully cleaned up, and the kubelet was waiting for them to be deleted. The log prints pointers, i.e. the addresses of the variables holding the container information, but it is easy to guess that they refer to the pod's containers. After manually running docker rm on the two Created containers above, the pod disappeared immediately, which suggests the kubelet itself has some bug that keeps it from releasing whatever resources those Created containers hold.
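For reference, the manual cleanup was nothing more than removing the two stuck containers by the IDs shown in the docker ps -a output above:

# Remove the two containers left in the Created state; the pod then
# finishes terminating almost immediately.
docker rm b66c1a3de3ae 148a312b89cf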
So why does this happen? Reading the code shows that the kubelet keeps a PodCache holding information about every pod: an entry is added whenever a pod is created, the entry is only cleared once the pod's containers have been deleted, and only after the entry is cleared can the pod itself be deleted.

In our earlier environment, to make debugging easier, we kept the corpses of exited containers around by setting --minimum-container-ttl-duration=36h on the kubelet. That flag is already deprecated and the official recommendation is to use --eviction-hard or --eviction-soft instead, but since minimum-container-ttl-duration still worked fine in 1.7.9 we ignored the deprecation warning and set the same flag on 1.10.2. That is what kept the cache from being cleared and, in turn, kept the pods from being deleted.
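As a rough sketch of the two configurations (only the 36h value comes from the setup described above; the eviction thresholds below are made-up examples, not recommendations):

# Deprecated: keep exited containers for at least 36h (the setting we carried over)
kubelet --minimum-container-ttl-duration=36h ...

# Recommended replacement: let eviction thresholds drive garbage collection,
# e.g. based on node disk pressure (threshold values here are only illustrative)
kubelet --eviction-hard=nodefs.available<10%,imagefs.available<15% ...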

Following the analysis above, a pod can only be deleted after its containers are deleted, so wouldn't setting minimum-container-ttl-duration to keep containers around make every pod undeletable? Why could normal pods still be deleted before? Were the container corpses of normal pods being removed anyway? A quick test confirmed it: after deleting a normal pod, its container corpses were removed immediately, so minimum-container-ttl-duration had no effect at all there. For the abnormal pod created from the YAML above, however, the flag did take effect: the Created containers were only removed once minimum-container-ttl-duration had elapsed.
Odd as that is, anything that goes wrong behind a deprecated flag is forgivable; the docs already explicitly advise against it, so the only real fix is to drop the flag and avoid the problem. I also pulled the latest code from the master branch, rebuilt, and tried again (version below): with or without the flag, the pod's containers are deleted immediately, so the problem of pods pending in Terminating no longer exists there.

% kubectl version  
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.2-dirty", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"dirty", BuildDate:"2018-07-02T11:03:02Z", GoVersion:"go1.9.7", Compiler:"gc", Platform:"linux/amd64"}

That leaves one question: why could pods still be deleted normally in 1.7.9 with minimum-container-ttl-duration set? How could it keep the container corpses and still delete the pod? Reading the source shows that k8s calls PodResourcesAreReclaimed to decide whether a pod's resources have been reclaimed; only when everything has been reclaimed can the pod be deleted. The 1.7.9 implementation is shown below: it checks, in turn, whether any of the pod's containers are still running, whether its volumes have been cleaned up, and whether the pod-level cgroup sandbox has been cleaned up:


func (kl *Kubelet) PodResourcesAreReclaimed(pod *v1.Pod, status v1.PodStatus) bool {
    if !notRunning(status.ContainerStatuses) {
        // We shouldn't delete pods that still have running containers
        glog.V(3).Infof("Pod %q is terminated, but some containers are still running", format.Pod(pod))
        return false
    }
    if kl.podVolumesExist(pod.UID) && !kl.kubeletConfiguration.KeepTerminatedPodVolumes {
        // We shouldn't delete pods whose volumes have not been cleaned up if we are not keeping terminated pod volumes
        glog.V(3).Infof("Pod %q is terminated, but some volumes have not been cleaned up", format.Pod(pod))
        return false
    }
    if kl.kubeletConfiguration.CgroupsPerQOS {
        pcm := kl.containerManager.NewPodContainerManager()
        if pcm.Exists(pod) {
            glog.V(3).Infof("Pod %q is terminated, but pod cgroup sandbox has not been cleaned up", format.Pod(pod))
            return false
        }
    }
    return true
}

The implementation in v1.10.2 looks like this:

func (kl *Kubelet) PodResourcesAreReclaimed(pod *v1.Pod, status v1.PodStatus) bool {
    if !notRunning(status.ContainerStatuses) {
        // We shouldn't delete pods that still have running containers
        glog.V(3).Infof("Pod %q is terminated, but some containers are still running", format.Pod(pod))
        return false
    }
    // pod's containers should be deleted
    runtimeStatus, err := kl.podCache.Get(pod.UID)
    if err != nil {
        glog.V(3).Infof("Pod %q is terminated, Error getting runtimeStatus from the podCache: %s", format.Pod(pod), err)
        return false
    }
    if len(runtimeStatus.ContainerStatuses) > 0 {
        glog.V(3).Infof("Pod %q is terminated, but some containers have not been cleaned up: %+v", format.Pod(pod), runtimeStatus.ContainerStatuses)
        return false
    }
    if kl.podVolumesExist(pod.UID) && !kl.keepTerminatedPodVolumes {
        // We shouldn't delete pods whose volumes have not been cleaned up if we are not keeping terminated pod volumes
        glog.V(3).Infof("Pod %q is terminated, but some volumes have not been cleaned up", format.Pod(pod))
        return false
    }
    if kl.kubeletConfiguration.CgroupsPerQOS {
        pcm := kl.containerManager.NewPodContainerManager()
        if pcm.Exists(pod) {
            glog.V(3).Infof("Pod %q is terminated, but pod cgroup sandbox has not been cleaned up", format.Pod(pod))
            return false
        }
    }
    return true
}

As you can see, the resource-reclamation logic in 1.7.9 differs from that in 1.10.2: v1.10.2 adds a check that the pod's cache entry is empty. As explained above, the cache is only cleared after the containers are deleted; in 1.7.9 with minimum-container-ttl-duration set, the exited container corpses are never cleaned up, so the cache is not cleared either, and in fact this situation already constitutes a resource leak. To verify this conclusion, I added the same cache-emptiness check to the PodResourcesAreReclaimed method in 1.7.9, and sure enough pods then got stuck pending in Terminating there as well.
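If you want to see the exact difference yourself, one way (assuming a local clone of the kubernetes/kubernetes repository; the file path matches the kubelet_pods.go reference in the kubelet log above) is to diff the file between the two releases:

# In a clone of kubernetes/kubernetes, compare the kubelet's pod-cleanup logic
# between the two versions discussed here.
git diff v1.7.9 v1.10.2 -- pkg/kubelet/kubelet_pods.go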

Back to why we set the minimum-container-ttl-duration flag in the first place: to keep information around after a container exits, so we can debug and reconstruct what happened. Without the flag, where do we find that information? The official description of minimum-container-ttl-duration says it is `deprecated once old logs are stored outside of container's context`, i.e. logs may eventually be stored outside the container, but that clearly is not implemented yet. After a few more experiments it turns out that as long as the pod itself is not deleted, its container corpses are kept around; if there are multiple exited instances, not every one is preserved, but at least one exited instance is kept and can be used for debugging. Looking at it the other way, keeping every exited instance would mean keeping the entire runtime context of each container: if a container writes a lot of data into its writable layer, that disk space can never be released, so it is better not to keep too many exited instances. The number of exited instances kept by default is usually enough for debugging; anything beyond that has to be preserved by shipping it off the node, e.g. via remote log collection or backup.
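As a quick sketch of the kind of post-mortem inspection this enables (the container ID below is a placeholder; substitute one taken from docker ps -a), the exited corpse can still be examined on the node:

# List containers that have exited but have not been garbage-collected yet
docker ps -a --filter "status=exited"

# Inspect the logs and metadata of a dead container (ID is a placeholder)
docker logs <container-id>
docker inspect <container-id>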
