A specific node is NotReady in an HA configuration

Author: 이경미 · Last modified: 2022-12-28 10:28

Error message / symptoms
  • With several nodes joined, only one specific node is NotReady
  • When you connect to the NotReady node, kubelet is running
  • The environment uses an HA (multi-master) configuration



Cause
  • Checking journalctl on the NotReady node showed connection errors related to the external IP.
  • In this HA environment the node should connect through the L4 (load balancer) IP, but it was found pointing at the master-1 IP instead.
  • In other words, kube-proxy was sending its API requests to the master-1 IP, which returned Connection Refused.
  • journalctl log
    journalctl -u kubelet
    
    Jul 17 00:06:05 aiopsvcp02 kubelet: E0717 00:06:05.412330  119624 reflector.go:282] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: Get https://[master-1 IP]:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=115358114&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp [master-1 IP]:6443: connect: connection refused
     
    Jul 17 00:06:05 aiopsvcp02 kubelet: E0717 00:06:05.412330  119624 reflector.go:282] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: Get https://[master-1 IP]:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Daiopsvcp02&resourceVersion=115081434&timeoutSeconds=392&watch=true: dial tcp [master-1 IP]:6443: connect: connection refused
    
    Jul 17 00:06:05 aiopsvcp02 kubelet: E0717 00:06:05.412384  119624 reflector.go:282] object-"kube-system"/"agilesoda-docker-secret": Failed to watch *v1.Secret: Get https://[master-1 IP]:6443/api/v1/namespaces/kube-system/secrets?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dagilesoda-docker-secret&resourceVersion=112933272&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp [master-1 IP]:6443: connect: connection refused
    ... (truncated)
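A quick way to confirm the misrouted endpoint is to pull the `dial tcp` target out of the kubelet log. The sketch below runs against a sample log line with a placeholder IP, not live output; on the node itself you would pipe `journalctl -u kubelet | grep 'dial tcp'` into the same filter.

```shell
# Sketch: extract the API server endpoint the node is dialing from a kubelet
# log line. The line below is a sample with a placeholder IP, not live output.
logline='dial tcp 10.0.0.1:6443: connect: connection refused'
endpoint=$(printf '%s\n' "$logline" | grep -oE '[0-9.]+:6443' | head -n1)
printf '%s\n' "$endpoint"
```

If the printed address is a master node's IP rather than the L4 VIP, the node is bypassing the HA endpoint.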




  • kube-proxy ConfigMap YAML (still pointing at master-1)

    kubectl -n kube-system describe cm kube-proxy
    Name:         kube-proxy
    Namespace:    kube-system
    Labels:       app=kube-proxy
    Annotations:  <none>
    
    Data
    ====
    config.conf:
    ----
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 6.2.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
    kubeconfig.conf:
    ----
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://[master-1 IP]:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    Events:  <none>


Resolution
  • Connect to the NotReady node, fix the kube-proxy ConfigMap, and restart the kube-proxy pod so the node goes back through the HA (L4) endpoint.


  1. Fix the kubeconfig entries to point at the L4 IP
    Both the server: entry in the kube-proxy ConfigMap's kubeconfig.conf (shown above) and the node's /etc/kubernetes/kubelet.conf must reference the L4 IP instead of the master-1 IP. The node-side file:
    vi /etc/kubernetes/kubelet.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1URXlPVEF3TlRFeU4xb1hEVE15TVRFeU5qQXdOVEV5TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2xoCmZpRyt2Q1FubmQwYnpWNlBNT2lsbWZFWnlXelc1SlpaaXhXSnZPd245bFRwd3NPSWRnK3lPU20rUnRuMmpnRFoKaDFuVWtmbGszeUpxYmhoT3AxaWROZlZPYldJVl
    ... (truncated) ... HZjTkFRRUxCUUFEZ2dFQkFCdzRVYUFPaSsyUzVad3lzakxhS0RBWFlpclYKdHYwV1lVYjFKOWFPUmlCYzlwbU1rUy9IZXVSSTB6T0VNcWlqR1VaZlViVDdEL1ZnWWc1UXJTczd2a1ErMU5XVwpqYkFjNnhjSERkV0Z2NmRxS2Y0MmxWRUlua2xRN0EwL2hLbmkwK2l6Q21FaXNmRlZTWGl4eVFZV0tRR0FXK0dFCjVnN1VKQUFMZEs0NWNmSHNxenRHUW1oT0pYei9YWjk4SEgrR1haQnd1QVkxSHhmcjZSdXBNQ2pYaGpiS3ZWeU8KcTk3S0doY2RRYW52anM1Wmo3ditLR0ZyL1hwRVNUZXREUXVFYXpGK2UvZVpFTnNRb05Fd2dKaTBlVTZwZHhYWQpERFYrc1JaUHljbW5HK3I0aU9ib1VKMWNnYTZPdlBHZExhK29xTFZMaUxCeENlQ0VYVjFNWWlPNjQ1RT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://[L4 IP]:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: system:node:basd
      name: system:node:basd@kubernetes
    current-context: system:node:basd@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: system:node:basd
      user:
        client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
        client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
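The server: change above can also be scripted instead of edited by hand. This is a minimal sketch on a throwaway sample file: 10.0.0.1 stands in for the master-1 IP and 203.0.113.10 for the L4 VIP, both placeholders. On the node, the real target would be /etc/kubernetes/kubelet.conf, and you should keep a backup before editing.

```shell
# Sketch on a throwaway sample file; 10.0.0.1 is a placeholder master-1 IP and
# 203.0.113.10 a placeholder L4 VIP. On the node, the real target would be
# /etc/kubernetes/kubelet.conf (back it up first).
cat > /tmp/kubelet.conf.sample <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.0.1:6443
  name: kubernetes
EOF
cp /tmp/kubelet.conf.sample /tmp/kubelet.conf.sample.bak   # backup before editing
# Rewrite the server: line to point at the VIP.
sed -i 's#server: https://[^ ]*:6443#server: https://203.0.113.10:6443#' /tmp/kubelet.conf.sample
grep 'server:' /tmp/kubelet.conf.sample
```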


  2. Restart the kube-proxy pod (here done by restarting docker and kubelet on the node, which restarts every container on it)
    kubectl get po -nkube-system -owide | grep proxy
    NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
    kube-proxy-k5mq4                           1/1     Running   0          23d   192.168.50.199   worker   <none>           <none>
    kube-proxy-zcx88                           1/1     Running   1          29d   192.168.50.198   master     <none>           <none>
    ... (truncated) ...
    
    systemctl stop kubelet
    systemctl stop docker 
    systemctl daemon-reload
    systemctl start docker
    systemctl start kubelet 


  3. Verify normal operation
    kubectl -n kube-system describe cm kube-proxy
    Name:         kube-proxy
    Namespace:    kube-system
    Labels:       app=kube-proxy
    Annotations:  <none>
    
    Data
    ====
    config.conf:
    ----
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 6.2.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
    kubeconfig.conf:
    ----
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://[L4 IP]:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    Events:  <none>
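Besides re-checking the ConfigMap, the node status itself is worth verifying: once kubelet and kube-proxy reconnect through the L4 IP, the node should return to Ready shortly.

```shell
# Confirm the node has returned to Ready (run from any admin host).
kubectl get nodes
# For more detail on a node that is still NotReady (<node-name> is a placeholder):
kubectl describe node <node-name>
```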





