Kubernetes gRPC liveness probes fail #11

@akolybelnikov

Description

After implementing the Deploy Locally chapter, the Kubernetes liveness and readiness probes fail with:

kubectl get pod:

NAME        READY   STATUS             RESTARTS       AGE
proglog-0   0/1     CrashLoopBackOff   12 (19s ago)   32m

kubectl describe pod:

Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  30m                  default-scheduler  Successfully assigned default/proglog-0 to kind-control-plane
  Normal   Pulling    30m                  kubelet            Pulling image "busybox"
  Normal   Pulled     30m                  kubelet            Successfully pulled image "busybox" in 3.724847794s
  Normal   Created    30m                  kubelet            Created container proglog-config-init
  Normal   Started    30m                  kubelet            Started container proglog-config-init
  Warning  Unhealthy  29m (x2 over 30m)    kubelet            Readiness probe failed: command "/bin/grpc_health_probe -addr=:8400" timed out
  Warning  Unhealthy  29m                  kubelet            Readiness probe failed: timeout: failed to connect service ":8400" within 1s
  Normal   Killing    29m                  kubelet            Container proglog failed liveness probe, will be restarted
  Warning  Unhealthy  29m (x3 over 30m)    kubelet            Liveness probe failed: command "/bin/grpc_health_probe -addr=:8400" timed out
  Warning  Unhealthy  29m                  kubelet            Readiness probe failed:
  Normal   Started    29m (x3 over 30m)    kubelet            Started container proglog
  Normal   Created    29m (x4 over 30m)    kubelet            Created container proglog
  Normal   Pulled     29m (x4 over 30m)    kubelet            Container image "github.com/akolybelnikov/proglog:0.0.1" already present on machine
  Warning  BackOff    22s (x140 over 29m)  kubelet            Back-off restarting failed container

I also tried compiling and deploying the code from the cloned repository, with the same result.
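The events above show both probes running `/bin/grpc_health_probe -addr=:8400` and timing out after the default 1 second, which suggests the server may simply not be serving within that window (for example while Raft and Serf are still bootstrapping). One thing worth trying is loosening the probe timings on the proglog container. The following is a minimal sketch, not the book's exact manifest; the field values are illustrative assumptions:

```yaml
# Hypothetical probe settings for the proglog container in the
# StatefulSet template. Command and port are taken from the events
# above; the timing values are assumptions to experiment with.
readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8400"]
  initialDelaySeconds: 10   # allow time for Raft/Serf bootstrap
  periodSeconds: 10
  timeoutSeconds: 5         # the default 1s is what times out in the events
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8400"]
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
```

If the probes still fail with generous timings, the next thing to check is whether the server actually registers the gRPC health-checking service on port 8400 at all.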
