How to Debug Kubernetes Applications With Ephemeral Containers

  • Published 23 Jul 2023
  • How can we debug or troubleshoot applications running in Kubernetes without modifying the original image? Ephemeral containers might be the answer.
    #kubernetes #debugging #container
    Consider joining the channel: / devopstoolkit
    ▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
    ➡ Gist with the commands: gist.github.com/vfarcic/9d9de...
    🎬 Containers Are Not VMs! Which Base Container (Docker) Images Should We Use?: • Containers Are Not VMs...
    🎬 Is Timoni With CUE a Helm Replacement?: • Is Timoni With CUE a H...
    ▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
    If you are interested in sponsoring this channel, please use calendly.com/vfarcic/meet to book a timeslot that suits you, and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).
    ▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
    ➡ Twitter: / vfarcic
    ➡ LinkedIn: / viktorfarcic
    ▬▬▬▬▬▬ 🚀 Other Channels 🚀 ▬▬▬▬▬▬
    🎤 Podcast: www.devopsparadox.com/
    💬 Live streams: / devopsparadox
  • Science & Technology

COMMENTS • 51

  • @DevOpsToolkit
    @DevOpsToolkit  11 months ago

    How do you debug containers in Kubernetes clusters when observability tools are not enough?

    • @hugolopes5604
      @hugolopes5604 10 months ago +1

      Break glass, SSH into the container, and do a post-mortem for the break-glass afterwards.

  • @maxreuv
    @maxreuv 11 months ago +4

    Another one of "must haves". Thank you!

  • @hkupty
    @hkupty 11 months ago +2

    Thanks for that!
    I have a label on the pods, `live = 1`, which I use for taking pods out of rotation when I want to debug them (setting `live = 0` causes the deployment to spawn a new pod and the service to route to the newly created one). I can then safely run kubectl debug on that pod, given it can be removed later anyway :)
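    A minimal sketch of that flow (the pod name and label values are illustrative, and it assumes both the ReplicaSet selector and the Service selector match on `live`):

    ```sh
    # Flip the label so the ReplicaSet spawns a replacement and the Service stops routing here
    kubectl label pod my-app-7d4b9c live=0 --overwrite

    # Attach an ephemeral debug container to the now-isolated pod
    kubectl debug my-app-7d4b9c -it --image=busybox -- sh
    ```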

  • @devxp-tech
    @devxp-tech 10 months ago +1

    Very helpful, thank you! ❤

  • @javisartdesign
    @javisartdesign 10 months ago +1

    Thanks! Really useful. Sometimes it's necessary to test communications and other stuff.

  • @Babbili
    @Babbili 10 months ago +1

    I keep on saving your videos 😆

  • @chrisre2751
    @chrisre2751 11 months ago +2

    Thanks!

  • @IvanRizzante
    @IvanRizzante 10 months ago +2

    Thanks for another great video!
    I totally agree on using ephemeral containers for debugging, but for GitOps lovers like me, kubectl debug means that I have to contact Joe, that guy who works in the basement and nobody knows... who's the only one with direct access to the cluster 😅

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago +2

      Not really... The kubectl debug command is just a shortcut that contacts the kube API to edit the pod definition. You could modify your manifest by adding an ephemeral container spec, push it to Git, and get the same result. From there on, it all depends on whether you have permission to sh into that container.
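      For illustration, a rough sketch of what that spec could look like in a Pod manifest (names and the image are placeholders; ephemeral containers are added through the pods/ephemeralcontainers subresource, so whether a plain apply or your GitOps controller can patch them depends on the tooling and Kubernetes version):

      ```yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: my-app
      spec:
        containers:
          - name: my-app
            image: ghcr.io/example/my-app:1.0.0  # hypothetical application image
        ephemeralContainers:                     # debug-only; no ports, probes, or restarts
          - name: debug
            image: busybox
            targetContainerName: my-app          # target the main container's namespaces
            stdin: true
            tty: true
      ```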

    • @IvanRizzante
      @IvanRizzante 10 months ago +1

      @DevOpsToolkit That's what I thought too, but as you said, that would require kubectl exec in the end.
      I agree with you when you say that even in the GitOps world you still need someone authorized to interact directly with the cluster, so I think that Joe would save my day in these corner cases!

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago +4

      To me, ephemeral containers are the last resort when all observability tools (those that do not need kubectl) fail to reveal the issue. That's the "break glass" situation, which always requires elevated permissions. It's a "we do not know what to do, so let's dig in" type of situation.

    • @IvanRizzante
      @IvanRizzante 10 months ago +1

      @DevOpsToolkit I totally agree 👍 Now I'm waiting for a video on KubeArmor 👍

  • @GoshaDo
    @GoshaDo 10 months ago +1

    Debugging in production is safe.
    We are doing it all the time using the Rookout dynamic observability tool.
    It's much more advanced tech for debugging than attaching containers and stopping the application.

  • @chasim1982
    @chasim1982 11 months ago +2

    Great 👍 content and learning. One question: can I do the same stuff by launching a new pod of my application with a debug sidecar? Please correct me if I am wrong. Thanks for the amazing content, I learned a lot from you ❤.

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago +1

      You can create a new pod and then attach a debug container, but I think it's easier to simply use the `--copy-to` argument.
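      For example, something along these lines (pod and image names are illustrative):

      ```sh
      # Create a copy of the pod with an extra debug container attached and open a shell in it
      kubectl debug my-app-7d4b9c -it --image=busybox --copy-to=my-app-debug -- sh
      ```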

    • @chasim1982
      @chasim1982 11 months ago +2

      @DevOpsToolkit I am using EKS 1.23

  • @lightman8527
    @lightman8527 11 months ago +2

    Great video! On a side note, I'd never heard of KubeArmor. Are you planning to make a video on it? How mature is the project in your experience?

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago +3

      KubeArmor is on my to-do list. That's why I mentioned it.

    • @Babbili
      @Babbili 10 months ago +1

      @DevOpsToolkit Is it an alternative to Falco (Sysdig)?

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago +1

      It is :)

  • @devopssimplify9884
    @devopssimplify9884 10 months ago +1

    Again, a great video!!! I wanted to implement the same, but the problem is that my cluster has a restriction requiring resource limits and requests for pods. How do I specify those with kubectl debug? My debug pod needs to have resources defined; please suggest!

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago

      The `kubectl debug` command does not allow you to specify resource limits and requests. As a matter of fact, `kubectl create` does not have that option either. The logic behind `kubectl` is that it provides only some of the flags when creating resources. The assumption is that anything more needs to be specified as a YAML file. That is equally valid for `kubectl run` as for `kubectl debug`.
      What that means is that you'd need to define ephemeral containers as part of your manifest if you need resource limits and/or requests.
      The spec is in: kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ephemeralcontainer-v1-core
      So, when you want to debug, add the ephemeral container to the manifest with the Pod (or Deployment or StatefulSet) and execute `kubectl apply --filename ...` or whatever you're using normally.

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago

      The alternative solution is to use Mutating Admission Controllers with, let's say, Kyverno. You can configure it to add resource requests and limits every time an ephemeral container is created. That is probably a more elegant and user-friendly solution.

    • @devopssimplify9884
      @devopssimplify9884 10 months ago +1

      @DevOpsToolkit Wow, wonderful! Thanks a lot!

    • @devopssimplify9884
      @devopssimplify9884 10 months ago +1

      @DevOpsToolkit You are a real genius!

  • @chrisre2751
    @chrisre2751 11 months ago +2

    Hi Viktor, what do you think about distroless images from Google?

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago

      I used them only briefly, so I'm not the best judge of them. What I saw makes me think they're OK.

  • @DennisHaney
    @DennisHaney 11 months ago +1

    Tbh, if I had to resort to this for debugging, the first thing I would do afterwards is delete the pod to see if it came back with no errors.

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago

      Yeah. That's why creating a copy of a pod is very useful.

  • @computershit
    @computershit 11 months ago +1

    Great tip, but is deleting a single pod really that big a deal for a stateless app?

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago +1

      It's often not a big deal with apps that have multiple replicas. Still, why do it if there is a mechanism to create a copy?

    • @bobbyray8457
      @bobbyray8457 10 months ago +2

      @DevOpsToolkit Because, despite being a "copy", it is not guaranteed to be scheduled on the same node, eliminating the ability to ascertain any node-level issues.

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago +1

      @bobbyray8457 That's true. Like everything else, it's not a solution for every problem. When you do suspect a problem with the node and you haven't seen any clue about the issue in observability tools, then you might not want to make a copy but instead attach an ephemeral container directly to a specific pod. That being said, node issues do tend to pop up in metrics, so that would be fairly uncommon.

    • @MastaEda
      @MastaEda 10 months ago +1

      @bobbyray8457 You can try this, I guess:
      `--same-node=false`: When used with '--copy-to', schedule the copy of target Pod on the same node.
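      For example (pod and image names are illustrative):

      ```sh
      # Copy the pod, keep the copy on the same node, and open a shell in the debug container
      kubectl debug my-app-7d4b9c -it --image=busybox --copy-to=my-app-debug --same-node -- sh
      ```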

  • @squalazzo
    @squalazzo 11 months ago +1

    So, it seems like this is not a solution for scenarios like a StatefulSet with MongoDB, with 2 replicas and 1 headless pod for backups, for example... or any StatefulSet, in the end, as you'd be working with data that could already be old and not actively used by the app using that STS... right?

    • @DevOpsToolkit
      @DevOpsToolkit  11 months ago +1

      Debugging by entering containers is (or should be) the last resort, used only when observability fails to show what we're looking for. On top of that, we rarely debug third-party apps like that. That's typically reserved for our own apps. Now, if you do need to debug third-party apps like that, and they are stateful, and their containers do not have the tools we need (which they often do), I tend NOT to create a copy but to attach an ephemeral container directly to one of the STS pods.
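      For example, something like this (pod, container, and image names are illustrative):

      ```sh
      # Attach an ephemeral container directly to one StatefulSet pod, targeting its main container
      kubectl debug mongodb-0 -it --image=busybox --target=mongodb -- sh
      ```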

    • @squalazzo
      @squalazzo 11 months ago +1

      @DevOpsToolkit Thanks!

  • @user-zb8gz1pj3j
    @user-zb8gz1pj3j 10 months ago +1

    I tried this, but my debug pod is failing with the following error I got from describe:
    message: '0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 6 node(s) didn't match Pod's node affinity/selector.'

  • @testtestlast4306
    @testtestlast4306 11 months ago +1

    👀

  • @bhavasarpritesh
    @bhavasarpritesh 10 months ago +1

    How can we see the pod's files from the debug container?

  • @oleksandrsimonov9200
    @oleksandrsimonov9200 10 months ago +1

    Still, the question is how to make such a limited container with Node, Python, or Ruby. OK, you don't have a shell, but you do need an interpreter.

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago

      For those, I would pick ChainGuard images. They are small, (mostly) without vulnerabilities, and built daily. You'll find a variation for almost every language.

    • @oleksandrsimonov9200
      @oleksandrsimonov9200 10 months ago +1

      @DevOpsToolkit Yeah... and they're using Alpine... It causes a lot of issues with FFI, for example, where we need to use glibc instead. For now we're using multistage minideb images, removing dev packages and other unneeded things in the final stage.

    • @DevOpsToolkit
      @DevOpsToolkit  10 months ago

      I thought that ChainGuard fixed that by using glibc instead of musl.

    • @jemag
      @jemag 10 months ago

      @oleksandrsimonov9200 They use Wolfi, not Alpine. You can use glibc or musl.

  • @PS-xn8ej
    @PS-xn8ej 10 months ago +2

    You should first show kubectl get pods for what you are working on, not the end command...

  • @iximiuz
    @iximiuz 4 months ago +1

    While I fully agree with the overall sentiment, I'm afraid this video is dangerously misleading in the details. 1) Don't build your container images `FROM scratch` - scratch containers lack vital parts of an OS like the /etc/{passwd,group} files and TLS certificates (if you absolutely have to use a minimal image, use a slim/distroless base). 2) Ephemeral containers don't run forever - as soon as the main process exits (or crashes), the ephemeral container stops.

  • @emorycundick1904
    @emorycundick1904 10 months ago +1

    "promosm" 🤗