[ Kube 13 ] Using Persistent Volumes and Claims in Kubernetes Cluster

  • Published 1 Dec 2024

COMMENTS • 213

  • @krishnak6602
    @krishnak6602 5 years ago +3

    Glad this video popped up in YouTube results. Very helpful because everything is done through command execution and verification. Thanks a lot.

  • @venkatnunepalli4509
    @venkatnunepalli4509 4 years ago +3

    It's very helpful; you explained it clearly!! Thank you very much, Venkat.

  • @GunjanSharma31
    @GunjanSharma31 2 years ago +1

    Very detailed, step-by-step video with practical demonstration. It helped me a lot. Thank you!

  • @ajit555db
    @ajit555db 5 years ago +1

    Great episode again; completed the exercise in my local "Kubeadm-dind-cluster". My head was reeling when I read the documentation. This episode helped me a lot. Completing the storage-related episodes (13, 20, 23) together. If you were in Hyderabad, I would have treated you to beer and biryani.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      I am travelling to India next month :) Lamb biriyani with a couple of Budweisers would do :)

    • @ajit555db
      @ajit555db 5 years ago +1

      @@justmeandopensource Really, no joking, to Hyderabad? It would be my honor to treat you.

    • @justmeandopensource
      @justmeandopensource  5 years ago +3

      I was joking. But I am visiting Chennai for annual vacation next month. Many many thanks for your kindness.

  • @vishalkole9111
    @vishalkole9111 3 years ago +2

    Your videos are very informative and useful. They helped me solve several issues with my deployment. Thank you!

  • @Pallikkoodam94
    @Pallikkoodam94 5 years ago +2

    Hi,
    This is the best video for understanding PVs and PVCs in K8s. Thank you very much.

  • @prasadrayudu2001
    @prasadrayudu2001 4 years ago +1

    Very good explanation - Thanks for the session - Appreciate all your efforts in making this video available.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Prasad, many thanks for your interest in this channel. Cheers.

  • @blaupaunk
    @blaupaunk 3 years ago +1

    Informative and simple to understand. Thank You!

  • @clementduval3917
    @clementduval3917 5 years ago +3

    It's absolutely brilliant, thanks man!!

  • @xmlviking
    @xmlviking 5 years ago +1

    Great stuff! Love the rolling interaction with kubectl and the nodeSelector. :) Excellent work!

  • @blueoceansf
    @blueoceansf 2 years ago +1

    Thanks for sharing the knowledge..

  • @TomGrubbe
    @TomGrubbe 5 years ago +1

    This was immensely helpful. Please consider another tutorial using dynamic provisioning.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Tom, thanks for watching this video. I have already done a video on dynamic volume provisioning as well, using an NFS provisioner. Please check my channel videos. Thanks

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      ua-cam.com/video/AavnQzWDTEk/v-deo.html

  • @sarankumaark
    @sarankumaark 3 years ago +1

    Thanks, @venkat bro excellent explanation.

  • @dubareddy383
    @dubareddy383 4 years ago +4

    If the directory is not available, we can still create a PV by adding "type: DirectoryOrCreate" under hostPath, as below:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-hostpath
    spec:
      storageClassName: local-storage
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
        type: DirectoryOrCreate

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Sudhams, thanks for watching. Yes, that's a local storage directory on the node.

  • @romantsyupryk3009
    @romantsyupryk3009 4 years ago +1

    Thanks so much for this tutorial.

  • @msahsan1
    @msahsan1 4 years ago +1

    Perfect! Thanks man

  • @ThamaraiselvamT
    @ThamaraiselvamT 5 years ago +3

    Thanks for the great video. What if I have multiple PVs, created one PVC, and connected that PVC to a pod? When the pod starts to write, which PV will be getting those writes?

    • @justmeandopensource
      @justmeandopensource  5 years ago +4

      Hi Thamaraiselvam, thanks for watching this video. If you have created a PVC outside of your pod definition, then the PVC will be bound to a PV, and when the pod is deleted, the PVC will still be attached to the PV. You have to delete the PVC and PV yourself. So when the pod restarts, it will use the same PVC which is already attached to a PV. But if you created the PVC within the pod template in the same manifest, then deleting the pod will also delete the PVC, and a new pod will get a new PV. Cheers.
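
      The standalone-PVC case described in this reply is simply a PVC defined as its own manifest. A minimal sketch (the claim name is illustrative; "manual" echoes the storage class name used elsewhere in the thread):

      ```yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: my-pvc              # illustrative name
      spec:
        storageClassName: manual  # must match the PV it should bind to
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
      ```

      Because this claim is a separate resource rather than part of a pod template, deleting the pod leaves the claim, and its binding to a PV, intact.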

    • @ThamaraiselvamT
      @ThamaraiselvamT 5 years ago +3

      @@justmeandopensource Thanks for the reply. My bad, I thought multiple PVs could be claimed with one PVC, but that's not the case: one PV per PVC.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      @@ThamaraiselvamT Yeah, you are right.

  • @Mohammed-co3ux
    @Mohammed-co3ux 1 year ago +1

    Awesome.. ❤

  • @karl-heinzpeters7283
    @karl-heinzpeters7283 4 years ago +1

    Great video!

  • @ramalingamvarkala1571
    @ramalingamvarkala1571 5 years ago +3

    Hi Venkat, thanks for the video.
    1. Can we use AWS S3 as a PV?
    2. Is there any other method to mount a PV on all nodes?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Ramalingam,
      Thanks for watching this video.
      AWS S3 is not a supported persistent volume type in Kubernetes. For a list of persistent volume providers that you can use in Kubernetes, please check the link below.
      kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
      I don't understand your second question. Mounting a PV on all pods or all nodes?
      Thanks.

  • @aimjob2788
    @aimjob2788 4 years ago +1

    This is a brilliant video. Instead of a persistent volume claim, we can use hostPath directly in the pod config file to mount a volume in the pod. What is the need for creating a persistent volume claim and persistent volume? I am new to Kubernetes; could you please explain this?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi, thanks for watching. Yes, you can use hostPath directly in the pod. But that will be available only on the host where the pod is running. If the pod gets rescheduled on a different worker node, you will lose all the data in that volume. A persistent volume, as the name implies, is persistent throughout the lifecycle of the pod. You separate the storage and manage it separately so that it's available to the pod no matter where the pod is running. Cheers.
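
      The direct-hostPath alternative discussed in this reply looks roughly like this in a pod spec (a sketch; the pod name and image are illustrative, while /kube and /mydata are the paths used in the video):

      ```yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: busybox-hostpath        # illustrative name
      spec:
        containers:
          - name: busybox
            image: busybox
            command: ["sleep", "3600"]
            volumeMounts:
              - name: data
                mountPath: /mydata    # path inside the container
        volumes:
          - name: data
            hostPath:
              path: /kube             # directory on whichever node runs the pod
      ```

      The volume lives and dies with the node the pod lands on, which is exactly the portability problem the reply describes.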

  • @FreshersKosam
    @FreshersKosam 4 years ago +2

    Hi Venkat, I have seen your ELK videos, those are great. Now watching K8s, awesome :) I have a request: can you make a video on ELK deployed with Helm?

    • @justmeandopensource
      @justmeandopensource  4 years ago +3

      Hi Siva, thanks for watching. I have done it using Rancher but not through helm directly. See if any of the below videos are helpful.
      ua-cam.com/video/8nWh1GLd7nY/v-deo.html
      ua-cam.com/video/wL5qy34vwR0/v-deo.html
      ua-cam.com/video/PZHEgNKORbY/v-deo.html
      ua-cam.com/video/qjnT0pU0IRo/v-deo.html
      ua-cam.com/video/fNMmnN8gLCw/v-deo.html
      Cheers.

  • @iamrajbaviskar
    @iamrajbaviskar 5 years ago +2

    Great videos on Kubernetes topics. Could you please do one for authentication and authorization in a Kubernetes cluster, possibly with Dex, Gangway, GitHub?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Rajiv, thanks for watching this video. I have that in my list. Hopefully I will record it when I get some time. There are a lot of other videos waiting to be released. Thanks for following this series.

  • @dipi411
    @dipi411 2 years ago +1

    Awesome!!

  • @ponmanimaran8864
    @ponmanimaran8864 3 years ago

    Crisp and clear explanation. :) Great.
    But I'm facing the error below:
    spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations). I tried deleting the pods and creating them again in AWS EKS. No expected results for the volume mount.

  • @erpankajbatra
    @erpankajbatra 3 years ago +1

    Hi Venkat, great stuff! I have a query: what will happen to the data on the worker node if the node is rebooted?

    • @justmeandopensource
      @justmeandopensource  3 years ago

      Hi Pankaj, thanks for watching. Which data on the worker node are you referring to?

  • @safsfsfdfgdg
    @safsfsfdfgdg 5 years ago +1

    Excellent video. Just a question: if I have MongoDB replica sets on different hosts that write to respective mounted HDDs on those nodes, do I need a PV/PVC or can I just mount the folder in YAML? I am trying to set up Mongo replication so data replicates across hosts in different pods. Thanks

    • @justmeandopensource
      @justmeandopensource  5 years ago

      Hi Ashish, thanks for watching this video. The idea of a replica set is to provide high availability and resiliency, avoiding a single point of failure. So having a separate volume for each replica makes sense. If you use hostPath on the respective node, how would you make sure that the pod gets started on the same node every time? I think a StatefulSet will make your life easier, with a dedicated persistent volume for each replica. Thanks.

    • @safsfsfdfgdg
      @safsfsfdfgdg 5 years ago +1

      @@justmeandopensource Thanks man for the pointer. So the idea is to have a StatefulSet and mount the persistent volume on each node. Another question: what if I just want to use a local mounted drive instead of NFS? What kind of volume should I go with in that case (hostPath?)? I am watching your StatefulSet video as I comment. Thanks a lot for your community service!

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      @@safsfsfdfgdg If you want to use a local volume on the nodes, you can use hostPath as a persistent volume.
      kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
      Cheers.

  • @umartariq8365
    @umartariq8365 5 years ago +2

    Hi Venkat, thanks for the video.
    I have a question: let's say I deployed a WordPress site using persistent volumes and persistent volume claims.
    After some time the development team wants to update the code placed on the PV. How can they access or update it?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Umar, thanks for watching this video. In the Kubernetes philosophy, updates are done by creating new versions of the container images and deploying them using a rolling update.

    • @umartariq8365
      @umartariq8365 5 years ago +1

      @@justmeandopensource Hi Venkat, thanks for your reply, I appreciate it.
      It's still not clear to me how I can update/modify the code files stored in the PV:
      when I update to the new image, the same storage defined in the PVC will be mounted.
      Maybe if you have time, you can make a video deploying a simple WordPress site or any HTML site using a PVC,
      then update a code file manually and deploy again. Maybe a video will help me understand better.
      Again, I really appreciate your help; you are doing a very good job. Keep it up.

    • @umartariq8365
      @umartariq8365 5 years ago +1

      For example, if I use the link below to deploy WordPress:
      cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
      in this example, how can I update a specific file under /var/www/html?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Umar, how are you deploying WordPress in the Kubernetes cluster? As a Deployment or a StatefulSet?
      And what is it you want to edit in /var/www/html?
      You should be managing everything via the web frontend.
      Thanks

  • @charank7852
    @charank7852 1 year ago

    Hi Venkat,
    Thanks for sharing knowledge.
    I have one question related to PV/PVC.
    For example, I do a helm release-1 which creates pod-1 and pvc-1, which is bound to pv-1 created by the storage class. Some data related to pod-1 is now in pv-1.
    After some time, I delete helm release-1, which deletes pod-1 and pvc-1, but pv-1 still remains as per the Retain policy.
    My question is: if I create helm release-1 again, is there any chance to RE-BIND to the same pv-1 so that we can get our data back?

  • @vini007mca1
    @vini007mca1 4 years ago +2

    One small doubt: as per the video, the PV is 1Gi and the PVC is just 100Mi. Once the PVC (request of 100Mi) is allocated/bound to the PV (1Gi), what will be the status of the remaining approx. 900Mi? Is it unused and unable to be used by any other PVC, since the current PVC only requested 100Mi? Or did I get it wrong?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Vinay, thanks for watching. You are right. A PVC gets bound to a PV. In this case, it will only use 100Mi and the rest of the space is locked, as no other PVC can use this volume. If you used dynamic volume provisioning, you would have the PV deployed automatically with the exact size you requested in the PVC. Otherwise, if you are manually provisioning volumes, you will have to determine the size in advance. Cheers.

    • @vini007mca1
      @vini007mca1 4 years ago +1

      @@justmeandopensource Got it. Thanks a lot

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@vini007mca1 You are welcome. Cheers.

  • @turalaliyev6288
    @turalaliyev6288 3 years ago +1

    Thank you very much. I did the same as you, but when I check /kube it is always empty. I am using minikube.

    • @justmeandopensource
      @justmeandopensource  3 years ago +1

      Not sure where you might have gone wrong. I did a quick check and it worked as expected in my cluster. It could be something with your setup specifically. I would advise you to give it a second attempt, maybe on a different cluster.

  • @ilangodhandapani6232
    @ilangodhandapani6232 3 years ago

    Hi Venkat! Say I have an app using a PVC and PV, and the PV points to NAS storage. When the pod goes down for any reason and the PVC is still up, just restarting the pod will be okay and the NAS should be mounted to the pod. What if the node hosting the PVC or PV goes down and the node hosting the pod is still up? Can the pod still access the NAS storage? Do we need to delete the pod and PV/PVC and bring up the PV, PVC and pod again?

  • @nareshpandian1321
    @nareshpandian1321 4 years ago +1

    Hi Venkat,
    I believe a PVC will not work for 2 or more replicas.
    During scale-out and scale-up it causes issues (failed to attach: bound)

    • @nareshpandian1321
      @nareshpandian1321 4 years ago +1

      Correct me if I'm wrong

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@nareshpandian1321 I would have expected the same. But I need to test it.

    • @nareshpandian1321
      @nareshpandian1321 4 years ago

      I tested in GKE; it's not working.
      I hope NFS will solve this issue

  • @scetamit2555
    @scetamit2555 4 years ago +1

    What is the significance of "/mydata" under mountPath? It doesn't seem to show up anywhere.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi, thanks for watching. That's how you mount. If you log in to the container, you can see that /mydata is mounted from the host path on the worker node. Start watching at 21:30. Cheers.

  • @jurkinss1
    @jurkinss1 4 years ago +1

    Thanks man!

  • @tamilselvan8343
    @tamilselvan8343 4 years ago +1

    Hi Venkat, for kworker2 how will the kube folder be created? You created the directory on the kworker1 node, so how will it be created on kworker2 at 29:55?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Tamil, thanks for watching. Actually you don't have to create that kube directory. It will get created automatically; if it exists, it will be used. This was pointed out by one of the viewers. Give it a try for yourself and see whether the directory gets created. I haven't actually tested it.

  • @bhargavsharathkumarjillepa1479
    @bhargavsharathkumarjillepa1479 2 years ago +1

    Hello Venkat,
    I really appreciate the knowledge you are sharing; it is invaluable.
    I'm following the elastic-on-cloud video and came here.
    I have a query: when I try to deploy elastic on cloud, it says no persistent volume. I created a persistent volume with the help of this video, but no luck.

  • @Siva-ur4md
    @Siva-ur4md 5 years ago +1

    Hi Venkat, thanks for the video. I have 2 doubts. How does a PVC know it should always connect to a particular PV, even after deletion of the PV and PVC? For example: pv1 -> /tmp/pv1, pv2 -> /tmp/pv2, with ReclaimPolicy: Retain for both pv1 and pv2. When I created pvc1 for the first time, it got mapped to pv1 and stored all the data in the /tmp/pv1 folder. Then I deleted and recreated pvc1, and I want to create pods with the same data as last time. How can it know to map to pv1 (the same one it mapped to last time) again? What if it maps to pv2, since we are not mentioning a PV name in the PVC file to always connect to a preferred PV? If PV and PVC bind with storageClassName (this is the only shared name we use in both the PV and PVC files), should the storage class name always be "manual" or can it be anything? Please clarify this for me, and thanks in advance...

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Siva,
      I can understand your question.
      1. So you have pv1 created, and pvc1 got bound to pv1. The reclaim policy for pv1 is Retain. When you delete pvc1, pv1 will still be there with the data in it. And when you create pvc1 again, it won't bind to pv1; it will bind to another persistent volume. With the reclaim policy set to Retain, that persistent volume won't be available for another claim even when the associated PVC has been deleted. This is like a backup. You will have to manually copy the data you want to another volume, and then you can delete pv1. This is the fundamental behaviour.
      If you want to always associate a PVC with the same PV, try using StatefulSets.
      2. PV and PVC don't bind using the storage class. When you use dynamic storage provisioning, you can define many dynamic provisioners. Each of them will be identified by a storage class name. When you create a PVC, you specify which storage class to use.
      Thanks,
      Venkat
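
      One detail worth adding, as a general Kubernetes feature rather than something shown in the video: a PVC can name its target volume explicitly via spec.volumeName, which pins the binding to one specific PV (names illustrative):

      ```yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc1
      spec:
        volumeName: pv1          # bind to this PV only
        storageClassName: manual
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      ```

      Note that a PV left in the Released state by a Retain policy also needs its spec.claimRef cleared (e.g. by editing the PV object) before any claim can bind to it again.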

  • @rsrini7
    @rsrini7 5 years ago +1

    Thanks Venkat. How do I select the correct apiVersion for a kind? It seems each kind has a different apiVersion. Only from the documentation?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Srini, thanks for watching this video. Yeah, it's often confusing which apiVersion should be used for different objects.
      You can find that in a few different ways.
      $ kubectl api-versions
      $ kubectl explain
      $ kubectl explain deployment
      $ kubectl explain configmap
      These commands will give you what versions to use for each resource. It depends on what version of Kubernetes you have, to find out which apiVersion a specific resource needs.
      Thanks.

    • @rsrini7
      @rsrini7 5 years ago +1

      @@justmeandopensource Thanks for the quick response. Awesome tutorial :)

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@rsrini7 You are welcome.

  • @TheBoyYeahMan
    @TheBoyYeahMan 4 years ago +1

    You should not put persistent volumes on /tmp. Some Linux distributions wipe that directory on restart, so you would delete all your data.

  • @oleggv5518
    @oleggv5518 3 years ago

    Hi Venkat, thanks for the video. I have a question: based on what does a PVC bind to a PV? There are no selectors, labels or anything like that in the PVC. For example, if I have 3 pods and I need 3 different volumes, one for each of them, I create 3 PVs and 3 PVCs respectively. Which PVC will bind to which PV? Can we control it?

  • @weitanglau162
    @weitanglau162 4 years ago +1

    Great video!
    I have a question: does the persistent data persist through cluster restarts? In my case, I restart Docker Desktop for Windows.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Good question. I haven't actually tried that. But this is not going to be a valid scenario in production, where you wouldn't restart a cluster. You would just back up the data before restarting the cluster.

    • @weitanglau162
      @weitanglau162 4 years ago +1

      @@justmeandopensource Thanks for replying! How do you back up the entire cluster??

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@weitanglau162 Well, there are options to back up the entire cluster. I have used Velero and also did a video on it.
      ua-cam.com/video/C9hzrexaIDA/v-deo.html

    • @weitanglau162
      @weitanglau162 4 years ago +1

      @@justmeandopensource thanks for the help. Your videos are amazing! Keep up the good work :)

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@weitanglau162 Many thanks for your interest.

  • @jayaraju4549
    @jayaraju4549 4 years ago +1

    Hi Venkat...
    Thanks for the video. Can you make a video on PV & PVC using storage like AWS EBS?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi, thanks for watching. I have been focusing mainly on bare metal and did a few videos using the cloud. I will definitely look into this. Cheers.

    • @jayaraju4549
      @jayaraju4549 4 years ago +1

      @@justmeandopensource ok venkat...

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@jayaraju4549 :)

  • @usweta6358
    @usweta6358 4 years ago +3

    Hi Venkat, this is great stuff. Thanks very much for this detailed video.
    One question: I have an NFS server with 100 GB of storage. How can I create 5 PVs/PVCs of 20 GB each for 5 different pods? What if I do the partitioning on the NFS server and export 5 different paths? Will that work? Or is there any other approach I should opt for? Can you take me through the steps? It would be very helpful. Thanks.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Sweta, thanks for watching. The best approach is to set up dynamic NFS provisioning and have the PV created automatically when you create the PVC. However, bear in mind that when you request a certain amount in your PVC, for example 20G, a persistent volume will be created for 20G. This is not a hard limit; the pod mounting this volume can use more than 20G. That is what I noticed.
      Or, if you are not using dynamic volume provisioning, you can export 5 shares from the NFS server, each with 20G, and create 5 PVs yourself.
      I have done the below videos related to dynamic NFS volume provisioning.
      ua-cam.com/video/to14wmNmRCI/v-deo.html
      ua-cam.com/video/AavnQzWDTEk/v-deo.html
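
      For the manual route in this reply, each exported share would get its own PV along these lines (a sketch; the PV name, server address and export path are illustrative):

      ```yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfs-pv-1            # one PV per exported share
      spec:
        capacity:
          storage: 20Gi
        accessModes:
          - ReadWriteMany         # NFS allows mounting from multiple nodes
        nfs:
          server: 172.16.0.10     # illustrative NFS server address
          path: /srv/nfs/share1   # illustrative export path
      ```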

    • @usweta6358
      @usweta6358 4 years ago

      @@justmeandopensource Thank you so much, Venkat, for the quick help!!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@usweta6358 You are welcome. Cheers.

  • @pareshpatel493
    @pareshpatel493 2 years ago

    Venkat, at 9:28 you said 'Retain' is the default, but 'Delete' is the default reclaim policy.

  • @templuismunoz
    @templuismunoz 5 years ago +1

    Hi! Thanks for sharing.
    I got a question.
    What is the advantage of using a volumeMount + simple volume with hostPath vs. a PersistentVolume with hostPath, since both are attached to the same node (server), right?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Luis, thanks for watching this video.
      You are right, both are essentially the same: creating volumeMounts (hostPath) inside a pod spec versus creating a persistent volume (hostPath) and using it with a PVC.
      There are not many advantages to defining a PV with hostPath and then using a PVC to access it over defining a simple hostPath direct mount. You can make use of the reclaim policy to clean up the volumes after use; there are Recycle, Retain and Delete policies. It all depends on the way you want to use the data and your applications.
      Bear in mind, hostPath, as the name implies, is attached to the host, and if you create a deployment with more replicas and each pod gets scheduled on a different node, your application might suffer due to differences in data.
      Thanks

    • @templuismunoz
      @templuismunoz 5 years ago +1

      @@justmeandopensource That's what I thought. I was deploying a Mongo sharded cluster and ended up using hostPath + nodeSelector for that same problem (attachment to the node).
      Keep up the work, this channel is awesome.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Is this a test environment or something? Why don't you have a proper external storage provisioner (NFS/Ceph/OpenEBS) and use pv/pvc?

    • @templuismunoz
      @templuismunoz 5 years ago +1

      @@justmeandopensource Nope, it's in prod. I've set up 5 nodes on online.net servers with 2 M.2 disks. Basically pricing: having the same setup in AWS or Google… it's like 8k a month, and here 800€. Regarding OpenEBS or NFS, I researched them, but hostPath gave me more control, so I can assign indexes or replicas to the second disk at the proper mount path. I guess this can be done with OpenEBS but I've never used it. The clone and snapshot features look pretty interesting.
      And in the end Mongo has its own replication and auto-healing system, so it looked simpler to me that way. I'm not an expert though.
      OpenEBS tutorial 😅?
      PS. I took the idea from github.com/styxlab/kubernetes-mongodb-shard#examples if someone is interested.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Ah, I see. I am exploring OpenEBS at the moment. Maybe if I get something out of it, I will try to make a video. Thanks.

  • @praveenkorvi2227
    @praveenkorvi2227 4 years ago

    Hi Venkat, thanks for the video on PV and PVC. I have 2 doubts on this topic.
    1. If the PV is defined with the Retain option, will it not come back to the Available state after the PVC is deleted? It was showing in the Released state and you deleted the PV. So do we need to delete the PV for it to be used by other PVCs?
    2. If I define hostPath inside the volume spec for a pod, it will create a mount point inside the pod and we can use it. Using a PV and PVC with hostPath is the same. So what is the difference between using hostPath directly in the pod.yaml file instead of creating a PV and PVC?

    • @justmeandopensource
      @justmeandopensource  4 years ago

      Hi Praveen, thanks for watching.
      Yes, if the volume reclaim policy is set to Retain, then it won't be available for other PVCs or pods. You will have to manually delete the PV in this case.
      Yes, you can do this either through a separate PVC or directly within a pod. It's just a matter of preference, and in certain cases it will be easier to manage the PVC as a separate resource instead of tying it together with the pod.

  • @omsiva4616
    @omsiva4616 4 years ago +1

    Currently we give SFTP access to the customer to upload their product images to their SFTP home path.
    In fstab we made an entry for a permanent mount like below:
    none bind,nobootwait 0
    So whenever the customer needs to upload product images, they log in to the SFTP portal and upload the required images.
    But we are still not clear how to achieve this on a container platform.
    I'm looking for your suggestion on this.
    I can't map it the same as the VM setup; I have tried bindfs / bind / mount and nothing works in a container.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Siva, thanks for watching. You can still set up an FTP server in Kubernetes. I just came across this article.
      cloudlets.io/en/kubernetes-blog/setting-up-ftp-server-in-kubernetes/

    • @omsiva4616
      @omsiva4616 4 years ago +1

      @@justmeandopensource Thanks for the update. I'll have a look at this.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@omsiva4616 no worries.

  • @edtix
    @edtix 3 years ago

    What is the purpose of specifying storage size in both the PV and PVC? I've created a PV with 500Mi and a PVC with 1Gi (claiming that PV) and nothing happened. No error, even though I claimed more than the PV has... And why can I resize the PV but can't resize the PVC? Many thanks for your work!

  • @smartaquarius2021
    @smartaquarius2021 4 years ago +1

    Hello Venkat,
    Thanks a lot for sharing such informative videos. I would be grateful if you could help me with one query.
    I'm trying to deploy an application inside K8s as a Deployment with 2 pods (2 replicas) mounting a common PVC of 1Gi.
    Now, if I want to transfer any file for that application, do I have to log in to both pods separately using commands for transferring files:
    kubectl exec -it
    kubectl exec -it
    Or is there any way I can transfer the files to one specific location, let's say at the Deployment level, and have it accessible to both pods automatically? I'm trying to host TensorFlow Serving and it expects its training models. I will add a polling mechanism so that it can automatically pick up new files. In production we may have a lot of pods, and transferring files to all of them could be a difficult task to maintain.
    Kindly suggest. Looking forward to your reply.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi, thanks for watching. What you can do is create a persistent volume and then a persistent volume claim. Create a Job with a simple busybox container that mounts the volume and populates the data. Then you can apply your Deployment, where the pods will mount this persistent volume which already has the data.
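
      A sketch of the seeding Job described in this reply, assuming a PVC named data-pvc and source files staged under /seed-source in the image (both illustrative):

      ```yaml
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: seed-volume               # illustrative name
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: seed
                image: busybox
                # copy the staged files into the shared volume
                command: ["sh", "-c", "cp -r /seed-source/. /data/"]
                volumeMounts:
                  - name: data
                    mountPath: /data
            volumes:
              - name: data
                persistentVolumeClaim:
                  claimName: data-pvc   # illustrative PVC name
      ```

      Once the Job completes, the Deployment's pods can mount the same PVC and find the data already in place.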

    • @smartaquarius2021
      @smartaquarius2021 4 years ago +1

      @@justmeandopensource Thanks for the reply. Please correct me if I'm wrong. So, I can place my file on some drive, let's say a NAS drive, and the K8s Job will automatically pick the file from there and push it to the volume. But won't I get access-related issues here, because K8s pods will never have access to any such drive?
      2 more requests, sorry for being a little greedy here. The application I'm hosting is a precompiled one: TensorFlow Serving. Is there any way I can authenticate selected users to access the hosted API? I cannot make any change to the image, though.
      Secondly, is there any way I can attach a custom SSL certificate to the API? Currently it is working over HTTP and I want to enable HTTPS.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@smartaquarius2021 You have your file on a separate drive and you will have to share it. You can use an NFS service. Then you need to create a persistent volume that connects to this NFS share. You then create a persistent volume claim and use it in your pods.
      I am not sure if you got a chance to watch my other videos related to this process. If not, please have a look; they might be helpful.
      ua-cam.com/video/to14wmNmRCI/v-deo.html
      ua-cam.com/video/AavnQzWDTEk/v-deo.html

  • @palanisamy-dl9qe
    @palanisamy-dl9qe 3 years ago

    Hi buddy, what kind of volume type (instead of hostPath) do we have to use to replicate data across all running pods (all worker nodes)?

  • @omsiva4616
    @omsiva4616 4 years ago +1

    Hi Venkat,
    I have a customized nginx image and it contains files under /var/www/x-folder. I have defined the same path as a mount point in my deployment file. I can see my desired mount point inside the pod; however, there are no files in it. It's an empty folder. I want the same files which are in my nginx image. Please let me know what I'm missing here. Thanks

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Siva, thanks for watching.
      I don't get it. You have a custom nginx image, and in that image you have files in /var/www/x-folder. So if you just run that container, it's going to have that directory and those files. What's the point in mounting it? And where are you mounting it from, and what do you want to achieve? Please could you explain in detail. Cheers.

    • @omsiva4616
      @omsiva4616 4 years ago +1

      @@justmeandopensource I'm trying to define a persistent volume for my application. In the image, the application files are present under /var/www/html/index.html.
      I'm giving the same path for the persistent volume, i.e. /var/www/html. The pod is running fine and I am able to see the persistent volume; however, that index.html file is missing. If I don't use a persistent volume, the respective application files are present.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@omsiva4616 Still don't understand. Your image contains files under /var/www/html, and when you use a persistent volume and mount it under the same path, it will be mounted on top of your existing /var/www/html. You won't be able to see your original files. Copy all the files to your persistent volume and mount it, instead of baking all the files inside your container image.
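One way to do that copy automatically (a sketch, not the method from the video; the image name and claim name are hypothetical): an initContainer mounts the volume at a *different* path, so the image's original /var/www/html is still visible, and copies the baked-in files into the volume before the main container mounts it over /var/www/html.

```yaml
# Hedged sketch: seed the persistent volume from the image's own files,
# avoiding the "mount shadows the directory" problem described above.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-seeded
spec:
  initContainers:
  - name: copy-site
    image: my-nginx          # hypothetical custom image with files baked in
    command: ["sh", "-c", "cp -r /var/www/html/. /seed/"]
    volumeMounts:
    - name: site
      mountPath: /seed       # mounted elsewhere, so originals stay visible
  containers:
  - name: nginx
    image: my-nginx
    volumeMounts:
    - name: site
      mountPath: /var/www/html   # now the volume already holds the files
  volumes:
  - name: site
    persistentVolumeClaim:
      claimName: site-pvc    # hypothetical claim name
```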

    • @omsiva4616
      @omsiva4616 4 years ago +1

      @@justmeandopensource Yes, as a temporary workaround I'm doing the same, but I'm looking for a permanent fix. Please let me know if you find one. I'm trying this in GCP. Thanks for your prompt response and your patience.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@omsiva4616 Okay

  • @richardwang3438
    @richardwang3438 5 years ago +1

    Great hands-on video!
    There is just one thing that confuses me, which I will test later to confirm: after you create a hostPath-type PV and PVC (no pod created yet), does the node holding the data get chosen immediately?
    If not, creating the directory on kworker1 does not guarantee the PV is on kworker1.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Richard, thanks for watching. My understanding is that the actual directory will get created only when you launch a pod that uses that PVC/PV. I used 4-pv-hostpath.yaml first and deployed it. Then I used 4-pvc-hostpath.yaml and deployed it. At this point the PVC is bound to the PV. I think it's only when I launch a pod which uses this PVC that the actual directory on the landing worker node gets created. I might be wrong, but it will be easy to investigate. Cheers.
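Related sketch: the hostPath volume source has a `type` field, and `DirectoryOrCreate` asks the kubelet on the landing node to create the directory if it doesn't already exist, which removes the need to pre-create it by hand (the path and class name follow the video's example; treat the manifest as a sketch):

```yaml
# Hedged sketch: hostPath PV that lets the kubelet create the directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /kube
    type: DirectoryOrCreate   # create /kube on the node if it is missing
```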

  • @tamilselvan8343
    @tamilselvan8343 4 years ago +1

    Hi Venkat, thanks for the video. Please correct me: is it mandatory for the PV and PVC names (i.e. in metadata) to be the same?

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Tamil, thanks for watching. The name field in the metadata section of the manifest is the name of the resource you are deploying. You can't have the same name for two resources of the same type: for example, you can't have the same name for two PVs or for two PVCs. But a PV and a PVC can have the same name. It's not mandatory; it's just a name. What is important is to specify the correct storage class name in the PVC.
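To illustrate: a PV and a PVC may share a metadata name because they are different resource kinds; binding is driven by the storage class (plus capacity and access modes), not the name. A sketch with hypothetical names:

```yaml
# Hedged sketch: same name on a PV and a PVC is legal; the
# storageClassName is what ties them together.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume            # just a label for this PV resource
spec:
  storageClassName: manual   # the PVC below must request this class
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /kube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume            # same name as the PV: allowed, different kind
spec:
  storageClassName: manual   # must match the PV's storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```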

    • @tamilselvan8343
      @tamilselvan8343 4 years ago +1

      @@justmeandopensource Thanks for the brief info, Venkat.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      @@tamilselvan8343 You are welcome.

  • @naoualinizar3573
    @naoualinizar3573 5 years ago +2

    Hi Venkat, thanks for sharing this, you are awesome, man. I tried running busybox without creating /kube on any host. To my surprise, it was automatically created! Is it because of the version of Kubernetes?

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      Hi Naouali, thanks for watching this video. I haven't tested that use case, but from what you mentioned and what I believe, the directory will be created automatically if it doesn't exist. Thanks for sharing this info; it might help someone with the same question. Cheers.

  • @lividfont8714
    @lividfont8714 3 years ago

    Does a persistent volume only work with one node? And is this one node the master node as well?

  • @antoniosmoljo6472
    @antoniosmoljo6472 4 years ago

    Hi Venkat. You're doing a great job. Why did you manually create the folder /kube on the kworker1 node?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Antonio, thanks for watching. Yes, I created /kube on kworker1 as I really wasn't sure whether the directory would be created automatically upon creating the persistent volume. But it looks like you don't have to do that step.
      github.com/kubernetes/kubernetes/issues/87112

  • @Rakesh-fc1wr
    @Rakesh-fc1wr 3 years ago

    Hi Venkat, your videos are great and genuine. But I'm facing an issue: whenever I mount an EBS PVC to a Tomcat container pod's webapps folder, I get only lost+found; the Root folder and Root.war are not present.

    • @vinitrajput6554
      @vinitrajput6554 5 years ago +2

      This is about making the payload persistent, right?

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi Vinit, thanks for watching this video. This video is about how to use persistent volumes and persistent volume claims, and how to mount them inside the container. I'm not sure what you mean by payload here. Thanks.

  • @nagendrareddybandi1710
    @nagendrareddybandi1710 4 years ago +1

    Hi Sir,
    Nice video & stuff, thanks for this.
    The PV, PVC, and pod are all created, and /kube is also created on the worker1 machine; that's fine.
    If worker1 goes down, what is the situation?
    If we store the data in the container, i.e. in /mydata, will it be available in /kube? And if worker1 crashes, will the same /kube be available on worker2 as well?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Nagendra, thanks for watching. Actually, the nfs-provisioner pod just mounts the NFS share and provides it to the pods. The actual volume and data are in the NFS directory on your NFS server. If worker1 crashes, the nfs-provisioner pod will still be relaunched, and it will mount the /kube directory from the NFS server.

    • @nagendrareddybandi1710
      @nagendrareddybandi1710 4 years ago +1

      Okay Sir. Thanks

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@nagendrareddybandi1710 No worries. You are welcome.

  • @nikolailiev3967
    @nikolailiev3967 11 months ago +1

    Awesome, man. I spent 4 months in a company and no one showed me this stuff. I had to read my ass off in the documentation and still got nothing.
    I wish you had finished the NFS service.
    I had one issue: by default my PV and PVC were created with 30GB of storage. When I tried to make it 50GB (commands*), it was successful, but later the PVC didn't work. So after that I configured the data disk back to 30GB and the pods continued working without issues.
    (commands*)
    1) crc config set disk-size 50GB
    2) crc stop
    3) crc start
    Any chance you could show how to expand the disk on the PV and PVC?

    • @justmeandopensource
      @justmeandopensource  10 months ago

      Hi, thanks for watching and glad you found it useful. I did this video a while ago and would need to refresh my memory on this concept, I'm afraid.
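For reference, a hedged sketch of how PVC expansion is usually enabled (not covered in the video; the provisioner and names are hypothetical): the StorageClass must set `allowVolumeExpansion: true`, after which raising the PVC's requested size triggers a resize, provided the underlying storage driver supports it.

```yaml
# Hedged sketch: expansion must be allowed by the StorageClass first.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: example.com/provisioner   # hypothetical provisioner
allowVolumeExpansion: true             # required before PVCs can grow
---
# Then edit the claim and raise the request, e.g.:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: expandable-sc
  resources:
    requests:
      storage: 50Gi    # was 30Gi; increasing it triggers the expansion
```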

  • @fz-ssaravanan2036
    @fz-ssaravanan2036 4 years ago +1

    1. Is there any way to create a PV and PVC in a different namespace? If possible, how?
    2. How can I create a PV in a different namespace using a storage class and PVC?
    I am getting confused; kindly help me.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Saravanan,
      Thanks for watching. PVs are not namespaced; they are cluster-wide. PVCs are namespaced and belong to a particular namespace.
      Try running this command and look for persistent volumes and persistent volume claims. You will find it yourself.
      $ kubectl api-resources
      Cheers.

    • @fz-ssaravanan2036
      @fz-ssaravanan2036 4 years ago +1

      @@justmeandopensource thanks bro

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@fz-ssaravanan2036 You are welcome

  • @Tshadowburn
    @Tshadowburn 5 years ago +1

    Thank you for this video, it is very helpful. But I checked the playlist and there is no video talking about GlusterFS. I've tried to use it as a persistent volume but it does not work; when I try to deploy my container, I get this error in "kubectl describe pod":
    [2019-11-25 11:29:08.244765] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
    [2019-11-25 11:29:08.244841] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/data)
    Thank you for your help

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      Hi, thanks for watching this video. I haven't tried GlusterFS yet. I have that on my list. I will see if I can record a video for that. Thanks.

    • @Tshadowburn
      @Tshadowburn 5 years ago +1

      @@justmeandopensource Hi, thanks for your reply. I've been able to mount the volume, but now I don't know how to make my Python code create a CSV file in that volume. I've tried with the exec command you showed in the video; it works, but then I can't find the CSV file. Do you have an idea how to make it work? Thank you for your help.

    • @justmeandopensource
      @justmeandopensource  5 years ago +2

      @@Tshadowburn Can you post your outputs and what you have tried in a pastebin.com link?

    • @Tshadowburn
      @Tshadowburn 5 years ago +1

      @@justmeandopensource I managed to make it work :) It works just fine now; I'll prepare you a pastebin. The issue is that the folder in the gluster volume must have the same structure as described in the Python code: for example, if it creates the CSV in a folder named opportunity, I must have a folder named opportunity (at the same level as within the Python code) in the gluster_vol that I mount.

    • @Tshadowburn
      @Tshadowburn 5 years ago +1

      @@justmeandopensource But thank you, man. I wouldn't have gotten so far in Kubernetes without your video series.

  • @leo5946
    @leo5946 1 year ago

    Why can't we claim the volume again after one claim is already done and released?

  • @yanivcode9724
    @yanivcode9724 2 years ago

    Nice video!! Working great!
    When I stop/restart k8s/docker, the data doesn't persist. Is it because it's saved on my PC? Or is there a workaround? Below is the simple PV:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-auth
      labels:
        type: local
    spec:
      storageClassName: auth
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/data/mongo"

  • @manikanthkommoju3176
    @manikanthkommoju3176 6 years ago +1

    I set up kubeadm in my VMware Workstation and my pods aren't spinning up. But when I set up kubeadm in AWS, pods kick in immediately. What might be the reason, although I have sufficient network, memory and CPU? Also, I would be very happy if you came up with a video on troubleshooting.

    • @justmeandopensource
      @justmeandopensource  6 years ago +1

      Hi Mani, when pods are not starting, you can run "kubectl describe pod" against that pod and look in the Events section near the bottom.
      How are you deploying the pods? Directly as a single pod, or as daemonsets/replicasets/deployments?

    • @manikanthkommoju3176
      @manikanthkommoju3176 6 years ago +1

      Just me and Opensource: the pod sandbox changed and it will be killed and recreated. Whether I use deployments or pods or an rc, I get the same issue. One more thing: I am unable to launch the dashboard while setting up kubeadm in the AWS default VPC. Do you have any workaround?

  • @anbarasanvaikundanathan2401
    @anbarasanvaikundanathan2401 3 years ago

    Hi bro,
    Can we resize the PV as desired, per our requirements?

  • @nah0221
    @nah0221 4 years ago +1

    Thanks Venkat, from 2020! But how do you make your shell predict what you're going to type?!!!!! :D

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Nur, I use the zsh shell with the zsh-autosuggestions plugin, which suggests commands from my history as I start typing.

    • @nah0221
      @nah0221 4 years ago +1

      Just me and Opensource thanks!!

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@nah0221 No worries.

  • @brijeshgpt7
    @brijeshgpt7 4 years ago +1

    Can you share your terminal color theme and font?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Brijesh, thanks for watching.
      This was a long time ago; I used the Manjaro Gnome edition with gnome-terminal, tmux, zsh, zsh-autosuggestions, and the robbyrussell zsh theme. You can find the details in the video below.
      ua-cam.com/video/soAwUq2cQHQ/v-deo.html
      Cheers.

  • @lathagr8574
    @lathagr8574 4 years ago +1

    Hi, even if I delete the PV and PVC,
    the status stays at Terminating and they never get deleted. How can I fix that?

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      Hi Latha, thanks for watching. Try deleting the pod that uses this PV first and wait for it to completely terminate. Then delete the PVC and finally the PV. I have seen this happening in my setup occasionally as well.

    • @lathagr8574
      @lathagr8574 4 years ago +1

      Hi Sir, after configuring the PV in the pod, if I add some data to the pod, I am not able to see that data. How can I see it?

    • @lathagr8574
      @lathagr8574 4 years ago +1

      In the Consul helm chart's deployment yaml file I configured the PV; after deleting the pod, if I install it once again I am not seeing the data.

    • @lathagr8574
      @lathagr8574 4 years ago +1

      If I create any key-value pair in the pod, I am unable to see that data on the mounted path.

    • @justmeandopensource
      @justmeandopensource  4 years ago +1

      @@lathagr8574 Let's take one step at a time. What is it you want to achieve? Please tell me step by step. If you could paste the output of all the commands to pastebin.com and share it, it would be helpful for troubleshooting. Cheers.

  • @abdulshaikh6807
    @abdulshaikh6807 6 years ago

    Can we get a job if we learn Docker and Kubernetes, or do we need to do other courses as well? And what is the learning path for Docker and Kubernetes?

    • @justmeandopensource
      @justmeandopensource  6 years ago +3

      Hi Abdul, just doing a course wouldn't fetch you a job. You have to understand the fundamentals of the technology and be confident about it. You can do some certifications if you want to prove your skills, but that is not mandatory. First you have to decide where your interests lie and then pursue a career on or around that area. Just because containers/cloud/Kubernetes is a hot technology doesn't mean you can easily get a job. There will be wide opportunities in this area, but equally there will be many people wanting to get into it. So you have to be ready to prove to the employer that you know what you are talking about.
      Hope this makes sense. Whatever your interests are, please follow them and you will definitely achieve. Don't just go for what is popular today.
      Thanks

    • @abdulshaikh6807
      @abdulshaikh6807 6 years ago +1

      Thanks for your comment, it really helps.

    • @justmeandopensource
      @justmeandopensource  5 years ago +1

      @@abdulshaikh6807 You are welcome. Cheers.

  • @gautamsoni17990
    @gautamsoni17990 4 years ago +1

    Hey Venkat,
    This is really very interesting, and I have one doubt here; could you please explain a little more?
    Suppose I am using NFS instead of hostPath. I have enabled the NFS service, checked showmount from the worker node, and everything was working perfectly fine.
    Let's say I have created one PV and then one PVC; the connection between them is properly in the Bound state. Now I have created one pod where I have specified mountPath: /mydata.
    When I check inside the container whether this is mounted, it is fine; everything is mounted on the /mydata directory.
    But my question is: how can I check this mount on the worker node where the container is running? Suppose the pod runs on the kworker2 node; when I log into kworker2, I am not able to see any mount path connected with the NFS path.
    [vagrant@kmaster yamls]$ kubectl exec -it pod-using-nfs mount | grep -i master
    kmaster.example.com:/srv/nfs/kube on /mydata type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.28.128.102,local_lock=none,addr=172.28.128.100)
    [vagrant@kmaster yamls]$
    [vagrant@kmaster yamls]$ ssh kworker2 "mount | grep -i master"
    kmaster.example.com:/srv/nfs/kube on /var/lib/kubelet/pods/5487d853-d47f-4e52-99cb-c52d7fe7c2ae/volumes/kubernetes.io~nfs/pv007 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.28.128.102,local_lock=none,addr=172.28.128.100)
    [vagrant@kmaster yamls]$
    In the above output, on the kworker2 node, NFS is mounted under /var/lib/kubelet/pods/..., which does not seem right to me.

    • @justmeandopensource
      @justmeandopensource  4 years ago +2

      Hi Gautam, thanks for watching.
      This looks right to me. I just tested this.
      So you are using kmaster as your NFS server; that's fine. Your pod is running on kworker2. So behind the scenes, it's kworker2 that mounts the NFS share under /var/lib/kubelet/pods//volumes/.
      In both of your mount command outputs above, if you look at clientaddr, it is the IP address of kworker2. The pod (or the container) doesn't actually mount directly from kmaster; kworker2 mounts it and presents it to the container.

    • @lathagr8574
      @lathagr8574 4 years ago

      pastebin.com/VU3GaTWg

  • @Kumbutranjaami
    @Kumbutranjaami 2 years ago +1

    Very detailed video. One correction: you speak as if you just want to sound like a foreigner, which makes you weird to listen to. Be yourself, bro. You don't need to be a foreigner to be informative.

    • @justmeandopensource
      @justmeandopensource  2 years ago

      Thanks for watching. And I am sorry you feel that way. People (especially from Tamil Nadu) think that I put a lot of effort into sounding like a foreigner. I don't have to, and this is my natural way of speaking. I wish people would change their mindset and see if they can get some value out of the content. I value your feedback. Thanks for taking the time to share what you think. Cheers.

    • @Kumbutranjaami
      @Kumbutranjaami 2 years ago

      @@justmeandopensource I got value out of your content, and that's why I said "Very detailed video" in my first (now unedited) comment. Natural foreigners (or people who think English is a language to convey information worldwide) don't talk the way you talk, and that's why your way of speaking makes your viewers think you just want to sound like a foreigner.