Also, I'm running it in VirtualBox on Windows on my work laptop. I'm planning to get a laptop so I can have Linux as the host. Do you recommend the XPS 13 2020? I see mixed reviews.
@@SenthilRameshJV Mine is a Dell XPS 13 9370, the 2018 model. I love it performance-wise; it's a beast in that tiny case. If I were to buy another laptop now, I would definitely choose the XPS 13 2020 edition. The only thing I didn't like about mine is the position of the webcam, which is at the bottom of the screen. In the 2020 edition, it has been moved back to the top.
@@SenthilRameshJV :) I am no longer using Manjaro Gnome. I used to distro-hop every now and then to find a perfect distro that satisfies me in all respects. I have been on Gentoo for the last few weeks and will probably stick with it for the rest of my life (hopefully).
Hi Venkat, with Kubernetes v1.16, kworker1 and kworker2 show as NotReady by default when running kubectl get nodes. Any idea how to downgrade v1.16 to v1.12, v1.13 or v1.15?
Hi Gajanan, thanks for watching this video. I haven't tried Kubernetes version 1.16.0 yet. According to the releases page github.com/kubernetes/kubernetes/releases, the latest release is 1.15.4 and 1.16.0 is not yet a stable release. But I will try this version at some point. What method did you use to provision your cluster? I can try that in my environment and see how it goes. Thanks.
Hi Gajanan, I spent some time this morning investigating the issue. I had lots of problems, but I believe I have sorted them all. Basically, things have changed slightly in k8s v1.16.0, which required a few changes in my GitHub repo. I had been using Flannel as the default overlay network and I saw problems getting it to work with v1.16.0, so I have updated my scripts to deploy Calico as the default overlay network. All looking good and I have verified it. Please do a fresh clone of my kubernetes repository and run vagrant up. Everything should work now. Let me know if it worked. Cheers.
@@chaitalichakraborty2786 A job is nothing but a process that runs in a container. So you first need to containerize your task (whatever you want to achieve by running the job) and then push it to an image registry.
Hi, the video was very well explained, thanks for that. I have two questions. First, if a pod from a previous cronjob run is still in Running or CrashLoopBackOff status, will it get terminated when the next scheduled pod spins up? Second, how do I design the "schedule" so a pod spins up immediately when I deploy cronjob.yaml, and then every 2 hours after that?
Hi Chinmay, thanks for watching. I have done a follow-up video to this one explaining how to get rid of pods after completion. ua-cam.com/video/g0dmgd27DRg/v-deo.html Otherwise, even when the job is complete, the pod will remain in Completed state. This applies to pods in a crash loop as well. For your second question, a CronJob does not run immediately on creation; it waits for the next time the schedule matches. If you need an immediate first run, you can trigger a one-off job manually when you deploy.
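For reference, a minimal sketch of an every-two-hours CronJob schedule (the name, image and command here are placeholders, not from the video):

```yaml
apiVersion: batch/v1beta1          # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: every-two-hours            # placeholder name
spec:
  schedule: "0 */2 * * *"          # minute 0 of every 2nd hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox
            command: ["date"]
```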
Hi Abdul, if you decide to go with cloud technology and that is where your interests are, then there is a lot you can follow to prepare yourself. You can learn AWS, Google Cloud, Docker, Kubernetes etc. Once you gain confidence you can go for certifications like AWS Solutions Architect. A Kubernetes certification is also available from the CNCF (Certified Kubernetes Administrator); the prerequisite is some basic knowledge of cloud and containers. Amazon provides a free tier for the first 12 months to try AWS resources, and Google Cloud has a similar free tier for you to learn on. Thanks
I am getting this issue:
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
Gem::Specification.default_specifications_dir called from /usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/bundler.rb:428.
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm.rb:354: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm_provisioner.rb:92: warning: The called method `add_config' is defined here
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/errors.rb:103: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/i18n-1.8.2/lib/i18n.rb:195: warning: The called method `t' is defined here
Vagrant failed to initialize at a very early stage:
Failed to locate the powershell executable on the available PATH. Please ensure powershell is installed and available on the local PATH, then run the command again.
Hello! My question is unrelated to this video's topic, but I'm hoping you may be able to help: Laravel FFMPEG "Encoding Failed" error using a Laravel queue in Docker.
I am running my Laravel application in a Docker container, using the ffmpeg build johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz. I can confirm that ffmpeg and ffprobe are installed. When I submit a video to my application, it is passed to a queue that converts two versions: one 1280x720 and one 640x360. At the moment I can successfully convert uploaded files to 1280x720, but converting to 640x360 throws an "Encoding Failed" error. I have switched the conversion order, first converting 640x360 and then 1280x720, and what I noticed is that the first conversion always succeeds and the exception is always thrown on the second conversion. Here is my code sample:
$format->setKiloBitrate($bit_rate)->setAudioChannels(2)->setAudioKiloBitrate(128);
$seven_twenty = new Dimension(1280, 720);
$video->filters()->resize($seven_twenty)->synchronize();
$video->save($format, $this->Video['destination'].'-720-1281.mp4');
$format->setKiloBitrate($bit_rate)->setAudioChannels(2)->setAudioKiloBitrate(128);
$three_sixty = new Dimension(640, 360);
$video->filters()->resize($three_sixty)->synchronize();
$video->save($format, $this->Video['destination'].'-360-640.mp4');
Can you please help me with this?
Nice video. One gotcha at ua-cam.com/video/uJKE0d6Y_yg/v-deo.html: when you set successfulJobsHistoryLimit and failedJobsHistoryLimit to zero, the job gets deleted automatically and Kubernetes loses the connection between the cronjob and the pods created by its jobs. That is why you have to delete those pods manually; otherwise, deleting the job would delete its pods automatically. Two questions apart from this: 1. You are using busybox in this video; why not alpine? 2. How does autocomplete work on your laptop? Thanks for your time.
Hi Amit, thanks for watching this video. I got used to typing busybox; this was just a quick example and I didn't bother to use alpine. As for kubectl auto-completion, I use the kubectl plugin for oh-my-zsh on top of my zsh shell. If you are using bash, you can follow the steps in the link below. kubernetes.io/docs/tasks/tools/install-kubectl/ Thanks, Venkat
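For bash users, the gist of that link is the kubectl completion subcommand (zsh has an equivalent `kubectl completion zsh`):

```shell
# load kubectl completion into the current bash session
source <(kubectl completion bash)

# make it permanent for future sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```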
You are one of those people who make the world a better place! Keep it up. Excellent videos.
Hi, many thanks for following my videos and taking time to comment/appreciate. Cheers.
Really enjoy your videos! Very easy to follow and packed with information, while avoiding unnecessary details. On a separate thread, can you please share/post a short video on how you have your zsh configured?
Appreciate all you do. Please keep these coming! Kudos.
Hi, thanks for watching.
I did a video about my terminal setup a long time ago.
ua-cam.com/video/soAwUq2cQHQ/v-deo.html
But I have changed a lot since then and am now using Gentoo; zsh and its configuration remain the same.
Basically, it's zsh > zsh-autosuggestions > oh-my-zsh
I have been wanting to do a video of my current setup, which I will at some point.
Cheers.
I wish Venkat's videos get included in Kubernetes official documentation
Wow. That was a great compliment. Thanks for these words. Cheers.
Hey Venkat, I'm just looking forward to seeing 10,000 subscribers by the end of this year. Wouldn't it be great!
Hi Richard, I wish the same, but it won't happen before the end of the year. At the current rate of increase, I will hit 10k in the first week of January.
@@justmeandopensource let's see!
@@richardwang3438 Actually it might happen. 2 more days to go and I am now at 9974.
@@justmeandopensource yes! my friend, I am keeping an eye on it :)
@@richardwang3438 :)
Thanks for a clear and concise video. Subscribed.
Hi Brynjar, Thanks for watching.
Fantastic video, exactly what I was looking for
Hi Thomas, thanks for watching.
Thanks a bunch. This video explained jobs very well, and helped me tremendously. Other videos confused me.
HI Pacnet, thanks for watching.
Really nice video & demo. Well done. Well prepared. Great examples for starter with this K8S subject. Thumbs up! :-)
Hi Jeff, thanks for watching. Cheers.
subbed, unbelievably clear explanation
Glad to hear that. Many Thanks for watching.
Thank you so much Venkat for all your awesome videos. Very helpful.
Hi Zulhilmi, thanks for watching.
Hi Venkat, nice explanation. Keep up the good work.
Hi Uday, thanks for watching. Cheers.
Great demo, covered all the aspects. Thank you!!
Hi Pravesh, thanks for watching.
Thanks for the great vid! Would love to see some on Argo Workflows and Argo Events for triggered pipelines!
Hi Yovan, thanks for watching. I will surely look into the Argo stuff.
@@justmeandopensource it's really cool! You can define steps/graphs of containers to run in sequence and stuff.
Love the channel!
@@yovangrbovich3577 Thanks.
Great video, thank you for explaining that so well.
Hi Samit, Thanks for watching this video.
Very well explained 👍
Hi Manish, thanks for watching.
Thank you that was really useful.
Thanks for watching. Cheers.
Amazing content , thanks for the video
great video
Hi Bharat, thanks for watching. Cheers.
Wonderful, man. Thanks a lot :)
Hi, Thanks for watching.
I am a bit late in watching this video, but it is still relevant. I have a use case: can we somehow trigger the cron job manually, outside its set schedule?
Hi Jayesh, thanks for watching. A cron job runs as per the schedule you defined. If you want to trigger it manually, there are Kubernetes Jobs.
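For later readers: reasonably recent kubectl versions can also trigger a one-off run from an existing CronJob's template, roughly like this (the job and cronjob names are placeholders):

```shell
# create a one-off Job from the CronJob's job template
kubectl create job manual-run-1 --from=cronjob/my-cronjob

# then follow it like any other job
kubectl get jobs
```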
Thanks Venkat! Basically I'm thinking of end-to-end and monkey-test scenarios and came up with some really weird use cases. I wonder if you are aware of any open-source tooling for running end-to-end tests in a cluster?
Thanks so much for your wonderful video. I'm wondering if there is any programmatic way to clean up completed pods and jobs?
Hi Magesh, thanks for watching this video. Well you could write a kubectl plugin that inspects each job/cronjob and deletes it when the status is completed.
Thanks,
Venkat
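As a sketch of that cleanup without writing a plugin, kubectl can delete completed jobs by field selector (assuming a reasonably recent kubectl; this targets the current namespace):

```shell
# delete all jobs that have completed successfully
kubectl delete jobs --field-selector status.successful=1
```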
awesome explanation
Hi Huzaifa, thanks for watching.
I really enjoy the course. Can you please help me set up a job YAML that waits until the job has finished?
Great video. Can you please provide a tutorial on a real scenario, like an nginx pod with a particular script to run at a specific time? In your tutorial the cronjob creates a separate pod and cron runs in it. But what if I want to run the cronjob on a particular pod, like the nginx pod?
Hi Sarfaraz, thanks for watching this video. The way I explained it in this video is the way it is supposed to work. When you create a cronjob, it will create a pod and run it on the schedule you specified in the cronjob manifest.
The below StackOverflow discussion might clear your doubts.
stackoverflow.com/questions/41192053/cron-jobs-in-kubernetes-connect-to-existing-pod-execute-script
Thanks.
Hello, thanks for your videos! You really can explain Kubernetes like nobody else. Anyway, I have a question for you: can we run a job inside a specific pod, and how?
Hi Majid, thanks for watching. When you create a Kubernetes job, it creates a pod (and a container within it) to run that job, and the pod terminates afterwards. You can't run a job inside a specific pod; each container/pod has a specific purpose. If you want to alter something inside a running pod/container, or get some data off a particular pod/container, you have to expose the data from within that container (usually via a Kubernetes service), which can then be accessed by the pod your Kubernetes job runs on. Cheers.
@@justmeandopensource Thanks a lot for your fast reply :) Where can I find a video or material on using a Kubernetes service to run a job?
Great video! Thanks for sharing! Let me ask you two things: how do you accomplish kubectl completion, and how do you constantly watch kubectl get all? Thanks!
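On the second point, a common way to keep refreshing a kubectl view (a general tip, not necessarily the exact setup used in the video) is the watch utility:

```shell
# re-run `kubectl get all` every 2 seconds (watch's default interval)
watch kubectl get all
```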
Bro, many thanks for your video, which is really great!
Please help me out: how do I run a cronjob in OpenShift for a specific time zone?
Hi bro, please do a video on cleaning up finished jobs automatically using the TTL controller, i.e. setting 'ttlSecondsAfterFinished' at spec.ttlSecondsAfterFinished in the YAML file. Thanks.
Hi Naveen, thanks for watching this video. I will test that and do a video. I think I tried it before and didn't find it working as expected, but I will give it another try.
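For anyone reading along, a minimal sketch of the field Naveen mentions (on older clusters it required the TTLAfterFinished feature gate to be enabled; the name and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo             # placeholder name
spec:
  ttlSecondsAfterFinished: 60    # delete the Job (and its pods) 60s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["echo", "done"]
```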
At 11:24, parallelism is 2 and completions is 2. So it runs 2 pods in parallel, and these two pods complete 2 iterations? Please reply.
Hi Saoud, thanks for watching. completions: 2 means that the workload should run 2 times. With just completions: 2 (and no parallelism), the first pod launches and completes the work, then the second pod launches and completes the work. When you also specify parallelism: 2, both pods launch and run at the same time. Hope this makes sense. Cheers.
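A minimal sketch of that combination (the name, image and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-demo
spec:
  completions: 2     # the workload must finish successfully 2 times in total
  parallelism: 2     # run up to 2 pods at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working; sleep 5"]
```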
Thanks a lot, thanks!
@@saoudahmedkhan2549 You are welcome. Cheers.
Thanks so much for this tutorial.
Hi Roman, thanks for watching.
Hi bro, please make and upload videos on service discovery in Kubernetes. We very much need your tutorial videos on this topic.
Hi Mohammed, thanks for watching. I will see if I can do it. Cheers.
Thanks for this session. Can you explain "startingDeadlineSeconds" too?
Hi Raja, thanks for watching this video. I am going to point you to the official documentation below instead of trying to explain it myself.
kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
See if that makes sense. Thanks
Can you please share a MySQL dump backup YAML using a cronjob?
I have an OpenShift MySQL pod running and I want to schedule a cronjob to take a backup once every weekend.
What does restartPolicy: Never mean? I also see that the RESTARTS count is not increasing. Does that mean that on failure the pod is not restarted?
Hi Brijesh, thanks for watching.
The restartPolicy option controls what happens when a container fails (not the pod). Please refer to the documentation for more details.
kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures
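As a sketch of how the two settings interact for Jobs (values here are illustrative): restartPolicy decides whether the kubelet restarts a failed container in place, while backoffLimit caps how many retries the Job controller attempts overall.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: retry-demo             # placeholder name
spec:
  backoffLimit: 4              # give up after 4 failed attempts
  template:
    spec:
      restartPolicy: Never     # on failure, a fresh pod is created instead of an in-place restart
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "exit 1"]   # always fails, to illustrate the retries
```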
@@justmeandopensource: Got it, and thanks for correcting me. Well, I'm planning to take the CKAD. Any useful tips?
@@brijeshgpt7 Hi, all the best for your CKAD preparation. I haven't done any of the Kubernetes certifications. Just practice, and gain confidence by practising more.
Thank you for creating this video.
Hi Sanjay, thanks for watching.
Excellent video! I would like to know how to execute several commands (or a shell script) in order to create a MySQL backup dump. I tried many things, but when I execute the cronjob it seems that I can't start a MySQL session. By the way, I'm using OpenShift and the command to log into the MySQL pod is: oc rsh. Any ideas or suggestions? Thanks a lot.
Hi Jaime, thanks for watching this video. There are lots of examples online of mysqldump cron jobs or jobs in Kubernetes.
I had a quick look at the below link. See if that helps.
github.com/camilb/kube-mysqldump-cron
You can modify the manifests as per your need.
Thanks,
Venkat
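For readers who want a starting point, here is a rough sketch of such a CronJob. The service name, secret name, image tag and schedule are all assumptions you would adapt to your setup (and on OpenShift the apiVersion or image may differ):

```yaml
apiVersion: batch/v1beta1            # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: mysql-backup                 # placeholder name
spec:
  schedule: "0 1 * * 6"              # 01:00 every Saturday; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: mysqldump
            image: mysql:5.7         # any image that ships the mysqldump client
            command:
            - sh
            - -c
            - mysqldump -h mysql -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases > /backup/dump.sql
            env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret # assumed Secret holding the DB password
                  key: password
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            emptyDir: {}             # swap for a PVC if the dump must persist
```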
Can we add a volume in the job spec and mount the same volume in the main container? I tried adding an emptyDir but it fails to load. Can you help with this?
Hi Shanmugam, thanks for watching. Kubernetes jobs are similar to pods and you can mount volumes inside the container. Can you share more details about what you are trying to do? You can paste command outputs and your manifests in pastebin.com.
@@justmeandopensource apiVersion: batch/v1
kind: Job
metadata:
  name: temp-job
spec:
  template:
    metadata:
      name: testing
    spec:
      containers:
      - name: testing
        image: busybox
        volumeMounts:
        - name: share-volume
          mountPath: /mnt/share
        command: [ "sh", "-c", "copy for m image " ]
      restartPolicy: Never
      volumes:
      - name: share-volume
        emptyDir: {}
This is not working; I try to access this volume in the deployment section.
Hi,
I have a python script with some arguments which needs to run inside the pod/container. Can you please tell me the job.yaml syntax?
How do I execute this python script?
Hi Aman, thanks for watching. Could you please explain in a bit more detailed manner as to what you are trying to achieve? You could share the scripts and outputs in pastebin.com.
Hey were you able to figure out the syntax to schedule a python script with arguments?
@@shreyansshah1477
See if this helps.
pastebin.com/r9kF2hSk
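For later readers, a rough job.yaml sketch for running a python script with arguments; the image, script path and flags below are all placeholders, not from the pastebin:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: python-script-job            # placeholder name
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: script
        image: python:3.9            # or your own image that bundles the script
        command: ["python", "/scripts/myscript.py"]        # placeholder script path
        args: ["--input", "/data/input.csv", "--verbose"]  # example arguments
```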
How do I run a job for a rolling upgrade in a Kubernetes cluster?
Hi, please make a video on k8s troubleshooting.
awesome video
Glad you enjoyed it
Thanks for the video. Can someone kindly explain the difference between spec.activeDeadlineSeconds and spec.startingDeadlineSeconds in a cronjob?
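In case it helps later readers: startingDeadlineSeconds lives on the CronJob spec and bounds how late a scheduled run may start before it is counted as missed, while activeDeadlineSeconds lives on the Job spec and bounds how long a run may execute before it is terminated. A sketch of where each field sits (name, image and values are placeholders):

```yaml
apiVersion: batch/v1beta1            # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: deadline-demo                # placeholder name
spec:
  schedule: "*/10 * * * *"
  startingDeadlineSeconds: 120       # skip a run if it can't start within 2 min of its slot
  jobTemplate:
    spec:
      activeDeadlineSeconds: 300     # terminate a run if it takes longer than 5 min
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "sleep 30"]
```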
Hi Venkat, thanks for the detailed video.
A quick question....
If I want to run parallel jobs for this use case, how to do it?
Use case:
For benchmarking purposes, let's say I have assigned a range of CPU values for deployments, like 1 CPU to 100 CPUs, and I want to run a load test for every CPU value in the given range. But I don't want to wait by running deployments one by one with each new CPU value; instead, I want to run jobs in parallel, in batches, like CPUs 1 to 10 in one batch, CPUs 11 to 20 in another batch, and so on. How can I achieve this parallel batch job execution? Please help.
Hi,
Thanks for your training. I'm trying to install a cluster using vagrant up and I have an issue when trying to Deploy flannel network. Could you check it with me please..
kmaster: [TASK 3] Deploy flannel network
kmaster: Unable to connect to the server: net/http: TLS handshake timeout
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: timed out waiting for the condition
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Hi Abdel, thanks for watching this video. I will do a vagrant up on my workstation and see if anything has changed. Will update you shortly. Thanks.
Hi Abdel,
I just tried it on my machine and it worked perfectly fine.
My guess is that it didn't fail on "[TASK 3] Deploy flannel network"; it probably failed before that, during Kubernetes initialization.
The cluster failed to initialize, and hence deploying the flannel network failed.
To troubleshoot this issue, edit the vagrant-provisioning/bootstrap_kmaster.sh script and remove the output redirection on line 5 (the kubeadm init command).
On that line, just remove ">> /root/kubeinit.log 2>/dev/null". Delete all the VMs and run vagrant up again. You should then be able to see exactly why it's failing.
Thanks,
Venkat
Why do we need jobs if we can run pods simply by running a kubectl command and using a ReplicaSet?
Hi Vijay, jobs serve a specific purpose.
Yes, you can simply run a kubectl command to run a pod.
Please explain how you would achieve the requirement below.
This is just for the purpose of understanding why jobs/cronjobs are helpful.
There is a MySQL server running somewhere and I want to do a mysqldump every Wednesday at 10 PM. And if something goes wrong with the mysqldump process, I need the ability to look through the logs to find out what happened.
Let me know how you would do this using a simple kubectl command.
Cheers.
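As an aside for readers, that "every Wednesday at 10 PM" requirement maps directly onto a CronJob schedule field (cron fields are minute, hour, day-of-month, month, day-of-week):

```yaml
schedule: "0 22 * * 3"   # 22:00 every Wednesday (day-of-week 3)
```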
@@justmeandopensource Thanks for replying.
1) We can create a script and run it manually.
2) I got the point that it can be achieved by running a cronjob at a particular time.
@@VijayYadav-rc2is Yeah, it's just for automating your manual tasks. You can create a script and run it manually, but where would you place the script and how would you trigger it? Would you place the script on a different machine and execute it via cron? Then what happens if that machine is down? For things like these, jobs/cronjobs are designed to run within the cluster, so you don't need any external resources. Cheers.
Cron job 18:00
Hi Vatan, thanks for watching this video. What's at 18:00? I didn't get it, sorry.
@@justmeandopensource Just a tag for me to get there directly. Thanks. Great tutorial :)
@@vatansoni6439 You are welcome. Cheers.
Many thanks, sir.
Hi Nguyễn, thanks for watching.
Great video.
I have a question: how can you deploy multiple cronjobs?
Hi Vucomir, thanks for watching this video. A cronjob is a deployable resource in Kubernetes, like a pod. If you want to deploy multiple cronjobs, write multiple cronjob manifest YAMLs and deploy them. I am not sure if I got your question right. Thanks.
In my case pods are already running. Now I want to run a cron job to send emails. If I set up the job, will it create new pods every time it executes, or will it run on the existing pods?
Hi Praveen, thanks for watching. Each iteration of the job will be done in its own pod. You will have to clean up the pods if you want. Cheers.
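A hedged sketch of how that cleanup can be automated in the cronjob spec itself, assuming a reasonably recent Kubernetes version (the name, image, and schedule are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-cron                  # hypothetical name
spec:
  schedule: "*/10 * * * *"
  successfulJobsHistoryLimit: 1     # keep only the most recent completed job (and its pod)
  failedJobsHistoryLimit: 1         # keep one failed job around for debugging
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 600  # delete the finished job (and its pod) after 10 minutes
      template:
        spec:
          containers:
          - name: send-email
            image: busybox          # placeholder for your email-sending image
            command: ["sh", "-c", "echo sending email"]
          restartPolicy: OnFailure
```

With these fields set, the controller prunes old jobs and their pods for you instead of leaving them in Completed state.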
Excellent video. How can I configure a cron job to execute commands in a running pod?
Hi Ahamed, thanks for watching this video. I don't know why you would do that. If you give me a specific use case, that would be helpful. Thanks.
@@justmeandopensource If I want to scale the pods up at a specific time and down at a specific time.
@@ahamedrizwan8719 Instead of scaling the pods at a specific time, if you scale them based on certain utilization metrics, it will be more practical and give better utilization.
If you are interested you can watch my below video.
ua-cam.com/video/uxuyPru3_Lc/v-deo.html
Or if you want to go with your approach, you can set up a cron job on the host machine where you are running kubectl commands from, to autoscale.
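As a hedged sketch of that host-side approach, assuming kubectl is configured on that machine and the deployment is called nginx (both names and times are just examples), the crontab entries (edited via `crontab -e`) could look like:

```shell
# Scale the nginx deployment up at 09:00 and back down at 18:00 every day
0 9  * * * kubectl scale deployment nginx --replicas=5
0 18 * * * kubectl scale deployment nginx --replicas=1
```

The obvious caveat, as mentioned above for scripts in general, is that this depends on that one host machine being up.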
Or the below link might be helpful as well.
Thanks,
Venkat
@@justmeandopensource link please
@@ahamedrizwan8719 Sorry. Here it is github.com/kubernetes/kubernetes/issues/50569
bro please provide the file to schedule a cronjob that runs a shell script in kubernetes.....
Hi Karthik, sorry which file are you referring to? The cronjob manifest is in my kubernetes repository under yamls directory. Github link in video description.
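In case it helps, here is a minimal hedged sketch (not the exact manifest from the repo) of a cronjob running a shell script mounted from a ConfigMap; all names and the schedule are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-script
data:
  run.sh: |
    #!/bin/sh
    echo "Hello from cron at $(date)"
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: script-cron
spec:
  schedule: "*/5 * * * *"        # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: runner
            image: busybox
            command: ["/bin/sh", "/scripts/run.sh"]
            volumeMounts:
            - name: script
              mountPath: /scripts
          volumes:
          - name: script
            configMap:
              name: my-script
          restartPolicy: OnFailure
```

Apply it with `kubectl apply -f cronjob.yaml` and check the output of each run with `kubectl logs` on the completed pods.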
I'm following your Kubernetes tutorial and, just to make sure I take it all in, I want to do it on Linux too (right now I'm doing it on Windows). May I know what flavor of Linux you are using?
Hi Senthil, thanks for watching. I used Manjaro Linux with the i3 tiling window manager. If you want to follow my videos on Linux, you can do so on any Linux distribution. It really doesn't matter which distro you use.
Just me and Opensource thank you. I agree, but I liked your Linux distro. I figured it out from your wallpaper and already installed Manjaro Gnome. Getting started with lxc.
Also I'm running it in VirtualBox on Windows on my work laptop. Planning to get a laptop so I can have Linux as the host. Do you recommend the XPS 13 2020? I see mixed reviews.
@@SenthilRameshJV Mine is a Dell XPS 13 9370, the 2018 model. I love it performance-wise. It's a beast in that tiny case. If I were to buy another laptop now, I would definitely choose the XPS 13 2020 edition. The only thing I didn't like about mine is the position of the webcam, which is at the bottom of the screen. In the 2020 edition, it has been moved back to the top.
@@SenthilRameshJV :)
I am no longer using Manjaro Gnome. I used to distro hop every now and then to find a perfect distro that satisfied me in all respects. I have been on Gentoo for the last few weeks and will probably stick with it for the rest of my life (hopefully).
Can we write YAML files without using a colon, like "apiVersion batch/v1" instead of "apiVersion: batch/v1"?
Hi Harsh, thanks for watching. You can't. blog.stackpath.com/yaml/
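To illustrate why: in YAML, the colon followed by a space is what makes a key/value mapping; without it, the line is parsed as a single plain string, so there is no key for Kubernetes to read:

```yaml
# A mapping: "apiVersion" is the key, "batch/v1" is the value
apiVersion: batch/v1

# Without the colon, the whole line below would be one string scalar,
# not a key/value pair, and the manifest would fail validation:
# apiVersion batch/v1
```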
Who's watching this masterpiece in 2023?
Hi Akshay, Thanks for watching.
Like your accent brah!!
Thanks for the compliment, Bala. Hope you found this video useful. Cheers.
Hi Venkat,
For Kubernetes v1.16,
kworker1 & kworker2 are showing as NotReady by default when running the command
kubectl get nodes
Any idea how to downgrade from v1.16 to v1.12, v1.13, or v1.15?
Hi Gajanan, thanks for watching this video. I haven't tried Kubernetes version 1.16.0 yet. According to the releases page github.com/kubernetes/kubernetes/releases, the latest release is 1.15.4 and 1.16.0 is not yet a stable release. But I will try this version at some point.
What method did you use to provision your cluster? I can try to do the same in my environment and see how it goes.
Thanks.
Hi Venkat
I provisioned the cluster as per your session.
Okay. Thanks. I will check in my machine and let you know.
Hi Gajanan,
I spent some time this morning investigating the issue. I had lots of problems. But I believe I have sorted them all.
Basically, things have changed slightly in k8s v1.16.0, which required a few changes in my github repo. I have been using Flannel as the default overlay network and I saw problems getting it to work with v1.16.0.
So I have updated my scripts to deploy Calico as the default overlay network. All looking good and I have verified it.
Please do a fresh clone of my kubernetes repository and do vagrant up. Everything should be fine by now.
Let me know if it worked. Cheers.
Can we create any job for a specific pod?
Hi Chaitali, thanks for watching. I don't get your question though.
I mean how to run a job inside a pod? Or can job run inside a pod or only at cluster level?
@@chaitalichakraborty2786 A job is nothing but a process that runs in a container. So you first need to containerize your task (whatever you want to achieve by running the job) and then push it to an image registry.
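As a hedged sketch: once the task is containerised and pushed, a Job manifest pointing at that image is all that is needed (the image name and command below are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3                 # retry the pod up to 3 times on failure
  template:
    spec:
      containers:
      - name: task
        image: busybox            # replace with your own containerised task image
        command: ["sh", "-c", "echo running my task"]
      restartPolicy: Never
```

So the job always runs as its own pod at the cluster level; it does not run "inside" an existing pod.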
Hi, the video was very well explained, thanks for that. I just have two questions. First, if the pod from a cronjob is in Running/CrashLoopBackOff status, will it get terminated when the next scheduled pod spins up? And second, how do I design the "schedule" so a pod spins up immediately when I deploy cronjob.yaml and then every 2 hours thereafter?
Hi Chinmay, thanks for watching.
I have done a follow up video to this one explaining how to get rid of pods after completion.
ua-cam.com/video/g0dmgd27DRg/v-deo.html
Otherwise, even when the job is complete, the pod will remain in the Completed state. This applies to pods in a crashloop state as well.
For your second question, if you specify the cron schedule to run every 2 hours, it should start the first run immediately or within a few minutes.
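For reference, an every-2-hours schedule looks like the fragment below. One caveat worth noting: strictly speaking, the controller fires at the next time matching the schedule after the cronjob is created, so with this schedule the first run lands at the next even hour rather than at the moment of deployment:

```yaml
spec:
  schedule: "0 */2 * * *"   # minute 0 of every 2nd hour (00:00, 02:00, 04:00, ...)
```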
What is the learning path and what are the prerequisites?
Hi Abdul,
If you decide to go with Cloud technology and that is where your interests are, then there are lots you can follow to prepare yourself. You can learn AWS, Google Cloud, Docker stuff, Kubernetes etc. If you gain confidence you can then go for certifications like AWS Solutions Architect. Kubernetes certification is also available from CNCF (Certified Kubernetes Administrator).
The prerequisites are some basic knowledge about cloud and containers. Amazon provides a free tier for the first 12 months to try their AWS resources. Google Cloud also has a similar free tier for you to learn with.
Thanks
I am getting this issue:
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
Gem::Specification.default_specifications_dir called from /usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/bundler.rb:428.
NOTE: Gem::Specification.default_specifications_dir is deprecated; use Gem.default_specifications_dir instead. It will be removed on or after 2020-02-01.
Gem::Specification.default_specifications_dir called from /usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/bundler.rb:428.
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm.rb:354: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/plugins/kernel_v2/config/vm_provisioner.rb:92: warning: The called method `add_config' is defined here
/usr/share/rubygems-integration/all/gems/vagrant-2.2.6/lib/vagrant/errors.rb:103: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call
/usr/share/rubygems-integration/all/gems/i18n-1.8.2/lib/i18n.rb:195: warning: The called method `t' is defined here
Vagrant failed to initialize at a very early stage:
Failed to locate the powershell executable on the available PATH. Please
ensure powershell is installed and available on the local PATH, then
run the command again.
hey, do you have a slack channel ?
Hi, I do and the invitation link will be in my channel's banner.
@@justmeandopensource got it , thx !!
@@realthought2262 Welcome
Hello!!! My question is unrelated to this video's topic, but I'm hoping you may be able to help me with it:
Laravel FFMPEG Error Encoding Failed using Laravel Queue in Docker
I am running my laravel application in a docker container.
I am using the ffmpeg build johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
I can confirm that ffmpeg and ffprobe are installed.
When I submit a video to my application, it is passed to a queue which is responsible for converting two versions: one is 720,1280 and the other is 640,360.
What is happening at the moment: I am successfully able to convert uploaded files to 720,1280, but when it comes to converting 640,360 it throws an Encoding Failed error.
I have switched the conversion order, first trying to convert 640,360 and then 720,1280. What I have noticed is that the first conversion always succeeds; the exception is always thrown on the second conversion.
Here is my code sample:
$format->setKiloBitrate($bit_rate)->setAudioChannels(2)->setAudioKiloBitrate(128);
$seven_twenty = new Dimension(1280, 720);
$video->filters()->resize($seven_twenty)->synchronize();
$video->save($format, $this->Video['destination'].'-720-1281.mp4');
$format->setKiloBitrate($bit_rate)->setAudioChannels(2)->setAudioKiloBitrate(128);
$three_sixty = new Dimension(640, 360);
$video->filters()->resize($three_sixty)->synchronize();
$video->save($format, $this->Video['destination'].'-360-640.mp4');
Can you please help me with this?
Hi Teej, thanks for watching. I am afraid I don't have any experience on this to answer your question.
Nice video. One gotcha at ua-cam.com/video/uJKE0d6Y_yg/v-deo.html :
When you set successfulJobsHistoryLimit and failedJobsHistoryLimit to zero, the job gets deleted automatically and Kubernetes loses the link between the cronjob and the pods created by its jobs. That's why you have to delete those pods manually; otherwise, deleting the job would delete its pods automatically.
Two questions apart from this:
1. You are using busybox in this video; why not alpine?
2. How does autocomplete work on your laptop?
Thanks for your time.
Hi Amit, thanks for watching this video. I got used to typing busybox. This is just to show a quick example and didn't bother to use alpine.
And for kubernetes auto-completion, I use the kubectl plugin for oh-my-zsh on top of my zsh shell.
If you are using bash, you can follow the steps in the below link.
kubernetes.io/docs/tasks/tools/install-kubectl/
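For bash, the gist of that page boils down to roughly the following (assuming the bash-completion package is installed on your system):

```shell
# Load kubectl completion into the current bash session
source <(kubectl completion bash)

# Persist it for future sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```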
Thanks,
Venkat