Nice tutorial! @Jim's Garage, if you are getting an error like "unable to parse directory volume name 'vm-5000-disk-0'" when running qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:vm-5000-disk-0, try qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:5000/vm-5000-disk-0.raw
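For anyone hitting this error, the pattern behind the fix: directory-backed storages (like the default `local`) reference a disk as `<storage>:<vmid>/<filename>`, while block-backed storages (LVM-thin, ZFS) use the bare volume name. A minimal sketch of the two forms, using a hypothetical VM ID:

```shell
# Hypothetical VM ID; adjust the storage names to match your setup.
VMID=5000
DIR_VOL="local:${VMID}/vm-${VMID}-disk-0.raw"   # directory storage: <vmid>/<filename>
LVM_VOL="local-lvm:vm-${VMID}-disk-0"           # LVM-thin storage: bare volume name
echo "qm set ${VMID} --scsihw virtio-scsi-pci --scsi0 ${DIR_VOL}"
```

Which form you need depends entirely on the storage type shown under Datacenter > Storage in the Proxmox UI.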
Thanks for the video Jim! I just have one tiny problem. The clones that I'm making from the template won't get an IP, and therefore I can't SSH to them. Do you have any idea what I can do to fix this?
Thanks for this video. The only issue I have: my VMs are not getting any IP. I'm using IPv6 only, but neither ip6=dhcp nor ip6=auto works. Any help on that? :\
Thank you very much Jim. My cloud-init and VM machine storage is on my NAS (storage name NFSSHARE), and after entering this command: "qm set 5000 --scsihw virtio-scsi-pci --scsi0 NFSSHARE:vm-5000-disk-0" I get the following error: "unable to parse directory volume name 'vm-5000-disk-0'". What am I missing?
Hi Jim, if you keep it up, you'll hit the 100,000 subscriber mark by the middle of next year at the latest! But very important: do it without pressure and just for the fun of it!
Thanks! Not feeling any pressure yet and really enjoying it so far. If that changes I'll take the necessary steps to rebalance it. Only bit I'm struggling with is responding to everyone. Unfortunately I can only see that worsening with growth. Thankfully there's loads of awesome people on Discord.
Two questions: 1) Is there a way to point the apt update sources to a local repository or do you need to use a DNS server to perform that DNS redirection? (Normally, I edit /etc/apt/sources.list to point it to my local repository. It doesn't look like that would be an option for this, pre-start.) 2) With this being a VM, I am assuming that if you were to pass a GPU through to it, only the VM will be able to use said GPU and no other VMs created through this process will be able to use any other GPUs that might otherwise be physically installed in your system, correct? Thank you.
Hi Jim, Do you happen to know why Ubuntu 23.10 (Mantic) does not have kvm optimized images like what Lunar has? Should something else be used these days?
Thanks for the tutorial! I couldn't work out what on my system corresponds to your storage named 'nvme'. I tried 'local-zfs' and got the result 'local-zfs:vm-5000-disk-0', which appears in the VM Disks section of my 'local-zfs' storage in the GUI. Well and good, but how do I find this storage location using the CLI?
@Jims-Garage I noticed something interesting. When setting up Ubuntu from scratch I can easily pass through my iGPU (i630). But when using the cloud image, it just doesn't work. I see the PCI device with lspci, but there's no /dev/dri... I figured I was missing drivers, but nothing I did seemed to work. Do you have any explanation for this? Is the kernel different (lighter)? Anything you can do to decode this would be appreciated.
@@Jims-Garage holy, thank you for the response! Just after asking, I tried that and it worked! Once more, thank you for your videos! Be sure I will post more questions in this series. :D
Not all the steps worked for me. I had to set the remaining tasks manually.
qm create 9000 --memory 2048 --cores 2 --name ubuntu-cloud --net0 virtio,bridge=vmbr0
cd /var/lib/vz/template/iso/
qm importdisk 9000 lunar-server-cloudimg-amd64-disk-kvm.img local
qm set 9000 --ide2 local:cloudinit
qm set 9000 --serial0 socket --vga serial0
Thanks for the great explanation!!!
Hi, great stuff as usual. When typing "qm set 9000 --scsihw virtio-scsi-pci --scsi0 local:vm-9000-disk-0" I always get "unable to parse directory volume name 'vm-9000-disk-0'", 'local' being the name of my storage. Any ideas? Thanks!
Thanks, I found a workaround that seems to work, using the GUI to set the SCSI controller. But now I can't SSH into any of the clones; I always get Permission denied (publickey). I've been looking all over for a solution but haven't found anything. Thanks!
I'm wondering if I'm the only one having this issue. When I start the VM, it seems to not complete cloud-init and kind of freezes. The console shows a login window but the configured cloud-init user isn't working. Trying to reboot the machine through Proxmox fails; only the stop command works. On the 2nd boot it seems that cloud-init works and the user account, network config and SSH all start working. Any idea what I'm doing wrong? But very nice video/series, will follow for sure as I'm running k8s already in my homelab.
Everything is working up to the SSH part; I'm not very sure how to copy the keys and use the same keys to SSH into all the VMs that we created. Can you explain in more detail how to copy out the keys?
@@lawrenceneo2294 if you use the cloud-init method the keys are taken care of. Otherwise you need to copy them to the .ssh folder in the home directory. The best way is to use ssh-copy-id; check this out in my recent Ansible videos, it's as simple as one command.
Hi! Love your videos, very helpful. I have one question: I have the servers up and running, and I can access them over SSH from WinSCP, but I cannot access them from my Linux terminal via SSH ("Permission denied (publickey)"). When I set up an Ubuntu server manually, this is no problem.
I found a workaround: add more SSH public keys in cloud-init. But I believe there has to be a better way, I just don't know about it yet :) But thank you for answering. I will join you on the next video @@Jims-Garage
I'm getting error 'unable to parse directory volume name 'vm-9002-disk-0' when running 'qm set 9002 --scsihw virtio-scsi-pci --scsi0 proxmox-img:vm-9002-disk-0' any clue what might be the issue?
Great tutorial, but OMG you can run into so many problems if you try a slightly different config. I tried with the QEMU guest agent active. The serial terminal then only showed "connected". I switched to the default display and saw "GRUB_FORCE_PARTUUID set, attempting initrdless boot" doing nothing. I searched it up and found out the problem gets solved with the normal cloud-init image, not the KVM one. With that one you don't need the serial terminal, but I ran into another problem: you can't shut down the VM if you have the QEMU guest agent option active but the agent not installed. Don't mix them up. Now I'm using the KVM image with no guest agent active. I hope this helps someone who also tries some other config.
Unfortunately, when I use Ubuntu 22.04 it seems to work, does all the updates and then gets locked on "starting serial terminal on interface serial0" and it cannot be stopped. So, unfortunately, a bit of a waste of an hour for me here. Never mind.
I have been stuck on being able to SSH into the VMs. I've followed your steps exactly but get an error trying to SSH in that says "Permission denied (publickey)". I've gone back through and reinstalled everything several times from scratch, so this has been an all-day exercise for me. During one of the clean installs I tried removing the cloud-init SSH public key entry, and still nothing. The strange thing is that when I install openssh on any other Ubuntu desktop VM I can SSH into those just fine. Ubuntu Server is my nemesis! Please, if anyone can help: I've tried rebooting and several Google searches. Maybe it's a permission issue on the VM, or the wrong public key from the Proxmox host?
@@DinoSpider1234 you need to copy the certs from your Proxmox host to the admin VM (or whatever certs you used). Just paste them into the home directory along with the script (make sure the home user owns them). The script will take care of the rest.
Of note, you do mention that I'll have to download the public key, but I am trying to SSH into the VM (on Proxmox) from my main PC (Windows). I've never had to do this before, so I'm unfamiliar with the process. Traditionally with Ubuntu Desktop I simply install openssh-server and set it to run. I then jump over to my Windows PC, use WinSCP, enter the IP address, username and password, and voila, SSH'd into the VM.
@@Jims-Garage I really do appreciate the time you've taken on these videos and the additional time in chat. I don't mean to take up too much of your time here, so I'll end up finding a workaround bypassing the cloud-init aspects of this. I hope it doesn't mess with the upcoming series. The conundrum I am facing is that although I can SSH into my Proxmox node (where the VM is located) to copy the certs from, I have no way to move them into the VM itself. I only have VNC via Proxmox to that VM, and Proxmox VNC doesn't have a paste feature that I am aware of. So until I can resolve the SSH connection, I can't copy over the certs to allow SSH. Forgive me if I seem to overcomplicate things, I don't mean to.
I tried this but fail early in the process. qm importdisk 5000 lunar-server-cloudimg-amd64-disk-kvm.img local --> Successfully imported disk as 'unused0:local:5000/vm-5000-disk-0.raw' qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:vm-5000-disk-0 --> unable to parse directory volume name 'vm-5000-disk-0' I'm running a 3 node HA Proxmox cluster on 8.1.4 PVE.
The cloud-init tutorial alone will save me a lot of time. Thanks Jim
You're welcome, get it ready in time for k8s!
Great video as always Jim!
Some additional settings that might be helpful:
Under Options, disable "Use tablet for pointer" (saves resources)
Set the bios to UEFI: qm set 5000 --bios ovmf --efidisk0 local-lvm:1,format=qcow2,efitype=4m,pre-enrolled-keys=1
qm set 5000 --agent enabled=1 to enable the QEMU guest agent
Once you've started the VM, install the QEMU guest agent: sudo apt install -y qemu-guest-agent
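Building on the settings above, a sketch of the full agent round-trip (the VM ID 5000 is just the example used above): enable the option on the host, install and start the service in the guest, then confirm the host can reach it.

```shell
# On the Proxmox host: enable the agent option for the VM (example ID 5000)
qm set 5000 --agent enabled=1

# Inside the guest, after first boot:
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

# Back on the host: verify the agent responds (exits non-zero if it doesn't)
qm agent 5000 ping
```

Note the agent device is attached at VM start, so if you enabled the option while the VM was running, do a full stop/start (not a reboot) first.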
Thank you for this great video! It's so much better than all the documents and threads I've come across. You made my day.
Glad it was helpful!
Thank you for this video! I really learned a lot. As a token of my appreciation, here's a small contribution: the command lines to fully create the VM on the command line, including the tweaks mentioned in the modification section (no ballooning, CPU set to host, adding a VLAN, resizing the disk to 10GB and finally enabling SSD emulation). The rest I left as in the video:
qm create 9000 --memory 4092 --cores 2 --numa 1 --cpu host --balloon 0 --name Ubuntu-CloudIMG-Template --net0 virtio,bridge=vmbr0,tag=900
cd /var/lib/vz/template/iso/
qm importdisk 9000 ubuntu-noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm disk resize 9000 scsi0 +6416M
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0,ssd=1
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0
And yes this is for people like me that are just too lazy and want to copy and paste.
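In case the `+6416M` above looks like magic: `qm disk resize` takes a relative increase, so the number is just the target size minus the image's current virtual size. A sketch of the arithmetic (the 3824 MiB starting size is an assumption inferred from the command above; check yours with `qm config 9000`):

```shell
# Assumed current virtual size of the imported cloud image, in MiB
# (an assumption here - verify against the scsi0 size in 'qm config 9000')
CURRENT_MIB=3824
TARGET_MIB=10240   # 10 GiB target
DELTA=$((TARGET_MIB - CURRENT_MIB))
echo "qm disk resize 9000 scsi0 +${DELTA}M"
```

With these numbers the echoed command ends in `+6416M`, matching the comment above; plug in your own current size to hit a different target.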
Hi Jim! Great video! Creating virtual machine templates WITH cloud-init is probably the most valuable thing I learned in 2023! This has saved me so much time! Thanks for such great content. I am really enjoying watching your videos and I have learned so much!
Thanks, that's great to hear! Likewise, I'm constantly creating Kubernetes clusters and it's a lifesaver!
I have been using Proxmox VE for years now and never learned Cloud-Init! Thank you for teaching this, and actually mentioning it in one of your more recent videos.
@@mvadu you're welcome. I've used it extensively for a couple of years now. Many hours saved!
following the guide in the Rancher video, my cluster kept dying (kubernetes stopped responding). Took me a beat to figure out the VM had run out of disk space (3.5G doesn't get one far). Unless I missed it, you might want to add a step to resize the disk on the template before deploying the VMs. However, I also want to give you props for your work! It is awesome, very well explained and detailed and definitely a great help!
Thanks, appreciate I didn't make that obvious (I have added it to the GitHub instructions, and will apologise in the next video. Oops 😬)
🙏 Very fluent, comprehensible, and useful video. Thank you
You're very welcome!
Thanks for the demo and info, have a great day
Thanks, you too!
Nice guide, I am trying to follow along.
You should make a custom YouTube playlist just for this series; it will be easier to follow than one playlist with all your videos.
Thanks, good idea.
Thanks. I wish Proxmox would make this easier so we don't need to use the command line at all... but following your tutorial makes it doable!
I agree it would be useful
Thanks for your good and clear explanation!
Appreciate the feedback 🙂
Great tutorial Jim, really love the way you explain it clearly, and it's so important that people aren't made to run before they can walk. I see so many YouTube videos that show how to get set up but don't explain or, worse, don't even set things up well.
Thanks, appreciate the feedback
Just used this video to set up cloud-init and some VMs... your videos are great! Continuing to watch the k3s ones as well. Thanks so much for taking the time to explain and show how to do all this in detail.
Awesome, appreciate the feedback 🙂
One thing to note here is that at least as of today (Mar 2024), this approach won't work in case you wanted to deploy these cloud-init images on iSCSI drives - at least not with TrueNAS. From what I was able to gather, the PVE clone operation inherently requires virtual disk allocation and this is not possible with iSCSI-only based storage. I don't think a workaround is possible at this moment and I don't know many more details - kind of surprised, though, as I would imagine cloud-init deployments on SAN drives would be _ideal_ for deployments at scale.
Nevertheless, this approach works great with local Proxmox storage - Jim, thanks a lot for all these videos, they're one of the best I've come across. Hope you do a full K8s tutorial one day!
Damn, that was going to be my exact use case... having headless, minimalist (at least in terms of storage) Proxmox nodes and having the storage be remote on a NAS. Have you found a workaround?
Doesn't work for me with Ubuntu 22.04 either. The serial0 dies after updates and it's totally locked. Wasted an hour on this video. Never mind.
@@RobertLeather Can you share how you overcame it?
Great series. Thanks.
You're welcome 😁
3 weeks ago I couldn't pronounce Proxmox, but thanks in part to this and the other series, I'm Terraforming Proxmox with Ansible! ;-)
Wahey that's awesome 😎
really excited to see more of this series
Thanks 😊
yup, very well laid out
Thanks for the awesome tutorial. It would be nice to also see how this can be done using HashiCorp Packer and Terraform to create the VM templates.
Yes, I'll look into that later
Great video, thanks!
You're welcome 😁
For anyone who is wondering about the issue with an extremely small boot drive: once the img file is downloaded, head over to /var/lib/vz/template/iso and run this command to resize the boot drive as per your needs: "qemu-img resize noble-server-cloudimg-amd64.img 20G". You can change 20G to 80G or any number as per your needs if you want a larger boot drive and aren't planning to use Longhorn or NFS with the template.
Thanks 👍 you can also do this in the web UI using resize disk.
@@Jims-Garage Was completely unaware of this! Thanks a ton Jim, love your channel and has been a daily visit for my homelab obsession!
@@kamikaze_twist thanks, really appreciate the feedback
Thanks Jim!
You're welcome!
Part 2 already?! Jim you're the man!
All finished with part 2's setup, worked like a charm. Super excited for part 3!
Jim, I really appreciate the efforts you put into your videos. I've been wanting to spin up a K8s cluster for awhile. Following along and looking forward to the next video. Cheers!
Appreciate the feedback. Hoping to have a single click deploy solution ready. Doing some extensive testing ATM.
Really helpful!
@@johnappleseed5091 thanks 👍
Fantastic :) Thank you
You're welcome 😁
Is there a way to install additional packages into the template before we start cloning? I do not want to run Ansible on many VMs; it would be great to have a ready-to-work VM right after cloning.
Brilliant!!! Thx a lot!
When you import the disk, you can specify the format that you want like --format qcow2 etc.
Algorithm... Give this man views!
Great work Jim
Thanks 👍
Great and well-made tutorial. Thank you a lot, Jim! But I do have a question: why choose a serial0 socket for the video output instead of VGA?
Less overhead. It's designed to be a cloud image so no need to replicate a VGA. This essentially just streams text.
Hi Jim, thanks for the vids, really like your clear calm concise instruction. I was really keen to work through this but got completely stopped at the ssh keys in the cloud-init. You seemed to just step over the key with "I used the key associated with proxmox". Where from? Is this a step I've missed from a different video?
It's your default key from proxmox. You'll find it in /root/.ssh
@@Jims-Garage Is that for any node or the main cluster manager?
@@GaryBarclay use it for all
thank you will try it
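If reusing the Proxmox root key mentioned above feels wrong, a dedicated key pair works just as well. A minimal sketch (the file name is arbitrary): generate the pair, paste the public half into the template's cloud-init SSH key field, then point ssh at the private half.

```shell
# Generate a dedicated ed25519 key pair (no passphrase here, for brevity)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/cloudinit_key -N "" -C "cloud-init-template"

# Paste this output into the template's cloud-init "SSH public key" field:
cat ~/.ssh/cloudinit_key.pub

# Later, connect to a clone with the matching private key (IP is an example):
# ssh -i ~/.ssh/cloudinit_key ubuntu@192.168.1.50
```

Cloud-init writes the public key into the configured user's authorized_keys on first boot, so every clone from the template accepts the same key.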
Great, makes life so much easier.
Nice. Another nice behind-the-scenes feature is the auto partition resize.
Does the auto update happen on each restart, or only on the first run?
It would be nice to use Terraform for templates and VMs provisioning.
It'll auto resize whenever you reboot after changing.
@@Jims-Garage "auto update" 😃
@@Jims-Garage it looks like it doesn't. Started deploying k3s but it failed; after SSHing in I realized the VMs had run out of disk space. Resizing the disk in Proxmox and rebooting did not expand the partition - at least on Debian cloud images.
@@demorez5 you need to shut down then start (not reboot)
Great video!
But I have two questions:
Why are we using the CLI to create the template? Are any of the options/steps not possible when using the UI?
In an answer to a comment, you said to avoid the KVM cloud images due to issues with virtual networking. Was that only for Ubuntu 23.10 or should all versions be avoided?
Awesome, been waiting for these. When you set this up, does the disk size expand as needed or is it static? Noticed yours was only 3GB or so.
Just edit and add more. It dynamically expands upon reboot. Another benefit of cloud images
nice video!
It could be worth noting that Proxmox has supported --import-from for some time now, as in --virtio0 :0,import-from= , to cut the import/assign commands down to a one-liner. (I would use virtio as the disk type instead of scsi.)
Also, I don't know if it's actually intended, but you did not resize the disks?
Ooh good to know. Thankfully disk resize is simple on a cloud image. Simply click expand disk and reboot. It's automatically configured to expand the partition, no need to resize in the command line
Hi! Nice video! I have a question: How did you manage to give the VM internet access? Did you set up IP tables in the Proxmox console or something similar? I’m asking because I configured the VM’s network as you mentioned, using vmbr0, but I still don’t have internet access.
It's likely DNS. Run a ping to 1.1.1.1 then run a ping to google.com. if the first works and second doesn't check your DNS
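The two-ping test described above, spelled out (run inside the VM; 1.1.1.1 is just a well-known public resolver IP, any reachable external IP works):

```shell
ping -c 2 1.1.1.1      # succeeds? routing/gateway through vmbr0 is fine
ping -c 2 google.com   # fails while the above works? the problem is DNS
cat /etc/resolv.conf   # check which resolver cloud-init configured
```

If only the second ping fails, fix the DNS servers in the VM's cloud-init tab (or your DHCP server) rather than touching iptables.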
Nice video like always. Something I noticed is that you didn't enable the QEMU Guest Agent directly in the template, but I suppose that you do it for your existing VMs right?
Thanks, no I purposefully prefer to keep it minimal but you can add if you want. I plan to do ansible in the future for these use cases.
Love it! Take your time, but I can't wait!
Thanks 😊 testing a single click deployment at the moment. Last teething issues to overcome
How do you expand the disk size after you load the img onto it? There is no option in the GUI
found it: qm resize 5000 scsi0 +10G
Edit disk, resize in the GUI
I wonder if there is an image that already has qemu-guest-agent installed, because I normally add that after the VM comes up.
@@DavidVincentSSM there are, and you can also use Ansible to deploy it automatically etc
Nice videos and can't wait to see the next one.
Thanks 👍
Awesome work! Any chance you could get into setting up a virtual pfsense or firewall? If you're getting into K8s content, I found there was a significant drop in content around observability (specifically, prometheus and alert manager)
I covered virtual firewalls twice, including ha firewalls. Both with Sophos XG (it's free). I also cover the networking side in the homelab guide. Have you seen those?
I have done two videos on monitoring, including some of those tools (Grafana, Prometheus, Telegraf, influxdb). I will come on to monitoring Kubernetes soon with Prometheus and Grafana.
Thanks for making these! That said, I am incredibly hung up at the SSH key part. Everything is being run on a single host, your "proxmox-dell". I have three different Lenovo ThinkCentres in a Proxmox cluster, and I want to distribute the 5 VMs across them (for high availability: master and worker on the first machine, master and worker on the second, and a third master node VM on the third Proxmox host). If I am understanding right, for Kubernetes this is the approach most would want to take, rather than running VMs on a single host. With the script, I am totally blind on how I would integrate three different Proxmox host SSH keys into the mix with the 5 VMs. Also, rather than setting up an admin VM, it would be cool to just use a bare-metal terminal on my Windows machine. Anyone have any idea how to do this? Or is this script just incapable of working with an HA setup like this?
Thanks! You can share the same proxmox key across all nodes, you could create the VMs on a single host and migrate them, or just create a custom SSH key.
The script should run fine on WSL in Windows - give it a go and let me know. I need people to test :D
@@Jims-Garage Gotcha, figured that would be the approach, but wanted to confirm from someone smarter than me! So, believe it or not, I don't think WSL2 would work out of the box because it doesn't have a traditional network setup. For example, my WSL2 Ubuntu instance has an IP address totally outside of my LAN subnet. Not sure how to configure WSL2's network settings to be its own thing, if that makes sense. But I will definitely test further and get back to you.
@@Jims-Garage Update: it worked great. I decided to do 3 masters and 3 workers and just edited the script accordingly. WSL2 via Ubuntu worked great. I had a scare when (cause still unknown) I wasn't able to ping the VM at a certain point.
Also, if you see this, I'm wondering what you think about potentially integrating the option for an HA Nginx setup. I'm totally new, but my understanding is that if Nginx goes down, so does the whole cluster. Curious to see if I can throw a backup Nginx somewhere on my network as a failsafe. Thanks again for these vids.
Hi Jim, I cannot seem to use PuTTY to SSH into the newly minted VMs... hoping you can shed some light. Thank you.
You need to specify the same SSH key as the one used in cloud-init
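One way to pin that identity on the client side is an SSH config entry (host pattern and key path are assumptions, not from the video):

```
# ~/.ssh/config
Host 192.168.1.*
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
```

For PuTTY specifically, the private key first needs converting to .ppk with PuTTYgen, then loading under Connection → SSH → Auth.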
I have a bit of a question.
Your "nvme" disk is local to the machine you are running this (proxmox-dell).
This would not work for cloning VMs onto the other machine (proxmox-asus). Is that right?
I know at this point I have options for shared storage (NFS, ZFS?, ceph?)
If I want my template to create VMs on any of my servers (I have 3) I need to make all this on a shared storage setup. Not sure which to choose. Any suggestions?
No, you cannot clone to another machine. You can however clone then migrate, it doesn't take long. A more comprehensive solution is shared storage as mentioned. I just don't currently have the need for that (something I will do later).
@Jims-Garage Thanks bud... That gives me food for thought.
I really appreciate how responsive you are.
I wish cloud-init had an option for Git URL for init script that it would run on first run.
Thanks for this guide, whole channel serves great content.
I have a question: is this supposed to work for cloud-init + PVE 8.1.4 + the Telmate Proxmox provider 2.9.14 or 3.0.1-rc1? On a new, clean setup, 2.9.14 crashes on VM creation, and 3.0.1-rc1 drops all disks on VM creation, so we land in an iPXE loop.
Regards,
Nice tutorial, @Jim's Garage!
If you are getting an error like "unable to parse directory volume name 'vm-5000-disk-0'" when running qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:vm-5000-disk-0,
try qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:5000/vm-5000-disk-0.raw
I think you can do everything from GUI apart from assigning the cloud init image.
Missed the word GUI
Good to know, thanks.
You can add a cloud-init drive on the hardware screen like you would add a disk drive for an ISO
@@medivalone yeah, I meant the cloud ISO, which you need to assign to the VM with the command line.
Thanks for the video Jim! I just have one tiny problem: the clones that I'm making from the template won't get an IP, and therefore I can't SSH to them. Do you have any idea what I can do to fix this?
Thanks for this video. The only issue I have: my VMs are not getting any IP. I'm using IPv6 only, but neither ip6=dhcp nor ip6=auto works. Any help on that? :\
Do you have a DHCP server configured on your router?
Thank you very much Jim. My cloud-init and VM machine storage is on my NAS (storage name NFSSHARE), and after entering this command: "qm set 5000 --scsihw virtio-scsi-pci --scsi0 NFSSHARE:vm-5000-disk-0" I get the following error: "unable to parse directory volume name 'vm-5000-disk-0'". What am I missing?
Try pressing tab as you're typing, it might need the .raw extension
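The underlying rule, as far as I can tell: directory-style storages (NFS shares, "local") address volumes as <vmid>/<filename>.<ext>, while block storages like local-lvm use the bare disk name. A sketch of building the full volume ID (storage name taken from the comment above):

```shell
vmid=5000
storage=NFSSHARE
# Directory-backed storage wants the VMID prefix and the file extension:
volid="${storage}:${vmid}/vm-${vmid}-disk-0.raw"
echo "$volid"
# Then: qm set "$vmid" --scsihw virtio-scsi-pci --scsi0 "$volid"
```

Running `pvesm list NFSSHARE` on the host shows the exact volume IDs if in doubt.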
Hi Jim, if you keep it up, you'll hit the 100,000 subscriber mark by the middle of next year at the latest! But very important: do it without pressure and just for the fun of it!
Thanks! Not feeling any pressure yet and really enjoying it so far. If that changes I'll take the necessary steps to rebalance it.
Only bit I'm struggling with is responding to everyone. Unfortunately I can only see that worsening with growth. Thankfully there's loads of awesome people on Discord.
Two questions:
1) Is there a way to point the apt update sources to a local repository or do you need to use a DNS server to perform that DNS redirection?
(Normally, I edit /etc/apt/sources.list to point it to my local repository. It doesn't look like that would be an option for this, pre-start.)
2) With this being a VM, I am assuming that if you were to pass a GPU through to it, only the VM will be able to use said GPU and no other VMs created through this process will be able to use any other GPUs that might otherwise be physically installed in your system, correct?
Thank you.
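On question 1: cloud-init itself can repoint apt before any packages are installed, so no DNS redirection is needed. A hedged sketch of a user-data snippet (the mirror URL is a placeholder), which in Proxmox could be attached as a snippet via `qm set <vmid> --cicustom "user=local:snippets/user.yaml"`:

```yaml
#cloud-config
apt:
  primary:
    - arches: [default]
      uri: http://apt-mirror.lan/ubuntu
```

This applies on first boot of each clone, so the template stays generic.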
Hi Jim, do you happen to know why Ubuntu 23.10 (Mantic) does not have KVM-optimized images like Lunar has? Should something else be used these days?
I would avoid the KVM images, they have issues with virtual networking. Stick to non-KVM :)
@@Jims-Garage thanks will do that..
@@Jims-Garage "Avoid KVM images": does that only apply to 23.10 or to all versions?
Is it possible to create a template for Debian with LVM included?
Yes, any Linux OS
Commenting to trick the YouTube algorithm
Haha, nice idea 🙏
Thanks for the tutorial!
I couldn't work out what on my system corresponds to your storage named 'nvme'. I tried 'local-zfs' and got the result 'local-zfs:vm-5000-disk-0', which appears in the VM Disks section of my 'local-zfs' storage in the GUI. Well and good, but...
How do I find this storage location using the CLI?
~# pvesm path local-zfs:vm-5000-disk-0
/dev/zvol/rpool/data/vm-5000-disk-0
Hi, nvme is a volume I have created (it's a quad nvme raid I have). Simply change it to something that exists on your setup.
Thank you so much for these videos. Can I install the v1.29.2+k3s1 version?
Yes, but it's quite new. I believe Rancher officially supports max 1.28
Rancher supports only up to v1.28
@@Jims-Garage Also, kube-vip v0.8.0 is released; I will give that version a try
Is there a way to install the Guest agent tools via cloud-init?
Yes, but I recommend doing a script installation afterwards and keeping the base image clean.
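For reference, if you do want cloud-init to handle it rather than a post-install script, a user-data snippet along these lines should work (untested sketch, attached via `--cicustom` as a snippet):

```yaml
#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

Remember the VM itself also needs the agent flag set (`qm set <vmid> --agent enabled=1`), or Proxmox won't talk to it.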
Hey, what about using Debian 12, for instance? They only provide .raw and .qcow2 images, no .img or .iso.
@Jims-Garage I noticed something interesting. When setting up Ubuntu from scratch I can easily pass through my iGPU (i630). But when using the cloud image it just doesn't work. I see the PCI device with lspci, but there's no /dev/dri... I figured I was missing drivers, but nothing I did seemed to work. Do you have any explanation for this? Is the kernel different (lighter)? Anything you can do to decode this would be appreciated.
Interesting; I tried with a Debian 12 cloud image and had no issues.
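On the image-format question: `qm importdisk` accepts qcow2 (and raw) directly, so Debian's cloud images need no conversion. A sketch (the filename and target storage are examples):

```shell
qm importdisk 9000 debian-12-generic-amd64.qcow2 local-lvm
```

The remaining steps are the same as for the Ubuntu .img.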
I'm using XenServer and XenOrchestra; let's see if they have cloud-init. I have a Ryzen mini I could install Proxmox on 🤔
Hopefully they do, it's a common standard.
@@Jims-Garage Yes they seem to have it, I'll try it tomorrow.
@@rudypieplenbosch6752 great, let me know how you do
Decided to install Proxmox on a small mini; I'll try XCP-ng later.
I'm having disk problems. How do I expand the disk after creating the VM?
@@ricardocosta9336 turn off the VM, hardware, edit disk and assign more storage. Should dynamically update on a cloud image.
@@Jims-Garage Holy, thank you for the response, right after asking! I tried that and it worked! Once more, thank you for your videos! Be sure I will post more questions in this series. :D
@@ricardocosta9336 you're welcome
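The same grow operation can be done from the CLI, for anyone scripting it (VMID and size are examples, not from the video):

```shell
# Grow scsi0 by 8 GiB with the VM powered off; a cloud image
# expands its root filesystem automatically on the next boot.
qm resize 5000 scsi0 +8G
```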
Not all the steps worked for me; I had to perform the remaining tasks manually.
qm create 9000 --memory 2048 --core 2 --name ubuntu-cloud --net0 virtio,bridge=vmbr0
cd /var/lib/vz/template/iso/
qm importdisk 9000 lunar-server-cloudimg-amd64-disk-kvm.img local
qm set 9000 --ide2 local:cloudinit
qm set 9000 --serial0 socket --vga serial0
Thanks for the great explanation!!!
Thanks, glad you were able to make it work.
Hi, great stuff as usual. When typing "qm set 9000 --scsihw virtio-scsi-pci --scsi0 local:vm-9000-disk-0" I always get "unable to parse directory volume name 'vm-9000-disk-0'" ('local' being the name of my storage). Any ideas? Thanks!
Make sure it's the same number and perhaps add the extension.
Thanks, I found a workaround using the GUI to set the SCSI controller. Now I can't SSH into any of the clones; I always get "Permission denied (publickey)". I've been looking all over for a solution but haven't found anything. Thanks!
Great video. Thank you
Thanks 👍 it's a great way to spin up consistent VMs rapidly.
Jim is it possible to login as root when you use these cloud images? Thanks in advance.
Sure, sudo su, or edit the SSH config to enable root login
Brilliant! That works without knowing the root password. Thank you and keep up the great work.
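If you take the sshd route, a drop-in fragment like this enables key-only root login (the file name is arbitrary); restart the ssh service afterwards:

```
# /etc/ssh/sshd_config.d/allow-root.conf
# prohibit-password allows root via SSH keys but never via password
PermitRootLogin prohibit-password
```

Root's authorized_keys still needs the public key, of course; `sudo su` avoids all of this.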
I'm wondering if I'm the only one having this issue. When I start the VM, it seems to not complete cloud-init and kind of freezes. The console shows a login window, but the configured cloud-init user isn't working.
Trying to reboot the machine through Proxmox fails. Only the stop command works.
On 2nd boot it seems that cloud-init is working and the user account, network config and SSH all start working.
Or any idea what i'm doing wrong?
But very nice video/series, will follow for sure as I'm running k8s already in my homelab.
That is odd behaviour I haven't seen before. Strange it works on a reboot... Sorry I cannot be of more use :/ I guess at least it works?
Guess so indeed :)
@@Skoucail I was noticing this too - ended up using the the non "-kvm" version of the cloud image and it works properly now
@@ABAReaper Thx will try this!
Everything is working up to the SSH part; I'm not sure how to copy the keys and use the same keys to SSH into all the VMs that we created. Can you explain in more detail how to copy the keys out?
Currently, the only machine I can use to SSH into the newly created machines is the Proxmox machine. Is this the intention?
@@lawrenceneo2294 if you use the cloud-init method the keys are taken care of. Otherwise you need to copy them to the .ssh folder in your home directory. The best way is to use ssh-copy-id; check this out in my recent Ansible videos, it's as simple as one command.
Hi! Love your videos, very helpful. I have one question: I have the servers up and running, and I can access them with SSH from WinSCP, but I cannot access them from my Linux terminal via SSH ("Permission denied (publickey)"). When I set up an Ubuntu server manually, this is no problem.
I believe you'll need to specify the certificate when you try to connect with SSH.
I found a workaround: adding more SSH public keys in cloud-init. But I believe there has to be a better way, I just don't know it yet :) Thank you for answering; I will join you on the next video @@Jims-Garage
wow big thanks for the nice videos
You're welcome 😁
I'm getting error 'unable to parse directory volume name 'vm-9002-disk-0' when running 'qm set 9002 --scsihw virtio-scsi-pci --scsi0 proxmox-img:vm-9002-disk-0' any clue what might be the issue?
Solved. The solution was 'qm set 9002 --scsihw virtio-scsi-pci --scsi0 proxmox-img:9002/vm-9002-disk-0.raw'
@@FilipeNeto616 Thanks, this helped. also I had issues with kvm image and when switched to non kvm issues it worked.
Great tutorial, but OMG you can run into so many problems if you try a slightly different config. I tried with the QEMU guest agent active. The serial terminal then only shows "connected". I switched to the default display and saw "GRUB_FORCE_PARTUUID set, attempting initrdless boot" doing nothing. I searched it up and found that the problem gets solved with the normal cloud-init image, not the KVM one. With that one you don't need the serial terminal, but I ran into another problem: you can't shut down the VM if you have the QEMU guest agent enabled but not installed. Don't mix them. Now I'm using the KVM image with no guest agent active. I hope this helps someone who also tries a different config.
Unfortunately, when I use Ubuntu 22.04 it seems to work, does all the updates and then gets stuck on "starting serial terminal on interface serial0", and it cannot be stopped.
So, unfortunately, a bit of a wasted hour for me here. Never mind.
Try increasing the default size from 3GB to 10GB
I have been stuck on being able to SSH into the VMs. I've followed your steps exactly but get an error trying to SSH in that says "Permission denied (publickey)". I've gone back through and reinstalled everything several times from scratch, so this has been an all-day exercise for me. During one of the clean installs I tried removing the cloud-init SSH public key entry and still nothing. The strange thing is, when I install OpenSSH on any other Ubuntu desktop VM I can SSH into those just fine. Ubuntu Server is my nemesis! Please, if anyone can help: I've tried rebooting and several Google searches. Maybe it's a permissions issue on the VM, or the wrong public key from the Proxmox host?
Show the permissions and owner of the certs you copied to your admin machine
@@Jims-Garage I dont recall seeing that step in the video so I dont think that was ever done.
@@DinoSpider1234 you need to copy the certs from your Proxmox host to the admin VM (or whatever certs you used). Just paste them into the home directory along with the script (make sure the home user owns them). The script will take care of the rest.
Of note, you do mention that I'll have to download the public key, but I am trying to SSH into the VM (on Proxmox) from my main PC (Windows). I've never had to do this before, so I'm unfamiliar with the process. Traditionally with Ubuntu Desktop I simply install openssh-server and set it to run. I then jump over to my Windows PC, use WinSCP, enter the IP address, username and password, and voila, SSH'd into the VM.
@@Jims-Garage I really do appreciate the time you've taken on these videos and the additional time in chat. I don't mean to take up too much of your time here, so I'll end up finding a workaround bypassing the cloud-init aspects of this. I hope it doesn't mess with the upcoming series.
The conundrum I am facing is that although I can SSH into my Proxmox node (where the VM is located) to copy the certs from, I have no way to move them into the VM itself. I only have VNC via Proxmox to that VM, and Proxmox VNC doesn't have a paste feature that I am aware of. So until I can resolve the SSH connection, I can't copy the certs over to allow SSH. Forgive me if I seem to overcomplicate things; I don't mean to.
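To untangle this thread a little: the VM already trusts whatever public key was pasted into the cloud-init config (in the video, the Proxmox host's), so nothing needs pasting into the VM over VNC. What the Windows PC needs is the matching private key. A sketch (hostnames, user, and paths are assumptions):

```shell
# On Windows (PowerShell with the built-in OpenSSH client), pull the
# private key down from the Proxmox host...
scp root@proxmox-host:/root/.ssh/id_rsa .
# ...then present it when connecting to the clone:
ssh -i ./id_rsa ubuntu@vm-ip
```

WinSCP can do the first step too, since it already reaches the Proxmox node.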
Trust me, it's better to eat like an Italian and ride like an American :D
Haha, you might be right 😂
I have a script to make the template for Proxmox
Great, can you share it? Interested to see. Eventually we can do all of this with Ansible.
I tried this but failed early in the process.
qm importdisk 5000 lunar-server-cloudimg-amd64-disk-kvm.img local --> Successfully imported disk as 'unused0:local:5000/vm-5000-disk-0.raw'
qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:vm-5000-disk-0 --> unable to parse directory volume name 'vm-5000-disk-0'
I'm running a 3 node HA Proxmox cluster on 8.1.4 PVE.
qm set 5000 --scsihw virtio-scsi-pci --scsi0 local:5000/vm-5000-disk-0
Maybe add the file extension of your disk (.raw or .qcow2).
sed 's/ubuntu/debian/g' 😎
Thanks, will amend.