3 Node Hyperconverged Proxmox cluster: Failure testing, Ceph performance, 10Gb mesh network

  • Published Dec 22, 2024

COMMENTS • 101

  • @ZimTachyon
    @ZimTachyon 1 year ago +38

    You are genuinely great at presenting this content. You first hinted at not using a switch which caught my attention right away. Then you showed the triangle configuration to answer how you anticipate it would work. Finally you asked and answered the same questions I had like how do you avoid loops. Excellent presentation and extremely valuable.

  • @iamweave
    @iamweave 1 year ago +8

    Nice. Usually I watch instructional videos at 1.25x or 1.5x -- yours is the first one I thought I was going to have to run it at lower than 1x!

  • @TooLazyToFail
    @TooLazyToFail 1 year ago +3

    This is a cool project! I (unintentionally) learn things that help me at work every time you post one of these.

  • @MikeDeVincentis
    @MikeDeVincentis 1 year ago +7

    Nice job. Explained very well.

  • @allards
    @allards 1 year ago +4

    Thank you for this video; I had never heard of the Proxmox full mesh network feature for Ceph before.
    I recently bought three mini PCs for the purpose of building a Proxmox HA cluster. I was planning on getting a small 2.5GbE switch for the storage.
    Since the mini PCs have two 2.5GbE ports, I will use them in a full mesh network and buy separate USB-C to Ethernet adapters for the LAN connectivity.
    For my homelab such a setup is more than powerful enough.
    Going to have a lot of fun (and frustration 😅) with an advanced Proxmox setup and a Kubernetes cluster on top of it..

  • @martyewise
    @martyewise 1 year ago

    Thanks! Super vid! Searching for parts and planning construction of my own PVE cluster.

  • @davidkamaunu7887
    @davidkamaunu7887 1 year ago +3

    Awesome presentation! And @03:13, free-range chickens in the background!! 🤠👏

  • @pauliussutkus526
    @pauliussutkus526 1 year ago +9

    Would love to have a video with a detailed explanation of how you set up the Proxmox nodes, put them in a cluster, set up the mesh network using FRR, and check connectivity between the nodes (iperf3 or ip -6 route); adding the subnet for the cluster (Ceph); setting up Ceph and the best practice of using 2 copies of the data or 2+1 (parity); and also how to avoid failures. I think a full tutorial of that would be great, or you could divide it into parts. Anyway, good job.
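    As a rough sketch of that connectivity check (not from the video; the addresses are placeholders for whatever the mesh subnet is):
        iperf3 -s                    # on node B: start a listener
        iperf3 -c 10.15.15.2 -t 30   # on node A: push traffic over the 10Gb path to B for 30 seconds
        ip route                     # confirm which interface the route to each peer points out of
        ip -6 route                  # or the IPv6 table, if the mesh is addressed with IPv6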

  • @ominousred
    @ominousred 1 month ago +1

    This was a great watch. Very helpful. Thank you.

  • @Ronaaronhunt
    @Ronaaronhunt 1 year ago +12

    Great content. It would be great if you could cover maintenance of the cluster. Things like upgrading a hard drive and/or replacing one of the cluster PCs if there is a hardware failure.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +11

      Thanks for the idea. I'll start planning for this video soon

    • @jamescross2652
      @jamescross2652 1 year ago +2

      @@ElectronicsWizardry And also updating it. I assume that should be straightforward? If a node reboot is required, you also just do that? For ZFS I found you could just plug in a new drive and it just finds it, and you can increase your storage that way. You cannot, however, decrease it. And if you want to upgrade all drives, I suspect it might be better to build that array and then migrate to it. It would be good if you could just pull a drive, replace it with one twice the size, and it just deals with it.
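      For the ZFS drive-swap part, a hedged sketch of the usual workflow (pool and device names are placeholders); the extra capacity only appears once every drive in the vdev has been replaced:
          zpool set autoexpand=on tank          # allow the pool to grow once the vdev's devices are all larger
          zpool replace tank /dev/sdc /dev/sdf  # resilver onto the new, bigger drive
          zpool status tank                     # wait for the resilver to finish before swapping the next one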

    • @dtom19
      @dtom19 1 year ago +1

      Ceph prefers drives of the same size and type. It will work without them being identical, but performance would suffer. Having said that, if you want to increase the size of all disks, it's not too bad: mark an OSD as down, let Ceph rebuild its PGs, stop the down OSD, destroy it, replace it with the bigger drive, and create a new OSD with the new drive.
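      As a hedged sketch of one common way to do that on a Proxmox node (the OSD id 5 and /dev/sdX are placeholders; wait for recovery between steps):
          ceph osd out osd.5                       # stop placing data on the OSD so its PGs backfill elsewhere
          ceph -s                                  # wait until recovery finishes and health returns to OK
          systemctl stop ceph-osd@5                # stop the daemon for the drained OSD
          ceph osd purge 5 --yes-i-really-mean-it  # remove it from the CRUSH map and the OSD list
          pveceph osd create /dev/sdX              # after fitting the larger drive, create the replacement OSD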

  • @bluesquadron593
    @bluesquadron593 1 year ago +7

    Running this kind of setup on three identical EliteDesk SFF nodes with a dedicated M.2 drive for Ceph. Even with a single 1Gb connection to a router, everything works great. Ceph likes memory, so I have to run with at least 24 GB.

  • @GapYouIn2
    @GapYouIn2 10 months ago

    Good stuff! Good to see someone show that you can just grab commodity hardware from wherever and make a cluster that is fault tolerant lol. I run several Ceph clusters and it definitely gets better with scale, but there is still a lot to be desired. Interesting to see it so well integrated with Proxmox.

  • @subpixel2234
    @subpixel2234 1 year ago +2

    Great content. I'd really like to watch a deep dive on network setup that covers separate networks for Ceph (>=10Gb), VM access outside of the cluster, and an intra-cluster management network (

  • @KLANGOBRA
    @KLANGOBRA 1 year ago

    Thanks!

  • @lquezada914
    @lquezada914 1 year ago +3

    You’re a champ - thanks for all that information in such a short time. I am currently working with passthrough, trying to get my RTX 2060 to be detected in a Windows 11 VM. Hopefully I can figure it out by this week.

    • @hotrodhunk7389
      @hotrodhunk7389 1 year ago +1

      Did ya get it? For me, Windows just doesn't want to do GPU passthrough in a VM.

    • @lquezada914
      @lquezada914 1 year ago

      @@hotrodhunk7389
      When I had my lab set up, I did get it to work, but you have to make sure the GPU is compatible with Linux, as some cards have more stable drivers than others. I tried with a 2060 and a 1070. The 1070 worked fine, but the 2060 gave me trouble; it did work, though.

  • @rocketi05
    @rocketi05 1 year ago +2

    I love the random chickens behind you. Great content!

  • @dtom19
    @dtom19 1 year ago +4

    Love the content. I currently have a 4-node cluster in production with PVE and Ceph. I currently have VM storage on SSDs and cold storage on spinning rust with an SSD db/WAL, but I would like to see something on EC pools. I know you can create 4+2 on 3 nodes by spreading the chunks in pairs, but I can't quite get my head around the CRUSH rule for it. The logic behind this is to increase storage efficiency.
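    For what it's worth, the CRUSH rule usually suggested for 4+2 over 3 hosts picks 3 hosts and then 2 OSDs per host, so losing a host only costs 2 of the 6 chunks. A hedged, untested sketch (rule name and id are placeholders):
        rule ec42_3host {
            id 2
            type erasure
            step take default
            step choose indep 3 type host
            step chooseleaf indep 2 type osd
            step emit
        }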

  • @flahiker
    @flahiker 1 year ago +1

    Great video. A suggestion to test your setup is to simulate a power outage and see how the cluster responds. I have a 3-node Proxmox cluster running Ceph, and I am setting up an extra cheap PC to run NUT to manage the UPS. My goal is to simulate a power outage (unplug the UPS) and have the cluster gracefully shut down and restart when power is restored.

    • @Mr.Leeroy
      @Mr.Leeroy 1 year ago

      What is the point of a 4th PC if it is still a single point of failure as the NUT master? Just connect the UPS to one of the 3 nodes.
      With a UPS that has a network card you could probably access it from any node. In the case of a USB UPS, some sort of hardware hack is probably required, like an "Arduino"-controlled USB switch based on ATX PS_ON logic.
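      A minimal NUT sketch of that idea, assuming the USB UPS hangs off node pve1 (names and the password are placeholders, and the usual nut.conf/upsd.users entries are omitted): pve1 runs the driver and acts as master, while the other nodes only run upsmon pointed at it.
          # /etc/nut/ups.conf on pve1 (the node with the USB cable)
          [myups]
              driver = usbhid-ups
              port = auto
          # /etc/nut/upsmon.conf on pve1
          MONITOR myups@localhost 1 upsmon secretpass master
          # /etc/nut/upsmon.conf on pve2 and pve3
          MONITOR myups@pve1 1 upsmon secretpass slave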

  • @RyouConcord
    @RyouConcord 11 months ago +1

    god dang wizard indeed. your content is rad man

  • @MikeDent
    @MikeDent 1 year ago

    Amazing knowledge and enthusiasm. I think Proxmox should employ you.

  • @joshuamaserow
    @joshuamaserow 1 year ago

    Well done dude. You leveled up your game. Glad I subscribed.

  • @MickeyMishra
    @MickeyMishra 1 year ago +1

    I love it when old hardware gets used. Sure, it may take more power, and mixing and matching may be hard to do, but in my experience it's overall a better idea for uptime. The chances that 3 sets of gear from different product lines fail at the same time? Yeah, not going to happen!
    It's wonderful that more people are using DAC cables. I stepped away from home server stuff years ago, but it's nice seeing other folks keep the hobby alive.

  • @CD3WD-Project
    @CD3WD-Project 1 year ago

    Great video. I just finished getting our last VMs off a way-overpriced Nutanix cluster; I was looking at putting Proxmox on it, and you have me sold. No, I did not buy the Nutanix; they got it 6 months before I took over.

  • @shivex
    @shivex 1 year ago

    Great video, very cool to see all this in action. You are spot on with your content, loving it!

  • @cmacpher2009
    @cmacpher2009 1 year ago

    Since you asked: how about a reliable, VL-intensive OLTP database using no-data-loss log shipping and very fast failover in a multi-node active/passive HA cluster config, with enterprise-class database products like Oracle and HANA? Hit it hard with every server hardware, OS, network, database, heartbeat, corruption, simulated WAN, DC environment, and disaster failure scenario you can come up with. Show that this product can compete in enterprise environments. Perhaps it can. Enjoy the challenge. I look forward to viewing more of your videos. Amazing talent you have, and I loved the chickens.

  • @reasoningCode
    @reasoningCode 1 year ago +1

    Love your content!

  • @roybatty2268
    @roybatty2268 1 year ago +2

    You have chickens. You are cool! Love your channel.

  • @TheBlaser55
    @TheBlaser55 1 year ago

    WOW I have been looking at something like this for a while.

  • @shephusted2714
    @shephusted2714 1 year ago +1

    Try adding stuff like NVMe and think about going to 40G, or 10G bridged/bonded, and then see where you get the best perf boosts - good video! 40G dual-port ConnectX cards on eBay are about 50 bucks if you shop around. That would likely prevent any disk I/O issues, but you would probably also have to go to NVMe to really take advantage of it. Since hardware is getting more reasonable and the refurb market is burgeoning, I think upgrading the cluster is a good way to go before adding nodes. Looking forward to updates and follow-ups - please talk about Proxmox Backup and show how long a backup of the cluster takes before and after upgrades. This topic is great for SMBs who need five-nines uptime!

  • @markjones9180
    @markjones9180 1 year ago

    Awesome video, learnt a lot, thanks for sharing!!!

  • @curtalfrey1636
    @curtalfrey1636 1 year ago

    Nice man, I got 2 Dell 5755 laptops, a Z590/i7 server, and 3 other PCs that I need help with setting up, if you've got time to help.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      Sure. What parts would you like to have more help on? I also have my email in my about page if you want to send me a message.

    • @curtalfrey1636
      @curtalfrey1636 1 year ago

      @@ElectronicsWizardry sent message, thanks 😁

    • @gamerboyznet1597
      @gamerboyznet1597 1 year ago

      @@ElectronicsWizardry this is Curt Alfrey. On my other account 😁

  • @marcorobbe9003
    @marcorobbe9003 11 months ago

    Hi, and thanks for your great videos 🙏
    I am planning to set up an HA cluster with three ZimaBoards or something in that range for home automation (Node-RED, Grafana, ...).
    Right now I am starting with Proxmox and am stuck on one topic: (how) is it possible to share data between VMs or containers?
    I think Proxmox will run on its own disk. The VMs and containers are on a separate SSD - later on Ceph storage.
    When I set up a container, that container gets its own virtual HDD assigned, which is placed on the external SSD / the Ceph disk.
    Is it possible, and how, to have a folder / disk area / partition ... let's call it a "shared folder", where different containers and maybe also VMs can read and write data?
    Later on there may be a container with a simple NAS software solution or just an SMB share that gives me access to that "shared folder" via LAN, so I can look at that data or also back it up from time to time.
    Yes, I could run an external NAS and connect an SMB share to the containers, but that is not what I want.
    I would be very happy if someone could help me out with how to do that.
    Thanks a lot

  • @GuillermoPradoObando
    @GuillermoPradoObando 1 year ago

    Great work, thanks for sharing it with us

  • @RobertoRubio-z3m
    @RobertoRubio-z3m 3 months ago

    Excellent video. Do you mind sharing how you tweaked the write cache? Cheers.

    • @ElectronicsWizardry
      @ElectronicsWizardry  3 months ago

      The cache tweaking was for the VM's disk. Under the disk settings there is a cache mode setting in the top right.
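      The same thing can be set from the CLI with qm; a hedged example with placeholder VM id, storage, and volume names (writeback trades some safety for speed):
          qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=writeback
          qm config 100 | grep scsi0    # confirm cache=writeback now appears on the disk line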

    • @RobertoRubio-ij3ms
      @RobertoRubio-ij3ms 3 months ago

      @@ElectronicsWizardry Thanks a lot for replying. I really appreciate your content and the time you took to reply to my inquiry.

  • @TheRaginghalfasian
    @TheRaginghalfasian 1 year ago

    Great video, thanks for making it. I thought when you said you were going to cause a failure you would just cut the power to one of them.

  • @rbjohnson78
    @rbjohnson78 6 months ago +1

    Did you set up Ceph before FRR? In the mesh document, FRR is set up first and then Ceph.
    I'm trying to get FRR working right now, but it doesn't appear the steps in the document actually brought up the interfaces.

    • @rbjohnson78
      @rbjohnson78 6 months ago +1

      I figured it out. I had to set the interfaces I was using to auto-start.
      Side note... great job on the basics. A video showing it step by step would be great for newbies such as myself.
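      For anyone hitting the same thing, the fix is usually just an auto stanza per mesh port in /etc/network/interfaces; a hedged sketch with a placeholder interface name:
          auto ens19
          iface ens19 inet manual
          # repeat for the second mesh port, then run: ifreload -a && systemctl restart frr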

  • @LiebJohnson
    @LiebJohnson 1 year ago

    Putting together a three-node Ceph cluster which needs to be power efficient, quiet, and have ~50 TB of available storage. Would love a parts recommendation.

    • @martyewise
      @martyewise 11 months ago

      Not sure if you've already worked this out, but I recently put together a small cluster along those lines... Not sure it can quite achieve your desired 50TB with this config, but it can get pretty close (if you're willing to spend the $ on the SSDs)...
      I used 3 Dell Precision 3430/3431 SFF systems I bought on eBay for a decent price. They're i7-8700 (6c/12t) w/ 64GB RAM. I found some decent dual 10GbE NICs I installed into the PCIe x16 slot in each (these NICs are really only PCIe x8), and added an M.2 NVMe adapter in the remaining PCIe x4 slot in each. This gives me 2x NVMe SSD slots in each that I filled with 2TB SSDs, and 2x SATA slots in each for additional SATA SSDs. I found some 64GB SATA DOMs for boot devices that replace the DVD in each system. If I were to max out all the NVMe and SATA SSDs with the largest available devices, it could get close to what you're looking for.
      The problem I found is that with non-enterprise/server class hardware you're likely to run out of PCIE lanes (only 16 total on most available consumer CPU/mobos). Moving to server hardware will mean more noise and power consumption than I was comfortable with (a zillion tiny little fans in most 1U servers generate a lot of noise!).
      I'm not sure if this is what you have in mind, but this arrangement seemed to be the "sweet spot" for me in terms of cost/performance/power/noise. I haven't had it up and running for long, so I don't have a ton of experience with it yet, but things look promising. I'm currently working through the SDN config for the cluster (there doesn't seem to be a lot of info on these details available).
      Good luck. Have fun.😃

  • @Darkk6969
    @Darkk6969 1 year ago +1

    I've run something like this for work: a 4-node cluster with Ceph. The only issue I had with Ceph was the rebuild performance; it would slow almost all the VMs to a crawl, and sometimes the VMs would stall and crash. I think my issue was a combination of things, like large 8TB drives with no cache and a 10-gig network connected to a switch (each node had two 10-gig connections to the switch). Running VMware with vSAN right now, and I have plans to go back to Proxmox with better hardware. Not sure if I'll use Ceph again; I may go with ZFS replication instead.

    • @angelg3986
      @angelg3986 1 year ago

      Why do you plan to replace VMware with Proxmox?

    • @Darkk6969
      @Darkk6969 1 year ago

      @Dyeffson Dorsaint I had two separate dedicated 24-port 10-gig switches just for Ceph traffic, without any connection to the other networks. I did it this way on purpose to isolate Ceph traffic from everything. I was able to manage the switches using the dedicated management port.

    • @LampJustin
      @LampJustin 1 year ago +1

      @@Darkk6969 The reason for the slowdowns is that you have to limit the rebuild traffic; you can set a limit so it won't use that much bandwidth. Since you mentioned HDDs: Ceph is pretty slow with small numbers of drives, and you'll definitely want to put your db/WAL on an SSD. If you want something "simple" like vSAN you could also use Linbit DRBD9; they do have a Proxmox integration. Since it's simple block replication it's very fast and great for NVMe or SSDs. Reads are local, so you'll get full speed. It just does not do EC.
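      For reference, the knobs usually used to throttle that rebuild traffic look roughly like this (values are illustrative, not a recommendation):
          ceph config set osd osd_max_backfills 1         # fewer concurrent backfills per OSD
          ceph config set osd osd_recovery_max_active 1   # fewer in-flight recovery ops per OSD
          ceph config set osd osd_recovery_sleep_hdd 0.1  # pause between recovery ops on spinners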

  • @martyewise
    @martyewise 10 months ago

    Thanks again. I've got a small cluster up and running with ceph, etc. I'm trying to work out some of the network config using SDN... Not a lot of info out there that I've been able to find going over those details... Any chance you've got a video in the pipeline going over SDN in detail?
    Thanks again for your time and effort on this stuff. 🙂

    • @ElectronicsWizardry
      @ElectronicsWizardry  10 months ago

      A networking/SDN video is planned for the future. I'm glad you like my videos.

  • @jamescross2652
    @jamescross2652 1 year ago

    Supposing one of your nodes is gone and you have a bare-metal replacement, how easy is it to get that back into the cluster? We have a 3-node system without Ceph, using replication. It works fine, but if a node dies the HA restarts the VM on a new node, and it's obviously slightly behind, by up to 15 minutes. Because of our antiquated VM that's a problem, because it can't redo those transactions. But if we have shared storage, it will just be the inconvenience of a reboot, which we can deal with much more easily, if I understand correctly. We're in a position to build a new one; the risk is that Ceph is new to us.

  • @gustersongusterson4120
    @gustersongusterson4120 1 year ago

    How do you have to configure VMs to have HA on nodes with different hardware? I've read a little about this but haven't gone in depth and I'm curious. Great video, thanks for it!

  • @chaxiongyukonhiatou607
    @chaxiongyukonhiatou607 10 months ago

    Very good video. Could you show your connection topology?

    • @ElectronicsWizardry
      @ElectronicsWizardry  9 months ago

      I don't have a diagram, but I'll try to explain it here. Each node has a dual 10GbE NIC and a 1GbE NIC. The 1GbE NICs on the servers are all connected to a switch, and that low-bandwidth network is used for internet access, management, and so on.
      Then there are 10GbE links between every pair of servers. For example, if the servers are A, B, and C, the links would be A to B, A to C, and B to C. These links are routed using FRR so the shortest path is taken, but traffic will take an alternate path in case of a failure.
      Hopefully this helps explain my setup.

    • @chaxiongyukonhiatou607
      @chaxiongyukonhiatou607 9 months ago

      @@ElectronicsWizardry Thank you so much for your explanation

  • @krzycieslik6650
    @krzycieslik6650 1 year ago

    Could someone tell me where I can find instructions on how I'm supposed to configure Ceph with this cache? I have a similar problem, with 1.53 MB/s in CrystalDiskMark...

  • @colorxlabs7200
    @colorxlabs7200 1 year ago +1

    Great info as always! Any chance you'd experiment with Harvester?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +2

      Thanks for introducing me to Harvester. It looks like a cool project, and I want to start testing it soon.

  • @cvntechnologies
    @cvntechnologies 1 year ago

    Could you post a network diagram so I can build the same setup? I am not sure how to do the mesh network

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago

      Take a look at this page on the Proxmox wiki: pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. It goes over a lot of the information you could want to know for a setup like this. I ran a 10G link between all nodes in the cluster, and a 1GbE link to the main network switch.
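      To give a flavor of the wiki's routed (FRR/OpenFabric) variant, a trimmed sketch of /etc/frr/frr.conf for one node; the loopback address, interface names, and NET are placeholders, so follow the wiki for the real values:
          interface lo
           ip address 10.15.15.1/32
           ip router openfabric 1
           openfabric passive
          !
          interface ens19
           ip router openfabric 1
          !
          interface ens20
           ip router openfabric 1
          !
          router openfabric 1
           net 49.0001.1111.1111.1111.00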

  • @psutkus
    @psutkus 1 year ago

    Is there a way to kill one node (for example, the electricity is gone) and still have a working VM without any interruption? Or does fencing happen first, and after that the VM restarts and loads up, so it usually takes around 1-2 minutes? I would like to know if there are any suggestions for having zero downtime, since the data is in shared Ceph.

  • @TheOnlyEpsilonAlpha
    @TheOnlyEpsilonAlpha 1 year ago

    Impressive. I wonder: you called up that web UI over a direct IP, right? A reasonable addition, to make that fault-tolerant as well, would be to set up load balancing for the web UI, so you would have a DNS name for your interface that routes to a functional node at all times.
    Or do you have something like a vIP running already, which routes to a functional node?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      Yeah, the web UI is over a direct IP, and a node failure would take out that web interface. I didn't go into the details of how to deal with this, but using DNS or a proxy may be a good idea.
      I plan on going over Ceph and other HA topics in Proxmox in later videos.
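      As one hedged example of the proxy idea (not something from the video), a TCP-mode HAProxy in front of port 8006 on all three nodes; the IPs are placeholders and each node's own certificate is passed through untouched:
          frontend pve_gui
              bind *:8006
              mode tcp
              default_backend pve_nodes
          backend pve_nodes
              mode tcp
              balance roundrobin
              option tcp-check
              server pve1 192.168.1.11:8006 check
              server pve2 192.168.1.12:8006 check
              server pve3 192.168.1.13:8006 check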

  • @pankajjoshi4206
    @pankajjoshi4206 1 year ago

    1. Does it increase speed?
    2. How do you connect more than one? Please show.
    I have 20 old dual-core PCs in my lab; how can I use these processors in parallel?
    Thank you

  • @EusebioResende
    @EusebioResende 1 year ago

    Great video. Will it work in similar fashion with containers running on the nodes?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      Yeah, containers will work in almost the same way as VMs here. The only difference I know of is that containers can't be live migrated between nodes; you would need to shut down the container before moving it. All other features like HA and shared storage will work fine.
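      For completeness, that restart-mode move is a one-liner with pct; a hedged example with placeholder container id and node name:
          pct migrate 101 pve2 --restart   # stops the container, moves it, and starts it again on pve2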

  • @tim_allen_jr
    @tim_allen_jr 10 months ago

    You're the Merlin of computers.🧠📈✨️

  • @Burntcrayon-cb7eh
    @Burntcrayon-cb7eh 1 year ago +1

    It would be cool to see you do some HPC setups with this hardware. I'm currently using 4 dual-CPU Xeon E5 compute nodes linked with InfiniBand to run computational fluid dynamics simulations in parallel over MPI, but you rarely see this type of content on YouTube. I would be interested in the different settings and hardware optimizations that can be done on these types of setups to increase performance, etc.

    • @banzooiebooie
      @banzooiebooie 1 year ago

      Reading your comment makes me want to see a video about it! But yes, more of this explained, please.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago

      I don't have much experience in the HPC field or with how to correctly set up, test, and use these programs. Do you know of any good resources that cover these topics?

    • @Burntcrayon-cb7eh
      @Burntcrayon-cb7eh 1 year ago

      @@ElectronicsWizardry It is a pretty vast field with many different technologies for many different use cases, but at least what I use requires extremely low latency, so I use InfiniBand (

  • @MrEtoel
    @MrEtoel 1 year ago

    I want to try something similar, but my Proxmox cluster consists of 3 Intel 13th-gen NUCs with only one NVMe drive each. Would it still be feasible to run Ceph? I guess I need to use what you demonstrated in your GParted/dd video to resize the NVMe drive, because I made the mistake of assigning it all to LVM. That would be a cool video. My NUCs have 2 Thunderbolt ports (40 Gbit) each; imagine if I could use those for Ceph links. That would be awesome.

    •  11 months ago

      The 13th-gen i3/i5/i7 NUC features a B-key 2242 M.2 SATA SSD slot. Try that for the Proxmox installation. Relatively inexpensive 256GB M.2 SATA SSDs should be sufficient for the OS and images. This leaves the NVMe free for Ceph.

  • @homehome4822
    @homehome4822 6 months ago

    Would you be able to create a stretched cluster via Tailscale? Or would it have too much latency to work?

    • @ElectronicsWizardry
      @ElectronicsWizardry  6 months ago +1

      Depends on what you’re doing with the cluster. I could see some programs working that just need to sync state (like if you want to manage multiple Proxmox servers as one), but storage clusters would likely perform very badly due to the latency and limited bandwidth.

  • @rodfer5406
    @rodfer5406 1 year ago

    You’re the man*** 👍

  • @leftblank131
    @leftblank131 1 year ago

    Yea, but has it eliminated side fumbling?

  • @shephusted2714
    @shephusted2714 1 year ago +1

    You can and probably should upgrade this to a "warlock pentagram" topology - still no switch needed, but you gain overall cluster robustness and can scale out storage - it is just a double triangle. Going to 40G or 100G will become more commonplace, and with no switch needed you save a couple grand right there. Consider making the management network 2.5GbE.

  • @RNSounds7
    @RNSounds7 1 year ago

    Very nice 👍

  • @bhupindersingh3880
    @bhupindersingh3880 1 year ago

    Great video. Looking forward to more stuff.

  • @davidkamaunu7887
    @davidkamaunu7887 1 year ago

    I would try making an IPFS cluster with that hardware.

  • @esra_erimez
    @esra_erimez 1 year ago

    Wow, this is impressive! My company gave me some old InfiniBand cards I'd love to try this with. I just need the servers

  • @Ingeanous
    @Ingeanous 1 year ago

    You're talking a little fast to follow... but I get it... that means you are passionate about the subject!

  • @arthurd6495
    @arthurd6495 11 months ago

    nice

  • @KILLERTX95
    @KILLERTX95 1 year ago

    Just saying, if the Ceph SSD you're using has "power loss protection", it massively improves performance. It's like a better version of write caching and speeds things up immensely.
    To avoid confusion, power loss protection isn't a UPS 😂. In this case it's a feature of the SSD, usually found on enterprise SSDs.

  • @DocMacLovin
    @DocMacLovin 1 year ago

    Imagine finding one of those in a dark server room in the last corner. Brrr. Creepy.

  • @banzooiebooie
    @banzooiebooie 1 year ago +1

    The funny thing is that in the real world, many companies use this (well, they use ESXi, but it is almost the same thing... just more expensive) to create VMs to run Kubernetes/OpenShift in a cluster. That is failover on top of failover.

  • @enderst81
    @enderst81 1 year ago +2

    Try GlusterFS instead of Ceph. Should get better speeds.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +3

      Thanks for the suggestion. I'll try it out on my hardware and see how it works.

    • @MarkConstable
      @MarkConstable 1 year ago +1

      GlusterFS is file-level storage only for VMs, i.e. qcow2. It does not provide VM volume storage like ZFS or Ceph.

  • @AdrianuX1985
    @AdrianuX1985 1 year ago

    +1

  • @ShimoriUta77
    @ShimoriUta77 1 year ago

    The content is awesome, thanks bro.
    But, how can a dude look so young yet so old at the same time. So beautiful yet so ugly 😂 The duality of men

  • @alejandroberistain4831
    @alejandroberistain4831 1 year ago

    Awesome video, thank you for sharing!