Hey man, this was a great introductory explanation of the differences between Docker and full VMs. You started getting into the stateful vs stateless stuff at the end, but as somebody who works with containers in a hyperscale environment, I wanted to say that our containerized workloads are almost exclusively stateless services, and our stateful layer (databases) for the most part runs on its own VMs.
But you wrote this for an audience that probably won't separate their environment based on workloads like that, so your explanation and examples were really solid.
Can I think of it this way: in most cases Docker is for the application server (e.g. a Node app), but once we need to consider data persistence and backups for the data, we can't just use Docker to run a MySQL DB; we should use a VM, unless we want to use a Docker volume to mount the data onto our host OS?
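What I mean by the volume option is something like this (a rough sketch only, assuming the official mysql image; the host path, container name, and password are made-up placeholders):

# Rough sketch: keep MySQL's data on the host with a bind mount, so the container can be recreated without losing it
# (the host path, container name, and password below are placeholders)
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /volume1/docker/mysql-data:/var/lib/mysql \
  -p 3306:3306 \
  mysql:8

That way the database files live on the NAS volume and get picked up by normal backups.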
I found that if you want to run an app on Docker and you don't have the technical skills to create a Docker image, you have to settle for the pre-built Docker images. And those are usually not the sort of apps that most home users would run. Or those apps are already available to run on a NAS or Windows, so why bother with Docker for the most part? For example, I have a legacy Windows mapping app which is long past its support period. I have no idea how to build a Docker image with it, and I bet most people watching this don't either. However, I was able to create an image of my Windows laptop which had the mapping app installed, and I learned how to create a VM of it on my QNAP. I still love the technology and the learning experience, but at this point it's still a solution in search of a problem for me. Thanks so much for the video though; great as always.
I would absolutely agree with your first statement: for any normal user, Docker is just pre-built apps.
But there can be a bunch of really useful apps that you may want to deploy, like Nextcloud, BIND, or really any non-custom service.
But for legacy stuff, it will pretty much always work on a good old VM!
Awesome, thanks for a good explanation of Docker.
As a home user, I use Parallels, running 2 VMs: one with Ubuntu and Docker, Homebridge, and Scrypted; the other with Home Assistant OS. I know there's overhead but it's too easy. Snapshot. Clone for testing purposes.
Thanks man… that was an awesome explanation of Docker vs VMs; it really, really helped me grok it a little better. I know you're a Mac guy and Windows has its weird WSL with Docker Desktop, but I'd love to understand and see a UML deployment/component diagram for Windows to understand what all the abstracted parts are. Again, thanks, AWESOME video!
Nice presentation! May I suggest a video on Xen virtualization? Thanks!
Off topic. Any updates on synology full encryption security?
BTW, I only just found out that you have a forum. You should emphasize it more at the beginning of the video.
Haha, I am not good at self-marketing!
As for full volume encryption, I have a video coming out next week! They fixed it!
I'm also waiting for more info regarding Synology's full volume encryption feature. If it's what I expect it to be, then I don't mind redoing everything on two of my Synology NASes.
You can monitor the Docker machines. I have the notifications section set up, so if a container fails I get notified through the DSM interface and by email. You missed that Docker has notifications on Synology; you get that option when you set up Docker containers.
Great video, I've always wondered about the differences.
Question: is it possible to assign a container to a specific LAN? I have 3 LANs, and I run Home Assistant on my NAS; for security reasons I want to isolate my container on a separate VLAN.
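Something like a macvlan network tied to the VLAN interface is what I have in mind (just a rough sketch; the parent interface, subnet, and IPs are made-up examples, not anything from the video):

# Rough sketch: create a macvlan network on a VLAN sub-interface and attach the container to it
# (eth0.20, the subnet/gateway, and the IP are placeholder examples)
docker network create -d macvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth0.20 \
  vlan20

docker run -d --name homeassistant --network vlan20 --ip 192.168.20.50 \
  ghcr.io/home-assistant/home-assistant:stable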
2nd question, dear Rex: any recommendation about two-factor authentication on Synology devices? E.g. a physical USB key, Android phone, etc. Would appreciate any input you have in this field. Thanks for your ongoing input! Much appreciated!
Excellent video.
Great video.
My DS1522+ is indexing a lot. I have to log in and "pause for 6 hours" every time. Is there a way to schedule that?
This is the video I was looking for. Very well explained. It still leaves open a question I have, however. A virtual machine gets RAM assigned, and I understand this is fixed for that VM. However, it also gets CPU assigned, but I do not understand whether this CPU is also fixed. Can you help me out here?
Hi Rex, enjoying your inputs/reviews etc. Any update on NAS HDD comparisons? I saw a review from you about a year ago. Any update re: Seagate IronWolf vs Pro vs Toshiba etc.? I tend to avoid WD based on your feedback ;-)
Hello, sorry @SpaceRex, I need help with the Twingate Docker video. What are the firewall port rules?
I can't get Twingate to connect if I turn on the Synology firewall with TCP port rules for 30000-31000 and 443, as the Twingate website suggests.
If I disable the firewall I can connect to Twingate with no problem, but running with no firewall rules is not good practice.
You only show 1 of the 2 types of VMs. You can run VMs at the hardware level; these are called Type 1 hypervisors. They do not run within a host OS. Your example shows Type 2 hypervisors that are installed within an OS (like VirtualBox).
Can you not just set up a cron job to dump the database inside a Docker container running MariaDB?
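Something like this is what I'm picturing (a rough sketch; the container name, credentials, and backup path are placeholders):

# Rough sketch: nightly dump of a MariaDB container from the host's crontab
# (container name "mariadb", the password, and the backup path are placeholders)
0 3 * * * docker exec mariadb sh -c 'mysqldump -u root -pchangeme --all-databases' > /volume1/backups/mariadb-$(date +\%F).sql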
In my experience, updating is actually a pain with Docker; maybe I'm doing something wrong? When a service runs on Linux, I can just apt update && apt upgrade and it will be kept up to date. When it's running in a Docker container, it will NEVER be upgraded until you actually go and manually delete the container and deploy a new one.
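What I mean is that every update ends up looking something like this (the image name, container name, and port are just placeholder examples):

# Rough sketch of the manual redeploy I'm describing (names and port are placeholders)
docker pull nginx:latest
docker stop my-nginx && docker rm my-nginx
docker run -d --name my-nginx -p 8080:80 nginx:latest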
So you run apt update && apt upgrade on Linux, and delete && redeploy on Docker. It seems like pretty much the same amount of work to me. However, you can automatically update all Docker containers, unsupervised, using another Docker container (e.g. Watchtower).
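For example, roughly like this (a sketch only; the check interval is just an example value):

# Rough sketch: Watchtower watching the Docker socket and pulling updated images once a day
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400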