I think he is the right person for n8n to sponsor, because his explanations are so clear and easy to understand.
I'd love for you to go in depth with a bunch of workflows.
Like an Ollama RAG chat agent that forwards email summaries of conversations.
I have a sense of what n8n can do. This is how I found your video; thanks for making it. It helped a lot. It is hard to find good videos, and good hosts and teachers like yourself. You make it easy to follow.
Thank you, very clear information. I use n8n locally, with Docker and Cloudflare Zero Trust. It works pretty well and solves what I need.
Thanks for creating this video. It was really insightful. I just subscribed!
I tried your suggestion on my 2017 Mac locally with PWD and it worked fine.
I'm satisfied running n8n and other AI tools using HarborAI in an LXC container with Nvidia GPU passthrough on Proxmox VE. I use only open source that can be self-hosted.
great stuff bro, I come to you with most of my AI questions, lol,
Great video! Thank you!
Have you tried MQTT for your messaging between instances? Would it work...?
For me, your pedagogy of impossibility and possibility, this way of declaring your negative and positive approach to techniques, is perfect.
Good luck, my brother "carbon entity".
Great stuff! Show us a longer tutorial with real-life examples.
I run n8n on my Mac with Docker Compose, with one significant difference from your setup: I run it with a mounted volume. This way the n8n data files are not in the container but directly on the computer (problem 2 in the video). I chose this because I can back up all of my containers' data files onto my NAS with a simple process, and data recovery and setup in case of disk failure or migration is very quick. It is important to provide not just the username (email) and password with the .yml file (or related .env file), but also the encryption key; otherwise, in case of restore or migration, the credentials will not be available and will all have to be reset.
I also use a mounted volume. Never run a container with data in the container; always save to a mounted volume.
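For reference, a minimal compose sketch of that mounted-volume setup, assuming the stock n8n image and an ./n8n-data folder next to the compose file (names, paths, and the key variable are placeholders; the key itself would live in the companion .env file):

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # must stay identical across restores, or stored credentials can no longer be decrypted
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      # n8n's data lives on the host, not inside the container
      - ./n8n-data:/home/node/.n8n

Backing up ./n8n-data together with the .env file is then enough to rebuild the instance on another machine.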
My main use case scenario is a bit different: my Mac Studio is where I'm running Ollama (the LLM inference node) and its various frontends used to handle assistants. Since I don't like the idea of letting the AI assistants/agents play with my Mac filesystem, I use a VM to run n8n (npm installation), so that any node requiring AI interaction can point to Ollama on the Mac, while any node requiring local shell command execution or remote (cloud) execution has all the software needed to accomplish the task, in an isolated fashion. The only actual nuisance with this setup is that without SSL in place, n8n does NOT seem to allow HTTP access from another host, even on the same subnet. Is there anything I could do to overcome that?
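If the block is n8n's secure-cookie check, which newer versions enforce when the editor is opened over plain HTTP from anything other than localhost, one possible workaround is sketched below (sensible on a trusted LAN only; the IP is a placeholder):

# on the VM running the npm-installed n8n
export N8N_SECURE_COOKIE=false   # allow the editor over plain http from another host
n8n start

# on the Mac Studio, let ollama listen beyond localhost so the VM can reach it
OLLAMA_HOST=0.0.0.0 ollama serve

# then point the n8n Ollama credential's base URL at the Mac, e.g. http://192.168.1.20:11434

The cleaner long-term fix would be a reverse proxy with an internal or self-signed certificate in front of the VM.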
you are the king !!
Thank you again sir
I like your comparison of Dify / Langflow with n8n. I'm trying to get more into n8n; I hope you can make some more vids, and hopefully they'll decide to sponsor you.
Thanks for sharing your tools and workflow
Please share how you are using/configuring Telegram as an API.
My man, I just spent the last 3 days getting a Kubernetes cluster set up with n8n in queue mode with workers. I have no experience with Kubernetes, so man, was it a nightmare!
Interesting. The n8n AI starter kit page suggests one option for Mac is using the Docker container but installing Ollama outside of Docker. It seems like you could get at least some of the benefits of Docker for n8n that way on a Mac, but I don't understand whether there are pros and cons of this method versus npm for n8n as well. Do you have any thoughts on that?
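For what it's worth, a sketch of that hybrid layout, assuming Docker Desktop on the Mac (which provides the host.docker.internal hostname): n8n runs in a container, Ollama stays native so it can use Apple's GPU, and the credential points back at the host.

# n8n in Docker, data kept in a named volume
docker volume create n8n_data
docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

# Ollama installed natively on the Mac, listening on 11434 as usual;
# in the n8n Ollama credential, use http://host.docker.internal:11434

The trade-off versus npm is the one raised in the video: the containerized n8n still cannot run arbitrary executables that live on the Mac itself.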
Great video. I'm curious why you didn't consider MQTT as a way to trigger an AI action on your home device rather than going through the database record approach.
Thank you
Thanks very helpful ❤
You are the man!
I would like to install n8n with Docker on Plesk, but I was not able to make it work.
Sorry for the newbie question, but I did not understand the benefits of using npm vs Docker. I have n8n running on my Mac and on a VPS, but I used Docker for both.
If you need to run any executable locally, or need to do anything with local AI, the Docker route won't work for that.
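For comparison, the npm route is only a couple of commands (assuming Node.js is already installed):

# install and run n8n directly on the host
npm install -g n8n
n8n start

# workflows can then use the Execute Command node to run binaries
# that exist on this machine, which a containerized n8n cannot see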
Thanks man
Good setup if you only have a single machine. I cannot live on one machine; there is an AI workstation alongside my MacBook. All of my problems are solved with a zero-trust network powered by Nebula on a $3 VPS. No cloud services needed, no port forwarding. A small upfront cost to avoid years of headaches and micro-fees.
Yes, it also works with WireGuard
What about Tailscale?
Solves a different problem. Love Tailscale and WireGuard, but they don't work here
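For anyone curious what the Nebula piece looks like, here is a heavily trimmed sketch of a node's config.yml (overlay IPs, the VPS address, and cert paths are placeholders, and a real setup also needs certificates created with nebula-cert):

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key
static_host_map:
  # overlay IP of the lighthouse -> public address of the $3 VPS
  "192.168.100.1": ["203.0.113.10:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any

Every machine dials out to the lighthouse, which is why no port forwarding is needed at home.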
So for me to have n8n working fine, do I have to use all the tools he mentioned? (I am not technical.)
Just sign up to their site and you are done.
In all the use cases I've tried n8n for so far, I'm not able to debug it, because 20 MB of test data is enough to overload the visual browser editor.
I use an n8n setup similar to yours, but I access it via a WireGuard VPN when I'm not on the LAN.
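A minimal sketch of the client side of that kind of setup (keys, the endpoint, and the subnet are placeholders; 192.168.1.0/24 stands in for the home LAN n8n lives on):

[Interface]
PrivateKey = <laptop-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 192.168.1.0/24
PersistentKeepalive = 25

With the tunnel up, the n8n UI stays reachable at its usual LAN address and port even when off the LAN.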
⚡
I watched the complete video in the hope of learning what n8n is and what its full form is.
That wasn’t the goal here. I did that video earlier
Why not create a PowerShell script to launch n8n and put it in your desktop folder?
Curious as to how Podman might have let you down. Serious question as I am looking to formalize a dev base for SOP and was leaning toward Podman for container standards compliance and FOSS (though RH can vendor 🍆block even FOSS projects, I digress)... Alignment of RedHat vs Docker as the leading dev entity associated aside... how did it fail YOU?
I can't remember what the details were with Podman. Brett and I had the main devs on our show, DevOps and Docker Talk, and started using it after that. Then pretty soon went back to Docker. Not sure what the RH connection is, other than Red Hat perhaps being more generally disliked than Docker.
Orbstack was another one. That was more recent and it just randomly deleted volumes.
I seem to remember performance was very disappointing
@technovangelist Thanks so much. I will be sure to double down on comparing performance metrics for my load case. ;)
VPS + security hardening + coolify + n8n
Sure, it would be a bit better to complicate things like this.
Also, why n8n and not, say, Langflow?
They aren't comparable. Langflow is AI stuff only; n8n is that and so much more. That said, I had a negative opinion of it before trying it, but now I see it's not related to LangChain... maybe I should give it a shot.
Fair enough
I do not have a lot of experience with Docker, and maybe someone may correct me, but I do not like it very much. It uses way more memory, any change or different configuration is a pain to set up, especially with the "network"; it is messy, and it is slower than the real thing. I do understand that if you are a developer and you are very lazy, maybe it is a good way out. I also understand that you will not have to handle installations, compilations, etc., and everything works out of the box, until it doesn't, it crashes, and it becomes very messy to debug. But again, maybe I am the lazy one, as I do not want to learn deeply about it.
Actually extra memory use is pretty minimal and networking is really cool when you start using it. It can allow for a lot of scenarios that would be impossible without it. Which is why it’s been a pretty critical piece of software for most ops orgs for a decade.
OK, but for Windows users, we can access our GPU from within Docker.
Correct
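On Windows the usual recipe is Docker Desktop with the WSL2 backend plus a current NVIDIA driver; something along these lines then exposes the GPU to a container (the Ollama image is just one example of a GPU workload):

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama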
Sweet shirt
Two comments so far on the shirt. One super positive and the other one is the other way. Strong opinions in both directions is a sign I am making a good choice.
geni.us/mhawaii2
0:33 Sorry, I do feel for you.
I kid, I kid.
I don’t get it
Wow, that was complicated.
The problem with n8n is that it's not open source ❌
It’s a choice but hardly a problem. It has zero impact. It’s just how one defines the word.
Hey man, that shirt is a bit distracting
With the t-shirts I blended into the background; I couldn't see me. Shield your eyes, brighter shirts to come.
If you buy it, you can get used to it: geni.us/mhawaii2
Distractingly awesome 😎
Dang, all the new cool stuff is getting too cryptic and nerdy… I've done plenty of OG Perl script installs, PHP program installs and stuff on a server, but this Docker and all the new stuff to learn, as cryptic as it is now, just wants to add itself to an already busy life… How are micro-nerds supposed to keep up? I guess it's sink or swim… whoops, gotta go, I think I see a shark in the water…
If the stuff that's 10 years old, like Docker, is too cryptic, then that's going to be a problem.