DUDE! As a python dev trying to justify low code, you have given me the permission I needed to fully embrace n8n. Not to proud to admit I love their easy integrations. And the speed? OMG!
Haha I'm glad - you're on the exact same page as me!
Let’s go indeed. I’ve been asking for Slack integration and you hit it out of the park. I’m excited to use LangChain with Python. I’m not a coder, but by sending images of a Python script into GPT, asking the right questions, and being fearless through a couple of loops, there’s no reason anyone can’t do this. Just need some patience. And for me, a cup of joe. Keep up the great work!
I appreciate it a lot Sean! I'm glad you benefitted from my Slack integration - I purposefully added that in here as another golden nugget so I appreciate you calling that out.
I respect you using AI to help you code with LangChain + Python even though you aren't a coder! I hope that goes well for you!
Exactly the approach I'm using. Another use-case for n8n is fast prototyping. It's much easier to assemble something simple in n8n for, let's say, presale than to code it in python / js
Very true, love your thoughts here Paul!
Absolutely! I just said the same thing today. I’ve been an engineer for 10 years and have recently discovered n8n and it is so good for prototyping and visualising a full stack app and micro-service architecture.
It’s been fun!
Hey Cole, this is the kind of video we want, I appreciate the courage. One question, I want to make personal assistants with n8n, but do you think these will become obsolete when big companies like Google or Microsoft implement theirs in the operating system, search engine, etc.?
Thank you very much Marc!
That is a great question - good for you for considering this so you don't waste your time! It depends a lot on how specific the tasks will be for your personal assistant.
If it's more general tasks like helping you find files on your computer, write emails, etc. then that kind of assistant would definitely be replaced by a copilot developed by Google/Microsoft/Apple. But if you have very specific tools/platforms (especially if you made them yourself) that you want your assistant to work with, then what you create probably wouldn't become obsolete anytime soon!
Cool! Thanks for a good video! I would love to see a video where you use OpenWebUI and its features for tools and connect to N8N. That would be awesome!
Thank you Fredrik and I appreciate the suggestion a lot!
You certainly aren't the first to suggest OpenWebUI for integrating with N8N, so I am definitely going to be creating a video on this in the near future!
@@ColeMedin Magic! Thanks Cole!
Of course!
Excellent video and repo, G
Thank you my man, I appreciate it a lot!
Excellent video, Cole! Thank you so much for providing such valuable content, your way of explaining things is incredible.
I wanted to ask you a question: I know how to create an AI agent with n8n, but not with LangChain, although I’m trying to learn how.
What is the difference between creating an AI agent with n8n and with LangChain in Python?
What does LangChain allow me to do that n8n doesn’t?
Thank you so much for your videos! Best regards.
Thank you very much Cristian - your kind words mean a lot to me!
n8n is fantastic for creating no code AI Agents super quickly. However, writing your own code, though it takes longer, allows you to do practically anything you want. With n8n, you are limited to what the nodes on the platform can do.
So creating AI Agents with Python + LangChain allows you to code everything, so you have all the control you could possibly want and aren't limited to the tools provided by a no code solution like n8n.
For example: n8n doesn't have access to every single LLM. But with LangChain + Python, you can essentially use any LLM you could possibly want. Also, with n8n you can't implement complex RAG pipelines that you could with custom code (handling things like reranking, embedded tables, recursive retrieval, etc.). All more complex RAG topics but you'll start to need those if your use case gets complex!
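For a concrete picture of that extra control, here is a minimal LangChain agent sketch (purely an illustration; the model names and the tiny tool are placeholders, not code from the video) showing how the same agent code can swap between a hosted LLM and a local one:

```python
# Minimal sketch: the same agent logic can run on any LLM provider.
# Assumes langchain, langchain-openai, and langchain-ollama are installed,
# plus an OPENAI_API_KEY or a local Ollama server.
from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

# Swap the LLM freely, hosted API or local model, without touching the agent code.
llm = ChatOpenAI(model="gpt-4o-mini")      # hosted example
# llm = ChatOllama(model="llama3.1")       # local alternative

agent = create_tool_calling_agent(llm, [add], prompt)
executor = AgentExecutor(agent=agent, tools=[add])
print(executor.invoke({"input": "What is 2 + 3?"})["output"])
```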
@@ColeMedin Thank you so much for your response! It has truly been incredibly valuable to me. I’ll definitely follow your advice and get to work on learning how to create AI agents with Python and LangChain, as I can see the potential is indeed enormous. We’re really excited and looking forward to your next video. Thanks for adding so much value to this fascinating world of AI!
Best regards!
Of course Cristian - thank you for the kind words! It means a lot to me :)
The n8n integration was way too restrictive for my use case with its hidden prompts and bugs, so I noped out immediately and developed my own methods and functions in a gradual fashion, abstracting away from the API layer. I developed a highly modular approach using n8n JavaScript, including a system to monitor and manage workflow interdependencies.
My first big n8n project was a general-purpose research agent that consumes a task queue. It does simple research and summarizing tasks on the web, Reddit, and YouTube, but is far more capable. It was implemented on a 5-level architecture entirely in n8n: API layer (using OpenRouter for LLMs and a dozen endpoints for the tools), tools (some of them are simple LLM chains themselves), tool-use interpreter (aged me considerably), main agent (control loop), task monitor (trigger).
The modular approach I took allowed me to reuse the methods and functions in other projects. For example, I have integrated the research agent's tools into my context-sensitive clipboard manager and the Obsidian frontend I developed for chatting with LLMs, so the chat agents can now also use these tools supervised.
The creative writing agent I am working on now will use the research agent for factual grounding. It simulates a panel of writers and the human creative process. This project requires an entirely different prompting paradigm based on herding and coaxing base models, so I ditched LangChain for good, plus the entire ecosystem surrounding it.
Cole's approach is the first one that makes sense, and prompted me to rethink my design principles for a certain aspect of the creative writing agent. Great stuff, interesting times ahead.
I'm glad my approach resonates with you - thank you for the kind words!
And thanks for sharing all your thoughts here - I love your project and how you are going about it.
Super Video, comprehensive and thorough
Thank you very much!!
your videos are pure gold 🪙
Thank you man!
Great video, Cole. I can't find the n8n workflows (json) used in this video on the repo.
Thank you! I didn't share them actually because they were "simpler" but actually I will be sharing them in my video tomorrow since I use them there as well!
Super nice to see this functional fusion of LangChain and n8n. I had thought about doing basically the same thing, but I haven't yet encountered a situation where I need LangChain to develop an agent that I can't develop with n8n.
Do you have any use cases in mind?
Yeah there are a ton of use cases that require something more advanced that needs to be created with LangChain! One good example is if you need more accurate RAG with techniques like reranking or summarizing chunks before putting them in the final prompt.
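To sketch what that reranking step can look like (just an illustration using a sentence-transformers cross-encoder; the model name and chunks are placeholders, not code from the video):

```python
# Rough sketch of reranking retrieved chunks before building the final prompt.
# Assumes sentence-transformers is installed; the chunks here are placeholders.
from sentence_transformers import CrossEncoder

def rerank(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Score each retrieved chunk against the query and keep only the best ones."""
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = reranker.predict([(query, chunk) for chunk in chunks])
    ranked = sorted(zip(scores, chunks), key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in ranked[:top_k]]

# The reranked chunks are what actually goes into the LLM prompt.
best_chunks = rerank("How do I rotate my API keys?",
                     ["chunk one...", "chunk two...", "chunk three..."])
```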
Incredible, so simple. Where would you suggest hosting, RunPod?
Thanks so much! Keeping it simple is the goal!
I would suggest RunPod if you want to run local LLMs, otherwise DigitalOcean if you want to just use an API through something like OpenAI/Anthropic/Groq for your LLMs.
@@ColeMedin Alright, thx! Loving the content btw, great work there, very useful.
You bet! Thank you for the kind words!
Have you given Vectorshift a shot? It's exceptional. I replaced n8n with it as it can do so much more and overlaps much of what you cover in LangChain. Not open source, though... but it's known for extreme security, so if you are deploying to a security-conscious industry they are highly respected.
I'm so glad you mentioned Vectorshift... I'm actually creating a video on it this Friday! It is an incredible platform so I'm with you there!
Well done, this is super comprehensive and well explained! As you hard-coded the channel 'youtube', how would I add in just another user rather than a group/channel?
Thank you very much!!
So in order to get away from having the channel/user/group hard coded and have a dynamic user instead, you would have to add a tool to the agent to look up users in Slack. Then once it looks up users, it will have their IDs so it can pass the ID of the correct user into the workflow that summarizes a conversation or sends a message. That would involve simply adding another parameter to those workflows, changing the resource from "channel" to "user", and then changing the user from "fixed" to "expression" where you would then pass in the user ID given to the workflow.
I hope that makes sense!
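As a rough illustration of the lookup side, here is a hypothetical pair of tools: one that finds a Slack user ID with slack_sdk and one that passes it on to an n8n webhook. The webhook URL and the "userId" parameter name are placeholders, not the exact names from the video:

```python
# Hypothetical sketch of a "look up a Slack user" tool the agent could call
# before invoking the n8n workflow. Assumes slack_sdk and a bot token.
import os
import requests
from slack_sdk import WebClient
from langchain_core.tools import tool

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

@tool
def get_slack_user_id(display_name: str) -> str:
    """Return the Slack user ID for the user whose display name matches."""
    for user in slack.users_list()["members"]:
        if user.get("profile", {}).get("display_name") == display_name:
            return user["id"]
    return "User not found"

@tool
def send_slack_dm(user_id: str, message: str) -> str:
    """Send a direct message via the n8n workflow, passing the user ID as a parameter."""
    resp = requests.post("https://your-n8n-host/webhook/send-message",  # placeholder URL
                         json={"userId": user_id, "message": message})
    return resp.text
```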
@@ColeMedin thank you this is useful
Of course - glad to help!!
Excellent! Thanks for a good video! 🙂
Thank you, I'm glad you enjoyed it!
Very interesting vid. Thanks!
Thank you, my pleasure! 😄
How does the LLM know which tools it has to call depending on the prompt? It seems like magic 🤔
aaah I see it uses the docstrings of each tool, as you said in the video
Great question! Your follow up reply is correct!
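For anyone curious, here is a tiny sketch of how that works in LangChain (an illustrative example, not taken from the video):

```python
# Tiny sketch: the docstring of a @tool-decorated function becomes the
# description the LLM sees when deciding which tool to call.
from langchain_core.tools import tool

@tool
def summarize_slack_conversation(channel: str) -> str:
    """Summarize the recent conversation in the given Slack channel."""
    return f"(summary of #{channel} would go here)"

# LangChain exposes the name and docstring to the model as tool metadata.
print(summarize_slack_conversation.name)         # summarize_slack_conversation
print(summarize_slack_conversation.description)  # Summarize the recent conversation...
```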
Hi Cole, thanks for this vid. I want to build something similar for a Facebook chatbot. How will I connect n8n with FB Messenger?
My pleasure! And great question!
Unfortunately it doesn't look like FB messenger is directly supported in n8n. So you would probably need to code your own integration to work with FB messenger or set up FB messenger to send webhook requests into an n8n workflow.
That would be a bit more advanced so it's hard to go into detail on that here! But you could rely a lot on the FB messenger developer documentation for getting started. For example, here is their page for handling messenger webhooks to receive events like a new message:
developers.facebook.com/docs/messenger-platform/webhooks/
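As a very rough sketch of that relay idea (the verify token and n8n webhook URL below are placeholders, and the exact payload format is in the Messenger docs linked above):

```python
# Very rough sketch of relaying Messenger webhook events into an n8n workflow.
# The verify token and n8n webhook URL are placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my-verify-token"                         # placeholder
N8N_WEBHOOK = "https://your-n8n-host/webhook/messenger"  # placeholder

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger's one-time verification handshake: echo back hub.challenge.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    # Forward each incoming event (e.g. a new message) into the n8n workflow.
    requests.post(N8N_WEBHOOK, json=request.get_json())
    return "ok", 200
```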
Thanks, Cole! Your reply means a lot to me and for my future plans.
Of course, glad to help!
Nice solution. I like n8n but I’ve been feeling like the GUI was slowing me down for more complex projects. I’ve been using LangChain because I can use LLM coding tools to develop these solutions much faster than I can when I have to drag and drop everything. Combining the two is a great idea. No-code tools are becoming something I try to avoid so I can get to market faster.
Wow that's interesting that no code tools are actually slower for you, thanks for sharing that! I can definitely see how that would be the case since you can use AI tools to code so fast now. I'm curious though, how often do you have to correct the LangChain code the model spits out and which model are you using?
Tnx for posting
My pleasure! :)
thanks it's amazing
Yet another helpful video, Cole. Why didn't you use the n8n code node for the Python?
Thank you, I'm glad you found it helpful!
The main limitation with the code node in n8n is that there's a limited number of libraries available to use. So if you want to use a library like boto3 to interact with AWS (for example), you simply can't. That's why I'm encouraging you here to create custom-coded AI agents outside of n8n but still leveraging n8n for service integrations!
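As a hypothetical illustration, a custom-coded agent tool can pull in any library it needs, for example boto3 for AWS:

```python
# Hypothetical example: a custom-coded agent tool using boto3, which isn't
# available inside n8n's code node. Assumes AWS credentials are configured.
import boto3
from langchain_core.tools import tool

@tool
def list_s3_buckets() -> str:
    """List the names of all S3 buckets in the configured AWS account."""
    s3 = boto3.client("s3")
    buckets = s3.list_buckets()["Buckets"]
    return ", ".join(bucket["Name"] for bucket in buckets)
```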
Can n8n be run locally without internet?
Yes, you can run n8n without internet! Most of the nodes won't work because they rely on an internet connection if it's an external service like Slack or Google Drive. But you could create a full local RAG AI agent without internet using n8n - that would be sweet!
How do I find my n8n user management JWT secret from your video about running a local AI agent?
Sorry, could you clarify what you are asking here? The JWT secret isn't something needed in this video! That is only needed when you use the local AI starter kit with n8n. When I hosted n8n myself on a DigitalOcean droplet following the instructions for that, I didn't have to set up a JWT secret.
Hopefully that helps! Otherwise please feel free to expand on what you're asking!
@@ColeMedin I'm talking about the video posted on 9/16 called "Run ALL Your AI Locally in Minutes (LLMs, RAG, and more)"
In the video there's a .env file. It looks like this:
POSTGRES_USER=root
POSTGRES_PASSWORD=
POSTGRES_DB=n8n
N8N_ENCRYPTION_KEY=
N8N_USER_MANAGEMENT_JWT_SECRET=
I don't know where to find my N8N_USER_MANAGEMENT_JWT_SECRET
Sorry, I am very new to n8n. I just downloaded it from GitHub and could only find my N8N_ENCRYPTION_KEY in the generated config file.
Ohhhh got it! Sorry I was confused since we aren't in the comment section for that other video.
You actually can create your own JWT secret! So you can just put a random bunch of alphanumeric characters here for both N8N_USER_MANAGEMENT_JWT_SECRET and N8N_ENCRYPTION_KEY.
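If it helps, a quick way to generate suitable random values (just a sketch) is:

```python
# Quick sketch: generate random values suitable for both environment variables.
import secrets

print("N8N_ENCRYPTION_KEY=" + secrets.token_hex(32))
print("N8N_USER_MANAGEMENT_JWT_SECRET=" + secrets.token_hex(32))
```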
🙌🙌
More from n8n please
Thanks for the suggestion! I will have a LOT more on n8n in the very near future :)
Let me guess, you own n8n, don't you?