This dude has an impeccable knack for coming through with the most relevant and well-timed content!
Haha I'm glad!
This channel is incredibly informative and efficient: packed with valuable content, no fluff, just pure knowledge. Great job, Cole, on another outstanding knowledge share!
Thank you so much - that means a lot! :D
00:00:00 Introduction
00:00:20 Tip 1: Use Supabase for Scalable AI Agents
00:01:15 Tip 2: Choosing the Right Large Language Models
00:02:15 Tip 3: Extracting Text from Different File Types
00:02:45 Tip 4: Referencing Previous Node Outputs
00:03:30 Tip 5: Building AI Agents as API Endpoints
00:04:30 Tip 6: Handling Multiple Items in a Single Node Output
00:05:30 Tip 7: Using Data Pinning to Save Test Event Outputs
00:06:00 Tip 8: Creating Error Workflows
00:06:45 Tip 9: Using the Schedule Trigger to Automate Workflows
00:07:30 Tip 10: Exploring the n8n Workflow Library
00:08:30 Conclusion
Thank you! This is one of the most helpful videos for n8n out there. Your explanation is clear, and the examples make it so easy to understand. It's amazing how you simplify complex topics. Keep up the great work; your content is invaluable for the community!
Thank you so much Mike! That means a lot to me!
Thanks!
You are so welcome - thank you so much for your support!
W0ot! Happy to say I'm using 9 of these tips already. Gotta get those error workflows going. As always, thank you for your videos. Much of my skill set started with, was rounded out by, or came entirely from you!
I love it - you are so welcome!
I'm seeing all kinds of videos popping up rehashing your work. The Supabase-based RAG is a great example. Just saw two today... what, 2 months after yours dropped? Keep leading the way, my man.
Haha that's awesome, thanks Jose!
Awesome! Thx a lot! I've just used PGVector instead of Supabase, which is working well and fully locally. Thx for all your tutorials, it's all handy and simple!
Glad it helped! Thank you for the kind words!
When you say you will have an AI that will have the knowledge of all these workflows, is that like a fork of Bolt where you ask it to build a particular workflow for you? That would be awesome!
That is one of the end goals! To start, though, it will mostly just be able to reference an existing knowledge base of workflows and give you ones similar to a workflow you describe that you want to build, and it will have extended knowledge of n8n compared to a base LLM. But then yes, the primary goal eventually is to have it build out full workflows for you.
Keep 'em coming!! Are there any special considerations for updating n8n if it was installed with your AI kit?
Thanks Sebastian - will do!
To update the n8n container in the local AI starter kit, you can run the command:
docker compose pull
This will update all the images to the latest versions, including n8n if there is an update!
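(After pulling, you would typically also run docker compose up -d so the containers are recreated with the newly pulled images.)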
This was extremely helpful, thank you!
You bet! :)
Wonderful content! Please keep up the good work!
Thank you! I sure will! :D
This is what I needed. Thank you!
More n8n content if possible!
Of course! Yes - more n8n content coming very soon!
Hi Cole. I was working in n8n cloud today and found they have a Grafana connector. I am dying to try it. I can't find any videos on it. I want to test it with Gemini. I used Gemini on your VectorShift project. My prompt was, "You are a data analyst," etc. So Gemini tells me about all the data analysis it can do, including regression. Sounds pretty advanced. I learned about Google BigQuery, which is one way to connect data to Gemini, but I think we can do the same with n8n or VectorShift. Super cool. Thanks for all the videos!
That is super cool! You are so welcome!
Best content about n8n
Thank you very much :D
Thank you, Cole, for this valuable advice. I'm just starting out in the AI agents world and I'm hesitating between two solutions: which is better and more cost-effective, n8n or Flowise (local)?
You are so welcome! There is actually a really good opportunity to use them together. Flowise has better integrations with LLMs, and then n8n has better integrations with other services (Slack, Google Drive, Asana, etc.). So you can build your agents with Flowise and have the tool workflows in n8n! Both are very cost effective since you can self-host both.
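To make that combination concrete: the usual pattern is to expose the n8n side as a Webhook-triggered workflow and have the agent call it as a tool. A minimal Python sketch of that pattern (the URL and fields are placeholders, not from the video; in Flowise itself you would configure this as a custom tool in its UI):
import requests

# Hypothetical n8n Webhook URL that triggers a "tool" workflow
N8N_WEBHOOK_URL = "https://your-n8n-instance/webhook/create-task"

def create_task_tool(title: str, due_date: str) -> dict:
    """Agent-side tool: hand the actual work (Slack, Drive, Asana...) to n8n."""
    response = requests.post(
        N8N_WEBHOOK_URL,
        json={"title": title, "due_date": due_date},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

print(create_task_tool("Write blog post", "2025-01-31"))
The point is that the agent only needs to make an HTTP call, and n8n does the service integrations behind the webhook.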
You're so cool. Thanks, Cole!
You're welcome! Thank you!
You're truly awesome. Thanks for the tips.
You are so welcome!
You have a great style of teaching. Are you launching a Skool or other coaching-based community?
Thank you very much! I have a Discourse community that I just started which I will be building up instead of a Skool group!
thinktank.ottomator.ai
Lots of good tips, thank you for that!
My pleasure! Glad it was helpful!
More N8N content please ❤
More coming soon!
Another banger🔥
Hi Cole, is it possible to fine-tune an Ollama model with my own dataset using n8n?
Great question! I haven't tried fine-tuning specifically within n8n, but you can call any fine-tuning API from n8n, so you certainly could! It would honestly probably be easier to follow a Python tutorial for this, though.
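For reference, here is a rough Python sketch of the kind of hosted fine-tuning API call you could also trigger from an n8n HTTP Request or Code node. It uses OpenAI's fine-tuning endpoint purely as an example of "calling a fine-tuning API" (it does not produce an Ollama model); the file name and model are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Upload a JSONL dataset (each line: {"messages": [...]}); file name is a placeholder
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a fine-tunable hosted model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)
print(job.id, job.status)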
Amazing video, thank you so much!
Thank you - you bet!
Can you do a comparison video of n8n, Flowise AI, and Langflow? There are so many of these low-code style tools now that it's hard to know which one to use.
Yes I am actually planning on this for next month!
Good job, very useful tips!
Glad it was helpful! Thanks!
You forgot the Gemini models. Honestly, Gemini Flash is surprising me, and I'm starting to replace 4o-mini with it in all my testing for almost free.
Wow if Gemini Flash is actually performing as well as GPT-4o-mini for you that's incredible! Thanks for sharing!
Do you think it is possible to run a llamafile LLM with n8n? If yes... do a video please!
For the vector store, almost all the videos I have seen use the Pinecone vector store. How does Pinecone compare to Supabase? Is Supabase better than Pinecone? Is Pinecone not production-ready? Thx!
Pinecone is definitely production ready as long as you aren't on their free tier! I only prefer Supabase because it's nice to have the SQL database (for agent chat memory) and embeddings for RAG be in the same platform, but Pinecone actually starts to outperform Supabase once you get to a large number (hundreds of thousands) of vectors.
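As a sketch of why having both in one platform is convenient, here is roughly what the two queries could look like with the supabase Python client. The table name, columns, RPC name, and embedding size are assumptions (match_documents is the function name the common Supabase/pgvector setup defines), not something from the video:
from supabase import create_client  # pip install supabase

# Placeholder project URL and key
supabase = create_client("https://your-project.supabase.co", "your-service-role-key")

# 1) Agent chat memory: recent messages for a session (hypothetical table/columns)
history = (
    supabase.table("chat_memory")
    .select("role, content")
    .eq("session_id", "demo-session")
    .order("created_at", desc=True)
    .limit(10)
    .execute()
)

# 2) RAG: pgvector similarity search through an RPC
query_embedding = [0.0] * 1536  # would come from your embedding model
matches = supabase.rpc(
    "match_documents",
    {"query_embedding": query_embedding, "match_count": 5},
).execute()

print(len(history.data), len(matches.data))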
I LOVE YOUR VIDEO! Thank you so muuuuuch!
You're welcome!!
Cole, can your 'local-swarm-agent' be used as an n8n workflow? If so, can you show how?
You certainly could! I'm going to be making agent type workflows for n8n for some content in the future!
Great video. I self-host n8n. Do you have these workflows available?
Thanks man! Self hosting is definitely the way to go!
These workflows were all created as little examples except for the two larger ones that I have videos on and I have the workflows downloadable as JSON files there!
@ColeMedin n8n saves so much time compared to coding from the ground up. Awesome content. Thanks mate, I'll check them out!
It sure does! Thank you!
Hey,
I'm having trouble creating an AI assistant with the Google Calendar tool and local Ollama as the AI agent. I tried to set up the description of the AI agent so it would handle my calendar correctly, but it keeps messing up the event title and start/end times. I would really appreciate a video about it. How would you implement it?
Thanks!
Keep up the good work!
Sounds like an interesting use case! How exactly does it help you manage your calendar? I'd be curious to know more before I speak to what content I would create soon that could relate! Thanks!
@ColeMedin Since then, I managed to make it work. It can create events for me, check my calendar to see what I have going on, and so on...
I have different flows for the calendar-handling agents, because the system prompts are specific to each one, and they respond with a fixed JSON which is the input to the calendar tool.
My plan is to create a router agent in the main flow which decides which agent should handle the task. If I talk about creating an event, it passes the info to the respective agent. Or if I'm just asking something that's in my vector database, it goes down a different path.
But to be honest, it's a bit complicated with Ollama. If I use the new GPT it understands everything easily, but Ollama is sometimes a bit stupid about handling everything.
Glad you figured it out - sounds awesome! Yeah local LLMs are often not going to do as well as GPT-4o or other larger models like Claude 3.5 Sonnet. At least that's the case for now...
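For anyone hitting the same title/start/end issues with a local model, a minimal sketch of the "fixed JSON" idea above using the ollama Python client (the model name and prompts are just examples, not the commenter's actual setup):
import json
import ollama  # pip install ollama

SYSTEM_PROMPT = (
    "Extract the calendar event from the user's message and reply with ONLY "
    'JSON like {"title": "...", "start": "...", "end": "..."} using ISO 8601 times.'
)

resp = ollama.chat(
    model="llama3.1",  # whichever local model you run
    format="json",     # ask Ollama to return valid JSON only
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Dentist appointment tomorrow from 3pm to 4pm"},
    ],
)

event = json.loads(resp["message"]["content"])
# The calendar tool now gets fixed keys instead of free-form text
print(event["title"], event["start"], event["end"])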
Forgive me for asking off-topic, but I'm very curious: what program do you use to create these fancy thumbnails for YouTube?
Good question! I use Photoshop!
Is n8n better than Flowise? I've been a Flowise user but am getting intrigued by the latest n8n content 🤔
Great question! There is actually a really good opportunity to use them together. Flowise has better integrations with LLMs, and then n8n has better integrations with other services (Slack, Google Drive, Asana, etc.). So you can build your agents with Flowise and have the tool workflows in n8n!
That makes a lot of sense, thank you for explaining! Thanks for the great, helpful content as well!
Awesome, you bet!
Thanks !!!!!
Thx for the shift towards Flowise. I'm waiting to dock in port!
Yes I am actually putting out Flowise content soon!
I couldn't use Gemini with the AI Agent and Supabase. Any help please?
Is there a specific error you are getting?
Cole, why use n8n at all? Why not just create agents directly using the relevant APIs in VS Code using Python? Same goes for LangChain. Why even use any of these when an LLM can be called with simple Python code?
Super fair question - thank you! The biggest reason to use n8n is it's still the fastest way to build workflows that integrate with a bunch of different services and AI agents, even with the latest and greatest AI coding assistants out there. It's super easy for non-technical people to do amazing things with it and for more technical people like me (and you too I'm assuming), it still saves a lot of time!
@ColeMedin I'm non-technical actually. But when I look at n8n I don't see how it's more beneficial than using code. To me, every LLM gives direct access to their API, right? Why would the more efficient solution be to use third parties? I guess that's what I'm trying to understand. I have this same issue with trying to understand LangChain.
Yeah I get it! Overall these abstractions are meant to save time because they handle a lot for you. But of course you have less room for customizing with these abstractions, so it's pros and cons between convenience/speed and customizability/transparency.
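For context, the "simple Python code" side of that trade-off really is just a few lines (a sketch with the OpenAI SDK; the model and prompt are arbitrary):
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize these support tickets: ..."}],
)
print(response.choices[0].message.content)
What n8n adds on top of a call like this is the surrounding plumbing: credentials to Slack, Google Drive, Asana, etc., triggers, retries, and scheduling, which is where the time savings come from.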
Also, I'm flabbergasted that Make doesn't also have a pin feature... or maybe I'm not. That's a deal breaker for me.
On my end, "$json" doesn't work with multiple triggers :S
The format is {{ $json.data }}
replace data with the attribute you are trying to access
Top!!!!
Thanks Anna!
Solid content, I just don't need it right now.
That's totally fair! What are you looking for specifically?
@ColeMedin It's just too advanced for me :) I'm just a beginner, but I enjoy watching.
Fair enough, I appreciate the honesty!
I sent you a purchase request email but got no response. Please reply, and thank you.
Sorry which email was that?