This is really inspiring! Thank you for keeping us updated with the latest tech
I've been working on this myself using your YouTube data as a base for Gemini, and now here you are delivering.
Concise and straightforward. Excellent walkthrough and breakdowns. Thank you.
Good stuff, Nate. Also, I like the jazz background music for Sunday builds.
Thanks for such valuable content, you're the best, much appreciated 🤝👍
Thank you!
very nice tutorial, well explained, ty!
How do you handle large unstructured documents that would need a large context window for accurate results? For example, would this workflow work effectively on a large text file containing exported chats from a WhatsApp group chat? Would I be able to get an accurate result if I prompted the chat in the workflow you've shown to export all the names of the people in JSON format?
After your "limit" node, couldn't you just feed back into the original "downloading file" node instead of creating a new one?
I had played around with this, but for simplicity's sake I split them up. I was running into issues trying to reference the file ID to put it into the metadata in Supabase.
I was playing around with operators like ?? and ||, but I wanted to keep this tutorial more straightforward.
Great point though, thank you!
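For anyone hitting the same file ID issue, here is a minimal sketch of that kind of ?? / || fallback inside an n8n Code node. The field names (id, fileId, name, originalFilename) and the metadata shape are placeholders, not necessarily what the video uses:

```js
// Hypothetical Code node placed after the download step ("Run Once for All Items").
// Field names below are assumptions; check what your trigger actually outputs.
const item = $input.first().json;

// ?? only falls back when the left side is null/undefined;
// || also falls back on empty strings, 0, and false.
const fileId = item.id ?? item.fileId ?? null;
const fileName = item.name || item.originalFilename || 'untitled';

// Pass the values along so a later Supabase step can store them as metadata.
return [{ json: { ...item, metadata: { file_id: fileId, file_title: fileName } } }];
```

The practical difference between the two operators is that ?? treats an empty string as a real value while || skips it, which matters when a trigger hands back '' for a missing field.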
How do you deal with multiple files uploaded at once?
What is the benefit of n8n over Make?
Great video Nate!
Thank you!
If you indicated a budget of 0 instead of “we have no money”, it would probably be more accurate.
I wonder if it's possible to make a flow that uses PGVector to memorize important personal things during a conversation with the AI.
I don't need the documents; I need the AI to decide whether a piece of information is important and then update PGVector with it, so it can pull it back up if I ask about it next year :-)
For example, today in a chat session I tell the AI that I just bought a mobile phone, model XYZ... next year I want to ask the AI "which model of mobile phone do I have, and how long have I used it?"
Is this possible (ideally locally)? Do you have a video that's close to what I'm asking, please?
Thanks a lot
Can we build multiple RAG AI agents in one workflow?
What would be the use case? How would you want to interact with each one?
Is this faster than just putting the Google Drive folder into the agent's AI tools so it can search the docs and answer questions based on the files?
Yes, having the data in a vector database will be much more efficient, especially as you start to continuously add more information.
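To make the "more efficient" point concrete, here is a toy sketch (plain JavaScript, made-up vectors, no real embedding model or Supabase calls) of why pre-computed embeddings beat re-reading every Drive file per question: the documents are embedded once at ingestion, and each question only needs a similarity lookup over the stored vectors.

```js
// Toy illustration: in the real workflow these embeddings would live in
// Supabase/pgvector and come from an embedding model at ingestion time.
const store = [
  { text: 'Refund policy: 30 days, no questions asked.', embedding: [0.12, 0.88, 0.05] },
  { text: 'Standard shipping takes 3-5 business days.', embedding: [0.75, 0.10, 0.40] },
];

// Cosine similarity between two vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// At question time only the query gets embedded (vector made up here),
// then we rank the stored chunks -- no need to open every source file again.
const queryEmbedding = [0.10, 0.90, 0.00];
const best = store
  .map(doc => ({ ...doc, score: cosine(queryEmbedding, doc.embedding) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.text, best.score.toFixed(3));
```

A database like pgvector does the same ranking with an index instead of a linear scan, which is what keeps lookups fast as you keep adding documents.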
“Got even better,” I thought n8n got some updates or something, lol 😅. How are the hallucinations when you have complex data?
From my POV, they got much better as I haven’t explored this method yet 😅
Hallucinations always happen, but that's where iterating and refining come into play; a huge percentage of your time should be spent here when building out workflows.
I keep getting this error when I run the SQL query on Supabase:
ERROR: 42710: extension "vector" already exists
This is likely because you've already run that setup SQL once, so the "vector" extension has already been created
Just tested: 'file updated' also triggers on new files
Try splitting them into two different workflows or making sure everything is configured correctly within Google Drive
First one!
🔥
Monthly cost for running an efficient RAG system like this?
It's really going to depend on how much data you have; hard for me to give you a ballpark without knowing more
It'd be helpful if you outlined the total monthly cost of the basic tech stack needed to run a small business with a manager agent that uses 4+ support agents.