In this video, I’m excited to showcase how you can harness the power of two leading AI-driven platforms in the no-code/low-code space. Discover how to seamlessly integrate AI agent building with AI automation workflows using n8n and Flowise, step by step.
This masterclass will provide valuable insights into what’s possible when you combine these cutting-edge tools, empowering you to unlock new levels of efficiency and innovation.
I hope you find this content helpful! Feel free to share your thoughts in the comments; I’d love to hear your feedback and ideas.
Love this video.
Please make a complete tutorial on an AI SDR agent that manages a cold outbound system end to end without human intervention. Is that possible?
Glad you found this video valuable! Great suggestion; yes, it's possible, but I'm afraid a complete end-to-end AI SDR build might not fit into one video. A fundamentals build, maybe? I'll look into this type of content soon! Thank you, man.
This is awesome. Thank you! Will be watching this over and over. Very valuable.
You're awesome as well! Hope you get some valuable bits out of this tutorial. Happy building!
Can you add Groq somewhere in there to improve the speed? Also, in custom tools where an API key or bearer token is required, where do I add env variables in Flowise Cloud? Or do I hardcode them in the JavaScript of the custom tool node?
@@MH-xx6df Groq or Llama for speed is a great suggestion; however, my concern there is the reliability of the coding capability, which is the main point of this video. As explained in the video, I used the latest Claude since it is well suited to code use in data analytics and visualization. I haven't tried setting up env variables for credential tokens yet, but I'd guess it's doable since Flowise and n8n are meant for production use (see the sketch after this thread). Great idea, though; I'll look into this and maybe create a video on these types of solutions in the future.
@@limitlessai_media Cool, and yeah, I certainly get all the coding capabilities of 3.5 Sonnet; I use it all day long. Just thought there might be a way to have Groq help. Sorry, was thinking out loud.
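For anyone wondering about the env-variable question above, here is a minimal sketch of a Flowise custom tool function body that pulls a bearer token from a Flowise Variable instead of hardcoding it. It assumes your Flowise version exposes Variables to custom tools as $vars.<name>, that the tool's input schema defines a "query" property (available as $query), and that the endpoint URL is a placeholder; check the Flowise docs for your version before relying on it.

```javascript
/*
 * Flowise custom tool function body (sketch, not from the video).
 * Assumes: a Flowise Variable named SERVICE_API_KEY exists and is exposed
 * to custom tools as $vars.SERVICE_API_KEY, and an input schema property
 * "query" is exposed as $query. The endpoint URL is a placeholder.
 */
const token = $vars.SERVICE_API_KEY; // pulled from Flowise Variables, not hardcoded

const response = await fetch('https://api.example.com/v1/search', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`, // bearer token injected at runtime
  },
  body: JSON.stringify({ q: $query }),
});

if (!response.ok) {
  return `Request failed with status ${response.status}`;
}

// Custom tools should return a string for the agent to consume
return JSON.stringify(await response.json());
```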
Very cool. I’m glad to see you chose open-source apps such as n8n and Flowise. Is there a reason to use Airtable instead of Google Sheets?
Most importantly, how did you learn to build these flows?
Thanks for watching the video @vitalis! For this specific demo build there really is no advantage to using Airtable over Google Sheets, since we're only pulling a minimal amount of data and Google Sheets would be enough. I used Airtable in this tutorial to show what's possible when building this project for a production environment, where scalability and stability are of the essence for the applications you are going to build.
I learned to build these flows by messing around and being passionate about AI.
@ Looking forward to seeing this channel grow. I've been looking for Airtable alternatives with AI integration for automation, but there's only Baserow. NocoDB and Teable are coming up. I'm trying Rows now, but agentic workflows are the next level.
Nicely done - instant liked and subbed! Keep up the good work, Jeff.
Thank you @SoloJetMan, glad you liked this video! Hope you got a few bits of value out of this tutorial.
Can I use a real-time data feed with low latency in this structure? What if I implement swing-trade instructions where a recognized trigger sends an alert on Telegram or anything else? I mean, could this run 1000x per second? Like Jigsaw, ExoCharts, money flow, and other data?
1000x per second might not be possible; you are relying on the tokens-per-minute response rate of an LLM. The data-scraping part is good for faster, almost real-time execution, but for the part where analytics and visualization are executed, LLMs are not able to keep up, as their rate limits are imposed on a per-minute basis.
It's doable if you scale into a production-level setup, but you need enterprise privileges to get a higher tokens-per-minute rate limit.
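To make the constraint concrete, here is a minimal pacing sketch (not from the video) that spaces out LLM calls so estimated usage stays under a tokens-per-minute budget. The callModel and estimateTokens functions are hypothetical placeholders for whatever client and tokenizer you actually use, and the budget figure is just an example tier limit.

```javascript
/*
 * Minimal tokens-per-minute pacing sketch.
 * callModel() and estimateTokens() are hypothetical placeholders for your
 * actual LLM client and tokenizer; adjust TPM_BUDGET to your plan's limit.
 */
const TPM_BUDGET = 80_000;          // example tokens-per-minute limit
let windowStart = Date.now();
let tokensUsedThisWindow = 0;

async function rateLimitedCall(prompt) {
  const estimate = estimateTokens(prompt) + 1_000; // prompt + rough response allowance

  // Start a fresh 60-second window when the current one has elapsed
  if (Date.now() - windowStart >= 60_000) {
    windowStart = Date.now();
    tokensUsedThisWindow = 0;
  }

  // If this call would exceed the budget, wait for the next window
  if (tokensUsedThisWindow + estimate > TPM_BUDGET) {
    const waitMs = 60_000 - (Date.now() - windowStart);
    await new Promise((resolve) => setTimeout(resolve, Math.max(waitMs, 0)));
    windowStart = Date.now();
    tokensUsedThisWindow = 0;
  }

  tokensUsedThisWindow += estimate;
  return callModel(prompt); // hypothetical LLM client call
}
```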
I don't get it. Why do we need this when I can analyze a dataset with ChatGPT? I can load the CSV file and ChatGPT will generate a summary with relevant charts.
For personal use, maybe? For the ability to scale and customize how the system fits your needs? For data security? Or for some real-world application where data is supposed to be protected, rather than letting a third-party company like OpenAI handle your organization's data?
Just my two cents, but well, there's always ChatGPT and you can do whatever you like, man. Anyway, thanks for watching the video; I appreciate it so much.
Why not use n8n's AI features and skip Flowise entirely?
I would love to, if only n8n already had an integration for the E2B sandbox code interpreter tool. That amazing tool has only just become available in Flowise. This video is meant for no-code/low-code people; if you do this in n8n, you might need a coded solution to integrate a sandbox code interpreter like E2B and have an agent access it. Doable, but it requires you to code.
@limitlessai_media One should be able to implement it with the API; import the "curl" request. I am no coder, but it's pretty easy to set up.
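For anyone curious how that could look in practice, here is a rough sketch of calling E2B from a self-hosted n8n Code node set to "Run Once for Each Item". It assumes the @e2b/code-interpreter npm package is installed and allowed via NODE_FUNCTION_ALLOW_EXTERNAL, and that E2B_API_KEY is set in the environment; the SDK method and field names follow the E2B docs at the time of writing and may differ between versions, so treat this as a starting point rather than a drop-in solution.

```javascript
/*
 * n8n Code node sketch (self-hosted): run LLM-generated Python in an E2B sandbox.
 * Assumptions: @e2b/code-interpreter is installed and allowed via
 * NODE_FUNCTION_ALLOW_EXTERNAL, E2B_API_KEY is set in the environment,
 * and the incoming item has a "generatedCode" field from an upstream node.
 * SDK names (Sandbox.create, runCode, kill) may vary between versions.
 */
const { Sandbox } = require('@e2b/code-interpreter');

const sandbox = await Sandbox.create();                        // spin up an isolated sandbox
const execution = await sandbox.runCode($json.generatedCode);  // execute the code in the sandbox
await sandbox.kill();                                          // tear the sandbox down

// Pass the captured output on to the next node
return {
  json: {
    stdout: execution.logs.stdout,
    stderr: execution.logs.stderr,
  },
};
```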