Flowise rocks! Henry, you have the best AI tool out there, hands down! I've tried them all: Botpress, Langflow, and a few others. Definitely the best of the best. Keep up the awesome work! 👍
True
This is such a thorough explanation supported by solid documentation. Thank you!
I am very glad to have found your channel. The explanations from H. Heng are very clear and understandable, especially regarding deployment and using the FAISS vector database in a local environment. Perhaps it would be possible to have a deep-dive tutorial on how to develop an LLM app with Flowise and the FAISS vector database?
This is awesome, thank you! :)
Perfect walkthrough, recommend everyone
Awesome! Hey Henry we get to see you! 😉
Great video. Thank you!
Henry is the GOD of Ai!
Great video.
Quick question,
I am trying to call Flowise running on Kubernetes (GCP). When I make concurrent (say, 2) HTTP requests to Flowise using curl, I see the answers being mismatched (question 2 gets the answer to question 1). How do I solve it?
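One common cause of crossed answers is shared conversation memory: if concurrent requests hit the same chatflow without distinct session ids, one caller's context can bleed into another's reply. A minimal sketch, assuming the Flowise prediction endpoint accepts `overrideConfig.sessionId` (verify against your Flowise version's API docs; the URL and chatflow id are placeholders):

```python
# Sketch: give each concurrent caller its own session so answers don't cross over.
# ASSUMPTION: the prediction endpoint honors overrideConfig.sessionId; the URL
# and chatflow id below are placeholders, not real values.
import json
import uuid
import urllib.request

FLOWISE_URL = "http://your-flowise-host/api/v1/prediction/your-chatflow-id"  # placeholder

def build_payload(question: str) -> dict:
    """Attach a fresh sessionId so each request keeps separate memory."""
    return {
        "question": question,
        "overrideConfig": {"sessionId": str(uuid.uuid4())},
    }

def ask(question: str) -> bytes:
    """POST one prediction request (not executed here; needs a live server)."""
    req = urllib.request.Request(
        FLOWISE_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Two concurrent callers now carry distinct session ids:
p1, p2 = build_payload("Q1"), build_payload("Q2")
assert p1["overrideConfig"]["sessionId"] != p2["overrideConfig"]["sessionId"]
```

If your curl test reuses one session (or no session at all), generating a unique id per client in the same way should keep the answers separated.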
Flowise Rocks! could you please explain how to deploy chroma locally and use it in flowise?
Great that Flowise can handle up to 1,000 req/s on a basic DO instance, but I'm wondering how to deal with OpenAI rate limits. GPT-3.5 manages 3.5k requests per minute, GPT-4 only 200; there are also tokens-per-minute limits and server-overload errors. Does Flowise handle these exceptions? LangChain provides a maxConcurrency option, but I'm not sure it's implemented in Flowise. I also have another question, not related to concurrency: does Flowise support OpenAI function calling?
That would be something to check with the Flowise team.
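Until that's confirmed, provider rate limits can be handled on the client side with retries and exponential backoff. A minimal sketch; `RateLimited` and `retry_with_backoff` are illustrative names, not Flowise or LangChain APIs:

```python
# Sketch: client-side exponential backoff for 429-style rate-limit errors,
# in case the server side doesn't queue requests for you.
# ASSUMPTION: RateLimited stands in for whatever error your HTTP client
# raises on an HTTP 429 from the model provider.
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 / rate-limit error."""

def retry_with_backoff(call, max_retries=5, base_delay=0.01):
    """Retry `call` on RateLimited, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a fake call that is throttled twice, then succeeds:
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RateLimited()
    return "ok"

assert retry_with_backoff(flaky) == "ok"
assert state["calls"] == 3
```

In production you would also cap total concurrency (the role LangChain's maxConcurrency plays) so you stay under the requests-per-minute and tokens-per-minute ceilings rather than relying on retries alone.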
Yes, Flowise supports OpenAI function calling.
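For reference, this is the shape of the OpenAI function-calling "tools" schema that such blocks build under the hood; `get_order_status` and its parameter are made-up examples, not anything Flowise defines:

```python
# Sketch: the OpenAI chat-completions "tools" schema for function calling.
# The function name and parameters below are illustrative examples only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the status of an order by its id",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "Order identifier",
                    },
                },
                "required": ["order_id"],
            },
        },
    }
]

assert tools[0]["function"]["name"] == "get_order_status"
```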
How do we add a document uploader option for users in Flowise?
What I am trying to achieve with Flowise is mixing function calling with vector data access, so it answers based on PDF data but also has access to other functions, all in the same flow. I have achieved this, but only through a hack (setting up a function call to another flow that does the vector search and returns the output to the main flow). There must be a better way.
You could try an agent that does the retrieval and also has access to other functions as tools. The OpenAI function block works well.
@@menloparklab That's what I've done, and it works, but it feels more like a hack than a solution. I haven't seen anyone do a post or video on mixing functions and vector DB knowledge.
I'm actually doing the same. Have you figured out a better way?
@@NexusDwynged No, but I might try again this week. This is what clients REALLY want: to be able to answer questions AND capture leads, in one flow!
I'm literally stuck on this as well
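The agent-with-retrieval-as-a-tool approach suggested in this thread can be sketched outside Flowise to make the idea concrete. The tool functions and dispatch table below are illustrative stand-ins for what an agent node wires up internally, not Flowise APIs:

```python
# Sketch: one agent loop where vector retrieval is just another tool next to
# lead capture, so a single flow can answer from PDFs AND capture leads.
# ASSUMPTION: search_docs and capture_lead are made-up stand-ins; a real flow
# would back them with a vector store query and a CRM/webhook call.

def search_docs(query: str) -> str:
    """Stand-in for a vector-store similarity search over the PDF data."""
    return f"top passages for: {query}"

def capture_lead(name: str, email: str) -> str:
    """Stand-in for a lead-capture function call."""
    return f"lead saved: {name} <{email}>"

# The agent exposes both as tools; the model picks which one to call.
TOOLS = {"search_docs": search_docs, "capture_lead": capture_lead}

def dispatch(tool_name: str, args: dict) -> str:
    """What the agent loop does when the model returns a function call."""
    return TOOLS[tool_name](**args)

assert dispatch("search_docs", {"query": "refund policy"}).startswith("top passages")
assert "lead saved" in dispatch("capture_lead", {"name": "Ana", "email": "a@b.c"})
```

Because both capabilities sit behind the same tool interface, no second flow is needed: the model routes each turn to retrieval or lead capture on its own.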
How come my Flowise doesn't have some of the features that Henry has? I thought I had the latest version. Can I update those modules?
Yes you can.
Please update Flowise.
I have a question for people who might have used this for real use cases.
Is Flowise really good for production-level use in businesses? Is it scalable enough?
Yes, it is good for production-level use cases, and can be scaled.
The Flowise API responds really slowly, 20 to 28 seconds per request, installed on a basic DigitalOcean Droplet. Any clue?
I'm assuming the customer pays for the OpenAI API use, but how do you charge them for that? Is the OpenAI API bill automatically charged to them, in their name? How do I set that up? Or how do I predict how much OpenAI will charge them, so I can give them an estimated quote to decide whether it's worth it or affordable for them?
You get an API key from your own OpenAI account and it is charged directly to your account.
Is there support for GPT-4 in Flowise?
Yes there is. You can select any of the GPT models.
No wonder the video is so long! Half of it is him frozen. 😂