Hello, Leon! I'm Brazilian and I consider you the best programming teacher on the internet. Your videos are very well explained and concise. I'm learning programming and I've learned a lot from you. God bless you!
This is just amazing! Thank you. Glad I could help 🙏
Hi Leon, Yes would love a video about passing attachments to flowise via API
Thanks! Will do.
me too!
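For reference while waiting for that video, here is a rough sketch of what passing an image attachment to a chatflow through the Flowise prediction API could look like. The base URL, chatflow ID and the exact `uploads` field layout are assumptions based on the public Flowise docs, so verify them against your own instance and version:

```typescript
// Hedged sketch: send an image to a Flowise chatflow via the prediction API.
// FLOWISE_URL and CHATFLOW_ID are placeholders for your own instance.
import { readFile } from "node:fs/promises";

const FLOWISE_URL = "http://localhost:3000";
const CHATFLOW_ID = "your-chatflow-id";

async function askWithImage(question: string, imagePath: string) {
  // Flowise expects image uploads as base64 data URLs inside an "uploads" array
  const base64 = (await readFile(imagePath)).toString("base64");

  const response = await fetch(`${FLOWISE_URL}/api/v1/prediction/${CHATFLOW_ID}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      question,
      uploads: [
        {
          data: `data:image/png;base64,${base64}`,
          type: "file",
          name: "invoice.png",
          mime: "image/png",
        },
      ],
    }),
  });

  return response.json();
}

askWithImage("What is the total on this invoice?", "./invoice.png")
  .then(console.log)
  .catch(console.error);
```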
I'm a computer science student in South Korea. Your lectures make me happy. I used to consider dropping out of university, but now I'm a genius student here, because of you haha. My professor respects you too. Thank you!!
That is amazing!!
Thank you 🙏
You always dropping jewels Mr Leon 🔥 Would most definitely love to see a video on parsing attachments to flowise API.
Coming soon!
Yes please 👍👍👍👍
I must say, you are brilliant. Thank you for your videos. Greetings from the Laeveld.
Thank you so very much! 🙏
Excellent solution from you once again, Leon.
Thank you!
great workflow - thanks:)
You're welcome!
Hi Leon, I would like to see a dedicated video about passing attachments :)
Noted!
Thank you Leon!
You're welcome
Another great vid!👏
Glad you enjoyed!
Thank you so much Leon! As always, very good content. I want to use the NVIDIA AI API key, but I didn't find it in the list of credentials. Thanks for your help.
How do you deploy NVIDIA's AI models as an API using Flowise?
How do you install and use Llama on a cloud PC, like DigitalOcean?
Hi Leon, thanks for sharing the Flowise-related videos. For this video, could you share what environment you ran Ollama + llama3.2-vision on, and how much VRAM it needs?
I have an RTX 4070.
Hi Leon, loved this tutorial. Can you deploy using these LLM models?
You could use Groq to run these models in production.
Alternatively you can try to self host Ollama, but the hardware requirements might make this an expensive option.
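As a rough illustration of the Groq route, the sketch below calls a hosted Llama model through Groq's OpenAI-compatible chat completions endpoint. The model name and environment variable are assumptions; check Groq's current model list and your own key setup:

```typescript
// Minimal sketch of calling a Llama model on Groq instead of local Ollama.
// The model name below is an assumption - pick one from Groq's model list.
const GROQ_API_KEY = process.env.GROQ_API_KEY; // your Groq API key

async function chatWithGroq(prompt: string) {
  const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${GROQ_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // assumed model name
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await response.json();
  return data.choices?.[0]?.message?.content;
}

chatWithGroq("Summarise the benefits of hosted inference.").then(console.log);
```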
Hi Leon, running locally and using vision would slow down the system too much. What about a GPU installation of the same? Which GPU specs would you recommend? Thanks.
Thank you Leon, another very interesting video. I'd like to know if you've also tried connecting the Llama models to a database, like MS SQL? Could you make a video of such an integration?
Thank you!
Yes, Llama 3.2 supports tool calling, which includes interacting with databases.
I'll create a SQL video soon
@@leonvanzyl Thanks for your reply, eagerly waiting. Please try to use an MS SQL Server database.
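To illustrate the tool-calling idea, here is a hedged sketch of Llama 3.2 requesting a database query through Ollama's /api/chat endpoint. The "run_sql_query" tool and the commented-out query helper are hypothetical; in a real integration the query would be executed against MS SQL (for example via the mssql package) and the results passed back to the model:

```typescript
// Hedged sketch of tool calling with a local Llama 3.2 model via Ollama.
// The tool name/schema and the DB helper are hypothetical placeholders.
const OLLAMA_URL = "http://127.0.0.1:11434";

const tools = [
  {
    type: "function",
    function: {
      name: "run_sql_query", // hypothetical tool name
      description: "Run a read-only SQL query against the sales database",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string", description: "The SQL query to run" },
        },
        required: ["query"],
      },
    },
  },
];

async function ask(question: string) {
  const response = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [{ role: "user", content: question }],
      tools,
      stream: false,
    }),
  });

  const data = await response.json();
  // If the model decided to call the tool, the SQL it wants to run is here.
  const calls = data.message?.tool_calls ?? [];
  for (const call of calls) {
    console.log("Model requested query:", call.function.arguments.query);
    // executeQuery(call.function.arguments.query)  <- hypothetical MS SQL helper
  }
}

ask("How many orders were placed last month?").catch(console.error);
```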
What hardware should I have on my PC to use this?
Groq has the Llama 3.2 vision model. Does the Groq node in Flowise have the image upload option?
Good question. I just checked and the node does not include image uploads.
However, I don't think it's a shortcoming in FW. Looking at the Groq API documentation, it doesn't seem like *they* support images yet.
I'll keep an eye on this and will create a video once it is supported by Groq.
@ But then what are the vision models available on the web for? I think it is supported.
Hey Leon! Is it suitable for multi-page PDF invoices?
The vision model is meant for images. I have a separate video on other files, like PDFs, that you might be interested in
@@leonvanzyl Could you please share the link? What I know is the 'chat with PDF' approach, and I'm looking for structured PDF parsing with a certain number of entities.
How can we use Llama 3.2 Vision through an API instead of running Ollama locally?
Does this setup work on Flowise Cloud? I get a "Fetch failed" message when I run a simple "Hello" test. Thanks.
Keep in mind that FW Cloud wouldn't be able to access Ollama on your local machine
How come I don't see Allow Image Uploads in my ChatOllama model?
You probably need to update your FW instance
Hi Leon, I tried your tutorial to run Llama 3.2 Vision locally, but I get a "fetch failed" response. I did follow your steps to download Ollama and run ollama run llama3.2-vision:11b. Do I need a GPU to run the 11b model?
That usually happens when Ollama is not running. In the terminal, try running "ollama serve".
Hello Leon, unfortunately I did upgrade to the latest Flowise version 2.1.5, but I still can't see the vision option in the Ollama LLM node. How can I fix it?
You need to use the chat node, not the LLM node.
Use the ChatOllama node.
@ Yes, I used the chat node, not the LLM node, but the vision option is not there; there is only a section to add the name of the LLM, the temperature, and a prompt button.
Sorry, yes I used ChatOllama, but the vision option is not there. I also reinstalled Flowise on my Mac laptop, but this feature is still not available. I really don't know why.
@@karimsaid1549 Please try npm update -g flowise, the update may find the plugin.
Hi Leon, thank you for all your great videos. However, when I try to include ChatOllama in the Flowise chain, I always get a "fetch failed" error as the chatbot answer. If I use OpenAI, everything is fine. I cannot figure out what the problem is. Obviously I have Ollama and the models installed, and when I use them from the console everything works fine. Has it happened to you as well? Any hints would be highly appreciated. Thanks!
Do you see any errors in the Flowise logs?
Did you set up Ollama the same way I did, or are you running it in a Docker container or something?
I can only imagine that the URL might be different.
You could also try to run the command "ollama serve" in the command prompt to ensure that the ollama server is running.
@@leonvanzyl Very thankful for your response! I did set up Ollama like you did. Very basic - no Docker. I can do everything with it from the command prompt (like you showed in the videos) - only Flowise does not function with it.
However, when I tried your command "ollama serve" I got the message: Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted ?!?
Sorry to bother you. Would be so happy to get it to run!!!
@@leonvanzyl Thank you very much for your answer. I appreciate it very much! I was finally able to fix it. Dumb error: the ChatOllama Base URL has to be 127.0.0.1:11434 (at least in my setup, instead of localhost...)
Now it finally works. Very happy. Will keep on exploring.
@@tommoves9935 I have exactly the same issue ....
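For anyone hitting the same "fetch failed" error, a quick way to confirm the fix above is to check that Ollama is reachable on the exact base URL you give the ChatOllama node. The 127.0.0.1:11434 address is Ollama's default; this is just a sanity-check sketch, not part of the tutorial:

```typescript
// Sanity check: confirm Ollama answers on the base URL used by ChatOllama.
const BASE_URL = "http://127.0.0.1:11434";

async function checkOllama() {
  // The root endpoint returns a short "Ollama is running" message when the
  // server is up; /api/tags lists the models that have been pulled.
  const root = await fetch(BASE_URL);
  console.log(await root.text());

  const tags = await fetch(`${BASE_URL}/api/tags`);
  const { models } = await tags.json();
  console.log("Installed models:", models.map((m: { name: string }) => m.name));
}

checkOllama().catch((err) => console.error("Ollama is not reachable:", err));
```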
Can you please prepare a video on creating an offline bot that can generate code based on technical product training videos?
That's a cool idea! Thank you
Thx
You're welcome
but FlowiseAI is not free?
It is. It's open source and free to use. You can self host it as well.
Have a look at my Flowise Tutorial series to learn how to run it locally or in the cloud.
You might be referring to their fully managed cloud service.
@@leonvanzyl many thanks! will do, have a nice day : )
I don't know why you refuse to respond to my attempts to hire you for a project, but I'm sincerely disappointed, Leon.
Hey Bird Man Phil.
I'm really sorry about that. Must admit, I'm very behind on emails and making drastic changes and bringing in help to improve things for the new year.
Did you send an email to my Gmail account?
@leonvanzyl yes a few times