Build AI Vision Apps FREE: FlowiseAI + Llama 3.2 Vision

  • Published 25 Jan 2025

COMMENTS •

  • @toquevascaino
    @toquevascaino 1 month ago +1

    Hello, Leon! I'm Brazilian and I consider you the best programming teacher on the internet. Your videos are very well explained and concise. I'm learning programming and I've learned a lot from you. God bless you!

    • @leonvanzyl
      @leonvanzyl  1 month ago +1

      This is just amazing! Thank you. Glad I could help 🙏

  • @WayneBruton
    @WayneBruton 2 months ago +17

    Hi Leon, yes, I would love a video about passing attachments to Flowise via the API.
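
    For readers who want to try this before a dedicated video lands: a minimal sketch of the Flowise prediction API's "uploads" field, assuming a vision-enabled chatflow. The chatflow ID, file name, and base64 data below are placeholders:

        curl http://localhost:3000/api/v1/prediction/<your-chatflow-id> \
          -X POST \
          -H "Content-Type: application/json" \
          -d '{
            "question": "What is in this image?",
            "uploads": [{
              "data": "data:image/png;base64,<base64-encoded-image>",
              "type": "file",
              "name": "photo.png",
              "mime": "image/png"
            }]
          }'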

  • @handsomguymin
    @handsomguymin 1 month ago

    I'm a computer science student in South Korea. Your lectures make me happy.
    I used to consider dropping out of university, but now I'm a genius student here,
    because of you haha.
    My professor respects you too. Thank you!!

    • @leonvanzyl
      @leonvanzyl  1 month ago

      That is amazing!!
      Thank you 🙏

  • @wolfanalysis4860
    @wolfanalysis4860 2 months ago +3

    You're always dropping jewels, Mr Leon 🔥 Would most definitely love to see a video on passing attachments to the Flowise API.

  • @mickipixel
    @mickipixel 1 month ago

    I must say, you are brilliant. Thank you for your videos. Greetings from the Lowveld

    • @leonvanzyl
      @leonvanzyl  1 month ago

      Thank you very much! 🙏

  • @youtubeccia9276
    @youtubeccia9276 2 months ago +1

    Excellent solution again from you, Leon.

  • @micbab-vg2mu
    @micbab-vg2mu 2 months ago +1

    Great workflow - thanks :)

  • @choistella5863
    @choistella5863 2 months ago +1

    Hi Leon, I would like to see a dedicated video about passing attachments :)

  • @zkiyyeller3525
    @zkiyyeller3525 2 months ago

    Thank you Leon!

  • @mikew2883
    @mikew2883 2 months ago

    Another great vid!👏

  • @iamliam1241
    @iamliam1241 2 months ago

    Thank you so much, Leon! As always, very good content. I want to use the NVIDIA AI API key, but I didn't find it in the list of credentials. Thanks for your help.

    • @iamliam1241
      @iamliam1241 2 months ago

      How to deploy NVIDIA's AI models as an API using Flowise.

  • @pushingpandas6479
    @pushingpandas6479 2 months ago +1

    How do you install and use Llama on a cloud PC, like DigitalOcean?
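
    For anyone attempting this, a minimal sketch for a generic Ubuntu server, assuming Ollama's official Linux install script and the model tag used in the video:

        # Install Ollama with the official install script
        curl -fsSL https://ollama.com/install.sh | sh

        # Pull the vision model shown in the video
        ollama pull llama3.2-vision:11b

        # Bind the API to all interfaces so a remote Flowise instance can reach it
        OLLAMA_HOST=0.0.0.0 ollama serve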

  • @zhiwei1471
    @zhiwei1471 1 month ago

    Hi Leon, thank you for sharing the Flowise-related videos. For this video, can I ask what environment you ran Ollama + llama3.2-vision on, and how much VRAM it has?

  • @WayneBruton
    @WayneBruton 2 months ago

    Hi Leon, loved this tutorial. Can you deploy using these LLM models?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      You could use Groq to run these models in production.
      Alternatively, you can try to self-host Ollama, but the hardware requirements might make this an expensive option.
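
      For context, Groq exposes an OpenAI-compatible endpoint, so a hosted setup can be tested with a single request. A minimal sketch, where the model name is only an example and GROQ_API_KEY is a placeholder:

          curl https://api.groq.com/openai/v1/chat/completions \
            -H "Authorization: Bearer $GROQ_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{
              "model": "llama-3.3-70b-versatile",
              "messages": [{"role": "user", "content": "Hello"}]
            }'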

  • @khalidkifayat
    @khalidkifayat 1 month ago

    Hi Leon,
    Running locally and using vision would slow down the system too much. What about a GPU installation of the same? Which GPU specs would you recommend?
    Thanks

  • @univer6979
    @univer6979 2 months ago

    Thank you, Leon, for another very interesting video. I'd like to know if you've also tried connecting the Llama models to a database - MS SQL? Could you make a video of such an integration?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      Thank you!
      Yes, Llama 3.2 supports tool calling, which includes interacting with databases.
      I'll create a SQL video soon

    • @univer6979
      @univer6979 2 months ago

      @@leonvanzyl Thanks for your reply, eagerly waiting. Please try to use an MS SQL Server database.

  • @ott0class
    @ott0class 2 months ago

    What hardware should I have on my PC to use this?

  • @juanignaciocolella5665
    @juanignaciocolella5665 1 month ago

    Groq has the Llama 3.2 vision model; does the Groq node in Flowise have the image upload option?

    • @leonvanzyl
      @leonvanzyl  1 month ago

      Good question. I just checked and the node does not include image uploads.
      However, I don't think it's a shortcoming in FW. Looking at the Groq API documentation, it doesn't seem like *they* support images yet.
      I'll keep an eye on this and will create a video once it is supported by Groq.

    • @juanignaciocolella5665
      @juanignaciocolella5665 1 month ago

      @ But then what are the vision models available on the web for? I think it's supported.

  • @SuperLiberty2008
    @SuperLiberty2008 2 months ago +1

    Hey Leon! Is it suitable for multi-page PDF invoices?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      The vision model is meant for images. I have a separate video on other files, like PDFs, that you might be interested in.

    • @SuperLiberty2008
      @SuperLiberty2008 2 months ago

      @@leonvanzyl Could you please share the link? What I know is the 'chat with PDF' approach, and I'm looking for structured PDF parsing with a certain number of entities.

  • @CreativAItion
    @CreativAItion 2 months ago

    How can we use Llama 3.2 Vision via an API instead of Ollama locally?

  • @mdmanalytics
    @mdmanalytics 1 month ago

    Does this setup work on Flowise Cloud? I get a "Fetch failed" message when I run a simple "Hello" test. Thanks.

    • @leonvanzyl
      @leonvanzyl  1 month ago

      Keep in mind that FW Cloud wouldn't be able to access Ollama on your local machine.

  • @CaspianStudio
    @CaspianStudio 1 month ago

    How come I don't see Allow Image Uploads in my ChatOllama model?

    • @leonvanzyl
      @leonvanzyl  1 month ago

      You probably need to update your FW instance.

  • @ganapathyshankar2994
    @ganapathyshankar2994 2 months ago

    Hi Leon, I tried your tutorial to run Llama 3.2 Vision locally, but I get a "fetch failed" response. I did follow your steps to download Ollama and run ollama run llama3.2-vision:11b. Do I need a GPU to run the 11b model?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      That usually happens when Ollama is not running. In the terminal, try running ollama serve.
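
      A quick way to confirm the server is reachable before retesting in Flowise (11434 is Ollama's default port):

          # Prints "Ollama is running" when the server is up
          curl http://127.0.0.1:11434/

          # If it isn't, start it manually
          ollama serve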

  • @karimsaid1549
    @karimsaid1549 2 months ago

    Hello Leon, unfortunately I did upgrade to the latest Flowise version 2.1.5, but I still can't see the vision option in the Ollama LLM node. How can I fix it?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      You need to use the chat node, not the LLM node.
      Use the ChatOllama node.

    • @karimsaid1549
      @karimsaid1549 2 months ago

      @ Yes, I used the chat node, not LLM, but the vision option isn't there - only a section to add the name of the LLM, the temperature, and a prompt button.

    • @karimsaid1549
      @karimsaid1549 2 months ago

      Sorry, yes, I used Ollama Chat, but the vision option isn't there. I also reinstalled Flowise on my Mac laptop, but this feature is still not available. I really don't know why.

    • @Daniel-Liu90
      @Daniel-Liu90 1 month ago

      @@karimsaid1549 Please try npm update -g flowise; the update may pick up the plugin.

  • @tommoves9935
    @tommoves9935 2 months ago

    Hi Leon, thank you for all your great videos. However, when I try to include ChatOllama in the Flowise chain, I always get a "fetch failed" error as the chatbot answer. If I use OpenAI, everything is fine. I cannot figure out what the problem is. Obviously I have Ollama and the models installed, and when I use them from the console everything works fine. Has this happened to you as well? Any hints would be highly appreciated. Thanks!

    • @leonvanzyl
      @leonvanzyl  2 months ago

      Do you see any errors in the Flowise logs?
      Did you set up Ollama the same way I did, or are you running it in a Docker container or something?
      I can only imagine that the URL might be different.
      You could also try running the command "ollama serve" in the command prompt to ensure that the Ollama server is running.

    • @tommoves9935
      @tommoves9935 2 months ago

      @@leonvanzyl Very thankful for your response! I did set up Ollama like you did. Very basic - no Docker. I can do everything with it from the command prompt (like you showed in the videos) - only Flowise does not function with it.
      However: I tried your command "ollama serve" and get the message: Error: listen tcp 127.0.0.1:11434: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted ?!?
      Sorry to bother you. Would be so happy to get it to run!!!

    • @tommoves9935
      @tommoves9935 2 months ago

      @@leonvanzyl Thank you very much for your answer. I appreciate it very much! I was finally able to fix it. Dumb error: the ChatOllama Base URL has to be 127.0.0.1:11434 (at least in my setup, instead of localhost...).
      Now it finally works. Very happy. Will keep on exploring.

    • @choistella5863
      @choistella5863 2 months ago

      @@tommoves9935 I have exactly the same issue ....

  • @PAKYOUTHISM
    @PAKYOUTHISM 2 months ago

    Can you please prepare a video on creating an offline bot that can generate code based on technical product training videos?

    • @leonvanzyl
      @leonvanzyl  2 months ago

      That's a cool idea! Thank you

  • @MeTuMaTHiCa
    @MeTuMaTHiCa 2 months ago

    Thx

  • @thatOne873
    @thatOne873 2 months ago

    But FlowiseAI is not free?

    • @leonvanzyl
      @leonvanzyl  2 months ago +2

      It is. It's open source and free to use. You can self-host it as well.
      Have a look at my Flowise Tutorial series to learn how to run it locally or in the cloud.
      You might be referring to their fully managed cloud service.
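
      For reference, the self-hosted route is a two-command install, assuming Node.js is already available (Flowise serves its UI on port 3000 by default):

          # Install Flowise globally via npm
          npm install -g flowise

          # Start the server, then open http://localhost:3000
          npx flowise start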

    • @thatOne873
      @thatOne873 2 months ago

      @@leonvanzyl Many thanks! Will do, have a nice day :)

  • @BirdManPhil
    @BirdManPhil 2 months ago

    I don't know why you refuse to respond to my attempts to hire you for a project, but I'm sincerely disappointed, Leon.

    • @leonvanzyl
      @leonvanzyl  2 months ago

      Hey Bird Man Phil.
      I'm really sorry about that. I must admit, I'm very behind on emails; I'm making drastic changes and bringing in help to improve things for the new year.
      Did you send an email to my Gmail account?

    • @BirdManPhil
      @BirdManPhil 2 months ago

      @leonvanzyl Yes, a few times.