How I Build Local AI Agents with LangGraph & Ollama

  • Published Aug 20, 2024

COMMENTS • 46

  • @NoCodeFilmmaker 2 months ago +19

    Bro, your tutorials are GOLD. You are literally the only one I've seen on the platform breaking it down like this. You fking ROCK 🤘

  • @ramishelh9998 2 months ago +2

    The way you handle software engineering principles is absolutely amazing!

  • @carktok 2 months ago +1

    I have massive ADHD and for some reason your minimal chill communication style is easy to listen to and your visuals are super helpful. Thanks for putting these together!

  • @tommayhew8925 2 months ago +3

    Hi! I just wanted to say a huge thank you for the incredible work you’re doing and the knowledge you’re sharing. Your videos are full of inspiring content and valuable information that really help with personal development. I appreciate your effort and dedication. Thanks a lot, and I can’t wait for more great content!

  • @RazorCXTechnologies 25 days ago

    Amazing tutorial. I just tested your app using my RTX 4090 and Llama3.1:8b. The results were impressive and latency was OK considering it's running locally. I also tried Llama3.1:70b and it worked great but was too slow running locally. Llama3.1 looks like a game changer for local LLM apps.

  • @lLvupKitchen 12 days ago

    The only AI channel that is actually helpful.

  • @unveilingtheweird 2 months ago +1

    Your method is leaps and bounds better than most. I enjoy your tutorials very much.

  • @ZacMagee 2 months ago

    Just wanted to say again, great content, mate. As a self-taught/teaching AI engineer/programmer/content creator, your content is an incredible resource and inspiration. Keep it coming!

  • @Active-AI 2 months ago

    Fantastic and inspiring. At the end of your video you also answered a question I had regarding smaller LLMs and hardware restrictions.

  • @jarad4621 2 months ago +1

    If you're still struggling like I do as a non-coder, upload the entire video and code to Gemini 1.5 Pro and ask it what you want, like how to integrate with OpenRouter; it will do everything, explain it more simply, and update the code.

  • @akmlatc 1 month ago

    I really like the content you produce, man! Keep it up! Cheers

  • @vispinet 2 months ago

    This makes so much sense... I designed a small workflow (no agents involved) to parse some tabular text data and do some reasoning on each row, using llama3 8B. It worked OK, but every few rows the response would not come back in the correct format; sometimes one of the main headers would come back with a typo. The solution I found was to catch the errors and re-run the function when they occurred. Not ideal, of course, but it did the trick as it was a small job. Now I understand it may just be that these smaller models are not reliable when you need to work with structured responses...
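    The catch-and-retry workaround described in this comment can be sketched roughly like this; `call_model` is a hypothetical stand-in for whatever function actually calls the local model, and the scripted responses below only mimic the flaky behaviour described:

    ```python
    import json

    def generate_structured(call_model, prompt, max_retries=3):
        """Call a model and retry until the response parses as JSON.

        `call_model` is any function taking a prompt string and returning
        the raw model response as a string (e.g. a local llama3 8B call).
        """
        last_error = None
        for _ in range(max_retries):
            raw = call_model(prompt)
            try:
                return json.loads(raw)  # valid structured response: done
            except json.JSONDecodeError as err:
                last_error = err  # malformed output: re-run, as the commenter did
        raise ValueError(f"model never returned valid JSON: {last_error}")

    # Stub that fails once, then succeeds -- mimics the every-few-rows failures.
    responses = iter(['{"header": "Revenue"', '{"header": "Revenue", "value": 42}'])
    result = generate_structured(lambda prompt: next(responses), "parse the row")
    ```

    A retry cap matters: with small local models a malformed response can recur, so the loop should fail loudly rather than spin forever.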

  • @pedrorafaelnunes 2 months ago +3

    Hey brother! Is this possible using Groq? 👀

  • @CUCGC 2 months ago +1

    Awesome. I would like to see how your unique approach works when incorporating an Ollama embedding model + vector store.

  • @ManjaroBlack 2 months ago

    I love your implementation. I’ll modify my pull request to use your Ollama implementation and resubmit for the SearXNG feature. I’ll try and follow your style to select between SearXNG and Serper.

  • @restrollar8548 1 month ago

    Great stuff

  • @marguskokk4293 2 months ago

    Thanks for doing these! This is EXACTLY what I needed EXACTLY at the right time in my learning process. Suggestion for the next tutorial: how to get two different models talking to each other and running Python scripts as tools :). Love your work.

    • @Data-Centric 2 months ago

      Thanks for the suggestion and thanks for watching. Glad it has been helpful.

  • @SimonMariusGalyan 2 months ago

    Thank you 😊

  • @free_thinker4958 2 months ago

    Hats off, my friend 👏🙏🎩 We would like you to dedicate a video to using CrewAI within LangGraph ❤

  • @Haiyugin 2 months ago

    Great content and presentation, thank you. I would really like to see a workflow that uses local models to generate components of the output and then any one of the non-local models to synthesize the final output; a Neo4j knowledge graph for shared memory between agents would be an amazing next step.

  • @mihaitanita 2 months ago

    Hi, for me, so far the best structured-output model on Ollama has been `codestral` (22b and, if it matters, a non-commercial licence). I agree, we are not there yet with these SLMs. Maybe later, Nov-Dec this year.

  • @IkshanBhardwaj 1 month ago +1

    How is this approach different from just using the ChatOllama instance from LangChain? Doesn't that handle everything on the backend?

  • @JohnSmith-ld6qy 2 months ago +1

    yay!

  • @TANVEER991164 2 months ago

    As usual, great tutorial. Would also love it if you could create similar tutorials on CrewAI as well. Thanks.

  • @jarad4621 2 months ago

    Your solution for the perfect agent is Gemini Flash, due to its cost, speed, and quality, and of course the large context. Try that next and watch it do what you want.

  • @Whiskey9o5 2 months ago

    Great content!

  • @SonGoku-pc7jl 2 months ago

    Thanks, well explained.

  • @HomeIDHomeID-uy6nq 2 months ago

    Super cool video

  • @woojay 2 months ago

    Thank you so much.

  • @landob6393 2 months ago

    Hey, just curious: why not use the LangChain wrappers for the Serper API and Ollama API?

  • @banalMinuta 2 months ago

    Honestly, this was a little over my head and I didn't fully grasp everything you said.
    I've only been programming for a total of 3 months, and Python for less than a month. As a beginner programmer, what are your thoughts on just writing Python scripts and using plain Python logic to pass responses and prompts between Ollama endpoints?
    Like I said, I'm a beginner, so maybe I'm missing something: do I need LangChain involved just to mess around like that?

  • @saabirmohamed636 3 days ago

    Hi, thanks for your videos.
    I keep testing Ollama in tools like Aider etc., even my own Python apps, and I always find it a struggle (not the integration and setup);
    the results are just so bad...
    With Aider, for example, it always messes up writing the files and chooses the wrong folders, all that.
    Even tools like agent-zero make you go mad.
    Then you switch to Sonnet or GPT-4... everything runs like magic.
    These local models... if I could just get them to work well.
    In research and testing I end up spending so many bucks.

  • @user-du6zo7zp2k 2 months ago

    Great vid; your explanations and the experience you bring are in a different league. Regarding using LangGraph with open source, I am thinking of using a LiteLLM proxy to simplify building with different models. Of course that limits open-source models to those supported by LiteLLM. Any thoughts on this approach? Anyone?

  • @satjeet 2 months ago

    I would love to see a Groq model; their API is very easy to use.

  • @JeomonGeorge 2 months ago

    Can you make ReAct agents from scratch, without using LangChain, with JSON output (like in the case of create_structed_agent())? When I use the standard LangChain way it gives me parsing errors. Plz
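    A from-scratch ReAct-style loop with JSON action output, and no LangChain, can be sketched roughly as below. This is only an illustrative skeleton: the scripted `model` function is a placeholder for a real local LLM call, and the prompt/format conventions are assumptions, not the video's actual implementation.

    ```python
    import json

    def react_agent(model, tools, question, max_steps=5):
        """Minimal ReAct loop. The model must reply with JSON like
        {"thought": ..., "action": ..., "action_input": ...}
        or, to finish, {"thought": ..., "final_answer": ...}."""
        history = f"Question: {question}\n"
        for _ in range(max_steps):
            step = json.loads(model(history))  # a retry could wrap this parse
            if "final_answer" in step:
                return step["final_answer"]
            # Dispatch the requested tool and feed the observation back in.
            observation = tools[step["action"]](step["action_input"])
            history += f"Action: {step['action']}\nObservation: {observation}\n"
        raise RuntimeError("agent did not reach a final answer")

    # Scripted responses stand in for a real model (e.g. served by Ollama).
    script = iter([
        '{"thought": "need to multiply", "action": "multiply", "action_input": [6, 7]}',
        '{"thought": "done", "final_answer": "42"}',
    ])
    answer = react_agent(lambda history: next(script),
                         {"multiply": lambda args: args[0] * args[1]},
                         "What is 6 x 7?")
    ```

    Owning the loop like this means parsing errors surface as ordinary exceptions you can catch and retry, rather than opaque framework errors.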

  • @jithinwork
    @jithinwork 2 місяці тому

    i try to run React agent tools calling using llama3:instruct with langchain and llama-Index
    and lamaIndex was able to call any local functions seemlessly without any formating issues.But langchain failed becuase the agent was not able to convert the parameters to int rather than string
    i used on a multplication function and a vectore db restriver case .
    only issue with llama_index is it doesnt have langgraph 🤣
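    The int-vs-string failure described above can be worked around generically by coercing the model's string arguments against the tool's own type annotations before calling it. A sketch, assuming tools are plain annotated Python functions (`multiply` is illustrative, not from the video):

    ```python
    import inspect

    def coerce_args(func, raw_args):
        """Cast string arguments from the model to the types the tool declares."""
        sig = inspect.signature(func)
        coerced = {}
        for name, value in raw_args.items():
            ann = sig.parameters[name].annotation
            # Only coerce when the tool actually annotates the parameter.
            coerced[name] = ann(value) if ann is not inspect.Parameter.empty else value
        return coerced

    def multiply(a: int, b: int) -> int:
        return a * b

    # The model emitted strings; coercion restores the ints the tool expects.
    args = coerce_args(multiply, {"a": "6", "b": "7"})
    product = multiply(**args)
    ```

    This only handles simple constructor-style types (int, float, str); anything richer would need a schema library, but it covers exactly the multiplication-tool case the commenter hit.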

  • @ademczuk 2 months ago

    Could you test Llama 3 Instruct on Python coding?

  • @clt7640 2 months ago

    Great content. Is it possible to download the LLM locally, e.g. from Hugging Face, and then incorporate it into your script to run without calling Ollama?

    • @Data-Centric 2 months ago

      It is possible to do this. Hugging Face has its own interface for inference. You could just create your own Hugging Face module, similar to how I showed with Ollama. Although you wouldn't be sending POST requests; you would just be running the model with the Hugging Face API.
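      For context, the POST-request approach mentioned in this reply targets Ollama's standard `/api/generate` endpoint. A minimal sketch of building such a request (it is constructed but not sent here, since sending requires a local Ollama server; the model name is just an example):

      ```python
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

      def build_request(model, prompt):
          """Build a POST request for Ollama's generate endpoint."""
          payload = {"model": model, "prompt": prompt, "stream": False}
          return urllib.request.Request(
              OLLAMA_URL,
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"},
          )

      req = build_request("llama3.1:8b", "Summarise LangGraph in one sentence.")
      # With an Ollama server running, the reply text would be read with:
      #   json.loads(urllib.request.urlopen(req).read())["response"]
      ```

      Swapping in Hugging Face, as the reply suggests, would replace this HTTP call with an in-process model invocation; the rest of the agent code would not need to change.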

  • @muraliytm3316 2 months ago

    Yes, your explanation is good, but it is difficult to follow. Instead of writing all the code and explaining it later, explain it step by step as you write; that would help even beginners follow along.

    • @Data-Centric 2 months ago +1

      Thanks for the feedback. This is something that takes a lot of skill and time to execute well. I'll consider it for future videos.

  • @nullvoid12 2 months ago

    Why don't you use Lightning Studio? Thanks