Getting Started with OLLAMA - the Docker of AI!!!

  • Published 28 Jan 2024
  • Chris explores how Ollama could be the Docker of AI. In this video he gives a tutorial on how to get started with Ollama and run models locally, such as mistral-7b and llama-2-7b. He looks at how Ollama operates and how its workflow closely mirrors Docker's, including the concept of the model library. Chris also shows how you can create customized models, how to interact with the built-in API server, and how to use the JavaScript Ollama library to talk to the models from Node.js and Bun (a rough sketch of these calls follows this list). By the end of this tutorial you'll have a solid understanding of Ollama and its importance in AI Engineering.
  • Science & Technology
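
A rough sketch of the calls the description mentions, not the exact code from the video: it assumes Ollama is serving on its default port 11434, that a model such as mistral has already been pulled (ollama pull mistral), and that the ollama npm package is installed (npm install ollama); the model name and prompt are placeholders. The first part talks to the built-in HTTP API directly, the second makes the same request through the JavaScript Ollama library, and both run under Node.js 18+ or Bun:

    // sketch.ts - assumes `ollama serve` is running locally and `mistral` has been pulled
    import ollama from 'ollama'

    // 1. Call the built-in HTTP API directly (default port 11434).
    const res = await fetch('http://localhost:11434/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'mistral',
        prompt: 'Explain in one sentence why Ollama is compared to Docker.',
        stream: false, // ask for a single JSON object rather than a token stream
      }),
    })
    const data = await res.json()
    console.log(data.response)

    // 2. The same request through the JavaScript Ollama library.
    const reply = await ollama.chat({
      model: 'mistral',
      messages: [{ role: 'user', content: 'Explain in one sentence why Ollama is compared to Docker.' }],
    })
    console.log(reply.message.content)

Customized models fit the same flow: a Modelfile starts FROM an existing model, adds SYSTEM and PARAMETER lines, and is built with ollama create <name> -f Modelfile; the resulting name can then be passed as the model in either call above.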

COMMENTS • 17

  • @bharatarora9036
    @bharatarora9036 5 months ago +3

    Thank you @Chris for sharing this. Very informative

    • @chrishayuk
      @chrishayuk 5 months ago +1

      Glad it was helpful!

  • @sollywelch
    @sollywelch 5 months ago +3

    Great video, really enjoyed this! Thanks Chris

    • @chrishayuk
      @chrishayuk 5 months ago +2

      Thank you, it wasn't the video I intended to record that day; glad it worked well and that you enjoyed it. Thank you

  • @sbudaland
    @sbudaland 4 months ago +2

    You are a great teacher, and you explain tech so well that it encourages one to watch the whole video

    • @chrishayuk
      @chrishayuk 4 months ago

      Thank you so much 🙂

  • @NicolaDeCoppi
    @NicolaDeCoppi 5 months ago +5

    Great video Chris! You're one of the smartest people I know!!!

    • @chrishayuk
      @chrishayuk 5 months ago +2

      Too kind and right back atcha

  • @mechwarrior83
    @mechwarrior83 4 months ago +1

    What a great little underrated channel. I love how you present information in such a clear manner. Instant subscribe!

    • @chrishayuk
      @chrishayuk 4 months ago +1

      Thank you, glad you enjoyed it. Underrated is perfectly fine with me; the channel is really about organising my thoughts, and I just feel lucky other people find it useful

  • @crabbypaddy5549
    @crabbypaddy5549 3 months ago

    I installed llama2:70b. Wow, it is super good, but it is heavy on my machine. It uses up 50 GB of RAM, runs my 5090x at 70 percent, and still nearly uses up all of my 3090 GPU. It is a bit slower than the 7b, but the answers are so much more complex and nuanced. I'm blown away.

  • @zscoder
    @zscoder 4 months ago +1

    Curious how we could set up a use case for project context prompts?
    Thanks for this awesome video, subbed 🙌

  • @jocool7370
    @jocool7370 4 days ago

    Thanks for making this video. I've just tried OLLAMA. It gave wrong answers to 3 of my first 4 (and only) prompts. Uninstalled it.

  • @iamdaddy962
    @iamdaddy962 4 months ago +5

    Really wish your channel got more attention compared to the L4 "influencers"... seems like YouTube "programmers" prefer entry-level sensationalist memelords )):

    • @chrishayuk
      @chrishayuk 4 months ago +3

      I’m okay with the level of attention it gets, the channel is my tech therapy. I just feel very lucky that other people don’t mind watching my therapy sessions

    • @iamdaddy962
      @iamdaddy962 4 months ago +3

      @chrishayuk I appreciate all the REAL senior-level wisdom you've bestowed on the internet!! Thinking about how the techlead still gets hundreds of thousands of views sometimes makes me have an aneurysm haha

    • @chrishayuk
      @chrishayuk 4 months ago +2

      Very very kind of you