How to Download Llama 3 Models (8 Easy Ways to Access Llama 3)!

  • Published Apr 17, 2024
  • 🔗 Links 🔗
    This tutorial shows how to download Meta AI's newly released Llama 3 models.
    You'll learn to download and use the Llama 3 models locally and also on free websites!
    llama.meta.com/docs/getting_t...
    llama.meta.com/llama-downloads/
    huggingface.co/meta-llama/Met...
    www.kaggle.com/models/metares...
    huggingface.co/NousResearch/M...
    huggingface.co/mlx-community/...
    www.meta.ai/
    huggingface.co/chat/
    labs.perplexity.ai/
    ❤️ If you want to support the channel ❤️
    Support here:
    Patreon - / 1littlecoder
    Ko-Fi - ko-fi.com/1littlecoder
    🧭 Follow me on 🧭
    Twitter - / 1littlecoder
    Linkedin - / amrrs
  • Science & Technology

COMMENTS • 41

  • @1littlecoder
    @1littlecoder  2 months ago

    How to use Llama 3 Locally (Full Tutorial) - ua-cam.com/video/ZrqCm5jE_nQ/v-deo.html

  • @rodvik
    @rodvik 2 months ago

    Great work getting these videos up in such a short time! Really helpful!

  • @NoTimeWaste1
    @NoTimeWaste1 2 months ago +1

    Thank you for providing so many different ways to access Llama 3. I didn't even know half of them before watching the video.

  • @lexuscrow1932
    @lexuscrow1932 2 months ago +5

    For reference: I have 12 GB VRAM and 32 GB RAM, and I can run the Llama 3 70B 4-bit quant (barely, by splitting across RAM and VRAM so that 11 GB VRAM and 31 GB RAM are used). It takes me a minute for each word, but it works. I recommend trying a 3-bit quant or sticking to Llama 3 8B unless you have patience or better hardware :)
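
A minimal sketch of the RAM/VRAM split described above, assuming the llama-cpp-python package and a locally downloaded 4-bit GGUF file; the file name and the number of offloaded layers are illustrative, not values from the comment:

from llama_cpp import Llama

# Load a 4-bit quantized Llama 3 70B GGUF, offloading only part of the model
# to the GPU so the rest stays in system RAM (hypothetical path and layer count).
llm = Llama(
    model_path="Meta-Llama-3-70B-Instruct.Q4_K_M.gguf",
    n_gpu_layers=20,  # offload only as many layers as fit in ~11 GB of VRAM
    n_ctx=4096,       # modest context length to keep memory usage down
)

# Expect very slow generation on this kind of hardware, as the comment notes.
out = llm("Explain 4-bit quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])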

  • @sajeebhussain
    @sajeebhussain 2 months ago

    When you run Llama 2 locally using Ollama, which GPU is advisable?

  • @PRFKCT
    @PRFKCT 2 months ago +1

    Please make videos on how we can use these models, test them in different scenarios, or maybe use them in web apps. There are no videos on these on YouTube.

  • @snehitvaddi
    @snehitvaddi 2 months ago

    I have been trying the image generation on this, and it is substantially better and faster.
    In the future, if possible, can you make a tutorial on Llama 3 with images?

  • @emanuelec2704
    @emanuelec2704 2 months ago

    When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version on LM Studio.

  • @YG-wk7qm
    @YG-wk7qm 2 months ago +1

    I had a very bad experience downloading it. I have a MacBook Air M2 with 8 GB and it lagged so hard it was like using a cheap laptop. Also, I had installed Llama 3, which produced incorrect code when I asked for a program to find prime numbers, and when I talked about Ollama it told me to "keep things respectful and not use any vulgar language".

  • @johnsaxz
    @johnsaxz 2 months ago +4

    Bro you are the GOAT
    I was so confused when reading the README

    • @1littlecoder
      @1littlecoder  2 months ago +1

      Thanks bro!

    • @Edoras5916
      @Edoras5916 2 months ago +1

      Also, it's now so overused that Hugging Face is down (4-22-24) in my region. Can't access it.

    • @borntodoit8744
      @borntodoit8744 2 months ago +1

      Agreed - great video for understanding our options to get ANY model up and running. I've opted for Ollama as everything else seems too complicated or too expensive.

    • @johnsaxz
      @johnsaxz 2 months ago

      To be very honest, this is my first time trying something like this. I lowkey need a step-by-step tutorial, especially for Windows 😪

  • @nikhilmish001
    @nikhilmish001 2 months ago

    I only have 8 GB RAM. Are those 2- or 3-bit quantized versions any good? Because I can only run those.

    • @breakingtwitting
      @breakingtwitting 2 months ago

      8 GB is for office use, not for running models.

  • @322ss
    @322ss 2 months ago +1

    Hey big thanks!

    • @1littlecoder
      @1littlecoder  2 months ago

      You're welcome! In case this can help: ua-cam.com/video/ZrqCm5jE_nQ/v-deo.htmlsi=3cVqxFerw-I2CRni

  • @__________________________6910
    @__________________________6910 2 months ago +2

    Where is the Google Colab code?

  • @Edoras5916
    @Edoras5916 2 months ago

    You're right, Perplexity Labs has Llama 3 running and it's fast.

  • @NeoIntelGore
    @NeoIntelGore 16 hours ago

    Install Ollama, open cmd, type ollama run llama3 ... done.
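
Once "ollama run llama3" works in the terminal, here is a minimal sketch of calling the same local model from Python over Ollama's HTTP API, assuming the server is listening on its default port 11434 and the llama3 model has already been pulled:

import json
import urllib.request

# Query the locally running Ollama server; no extra packages needed.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Say hello in one short sentence.",
    "stream": False,  # ask for one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])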

  • @patrickwasp
    @patrickwasp 2 months ago

    Ollama run llama3 will only have a 2k context window?

    • @1littlecoder
      @1littlecoder  2 months ago

      I guess you can change that with the model template file
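
A short variant of the Python sketch above that asks for a larger context window per request instead of editing a Modelfile; it assumes the /api/generate endpoint honors an "options" object with "num_ctx", and 8192 is an illustrative value rather than a verified limit for llama3:

import json
import urllib.request

# Request a larger context window for a single call to the local Ollama server.
# The "num_ctx" option and the 8192 value are assumptions used for illustration;
# check the Ollama documentation for your version.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Summarize the following notes ...",
    "stream": False,
    "options": {"num_ctx": 8192},
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])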

  • @marcoaerlic2576
    @marcoaerlic2576 1 month ago

    Great video. Yes, can you please create a Colab example for Llama 3?

  • @svenst
    @svenst 2 months ago

    Not to forget the RAGNA Desktop App, even though it's only available for Mac so far ;)

  • @Gregadori
    @Gregadori 19 days ago

    Bro what specs are needed to run these models?
    What if my laptop doesn't have a GPU?

    • @1littlecoder
      @1littlecoder  19 days ago +1

      This tutorial assumes you don't have a GPU and are on a consumer-grade CPU laptop. You can run this.

    • @Gregadori
      @Gregadori 19 days ago

      @@1littlecoder Thank you for your reply. I downloaded the Ollama Windows preview and it works well.
      Thank you for your video. Can we connect over LinkedIn if possible?

    • @1littlecoder
      @1littlecoder  18 days ago

      Sure, my LinkedIn is in the video description.

  • @user-en4ek6xt6w
    @user-en4ek6xt6w 2 months ago +1

    I'm so impatient for Groq to host the model; soon we will see blazing-fast, high-quality agents working together.

    • @1littlecoder
      @1littlecoder  2 months ago +2

      Even the Perplexity one is quite fast... not sure if they're offering an API as well.

  • @AnthonyTrivett
    @AnthonyTrivett 2 months ago

    NousResearch must have removed those LLMs. Not accessible or seen.

  • @TheGamingAlong
    @TheGamingAlong 2 months ago

    Bro, how did you make the video so fast lol

  • @Receive_
    @Receive_ 2 months ago

    Try using it through Jan.

    • @1littlecoder
      @1littlecoder  2 months ago

      Does it work well?

    • @Receive_
      @Receive_ 2 months ago +1

      @@1littlecoder without any problems

  • @g.r.traders781
    @g.r.traders781 2 months ago +1

    Colab notebook

  • @reinsein518
    @reinsein518 2 months ago

    GGUF?