Meta's New Llama 3.2 is here - Run it Privately on your Computer

  • Published Jan 28, 2025

COMMENTS • 132

  • @SkillLeapAI · 4 months ago · +59

    Who's installing Llama 3.2?

    • @tedalert · 4 months ago · +3

      Much appreciated! Installing the 11B on my PC now, but could you make a video on how to get the 1B (or the 3B, not sure if my phone is beefy enough for it) model running on Android?

    • @singingshelf834 · 4 months ago

      download button not showing up

    • @tedalert · 4 months ago

      @@singingshelf834 I just learned that the EU and some other countries are left out. Will try with a VPN later...

    • @TransLearnTube · 4 months ago

      I installed the 3B version of 3.2 locally with Open WebUI.

    • @x8z195 · 4 months ago · +2

      I got excited and downloaded 450B model 😅

  • @kwest84 · 3 months ago · +2

    Excellent tutorial video! I've been thinking about trying out local AI for quite some time but I never got around to it. This made it really simple and hassle free to get it up and running. Thank you! You've earned yourself a new subscriber.

  • @ChuckSwiger · 4 months ago · +30

    I came for the "Vision" part of the title, only to be told it's not available yet on Groq. Playing with the Python code on the model card, it'll read text from images, but just about any question about the image gets a safety warning about not being able to identify people :) Even asking about the rabbit in their example: "what is this animal and what is it thinking? I'm not able to provide information about the individual in this photo. I can give you an idea of the image's style, but not who's in it. I can provide some background information, but not names. The image is not intended to reveal sensitive information. The image is not intended to reveal personal information. The image is not intended to reveal personal information. The image is not intended to reveal personal information. The image is not intended to reveal personal information. The image is not intended to"

    • @pmarreck · 3 months ago · +1

      The real cost of censored models is dumbing down the model like that

    • @ChuckSwiger · 3 months ago

      @@pmarreck switch to the 'instruct' version and it works much better.

  • @dalecorne-new-mtv · 4 months ago · +26

    Installing Docker took 3 times as long as installing Ollama. Installing this on Windows is different from what you show. On Windows 10, you don't have to install Llama 3.1 and then 3.2; just install 3.2. Also, after Docker installs, it gives a button that says "Close Restart". I thought it meant close the app and restart it... Noooooooooo, it meant restart Windows, so just be prepared. It's working great for me. Thank you.
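
For anyone wanting to confirm the install worked before wiring up Docker and Open WebUI, a quick sanity check is to hit Ollama's local HTTP API directly. A minimal sketch, assuming the default port (11434) and that the llama3.2 tag has already been pulled:

```python
# Minimal check that the local Ollama server is up and can answer a prompt.
# Assumes the default Ollama port (11434) and that `llama3.2` has been pulled.
import requests

# The root endpoint returns "Ollama is running" when the server is up.
print(requests.get("http://localhost:11434").text)

# Ask the model a quick question (non-streaming for simplicity).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in five words.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```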

  • @blocbonbon · 4 months ago · +25

    Came here for Vision, since it's in the title. Left with no vision.

  • @dannyroberts7118 · 1 month ago · +1

    Installed, and I have been actively using it. I find it really fast and its answers competent. This is running on a MacBook Air M3 with 16 GB. I am now training the model on specific papers so I can use it as a medical repository, which appears to be going well.

    • @__Wanderer · 21 days ago · +1

      How can you train the model on your own documents? :) Interested in doing the same!

    • @pba21 · 4 days ago

      How do you train your model?
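
Regarding the two questions above: most "train it on my own papers" workflows don't fine-tune the weights at all; they use retrieval (RAG), which Open WebUI also offers through its document-upload feature. A minimal hand-rolled sketch of the idea, assuming a local Ollama server and that an embedding model such as nomic-embed-text has been pulled (both are assumptions, not from the video):

```python
# Rough retrieval-augmented generation (RAG) sketch: embed your document chunks
# once, find the chunk closest to the question, and stuff it into the prompt.
# Assumes a local Ollama server with `nomic-embed-text` and `llama3.2` pulled.
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return r.json()["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

chunks = ["Paper 1: ...", "Paper 2: ..."]        # your document chunks
index = [(c, embed(c)) for c in chunks]          # embed once, reuse for every query

question = "What does paper 1 conclude?"
q_vec = embed(question)
best_chunk = max(index, key=lambda item: cosine(item[1], q_vec))[0]

answer = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "llama3.2",
    "prompt": f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}",
    "stream": False,
}).json()["response"]
print(answer)
```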

  • @AICC2222 · 26 days ago

    Thank you so much for this video. It was definitely a value-adding video!!!

  • @papridasgupta8111 · 4 months ago · +2

    Hi, I am a new developer from India, where GPU hardware is a big bottleneck for developers! Please give the minimum GPU or CPU requirements at the start of your next YouTube video. Thanks for sharing such a nice video in a straightforward manner.

  • @amitshiksha · 4 months ago · +2

    Your explanation is just awesome, my friend.

  • @JenkinsUSA · 4 months ago · +4

    I’m giving it a go! Thanks for the video.

    • @SkillLeapAI · 4 months ago · +1

      Welcome

    • @JenkinsUSA · 4 months ago

      That was easy! I already had Docker. Everything turned out perfectly with the 3B / 1B text models. 📐

  • @onnleon · 4 months ago

    Thanks so much, I'd been trying to get this to work for like 25 minutes and finally landed on your video.

  • @capzor · 4 months ago · +9

    How do I install and run the vision models? I have access already.

  • @Dan_Campbell · 3 months ago

    Is there a multiline output box available in Gooey? I know we can generate an input multiline textarea, but I'd like to find an alternative to just printing to the default output box.

    • @SkillLeapAI · 3 months ago · +1

      Thanks for the tip. I'll dig into it a bit more, but I don't know a way to get multiple text areas as an output.

    • @Dan_Campbell · 3 months ago

      @@SkillLeapAI Ok, thanks SL.

  • @mostwanted2000 · 21 hours ago

    Can you explain in the next video why you chose Ollama versus Hugging Face?

  • @hozgur · 4 months ago · +13

    The title is: "Meta's New Llama 3.2 with Vision is here - Run it Privately on your Computer". Are you sure?

  • @kamaleshpramanik7645 · 1 month ago

    Excellent videos .. Thank you very much Sir.

  • @HogwartsDetentionBuddy · 17 days ago

    Tried this, but Docker isn't showing Open WebUI after it's finished loading... 😑

  • @MJFUYT · 4 months ago · +1

    Excellent content and commentary!

  • @sohaibsultan8483 · 15 days ago

    Hi, I am unable to generate an image through the web UI. Currently installed 3.2, 3.1 and 3.0.

  • @garymaya1767 · 8 days ago

    Wow, amazing bro, you saved me time and money. If my damn internet wasn't so slow, I probably could have done the install in real time. I was able to get it up and running on Windows 11 in no time. I did have to reboot after the Docker install, but all works well; Windows Firewall also had to be disabled locally. Anyhow, it worked great, thank you! Can you do a video on how to train this model????

  • @JNET_Reloaded · 4 months ago

    Thanks so much, I'll be testing this out today on an RPi 5 :D

  • @WINTERMUTE_AI · 4 months ago · +14

    LLAMA became my best friend after gpt went all corpo cnt on me.

    • @RememberTheLord · 4 months ago · +1

      What do you mean? Never used llama so what’s the difference

    • @SuvaKrpa · 3 months ago

      @@RememberTheLord freeeee opeeen souuuurcee kaching kaching for your broke a s

  • @shindre1 · 27 days ago · +2

    Can Llama 3.2 be integrated into a website as a chatbot?

    • @ZORO-xj8vz · 15 days ago

      Same, I want to know this.
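
Regarding the question above: a common pattern is to keep Ollama private on the server and expose only a small backend route that the website's chat widget calls. A minimal Flask sketch, assuming Ollama runs on the same host with the llama3.2 tag pulled; the route name and payload shape are illustrative only, not from the video:

```python
# Minimal website-chatbot backend: the browser posts to /chat, the server
# forwards the conversation to the local Ollama instance and returns the reply.
# Assumes `pip install flask requests` and a local Ollama with `llama3.2` pulled.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.post("/chat")
def chat():
    messages = request.json.get("messages", [])  # [{"role": "user", "content": "..."}]
    r = requests.post("http://localhost:11434/api/chat",
                      json={"model": "llama3.2", "messages": messages, "stream": False})
    return jsonify(reply=r.json()["message"]["content"])

if __name__ == "__main__":
    app.run(port=5000)
```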

  • @nosuchthing8 · 3 months ago · +1

    So you can run a model with 64 GB of RAM on a recent Windows computer?

  • @Walter_Ayt · 4 months ago · +1

    Meta sponsoring this video 😂

  • @rahatrumi · 28 days ago

    Thanks for the excellent video. I was able to walk through all the steps, but the Llama models do not show up in the web UI. How do I add the installed Llama models to the web UI chat interface?

  • @tratkotratkov126 · 4 months ago · +4

    How do you run the 90B privately on Groq cloud? … Also, what's the point of the demo when the multimodal model is still not available?

    • @Fred1989 · 4 months ago · +3

      Not sure about that specific model, but I hope you do realise that you're not running privately when using services like Groq. You can never be 100% sure that your data and interactions with the model are private and not used internally by the service provider or sold. The way I look at it, any business is out to make money, and data is worth quite a bit these days, so if something is free or cheap you should probably wonder if you're not the product that they are making money on, ultimately it'll come down to trust.
      To ensure running a model privately you simply have to run it locally, but for a model with 90b parameters you would need a very expensive setup, so be prepared to either scale down your expectations to smaller models that fit in your vram, or scale up your budget for a system that can handle large models like that! 🙂
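
As a rough rule of thumb for the sizing point above, model weights alone need roughly parameters × bytes per parameter of memory, before any context/KV-cache overhead. A back-of-the-envelope sketch (estimates, not official requirements):

```python
# Back-of-the-envelope memory estimate for model weights only (no KV cache/overhead):
# memory ≈ parameters × bytes per parameter.
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

for params in (1, 3, 11, 90):
    fp16 = weight_gb(params, 2)     # 16-bit weights
    q4 = weight_gb(params, 0.5)     # ~4-bit quantized weights
    print(f"{params:>3}B  fp16 ≈ {fp16:6.1f} GB   4-bit ≈ {q4:5.1f} GB")
```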

  • @Kingkimabdu5090 · 3 months ago · +2

    Everything seemed fine until I clicked the link in Docker. The website page opened with an error message stating, "This page isn't working." Can anyone offer assistance?

    • @petrgrebenicek1067 · 1 month ago

      I have the same issue, does anyone have a solution?

    • @glenswada · 1 month ago

      There is an option in Docker under Settings > Resources > Network to enable host networking. Enabling that setting worked for me.
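
When the Open WebUI page won't load (or loads but shows no models), it helps to check which side is failing: the Ollama server on the host or the container's published port. A small probe sketch, assuming the default ports used in most install guides (11434 for Ollama, 3000 for Open WebUI):

```python
# Quick probe: is Ollama answering on the host, and is the Open WebUI container's
# published port responding? Assumes the default ports 11434 and 3000.
import requests

for name, url in [("Ollama", "http://localhost:11434"),
                  ("Open WebUI", "http://localhost:3000")]:
    try:
        r = requests.get(url, timeout=5)
        print(f"{name}: HTTP {r.status_code}")
    except requests.exceptions.RequestException as e:
        print(f"{name}: not reachable ({e.__class__.__name__})")
```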

  • @stevelatimer995 · 3 months ago

    Great video and so simple. Some guide had me running Ubuntu and all sorts. I gave up in the end, and I'm pretty IT savvy. This was a doddle!

  • @wfung8572 · 3 months ago · +1

    Can I install it without a graphics card? Thanks.

  • @ericanku9451 · 1 month ago

    Thanks for the insights

  • @Jay-zr8kx · 8 days ago

    Thanks for the vid.
    I want my app server to connect to Llama. I tried making requests to the localhost where Llama is hosted in Docker, but I am getting "Method Not Allowed". Do you know how to do it?
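
On the "Method Not Allowed" error above: Ollama's /api/generate and /api/chat endpoints only accept POST with a JSON body, so a GET (or a request to the wrong path) returns 405. A minimal sketch, assuming the default port and the llama3.2 tag; note that if the calling app itself runs in a container, localhost points at that container rather than the host:

```python
# Ollama's /api/generate and /api/chat endpoints only accept POST with a JSON body;
# a GET (or a wrong path) returns 405 Method Not Allowed.
# If your app runs inside a container, "localhost" is the container itself;
# use host.docker.internal (or your compose service name) to reach the host's Ollama.
import requests

r = requests.post("http://localhost:11434/api/chat", json={
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "ping"}],
    "stream": False,
})
print(r.status_code, r.json()["message"]["content"])
```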

  • @dr_hebaahmed · 4 months ago

    Should I install Llama 3.1 before 3.2? Can I download the new model from the start?

  • @EddieOCon-l3b · 24 days ago

    I installed Llama 3.2 and it is running perfectly. Now I've decided to do an upgrade and downloaded Llama 3.3. How do I make sure that Docker is going to accept Llama 3.3? Do I need to run another command in Command Prompt to get the new Llama 3.3 working? Can you give me some advice?
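
On upgrading (this also covers the 3.1-to-3.2 question further down): Open WebUI simply lists whatever models the local Ollama reports, so pulling the new tag is all that's needed; the Docker container doesn't have to change. A sketch using the HTTP API, assuming the default port and that a llama3.3 tag exists in the Ollama library (the `ollama pull llama3.3` CLI command does the same thing):

```python
# Upgrading is just pulling the new tag into Ollama; Open WebUI lists whatever
# models Ollama reports. Assumes the default port 11434.
import requests

OLLAMA = "http://localhost:11434"

# Pull the new model tag (this may download many gigabytes).
requests.post(f"{OLLAMA}/api/pull", json={"model": "llama3.3", "stream": False}, timeout=None)

# List what is now installed; these names are what Open WebUI's model picker shows.
for m in requests.get(f"{OLLAMA}/api/tags").json()["models"]:
    print(m["name"])
```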

  • @nosuchthing8 · 3 months ago · +1

    Why not tell us how much VRAM is needed for these models?

  • @moonduckmaximus6404 · 3 months ago

    I have 3.1 installed with this process. How do you just update the model from 3.1 to 3.2?

  • @MojaveHigh · 4 months ago · +3

    Very cool! What are your computer specs? In other words, what do I need to get that speed locally? What are minimum specs to run Llama 3.2?

    • @annihilation9670 · 4 months ago · +2

      The 11B Vision model takes about 10 GB of GPU RAM.

  • @AlexanderGarzon · 3 months ago

    Where can I get the Llama 3.2 with vision capabilities?

  • @nosuchthing8 · 3 months ago

    On Windows, the terminal is called a DOS prompt.

    • @SkillLeapAI · 3 months ago

      Windows 11 has something called Windows Terminal.

  • @convoyflashh9259 · 4 months ago

    Which version should I download if I just have a standard Dell laptop running Windows and no intent to use the vision features? I don't want to overwhelm my laptop, but I'm looking for good performance.

  • @showmequick2245 · 3 months ago

    Question: if I host the local AI but give family access (with their own user accounts), will this give them access to my own uploaded content?

  • @tomascoox · 4 months ago · +2

    Failed miserably on the classic question "How many words are in your answer to this question?"

  • @holdthetruthhostage · 4 months ago · +1

    I just hope the 90B is amazing and can output over 2k words and code.

  • @abirkhan924 · 3 months ago

    Can you make a video on running a deep learning model locally on a Mac?

  • @stephenstern6228 · 3 months ago

    Great video, how do you install the larger models?

  • @ramesh150585 · 2 months ago · +1

    I tried this on my Windows machine... it's very slow!!!

  • @trevorduguidfarrant2242 · 4 months ago

    At 2:30 a pop-up window suddenly appears and you selected "Move to Applications"; how did you get that?

    • @tomfrakey6335 · 3 months ago

      Yeah, that confused me too. Running the Ollama program will open it, but you don't need it. Just punch in the command he gives you right after that.

  • @zhou-yc6mt · 27 days ago

    I don't know why my web UI runs so slow.

  • @alexsanzphoto · 1 month ago

    Is it really private or does it send info in the background? (Or when back online)

  • @wolframight50shadesblack54 · 4 months ago · +1

    Is there an API key for Ollama models?

  • @DaveTheeMan-wj2nk · 1 month ago

    Is this one uncensored tho?

  • @anandmaurya3389 · 2 days ago

    Will it work with CPU?

    • @SkillLeapAI · 1 day ago

      Probably not, or it will be extremely slow without a good GPU.

  • @mikeyuk4242 · 4 months ago

    I would argue that most people purchase cars via a subscription; in the UK we call it PCP, but basically it's just renting the car, with such a high cost at the end of the term that no one does it.

  • @JuanManuelLucero · 4 months ago

    Thank you!

  • @Fisherman00-y9e · 3 months ago

    Hey man, amazing video, I've been using Llama 3.2 3B on my laptop ever since you posted this, thank you so much! I had a question though (I am not tech savvy at all): a pop-up to update Open WebUI appeared and I downloaded the zip, but I have no idea how to update it... Any help would be appreciated; if not, it's OK, I'll just keep running this old version. Thank you.

  • @TransLearnTube · 4 months ago

    Please suggest a text-to-video converter model.

  • @JosueMUHIRWA · 4 months ago

    What you show there is different from reality, especially when you use a terminal to get a container; I get stuck there.

  • @mohtishammuzzammil9084 · 4 months ago

    How can we run this on a mobile phone?

  • @javedAli-r3u4o · 4 months ago

    How can I expose the API?

  • @ps3301 · 4 months ago

    We want llama 4 o1 model!!!

  • @DesignDesigns · 4 months ago

    Thank you....

  • @stuartbrown5012 · 4 months ago · +2

    Thanks. This worked. However the actual model is very disappointing. A quick 10 minute use of it convinced me that it is pretty worthless. The number of hallucinations was off the scale. Also, the rather daft need for this ridiculous sequence to even run it is bizarre. You would think it would just download and run. Not a patch on ChatGPT or Claude. Not even close.

    • @pmarreck · 3 months ago

      It's because we only have access to the 1B or 3B models. I just tried the 70B on groq and it's MUCH better. But still not as good as those /shrug

  • @JosueMUHIRWA · 4 months ago

    I was also struggling to configure Docker and even the web UI. Anyone who did it on Windows 11, can you help me finish that?

  • @OrvilleReyes-u3n · 4 months ago

    Stiedemann Shores

  • @jedi10101 · 3 months ago

    Interesting, but I stopped watching around the 3-minute mark because of the tiny terminal screen that you used to show what you were doing.

  • @JohnsonNong · 4 months ago

    nice tutorial

  • @AcapellaFella · 4 months ago · +3

    Last time I installed Llama3 I burned my hard drive up.

    • @SkillLeapAI · 4 months ago · +1

      These new smaller models should perform a lot better

    • @KRAKEN777-u7b · 4 months ago · +1

      @@SkillLeapAI I am thinking of uploading legal/court case citations/legislation/regulations and the like. Can I ask what the minimum spec requirements for a PC would be to be capable enough? Thanks

  • @Divyv520 · 4 months ago

    Hey skill , very good video ! I was wondering if I can help you with more Quality Editing in your videos and make Highly Engaging Thumbnails which will help your videos to get more views and engagement . Please let me know what do you think ?

  • @Azazeldewilz · 4 months ago

    Why do we need Llama????? I will wait until they make it easy to install without any other Docker, links... etc.

    • @SkillLeapAI · 4 months ago

      Privacy. If you don't care about that, you can just use it on Groq or Meta AI.

    • @neatcool4770 · 3 months ago

      I used LM Studio with Llama 3, it is easier.

  • @ThirdHorseman · 1 month ago

    Like how it still thinks it's connected to Meta servers.

  • @tomfrakey6335 · 3 months ago · +1

    Thank you for the tutorial, but this thing is dumb as a rock compared to Chat GPT 4.0 so I probably won't find much use for it.

  • @DebraMcClain-i5e · 3 months ago

    Kertzmann Court

  • @LambertElroy-i8n · 3 months ago

    Yundt Springs

  • @HimanshuGiriGoswami24 · 4 months ago

    Ask it to write code for GTA 6.

  • @nonycount-je8uf · 3 months ago

    So, a total BS clickbait title! Next time I see a Skill Leap AI video I am ignoring it.

  • @morease · 4 months ago

    clickbait ;(

  • @seanmaman · 4 months ago

    How can I install the 11B model locally?
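
On the question above: the vision variants weren't in Ollama's library when the video was published, but a llama3.2-vision tag (11B by default) was added later, so the same flow applies, and images are passed to the API as base64 strings. A minimal sketch; the tag name and its availability are assumptions to verify against the Ollama library:

```python
# Sketch: querying a local Llama 3.2 Vision model about an image, assuming the
# `llama3.2-vision` (11B) tag has been pulled from the Ollama library.
import base64
import requests

with open("photo.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

r = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.2-vision",
    "prompt": "What animal is in this photo?",
    "images": [img_b64],
    "stream": False,
})
print(r.json()["response"])
```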