All You Need To Know About Running LLMs Locally

  • Published 29 Dec 2024

COMMENTS • 287

  • @bycloudAI
    @bycloudAI  7 months ago +9

    stay up-to-date on the latest AI research with my newsletter! → mail.bycloud.ai/
    Minor correction: GGUF is not the predecessor to GGML, GGUF is the successor to GGML. (thanks to danielmadstv)

    • @Sebastian-oz1lj
      @Sebastian-oz1lj 7 months ago +1

      please make a step-by-step guide on how to install something like Mistral-7B locally and privately. I'm trying to do this with multiple guides and I always get stuck on something.

  • @ambinintsoahasina
    @ambinintsoahasina 10 months ago +66

    The amount of info you give in both the videos and the descriptions is insane, dude! Keep up the good work!

  • @noobicorn_gamer
    @noobicorn_gamer 10 months ago +364

    I hoooonestly don't know how to feel about the thumbnails looking so similar to you-know-who's that I accidentally clicked this video, but meh... one's gotta do what one's gotta do, I guess.

    • @EdissonReinozo
      @EdissonReinozo 10 months ago +17

      Same

    • @Dedjkeorrn42
      @Dedjkeorrn42 10 months ago +3

      I don't know who, who?

    • @nathanfrandon2798
      @nathanfrandon2798 10 months ago +75

      @@Dedjkeorrn42 Fireship

    • @NIkolla13
      @NIkolla13 10 months ago +27

      Bycloud removed the frame and the grid background from his thumbnails; I think those work great as his signature style. I hope he keeps them.

    • @seanrodrigues8184
      @seanrodrigues8184 10 months ago +3

      Let's just hope he doesn't get _burned~_

  • @danielmadstv
    @danielmadstv 10 months ago +141

    Thanks for the video! Minor correction: GGUF is not the predecessor to GGML, GGUF is the successor to GGML.

  • @flexoo7
    @flexoo7 10 months ago +29

    Poor Faraday nearly always gets overlooked when people talk about local LLMs, but it is without a doubt the easiest "install and run" solution. Unlike nearly all other options, it's near-impossible to mess something up, and the default settings out of the box are not sub-par.

    • @hablalabiblia
      @hablalabiblia 10 months ago

      How much is Faraday?

    • @flexoo7
      @flexoo7 10 months ago

      @@hablalabiblia Like all the best things in life - it's free.

    • @joure.v
      @joure.v 9 months ago

      @@hablalabiblia It's free and very easy to use! It's really meant just for chatting; it's basically a SillyTavern kind of app, just without that many options, but it has its own backend with a focus on GGML models. If you're looking to just run models through character cards, I'd say give it a go!

    • @Elegant-Capybara
      @Elegant-Capybara 6 months ago +1

      Faraday has outdated models, and whenever you download models you have to fumble with model cards and directory structures; plus it's not as fast as other options. LM Studio is better than Faraday.

    • @jbol2454
      @jbol2454 4 months ago

      @@Elegant-Capybara LM Studio is closed-source... no thanks.

  • @Leo_Aqua
    @Leo_Aqua 10 months ago +21

    You can also use ollama. It even runs on a Raspberry Pi 5 (although slowly) - a minimal Python sketch of its local API follows this thread.

    • @siddhubhai2508
      @siddhubhai2508 5 months ago +1

      Yeah, you're right, ollama can run even on a Raspberry Pi 5, but don't forget that ollama is made for running local LLMs, and if you try to run ones like Llama 3 or DeepSeek on it, be ready for the FBI showing up at your home to catch you building an unknown b*mb. Important life lesson - FIRST TRY, THEN CRY. GOOD LUCK! 💣

    • @NickH-o5l
      @NickH-o5l 4 months ago +2

      I got Gemma 2B running on my end.
      I got faster tokens per second with a really small 0.5B-parameter model from Alibaba (yes, it's biased); if you prompt it right, maybe there's some use case.
      But it's kinda dumb.

    • @siddhubhai2508
      @siddhubhai2508 4 months ago +1

      @@NickH-o5l Low parameters = low accuracy
      👍

    • @ThatAverageMTBer
      @ThatAverageMTBer 26 days ago

      What model are you running on your pi5?
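
  A minimal sketch of the setup this thread describes - driving a local ollama server from Python. It assumes ollama is installed and a model has already been pulled (e.g. gemma:2b); the /api/generate endpoint and JSON fields below are ollama's documented defaults:

      import requests

      # ollama serves a local HTTP API on port 11434 by default
      OLLAMA_URL = "http://localhost:11434/api/generate"

      def ask(prompt: str, model: str = "gemma:2b") -> str:
          """Send a single prompt to a locally running ollama server."""
          payload = {"model": model, "prompt": prompt, "stream": False}
          resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
          resp.raise_for_status()
          return resp.json()["response"]

      print(ask("Explain in one sentence what quantization does to an LLM."))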

  • @Veptis
    @Veptis 10 months ago +25

    Now we just need a cheap inference card with 128 GB of memory to run 70B models locally...
    Maybe we can hope for Qualcomm.

    • @cbuchner1
      @cbuchner1 10 months ago +2

      I’d love to see AI inference accelerator cards with dual or quad channel DIMM slots.

    • @Veptis
      @Veptis 10 months ago

      @@cbuchner1 The Qualcomm AI 100 Ultra is using LPDDR5.

    • @nyxilos9167
      @nyxilos9167 10 months ago +1

      groq is using something of the sort, an LPU, although it's only usable through an API. No consumer cards yet that I know of, but it shows the trend toward it.

    • @Veptis
      @Veptis 10 months ago

      @@nyxilos9167 You can buy a single groq card right now. It costs $21k and has 230 MB on board, so to run 70B models at fp16 you'd need something like 572 cards... which is several racks: $14+ million to buy and 30 kW to power. It would run the model at 400 tok/s easily.
      You can buy a ready-made 8x H100 box for maybe $350k, run it at around 8 kW, and it might still be slower than the groq setup.
      None of these are consumer solutions.
      The one I'm hoping for is the Qualcomm AI 100 Ultra, which comes with 128 GB of LPDDR5 at 150 W. They say it's for edge inference, but it would be perfect for a workstation. (The memory arithmetic behind these numbers is sketched below this thread.)

    • @Vifnis
      @Vifnis 10 months ago

      idk Qualcomm SoCs are for phones mostly... maybe iPhone 30 will have it XD
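
  The card counts above follow from simple arithmetic: weights-only memory is parameter count times bytes per parameter. A rough sketch (it ignores the KV cache and activation overhead, which add several GB more):

      def weights_gb(params_b: float, bits_per_weight: float) -> float:
          """Rough weights-only footprint in GB (ignores KV cache/activations)."""
          return params_b * bits_per_weight / 8

      for bits, name in [(16, "fp16"), (8, "int8"), (4.8, "~Q4_K_M")]:
          print(f"70B @ {name}: ~{weights_gb(70, bits):.0f} GB")

      # fp16 needs ~140 GB of weights alone, which is why a 70B model only
      # fits on a 128 GB card once it is quantized to 8-bit or below.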

  • @아잉뀨잉뀨잉-u5q
    @아잉뀨잉뀨잉-u5q 1 month ago

    I have been struggling with this issue for a few months, and it seems this video already had the answer more than half a year ago. Thank you for your awesome vid!! Really love your work!

  • @VxV631
    @VxV631 9 days ago

    This is a straight up LLMs 101 course that EXPLAINS THINGS??? Very well done!!!

  • @RetroPolly
    @RetroPolly 10 months ago +3

    A thousand thanks! Finding a good LLM was a complete nightmare for me, plus it's difficult to figure out which formats are outdated and which are the hot new stuff.

  • @Midicifu
    @Midicifu 28 days ago

    This has to be one of the under-20-minute videos I've stopped and rewound the most 😅 Excellent info and format, and the memes are peak (the gravity download just LOL).

  • @pedrogorilla483
    @pedrogorilla483 10 months ago +38

    Where ollama?

    • @sZenji
      @sZenji 10 months ago +1

      Agreed - with the new Windows installer it's so easy for everyone to get local models.

    • @4.0.4
      @4.0.4 10 months ago +1

      For a while it was Mac-only, so it saw limited use among AI folks with Nvidia cards. If you're on a Mac, I hear it's really the better option there.

    • @zikwin
      @zikwin 10 months ago

      @@sZenji Wow, it supports Windows now too?

    • @babbagebrassworks4278
      @babbagebrassworks4278 10 months ago

      I use it on my Raspberry Pi 5 to run LLMs, which is seriously cool, er, hot when working.

  • @juanantonionieblafigueroa377
    @juanantonionieblafigueroa377 9 months ago +1

    Your videos are way more fun than my algebra homework

  • @H1kari_1
    @H1kari_1 10 months ago +3

    I love your ADHD-friendly edits, cloudy.

  • @bossdaily5575
    @bossdaily5575 10 months ago +15

    Nice video! Can you do a video about fine-tuning a model?

  • @the_gobbo
    @the_gobbo 10 months ago +2

    I can finally start my side project to take over the world, thanks!

  • @shoddits2156
    @shoddits2156 10 months ago +5

    Does the giveaway have country restrictions? I mean, maybe you can't send it overseas due to shipping costs or something else.

  • @ryry8997
    @ryry8997 20 days ago

    Appreciate the effort in the edit. Liked & subbed.

  • @Paulo-ut1li
    @Paulo-ut1li 10 months ago +3

    Boy, Chat with RTX is my personal oracle from now on. Its RAG really indexes local documents without all the hallucination of previous tools.

  • @robertmazurowski5974
    @robertmazurowski5974 8 months ago +1

    I was pretty sure this was a Fireship video, but the video is great and informative. Exactly what I was looking for.

  • @lunadelinte
    @lunadelinte 10 months ago +2

    that was awesome, thanks for the concise information bycloud! 🔥

  • @remboldt03
    @remboldt03 10 months ago +112

    Stup osing Fireship thumbnails😭

    • @idk-dk7bq
      @idk-dk7bq 6 months ago

      Y

    • @aouyiu
      @aouyiu 5 months ago +1

      Stop neglecting proofreading comments 😭

    • @remboldt03
      @remboldt03 5 months ago

      @@aouyiu I apologize. I normally proofread all my comments, but I suspect I was drunk while writing this one. As I don't like editing comments afterwards, I didn't fix the spelling mistakes.

    • @HPTRUE
      @HPTRUE 3 months ago

      Never heard of fireship....

  • @bigglyguy8429
    @bigglyguy8429 10 months ago +2

    How did you miss Faraday? Very easy to use and runs faster than LM Studio

  • @papakamirneron2514
    @papakamirneron2514 10 months ago +1

    Immensely helpful video. I hope the future has tons of user-controlled, locally run LLMs in store for us!

  • @jawbone1218
    @jawbone1218 10 months ago +3

    Curious headcount 🙋 How many of us watching these types of videos are not developers?

  • @chevalier5691
    @chevalier5691 6 months ago +9

    I don't get these complaints about the thumbnails. Are you guys new to YouTube? We've been through the era of fake and NSFW thumbnails, and yet you're still complaining about a similar style? If you're not willing to check the uploader's channel name or profile, then enjoy getting scammed by phishing links online.

  • @johnsarvosky533
    @johnsarvosky533 9 months ago

    Thanks, this is great. Please make a comprehensive video on fine-tuning locally 101. Cheers

  • @alfamari7675
    @alfamari7675 4 months ago

    So according to the description, Llama 3 killed DeepSeek Coder, Wizard, and Mistral? I just started getting into this stuff recently, and those were some of the top-performing models I'd heard about (though they existed before Llama 3).

  • @rumali_roti7406
    @rumali_roti7406 5 months ago

    You added models in the description but didn't specify their usage. Can you add more details, please?

  • @rusticagenerica
    @rusticagenerica 21 days ago

    May God bless you for this super clear video. When will you update it for the end of 2024?

  • @magfal
    @magfal 10 months ago +2

    The one thing I hope to see soon is offloading different layers to different GPUs.
    I have a 4090 Mobile in my laptop and an RX 6800 in my eGPU.
    I also have 96 GB of system memory in addition to these two 16 GB cards, so I can do some fun stuff already.

  • @u13e12
    @u13e12 7 months ago

    Just to clarify then: for inference, speed matters more, so GDDR6 beats GDDR5; but for fine-tuning, having 2x the amount of GDDR5 beats the faster GDDR6?
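
  Roughly, yes: single-stream decoding is memory-bandwidth-bound, because every generated token has to stream the entire (quantized) weight set through the GPU once, while capacity is what caps fine-tuning. A back-of-envelope sketch of that rule of thumb (real throughput lands below this bound due to overheads):

      def peak_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
          """Upper-bound decode speed: each token reads all weights once."""
          return bandwidth_gb_s / model_gb

      # e.g. a 7B model quantized to ~4 GB on a card with ~500 GB/s of GDDR6
      print(f"~{peak_tokens_per_sec(500, 4):.0f} tokens/s upper bound")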

  • @fra4897
    @fra4897 10 months ago +2

    What about ollama as a backend - what's your take on that? Thank you so much for the video, sending love from Switzerland.

  • @aketo8082
    @aketo8082 8 months ago +1

    Thank you. Very interesting. Is it possible to work with your own files in LM Studio? Or to create your own LLM, or extend an LLM for your own use cases?

  • @artursvancans9702
    @artursvancans9702 10 months ago +5

    You pay $20 for convenience. Spending a day setting up the flow, waiting two minutes for your model to load every time you have a quick question, your GPU + CPU setting your room on fire because of how hot they run... Unless you have some really specific use case that cloud models censor, it's just easier to pay the $20 for instant access. (The raw electricity numbers are sketched below this thread.)

    • @thatguyalex2835
      @thatguyalex2835 9 months ago +6

      Patience is a virtue. I got Mistral 7B running on a 2018 laptop, and it takes two minutes to respond, but it works well. Why have 8 GB of RAM if I never use all of it? The AI uses all my RAM. :) But for people who have to use AI for a job, $20 is cheap, and workplaces cover the cost. For AI at home, a fast enough computer can work.

    • @JohnDoe-jt7ns
      @JohnDoe-jt7ns 25 days ago

      @@thatguyalex2835 I'm new to this, but what are you using AI for at home?
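
  For the trade-off debated in this thread, the electricity side is easy to sketch; the wattage, usage hours, and price below are illustrative assumptions, not measurements:

      # Illustrative monthly cost of local inference vs. a $20 subscription.
      gpu_watts = 350        # assumed draw under load
      hours_per_day = 2      # assumed usage
      usd_per_kwh = 0.30     # assumed electricity price

      monthly_kwh = gpu_watts / 1000 * hours_per_day * 30
      print(f"~${monthly_kwh * usd_per_kwh:.2f}/month in electricity")
      # ~$6.30/month at these assumptions; hardware amortization, not power,
      # is usually the bigger cost of going local.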

  • @Anthonyg5005
    @Anthonyg5005 10 months ago +1

    EXL2 does support AMD GPUs. Turbo bought a couple just to make sure it runs with ROCm.

  • @hardik4942
    @hardik4942 4 months ago

    What's the best for investigation and data analysis?

  • @NeostormXLMAX
    @NeostormXLMAX 9 months ago +1

    Anyway, this video was very helpful, because nobody had made it clear what the best front-end interfaces to install are. I kept trying to make one myself to no avail, and gave up after a while of testing stuff in the command prompt.

  • @samuelpeery
    @samuelpeery 6 months ago

    Total newbie at running an LLM locally. What is the best LLM for summarizing books and being able to ask questions about them?

  • @felipetesta
    @felipetesta 4 months ago

    Is there any way I can set up a local AI that can access the PDF files in my university folder and help me summarize and introduce the topics I have to study, using the PDFs as the primary source of content?
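
  What this comment describes is retrieval-augmented generation (RAG). A minimal sketch, assuming pypdf for extraction and sentence-transformers for embeddings (common choices, not the only ones; the model name is one such assumption):

      from pypdf import PdfReader
      from sentence_transformers import SentenceTransformer, util

      embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedder

      def pdf_chunks(path: str, chunk_chars: int = 1000) -> list[str]:
          """Extract a PDF's text and split it into fixed-size chunks."""
          text = "".join(p.extract_text() or "" for p in PdfReader(path).pages)
          return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

      def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
          """Return the k chunks most similar to the question."""
          scores = util.cos_sim(embedder.encode(question), embedder.encode(chunks))[0]
          return [chunks[int(i)] for i in scores.argsort(descending=True)[:k]]

  The retrieved chunks then get pasted into the local model's prompt as context for the actual question.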

  • @joseph-ianex
    @joseph-ianex 9 months ago

    With local models, are you able to get much longer responses, given that you have enough RAM and VRAM?

  • @lintalyor6535
    @lintalyor6535 6 months ago

    I like this simple explanation and the video editing, thanks!

  • @kernsanders3973
    @kernsanders3973 10 months ago

    In regards to context, would LLM LoRAs help with that? Let's say I'm busy with a story-writing LLM, and the fantasy world I'm working with is as big as something like Middle-earth from LOTR. Would a LoRA help with that? Like, if I train a LoRA on all our past chat history about the story, plus more text covering the lore of places, the history of characters, and family trees - would that help keep the context small, so I don't need to keep a detailed summarized chat history? What would the requirements be for training such a LoRA, and what's the minimum text dataset needed for coherent training?
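
  A LoRA can absorb style and recurring lore, though it is less reliable than retrieval for recalling specific facts, and most guides suggest at least a few thousand training examples. The wiring itself is short; a sketch assuming the Hugging Face transformers + peft stack (the base model name is illustrative):

      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base = "mistralai/Mistral-7B-v0.1"  # illustrative base model
      model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
      tokenizer = AutoTokenizer.from_pretrained(base)

      # Low-rank adapters on the attention projections; only these weights train.
      config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                          target_modules=["q_proj", "v_proj"],
                          task_type="CAUSAL_LM")
      model = get_peft_model(model, config)
      model.print_trainable_parameters()  # typically well under 1% of the base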

  • @proflead
    @proflead 4 months ago +1

    A video about fine-tuning a model would be nice!

  • @vladislava5237
    @vladislava5237 10 months ago +1

    Very nice, tons of useful info
    Thank you!

  • @a.........1._.2..__..._.....__
    @a.........1._.2..__..._.....__ 9 months ago +1

    I've been ham-fisting my way through LLMs for over a year, just ramming squares into circles till it worked, since the information is so sporadic.
    100% checking out your other videos. Learned more in 5 minutes than in 4 hours of reading GitHub docs.

  • @FUHADEm
    @FUHADEm 8 months ago

    I don't have a strong GPU - do you recommend any services I can run models on?

  • @trolik9113
    @trolik9113 8 months ago

    Absolutely fantastic and informative video. Well done! I will say, I feel like the information speaks to the grip OpenAI has, especially from a development standpoint, despite the whole video being about open-source models.
    The procedures, time, research, and money required for any rando or small (even mid-size) business owner to integrate open-source, local AI without practical knowledge of it make it near impossible. OpenAI wraps up RAG, "fine-tuning", and memory nice and neat into Assistants, which can easily be called via the API. It would be amazing to have a completely standardized system that allows for the same type of application but geared toward the variety of open-source models out there. Some platforms like NatDev let you compare multiple models on the same input. Being able to see how RAG and fine-tuning affect different models, both open-source and not, from the same platform would be unreal.

  • @tja9212
    @tja9212 10 months ago +1

    timecode 1:18 is a very questionable use of footage

  • @tutacat
    @tutacat 2 months ago

    You don't need fine-tuning; just do more prompting.
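
  This is the usual counterpoint: for many tasks, a handful of in-context examples gets you most of what a fine-tune would. A sketch of the pattern (the examples are placeholders):

      # Few-shot prompting: show the model the pattern instead of training it in.
      few_shot = """Classify the sentiment as positive or negative.

      Review: The battery dies in an hour. -> negative
      Review: Best purchase I've made all year. -> positive
      Review: {review} ->"""

      prompt = few_shot.format(review="The screen is gorgeous but it overheats.")
      # Send `prompt` to whichever local backend you use (ollama, LM Studio, ...).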

  • @ericcadena2030
    @ericcadena2030 8 months ago

    Where is the diagram at 8:50 from?

  • @NeostormXLMAX
    @NeostormXLMAX 9 months ago +1

    I spent so much time trying to get something like this set up, but ended up back on GPT. Most of these models are censored just like GPT, and unlike GPT they are much slower; on top of that, they can't use plugins or special APIs that let you access the internet, generate images, etc. It's sad, but currently GPT has no peer.

  • @4.0.4
    @4.0.4 10 months ago +2

    Dunno why my comment isn't going through, but try Kobold! Better for GGUF. My current fav is "Crunchy Onion" at Q4_K_M. Give it a taste! 10 tok/s on a 3090, and pretty smart.
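
  For GGUF models like the one recommended here, llama-cpp-python exposes the same llama.cpp engine that KoboldCpp builds on. A minimal sketch (the model path is a placeholder), including the knob that controls CPU offloading:

      from llama_cpp import Llama

      # n_gpu_layers controls offloading: layers that don't fit in VRAM stay on CPU.
      llm = Llama(model_path="crunchy-onion.Q4_K_M.gguf",  # placeholder path
                  n_ctx=4096,       # context window
                  n_gpu_layers=35)  # lower this if you run out of VRAM

      out = llm("Q: What is GGUF? A:", max_tokens=64)
      print(out["choices"][0]["text"])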

  • @fxstation1329
    @fxstation1329 10 months ago

    I'm a noob when it comes to this. I've come across ollama and started using it. Can I upload multiple things - texts, and possibly images - to Chat with RTX and create my own data? And will it be uncensored? What are some other good alternatives to Chat with RTX?

  • @cristianionascu
    @cristianionascu 9 months ago

    I guess my machine - a 2019 Intel iMac - is not good enough, because running any model locally lags way behind ChatGPT 3, Gemini, Perplexity, etc.

  • @RedOneM
    @RedOneM 10 months ago +1

    Which 3 models do you recommend for 24 GB of VRAM? Preferably using 21-22 GB of the 24 GB in practice.

    • @nyxilos9167
      @nyxilos9167 10 months ago +2

      Hugging Face lists models with their respective memory requirements. Any 7B model will likely work very well and stay under 21 GB. You could also go with a bigger model at a lower quantization. Mistral models are among the most popular, open-source, and very competitive. (A rough fit check is sketched below this thread.)
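
  A quick way to sanity-check what fits, following up on this thread: GGUF quant names map to rough bits per weight (the figures below are approximations), plus a few GB of overhead for context and runtime buffers:

      # Rough GGUF fit check; bits-per-weight values are approximate.
      BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q8_0": 8.5, "fp16": 16.0}

      def fits(params_b: float, quant: str, vram_gb: float,
               overhead_gb: float = 3.0) -> bool:
          """overhead_gb loosely covers KV cache and runtime buffers."""
          need = params_b * BPW[quant] / 8 + overhead_gb
          print(f"{params_b:.0f}B {quant}: ~{need:.0f} GB needed")
          return need <= vram_gb

      fits(70, "Q4_K_M", 24)  # ~45 GB - no
      fits(34, "Q4_K_M", 24)  # ~23 GB - just barely
      fits(7, "Q8_0", 24)     # ~10 GB - easily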

  • @christopheralvarez1090
    @christopheralvarez1090 10 months ago

    Where do I upload the photo once GTC comes around ?

  • @shakta108
    @shakta108 8 months ago

    What do you think of the Phi models?

  • @AshishKumar-kv4hr
    @AshishKumar-kv4hr 10 months ago +3

    Are you the same as fireship?

    • @swaggitypigfig8413
      @swaggitypigfig8413 10 months ago +4

      Different human being

    • @I_SEE_RED
      @I_SEE_RED 10 months ago +1

      it’s fireship experimenting with 100% channel automation

  • @dustindustir521
    @dustindustir521 10 months ago

    Step 4 is clear, but how can I unlock step 3?
    I only see question marks.
    Do I have to do steps 1 and 2 to unlock what I have to do at step 3,
    or do I just need to gain more XP for the unlock?
    Maybe I just have to do step 4 twice to make up for the missing third step...

  • @WINTERMUTE_AI
    @WINTERMUTE_AI 10 months ago

    LM Studio and Trinity 1.2 are my favorite non-GPT entities!

  • @NoidoDev
    @NoidoDev 6 months ago

    The important part for me is accessing it from the CLI or Python - ideally doing the whole configuration there too, because I need it automated (no Node.js, of course).

  • @squfucs
    @squfucs 10 months ago

    I run LM Studio and I think it's great. Good video, my dude.

  •  10 months ago +1

    You did not name the countries you are able to ship to for the giveaway. Is it worldwide?

    • @bycloudAI
      @bycloudAI  10 months ago +5

      I'll pay whatever the shipping costs,
      unless the country is unshippable, like North Korea.

    •  10 months ago

      @@bycloudAI Thank you for this information, and also for the amazing content that you are putting out ♥

  • @siddhubhai2508
    @siddhubhai2508 5 months ago +1

    Oh, Fireship's second hidden channel! 😂😂

  • @dungeon4971
    @dungeon4971 10 months ago +2

    what about ollama

  • @dipereira0123
    @dipereira0123 3 months ago

    To be fair, at 8:53, the 10 bucks you'd be "saving" by running your LLM locally instead of paying for GitHub Copilot will probably show up in your energy bill instead (your GPU will be working at max capacity), and let's not even talk about the time it takes to set it all up.
    Unfortunately... the AI revolution is something that will be in the hands of big corps.

  • @alan_yong
    @alan_yong 10 months ago

    🎯 Key Takeaways for quick navigation:
    00:28 *🤖 Running AI chatbots and LLMs locally provides flexibility and avoids subscription costs.*
    00:43 *📊 Choosing the right user interface (UI) for local AI model usage is crucial, depending on individual needs.*
    02:05 *🖥️ Oobabooga is a versatile UI choice for running AI models locally, supported across various operating systems and hardware.*
    02:33 *💡 Installing Oobabooga enables access to free and open-source models on Hugging Face, simplifying the model selection process.*
    05:18 *🤔 Context length is crucial for AI models' effectiveness, affecting their ability to process prompts accurately.*
    06:12 *⚙️ CPU offloading allows running large models even with limited VRAM, leveraging CPU and system RAM resources.*
    06:52 *🚀 Hardware-acceleration frameworks like the vLLM inference engine and TensorRT-LLM speed up model inference significantly.*
    07:36 *🎓 Fine-tuning models with tools like QLoRA enables customization for specific tasks, enhancing AI capabilities.*
    08:47 *💰 Running LLMs locally offers cost savings and customization options, making it an attractive option in the AI landscape.*
    Made with HARPA AI

  • @adamofigueroa
    @adamofigueroa 10 months ago

    I just have a question: why is this channel so similar to Fireship? Are you the same person? : )

  • @FlafyDev
    @FlafyDev 10 months ago +1

    this isn't fireship.. where am I?

    • @myname-mz3lo
      @myname-mz3lo 9 months ago

      Same. The thumbnail got me, and then I realized this guy took Fireship's entire style.

  • @hyposlasher
    @hyposlasher 10 months ago +1

    2:17 Bro lives in the future where M4 is already released

  • @XdekHckr
    @XdekHckr 6 months ago

    What would be the best LLM for math?

  • @fennecthechoosenone5189
    @fennecthechoosenone5189 10 months ago +4

    KoboldCpp crying in the corner

  • @D0J0Master
    @D0J0Master 10 months ago

    How do local models compare to cloud ones like OpenAI's? Wouldn't a local PC get way worse results? A server farm can have way more VRAM and hence be better?

    • @joseph-ianex
      @joseph-ianex 9 months ago +2

      I'm getting roughly GPT-3.5 performance on my laptop with 16 GB of RAM and an RTX 3060. I primarily use it because I feel like commercial AI chatbots are getting more and more censored.

    • @MrBoxerbone
      @MrBoxerbone 9 months ago

      @@joseph-ianex Can you share which model you're using? I have a laptop with those exact specs.

    • @joseph-ianex
      @joseph-ianex 9 months ago +1

      @@MrBoxerbone *RTX 3050 Ti. Most 7B models run fine; you can try Mistral, Gemma, or Llama 2. Get either ollama (command line) or LM Studio (UI) to run the model. If you're new to running models, I'd recommend LM Studio. The models are a bit slow and the context window is pretty small, but they run. Pinokio is another cool tool if you want to test out open-source AI art tools 👍

  • @Saeed_al-moumen
    @Saeed_al-moumen 7 months ago +2

    My brain hurts (I only reached 4:08 - I just watched the video to see if there was anything I needed to know about SillyTavern, since that's what I searched for, but I don't think there is anything more).

  • @つロつ
    @つロつ 10 months ago

    Besides saving money, are there any other reasons to run locally vs. spending $20 a month on ChatGPT?

    • @nyxilos9167
      @nyxilos9167 10 months ago +1

      privacy mainly

    • @lodyllog
      @lodyllog 10 months ago

      Privacy and reliability - with a local LLM you don't depend on anyone else's infrastructure.

    • @voidsofold
      @voidsofold 10 months ago

      Privacy; it's not filtered, so you can do more with it, and you won't see random dips in quality based on the whims of investors.

  • @DanielHayes-p2u
    @DanielHayes-p2u 3 months ago

    The thumbnail style is just like Fireship's.

  • @Good.Idea.Zlovakia
    @Good.Idea.Zlovakia 5 months ago

    So is LM Studio trusted?

  • @LinkEX
    @LinkEX 5 months ago

    1:32 I typed "i am new to github" into my search bar, and sure enough, the autocompletion suggested the thread title.
    Came for the replies, which were tamer and fewer than I had expected.
    I initially thought this was an older meme image and you had merely reused the screenshot.
    But since the original post was in fact posted 5 months ago (like this video), and the screenshot was taken 15 minutes after the post, I conclude you probably frequent r/github.

  • @abdelkaioumbouaicha
    @abdelkaioumbouaicha 10 months ago

    📝 Summary of Key Points:
    📌 The video discusses the landscape of AI services in 2024, highlighting the abundance of hiring freezes and the prevalence of subscription-based AI services.
    🧐 Various user interfaces for running AI chatbots and language models locally are explored, including Oobabooga, SillyTavern, LM Studio, and Axolotl.
    🚀 The importance of choosing the right model format, understanding context length, and utilizing CPU offloading for running local language models efficiently is emphasized.
    💡 Additional Insights and Observations:
    💬 "Garbage in, garbage out" is a crucial principle when fine-tuning AI models, emphasizing the significance of quality training data.
    📊 Different model formats like GGUF, AWQ, and EXL2 are explained, showcasing how they optimize model size and performance.
    📣 Concluding Remarks:
    The video provides a comprehensive guide on running AI chatbots and language models locally, emphasizing the importance of model selection, context length, and fine-tuning techniques. Understanding these key aspects can help individuals navigate the AI landscape effectively and optimize performance while saving costs.
    Generated using TalkBud

  • @narpwa
    @narpwa 10 months ago

    What happened to the newsletter????

  • @Afro__Joe
    @Afro__Joe 6 months ago

    Lol, nobody reads anymore, judging by these comments. Ooh, shiny picture, click!
    Thanks for the info - I was looking for a video like this yesterday.

  • @Kevin.Kawchak
    @Kevin.Kawchak 7 months ago

    Thank you for the discussion.

  • @plagiats
    @plagiats 10 months ago

    Ollama + Open WebUI is the way to go. Same UI as ChatGPT, plenty of convenient functions. It's a no-brainer.

  • @Phobos11
    @Phobos11 9 months ago

    Changing the precision of a model barely affects its accuracy; it's nothing like "lobotomizing" it, a term better reserved for models intentionally trained to have capabilities removed.

  • @Dilfin90
    @Dilfin90 10 months ago

    I am from Russia, can I participate in the contest?

  • @twelvecatsinatrenchcoat
    @twelvecatsinatrenchcoat 10 months ago

    Which model is best for uh... y'know... stuff...

    • @Сергей-ч9н1ц
      @Сергей-ч9н1ц 9 months ago

      idk if you still need this, but one of the most "fun" models is MLewd

    • @twelvecatsinatrenchcoat
      @twelvecatsinatrenchcoat 9 months ago +1

      @@Сергей-ч9н1ц I don't know what you're talking about, but thank you. This conversation didn't happen.

  • @GraveUypo
    @GraveUypo 6 months ago

    Running LMs on Linux and Windows: for some reason unknown (to me), Linux is over 5 times as fast as Windows at prompt evaluation. It's not even close.

  • @knoopx
    @knoopx 10 months ago

    LM Studio/ollama are probably the simplest ways to get started; not sure why you picked the ones you did.

  • @itisallaboutspeed
    @itisallaboutspeed 10 months ago

    as a car content creator i approve this video

  • @WW-ir7sm
    @WW-ir7sm 10 months ago

    How hard is it to run an LLM on an AMD GPU? Is it still Linux-only hell because of missing driver support?

  • @mzafarr
    @mzafarr 7 months ago

    Please make a video about making our locally running LLMs available for others to use - maybe as our own API that people can call, or a web UI for our local LLM.
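
  A minimal sketch of exposing a local model as your own HTTP API, assuming FastAPI plus the llama-cpp-python backend sketched earlier (the model path is a placeholder); you would still want auth and rate limiting before sharing it:

      from fastapi import FastAPI
      from pydantic import BaseModel
      from llama_cpp import Llama

      app = FastAPI()
      llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

      class Query(BaseModel):
          prompt: str
          max_tokens: int = 128

      @app.post("/generate")
      def generate(q: Query) -> dict:
          out = llm(q.prompt, max_tokens=q.max_tokens)
          return {"text": out["choices"][0]["text"]}

      # Run with: uvicorn server:app --host 0.0.0.0 --port 8000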

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh 10 months ago

    I just really, really like how many serious people have to say "oobabooga".
    It's almost as good a joke on science as when that guy named the seventh planet.

  • @rougeseventeen
    @rougeseventeen 7 months ago

    Thanks, this video is very funny and helpful!

  • @WINTERMUTE_AI
    @WINTERMUTE_AI 10 months ago

    I keep canceling my GPT-4 subscription and then renewing it... "Just when I thought I was out, they pull me back in." GPT-4 reminded me of that line from The Godfather. :)

  • @keffbarn
    @keffbarn 8 months ago +1

    OOOGABOOOOGAAAAH 💪😎🍺

  • @violet-trash
    @violet-trash 7 months ago +1

    Kind of sucks that the GPU brand that works best with AI is the one that skimps on VRAM. 💀

  • @Paulo-ut1li
    @Paulo-ut1li 10 months ago

    Please make a video on how to fine-tune a model using local documents.

  • @rotors_taker_0h
    @rotors_taker_0h 10 months ago +5

    Basically, to understand this video, one should already know everything mentioned in it by heart.

    • @Trahloc
      @Trahloc 10 months ago +1

      Eh, it provides terms to hunt for, and sometimes that's all someone needs: a starting point. The video is short and covers a lot of ground.

    • @MonkeeGeenyuss
      @MonkeeGeenyuss 10 months ago +1

      Dude wants a 16-part lecture to explain it all 😂

    • @rotors_taker_0h
      @rotors_taker_0h 10 months ago +1

      @@MonkeeGeenyuss I mean, I can only follow because I already know it all; I can't imagine someone unfamiliar with it understanding anything from this firehose, lol.