Run ANY Open-Source Model LOCALLY (LM Studio Tutorial)

  • Published Nov 24, 2024

COMMENTS •

  • @matthew_berman
    @matthew_berman 1 year ago +9

    The best discount for Black Friday: bit.ly/46bDM38

    • @MrAndi1281
      @MrAndi1281 1 year ago +6

      Hi Matthew, I love your videos, I've been watching all of them lately, but I have to ask, did you forget the Autogen Expert Tutorial??

    • @amandamate9117
      @amandamate9117 1 year ago +1

      How do I run deepseek-coder-7B in LM Studio? It's perfect for coding, but I don't get good answers. I don't know which Preset (on the right) to use for this model.

    • @SDGwynn
      @SDGwynn 1 year ago +1

      Fake?

    • @Mike-Denver
      @Mike-Denver 1 year ago +1

      It would be great to see how it works with AutoGen and MemGPT. And thank you, Matt, for the great job you're doing! Keep it up!

    • @matthew_berman
      @matthew_berman 1 year ago

      @@MrAndi1281 haha no, but that’ll take a bit longer to put together

  • @jsmythib
    @jsmythib 9 months ago +11

    I just tried the local server in LM Studio... using it and its examples, I had a C# console app set up and talking to it in about 15 minutes. Easiest API to use, maybe ever. So good I came here to mention it! :)
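    (A minimal sketch of the kind of client described above, in Python rather than C#, assuming LM Studio's local server is running with its default OpenAI-compatible endpoint at http://localhost:1234/v1/chat/completions; the port and model name are placeholders for whatever your local setup uses.)

        import requests

        # LM Studio's local server exposes an OpenAI-style chat completions endpoint.
        # The URL and model identifier below are assumptions; check the Server tab in LM Studio.
        url = "http://localhost:1234/v1/chat/completions"
        payload = {
            "model": "local-model",  # placeholder; the currently loaded model responds
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Say hello in one sentence."},
            ],
            "temperature": 0.7,
        }

        response = requests.post(url, json=payload, timeout=120)
        response.raise_for_status()
        print(response.json()["choices"][0]["message"]["content"])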

  • @kanishak13
    @kanishak13 1 year ago +16

    I'm blown away by the possibilities it brings to users who aren't comfortable with the earlier methods.

  • @godned74
    @godned74 1 year ago +17

    LM Studio is awesome. Running the server and operating open-source models from an IDE, I was able to get it to perform pretty much on par with GPT-3, just a bit slower. Running the server is the way to give your LLM the most tokens possible for inference while you formulate your questions around JSON and SPR (sparse priming representation) prompts in the IDE. At one point I had Dolphin 2.2 telling a story for over an hour straight without stopping, and without even repeating itself, until I shut it off. Massive unexplored potential there.

    • @retex73
      @retex73 1 year ago +1

      OMG! What was the quality of the story like? Do you think it could produce readable and enjoyable novels on demand?

    • @bigglyguy8429
      @bigglyguy8429 10 months ago +1

      @@retex73 I too am interested in this magic?

  • @issiewizzie
    @issiewizzie 1 year ago +2

    I've got DiffusionBee for local picture generation,
    so it's about time we had an easy way to use LLMs on our local machine.

  • @Boneless1213
    @Boneless1213 1 year ago +17

    Do you have a running list of the best models for each category? I can't always remember which one you tested last for either coding or uncensored, etc. Thanks for any comments.

  • @64jcl
    @64jcl 1 year ago +22

    In your demo you seem to use only 1 GPU layer. On my "old" Nvidia 2060 with 6GB I can easily do 40 layers on the GPU, and it is very fast with, for example, the Mistral Dolphin 2.2.1 Q5 models. The API feature is brilliant; I use it for developing my own agent, using a system message to give it some interesting features in its output (calling functions).
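    (For readers wondering what the GPU layers setting maps to: LM Studio runs GGUF models via llama.cpp, and the equivalent knob in the llama-cpp-python library is n_gpu_layers. A rough sketch under that assumption; the model path and layer count are examples only, not anything from the video.)

        from llama_cpp import Llama

        # Offload 40 transformer layers to the GPU; 0 means CPU-only inference.
        # Path and values are placeholders; pick what fits your VRAM.
        llm = Llama(
            model_path="models/dolphin-2.2.1-mistral-7b.Q5_K_M.gguf",
            n_gpu_layers=40,
            n_ctx=4096,
        )

        out = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "Explain GPU layer offloading in one sentence."},
            ]
        )
        print(out["choices"][0]["message"]["content"])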

    • @irakmendez9985
      @irakmendez9985 1 year ago +4

      Any link?

    • @bigglyguy8429
      @bigglyguy8429 10 months ago

      @@irakmendez9985 You can search from inside the LM Studio software

  • @svcupc
    @svcupc 1 year ago +3

    This looks much easier than TextGen WebUI. I haven't looked into it, but I hope LM Studio doesn't record my usage for anything. Another interesting thing would be whether we can use AutoGen or MemGPT to extend its capabilities, and whether we can "chat with our own docs" using LM Studio.

  • @travotravo6190
    @travotravo6190 1 year ago +2

    I've been trying this out and it honestly delivers. So easy to run your own AIs!

  • @Appleloucious
    @Appleloucious 9 months ago

    One Love!
    Always forward, never ever backward!!
    ☀️☀️☀️
    💚💛❤️
    🙏🏿🙏🙏🏼

  • @pipoviola
    @pipoviola 1 year ago +4

    That was amazing. You are helping us so much by introducing all these tools. Thank you very much.

  • @ezygoat
    @ezygoat 8 months ago +1

    I accidentally subscribed to you a long time ago, best decision I ever made.

  • @kalvinarts
    @kalvinarts 1 year ago +7

    I know this is very easy to use, but there are plenty of open-source solutions that do the same. It would be good to inform people about the data collection these companies are doing on the users of their software.

    • @64jcl
      @64jcl 1 year ago +1

      Do you know if LM Studio actually collects anything? Has anyone run a packet sniffer to check whether it actually sends packets somewhere?

    • @RZRRR1337
      @RZRRR1337 1 year ago +3

      Like which one? Can you tell us some open source examples?

    • @NoidoDev
      @NoidoDev 4 months ago

      Any recommendation for doing stuff in CLI?!

  • @markelshnops
    @markelshnops 1 year ago +13

    It would be a little more useful if the system allowed you to upload documents so you could perform actions like summarization.

  • @kai_s1985
    @kai_s1985 1 year ago +7

    Do they have a document upload feature, so that we can chat about our documents like with custom GPTs?

  • @ydmoskow
    @ydmoskow 1 year ago +1

    We are about 1 year into GPT pandemonium, and the momentum is only getting faster and everything is getting easier.

  • @Axxis270
    @Axxis270 1 year ago

    I have been yelling about this and Faraday (my favorite) for quite some time now, but for some reason you never see any of the AI channels telling you about them. These are the easy-to-use programs that the majority of AI users want.

  • @TheZanzz27
    @TheZanzz27 1 year ago +11

    I like how, with nearly no context, "Mario" just pumped out a romance novel scene.....

    • @leandrotami
      @leandrotami 6 months ago +1

      OMG, I stopped the video to read it, and honestly I never imagined Mario in such a context. What did he do to Peach!? Is it even Peach!?

    • @productivitygod7887
      @productivitygod7887 3 months ago

      That was wild as hell, Mario and his taboo activities.

  • @youtubetruthlife4750
    @youtubetruthlife4750 1 year ago

    It's funny how everything that is "dead simple" is just simple enough for most people to start using. But yes, LM Studio is maybe the best entrance to using open-source LLMs.

  • @MichaelRamkissoon
    @MichaelRamkissoon 1 year ago +9

    Love this!!! Thanks for always giving a walkthrough.

  • @theresalwaysanotherway3996
    @theresalwaysanotherway3996 1 year ago +3

    Looks nice, but I wouldn't rely on it for testing in your videos until you can specify prompt formats (there's a good chance the model might be handicapped by the wrong format; currently it only lets you edit the context, not the full prompt format). Also, it only uses llama.cpp, which means anyone with an Nvidia GPU could double their speed by switching to ExLlamaV2 and EXL2 quants.

  • @temp911Luke
    @temp911Luke 1 year ago +72

    The only problem is... it's a CLOSED-source program, not open source.

    • @etunimenisukunimeni1302
      @etunimenisukunimeni1302 1 year ago +18

      I agree, but from what I'm seeing here, that seems to indeed be the _only_ problem. Which is cool, unless closed source is a showstopper for you.

    • @temp911Luke
      @temp911Luke 1 year ago

      @@etunimenisukunimeni1302 It looks great but unfortunately it is a showstopper, at least for now.

    • @vaisakh_km
      @vaisakh_km 1 year ago

      @@etunimenisukunimeni1302 I guess someone will make an open-source version of this in the near future... maybe not with all the features or this much polish, but mostly.

    • @hrishikeshkumar2264
      @hrishikeshkumar2264 1 year ago +8

      Not sure if the title changed in the last 3 hours, but he only said the models are open source. The closed-source part is only the frontend, which should be fine.

    • @olafge
      @olafge 1 year ago +7

      TBH this is just a UI for open-source models. I can really live with a closed-source product here. There are actually OSS alternatives, so no need to worry.

  • @jaykrown
    @jaykrown 2 months ago +1

    Great video, thank you for explaining this.

  • @imperialGaming.2473
    @imperialGaming.2473 8 months ago

    A GPT killer other than Sora. This is what LLMs will look like in the near future! So excited to get my hands dirty! 😮

  • @Kivalt
    @Kivalt 1 year ago +4

    I'm waiting for an open model to implement OpenAI's function-calling stuff reliably. That would make up for a lot of the difference in intelligence between GPT-4 and open models.

    • @Hypersniper05
      @Hypersniper05 1 year ago

      Have you tried Airoboros? It's trained on function calling and works for me.

    • @14supersonic
      @14supersonic 1 year ago +4

      I'd say at the rate open LLMs are advancing, we'll probably have this ability within a year's time. Although it's nice that we have the framework in place for when that does happen.

    • @raulbrebenaru2211
      @raulbrebenaru2211 1 year ago

      Check out Gorilla open functions

  • @spencerfunk6697
    @spencerfunk6697 1 year ago +4

    Please do a tutorial with this for MemGPT. I've been using LM Studio for a couple of weeks now. I've seen people get MemGPT to work with the server, but some people have issues, me included.

    • @spencerfunk6697
      @spencerfunk6697 1 year ago

      Or with anything that calls an OpenAI API, for that matter. I just really want to try MemGPT and ChatDev with this thing.

  • @ManiSaintVictor
    @ManiSaintVictor 1 year ago +6

    Just in time! Thank you. How is the MemGPT setup process? I’m gonna try this out after work. Thanks.

  • @tobiaswegener1234
    @tobiaswegener1234 1 year ago +4

    Sadly, it's not allowed for commercial use. But it is indeed very easy to install and run.

  • @Pietro-Caroleo-29
    @Pietro-Caroleo-29 1 year ago

    Good afternoon, Mr. Berman... You have a talent for making these videos; you come across as clear as glass. Well done.

  • @donaldparkerii
    @donaldparkerii 1 year ago +4

    I believe that enabling Apple Metal requires specific models that were trained with Apple Metal. Also, if you are on a Mac you can run open -n -a "LM Studio" to spawn multiple instances running different models.
    I am going to try the Linux beta and see if you can get more configuration via CLI for a real server.

  • @danielsmithson6627
    @danielsmithson6627 1 year ago

    Thanks for this video! I was confused about why you hadn't shown or seen this before. LM Studio has been my go-to; it runs fast and has GPU/CPU support. I don't know another tool that works as well.

  • @Leto2ndAtreides
    @Leto2ndAtreides 1 year ago

    TheBloke also gives recommendations for which models to use or not use, not necessarily just which one is the biggest that you can run.

  • @fossil98
    @fossil98 1 year ago +6

    10:04
    😂 Indeed. I think we know what it's finetuned on hahaha.

    • @adamstewarton
      @adamstewarton 1 year ago

      Mario is definitely a hor.y little LLM 😂

  • @lukasareskog9230
    @lukasareskog9230 1 year ago +4

    Is it possible to do document retrieval within LM Studio? For example, a chatbot that can chat about .pdfs / .csvs / .txts given to it? If not, would PrivateGPT be a better alternative? It seems very intuitive there.
    Couldn't find anything on Google.

  • @OutdoorsHappiness
    @OutdoorsHappiness 1 year ago +1

    LM Studio looks pretty awesome, great job on giving us a tour. Going to try it, thanks!

  • @Joe_Brig
    @Joe_Brig 1 year ago +1

    Looks good and I'll try it. I'd argue that Ollama is much easier:
    "ollama run mistral" vs.
    open LM Studio, click, click, click, click...

    • @Pyriold
      @Pyriold 1 year ago

      After opening, it's just one click if you use the same model as before. Maybe you have to reload the model, so OK, 2 clicks.

    • @Joe_Brig
      @Joe_Brig 1 year ago

      @@Pyriold How many clicks to find, download, and start a new model?
      Compared to "ollama run vicuna"
      How do you start a model from the terminal?

    • @Pyriold
      @Pyriold 1 year ago

      @@Joe_Brig Is this really relevant? I download a model maybe every few days or weeks, and then it's like 1 minute of work. Using a model is done way more often, and that's practically instant.

  • @ajaypranav1390
    @ajaypranav1390 1 year ago

    Wow, in your previous video I commented about LM Studio and now I see a video on it. Wow, you are the best.

  • @xdasdaasdasd4787
    @xdasdaasdasd4787 1 year ago +1

    Great video! I'd love an LM Studio with MemGPT and AutoGen video if possible.

  • @SYEDNURULHasan1789
    @SYEDNURULHasan1789 10 months ago

    Crisp and concise content...

  • @johnne86sd
    @johnne86sd 10 months ago

    I have a GTX 1660 Ti with 6GB of VRAM, and I got way faster results from my Nvidia card when setting n_gpu_layers to around 20-30 instead of leaving it at 0. I haven't tried anything higher than that, but the difference was night and day. I tried it mostly on 7B Q4_K_M/S models around 4-5GB.

  • @unajoh6472
    @unajoh6472 7 months ago

    This is such a helpful tutorial. Thank you so much!

  • @Buddylee-7
    @Buddylee-7 1 year ago +2

    I wish they would add the chat-with-your-docs feature.

  • @debashispanigrahi676
    @debashispanigrahi676 9 months ago

    Super one! Thanks for this video!

  • @007topless
    @007topless 10 months ago

    This was actually a really good video.

  • @DikHi-fk1ol
    @DikHi-fk1ol 1 year ago +1

    Off-topic question: how can I save a fine-tuned model that I fine-tuned using GradientAI to run it locally?
    Please reply, love your videos!❤❤

  • @yerneroneroipas8668
    @yerneroneroipas8668 1 year ago +2

    Mario started writing 50 shades of grey for you 💀

  • @nasimobeid2945
    @nasimobeid2945 1 year ago +1

    Awesome content as always!

  • @rakly347
    @rakly347 1 year ago +2

    Those "should work" etc. labels aren't based on your system; they're about compatibility with the LM Studio app (GGUF models).
    I have 128GB of system RAM and 40GB of VRAM, and it also shows the 30GB+ required warning.

    • @daryl804
      @daryl804 25 days ago

      That is a setting you can turn off or make less sensitive. The keyword is "safeguard", I believe.

  • @HishamAl-Sanawi
    @HishamAl-Sanawi 1 year ago

    Brilliant! Thank you Matthew, and thank you LM Studio.

  • @paveljanetka2864
    @paveljanetka2864 1 year ago +2

    Thanks for the video. Could you please advise how to work with local documents with the model?

  • @peterwan816
    @peterwan816 11 months ago

    It's really damn good XD
    It's JUST AWESOME!!!

  • @friendofai
    @friendofai 11 months ago +1

    Would you be able to cover the developer side more in depth? I would like to host it on my local PC but be able to access it from my Android phone.

  •  11 months ago +2

    Is there a way to add your own text files, data files, etc.? So that when using the chat, it also knows the specific info about a subject from the files I provided?

    • @davidhendrie6061
      @davidhendrie6061 9 months ago

      I am also very interested in this. I want to add tons of local video and audio content to the chosen LLM; would love to batch it in. Anyone else doing that sort of thing?

  • @mdekleijn
    @mdekleijn 1 year ago

    Love this! Thanks for sharing.

  • @Parisneo
    @Parisneo 1 year ago +6

    Very cool tool. Thanks for this nice tutorial.
    I wish some day you'd give lollms a try. It has a models zoo and can run multiple types of models, including GGUF, GPTQ, and now AWQ. It has a persona system and can be installed with a single-file install script. It supports basically all remote and local LLMs. As its name suggests, it is built to support everything that crawls out there. It can be used to generate text, images, and audio. It has an extension system (WIP), and it took me a hell of a lot of effort to make. It is 100% free under the Apache 2.0 license, and there is documentation on my modest YouTube channel. I think you can present it way better than I do :).

    • @stickmanland
      @stickmanland 1 year ago

      Look who's here, guys!

    • @jtabox
      @jtabox 1 year ago

      lollms is absolutely worth giving a try; I installed it and have been using it for a week now. It has so many features and functions it's almost unbelievable that it's just a couple of devs behind it all. It's a bit rough around the edges in some aspects, but still very much functional, and new additions and bugfixes are published constantly. It's been my favorite so far.

  • @Steve.Jobless
    @Steve.Jobless 1 year ago +4

    Running the open-source models, but the software itself is not open source, lol

  • @samybenzekry
    @samybenzekry 3 months ago

    Great video. Have you happened to "toy" with the RAG option of this tool? If you've done a video on it, I would gladly watch it. Thank you, very quick and instructive.

  • @theh1ve
    @theh1ve 1 year ago +3

    What telemetry does it capture/send?

    • @silentwindstudio
      @silentwindstudio 8 months ago +1

      On their website they say they collect no data from users; we can only pray that this is true lol

  • @mutleyeng
    @mutleyeng 6 months ago

    I'm a complete coding/computer numpty and got it running fine. The question I don't know the answer to is how to take a basic base model and add learning to it. It told me it can extract information from webpages, but it doesn't seem very effective.

  • @sadeghnakhjavani1986
    @sadeghnakhjavani1986 3 months ago

    Good job!

  • @mishlaev
    @mishlaev 9 months ago

    Thank you for your tutorial and the channel. It would be nice if you could teach how to process files with LM Studio. For example, I have an email (HTML) that I want to parse and structure. It would be interesting to learn all the details of how to tune temperature, tokens, context window, etc.
    Thanks

  • @infinitytrading-ai
    @infinitytrading-ai 1 year ago +1

    Can you make a tutorial on how to run and test local LLM models on a Linux server for business use? Also on using vector embeddings to allow way more data to chat with?

  • @Ilan-Aviv
    @Ilan-Aviv 2 months ago

    Love your videos :)))

  • @parthwagh3607
    @parthwagh3607 11 months ago +1

    Can you please provide a specification for a $2400 PC build that will run AI models locally as fast as possible at that price? What should we consider when building a PC solely for running AI models locally and rarely gaming? What really helps run these models fastest locally? Please provide related information as well. I want to build a PC with a budget of $2400. Thank you.

  • @bobbytables6629
    @bobbytables6629 11 months ago +1

    LM Studio lacks local documents. What a bummer; I will continue to use GPT4All.

  • @aminalyaquob1387
    @aminalyaquob1387 5 months ago

    Awesome review! I wonder how to constrain the LLM to read and analyze local files?

  • @seakyle8320
    @seakyle8320 1 year ago +1

    1. Is LM Studio itself open source?
    2. Are they sending user data to their servers?
    3. What's the best uncensored model?

    • @merlinwarage
      @merlinwarage 1 year ago +1

      No, no, Mistral.

    • @kotykd6212
      @kotykd6212 1 year ago

      @@merlinwarage You don't know if they're sending user data, and the best uncensored model is subjective, but in almost all cases Dolphin is the best uncensored one.

  • @dylanalliata4809
    @dylanalliata4809 1 year ago

    Very well done.

  • @TomM-p3o
    @TomM-p3o 1 year ago +1

    LM Studio is great. My only issue with it is that it has a very small font that I haven't found a way to change.

  • @steveyantis
    @steveyantis 7 months ago

    Thanks!

  • @zikrullah1101
    @zikrullah1101 7 months ago

    Awesome, man, thanks for that.

  • @propolipropoli
    @propolipropoli 6 months ago

    Best Video Ever

  • @FrankSchwarzfree
    @FrankSchwarzfree 1 year ago

    I love using LMStudio, but I need a newer computer. Everything works great.

  • @chrisbraeuer9476
    @chrisbraeuer9476 1 year ago

    This is awesome.

  • @jsmythib
    @jsmythib 10 months ago +1

    Just tried LM Studio... I have AI at home. I didn't think that was possible.

  • @theh1ve
    @theh1ve 1 year ago +2

    Hmm what has Mario been up to 😂😂😂

  • @RoadTo19
    @RoadTo19 1 year ago

    I would be curious to watch a comparison with Pinokio.

  • @beeeev
    @beeeev 1 year ago +1

    But can you fine-tune the models, or have them access your private documents locally on your computer?

  • @trashboat2821
    @trashboat2821 1 year ago

    Awesome! Are you going to create a video on OpenAI's upcoming "create your own GPT"? Would love a video covering that, and exploring any alternatives using Mistral or Llama (i.e. open source).

  • @stickmanland
    @stickmanland 1 year ago +5

    This is too powerful. And remember, with great power comes great responsibility.
    I just wish they had a feature to add prompt formats to the API. It just makes everything harder if you cannot specify the prompt format when working with apps like ChatDev and Aider.

    • @Amejonah
      @Amejonah 1 year ago

      They have it, you just need to select the preset or make your own.

    • @Teh-Gaz
      @Teh-Gaz 1 year ago

      Have you been talking to Uncle Ben AI lately? lol

    • @stickmanland
      @stickmanland 1 year ago +1

      @@Amejonah You can't do that with the API

    • @mbottambotta
      @mbottambotta 1 year ago

      @@stickmanland That must be because they offer the OpenAI API, which doesn't really need that. Or does it?

    • @stickmanland
      @stickmanland 1 year ago

      @@mbottambotta Open-source models require a prompt format to work correctly. When used with apps like Aider and ChatDev (which are made for ChatGPT, which does not have a prompt format), the model gives weird results due to not having the proper prompt format.
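      (To illustrate what "prompt format" means here, a sketch only, not LM Studio's internals: many open models expect chat turns wrapped in a model-specific template such as ChatML, and the exact special tokens vary per model.)

          # Rough illustration of a ChatML-style template used by several open models;
          # treat the special tokens as an example, not a universal format.
          def format_chatml(messages):
              prompt = ""
              for m in messages:
                  prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
              prompt += "<|im_start|>assistant\n"  # ask the model to continue as the assistant
              return prompt

          messages = [
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Write a haiku about plumbing."},
          ]
          print(format_chatml(messages))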

  • @lucademarco5969
    @lucademarco5969 1 year ago +1

    Is it possible to upload documents and query them? If yes, can you show how? Is it also available through the API server? Thanks in advance!

  • @RichardGetzPhotography
    @RichardGetzPhotography 1 year ago +3

    Matthew, can these models be downloaded to an external drive and used from there? Can you set up agents? Is there no capability to upload files? Can you report how the M processor does against a GPU? How well does the locally run dev version scale? Obviously it depends on the size of the computer it's running on, but will it handle multiple requests from developers?

    • @just..someone
      @just..someone 1 year ago +1

      You can definitely have the models on a separate drive, which is super useful. Not sure about the rest, but to the last question: via the API mode (which emulates the style of the OpenAI API) you can have several requests, which then get queued up one after the other.

    • @RichardGetzPhotography
      @RichardGetzPhotography 1 year ago +1

      @@just..someone thanks for the reply

  • @aketo8082
    @aketo8082 7 months ago

    Looks great, thank you. But LM Studio doesn't work with your own text, PDF, or DOCX files, right? Also, no dialogue mode is possible.
    Is there a video that shows how to create your own LLM? Thank you.

  • @BabylonBaller
    @BabylonBaller 1 year ago +1

    I would love to install this, but I don't think it has a local web option like Gradio, which would allow me to access it from any device on my network or from outside my home through a local IP/port.

    • @Pyriold
      @Pyriold 1 year ago

      It has a local server mode; that was shown in the video.

    • @BabylonBaller
      @BabylonBaller 1 year ago

      @@Pyriold I did see that, but from the video it seems it's an API-backend type of connection only, not a connection with a GUI and complete usage where you can simply browse to the IP and port and see the entire front end, like you can with Oobabooga and Automatic1111.

  • @TrevorMatthews
    @TrevorMatthews 1 year ago +2

    Thanks @matthew_berman. One challenge I haven't solved yet is moving an environment. At the office I have the OK to explore LLM potential BUT within the existing software and hardware constraints. My PC is good enough, but our network is so locked down that none of the scripts can pull down requirement files and libraries. I'd need to set up an environment on an 'internet facing' computer and then be able to move it. And run it. Is that possible??

    • @OpenLLM4All
      @OpenLLM4All 1 year ago +2

      You could try using a VM. I noticed a company called Massed Compute has VMs specifically for Matthew. All of the tools he has used in his videos are pre-loaded.

  • @SDGwynn
    @SDGwynn 1 year ago

    Will watch. But question… Ollama or LM Studio?

  • @abdussamed107
    @abdussamed107 11 months ago +1

    First, I want to thank you for sharing useful AI content.
    The LM Studio software was a key step in bringing AI assistants closer to customers and consumers.
    I have been using the software as well and was recently experimenting with the Dolphin Mistral 2.2.1 LLM, and after a while I wondered what the token count 4984/2048 at the bottom right, below the chat input, means. As far as I understand, it's some sort of counter of how many tokens the LLM has already written and answered, but why does it matter? Is the chat history fed into the language model each time we enter something new, and does this happen somehow behind the scenes? If these language models work like this, I would understand that the natural limit on the input the language model supports is also the maximum size of the chat history.
    I am not very familiar with LLMs and have just started experimenting with them. Could someone please explain why the token count (yxcd/yxcd) number is there and how it affects the assistant's performance or the chat?
    Thanks in advance
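    (The question above is roughly right: OpenAI-style chat APIs, including local servers, are stateless, so the client resends the conversation each turn, and the total has to fit the model's context window, which is what the second number in the counter refers to. A minimal sketch of the idea; count_tokens here is a made-up stand-in for a real tokenizer.)

        # Keep the resent chat history within a fixed context window.
        def count_tokens(message):
            return len(message["content"].split())  # crude stand-in for a real tokenizer

        def trim_history(messages, max_tokens=2048):
            system, rest = messages[0], messages[1:]
            # Drop the oldest turns until everything fits the window.
            while rest and count_tokens(system) + sum(count_tokens(m) for m in rest) > max_tokens:
                rest.pop(0)
            return [system] + rest

        history = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me about context windows."},
        ]
        history = trim_history(history)  # this is what actually gets sent each turn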

    • @m12652
      @m12652 8 months ago

      How many millions of tons of carbon are being wasted listening to these models apologise?

  • @jack-IR
    @jack-IR 7 months ago

    You got the subscribe for the last part.

  • @JorgeGiro
    @JorgeGiro 7 months ago

    One thing I don't really understand is where I should place the PHP files if I want to use PHP and curl to access the local instance of the model.

  • @joserodolfobeluzo3100
    @joserodolfobeluzo3100 10 months ago

    How can I do fine-tuning with my own context? Is there any video where you explain it? That would be amazing! I tried LM Studio! So easy! Thanks a lot!

  • @Lukevapeur
    @Lukevapeur 2 months ago

    Pausing to read the Mario response... *wheeze laugh*

  • @wilkerribeiro1997
    @wilkerribeiro1997 1 year ago

    Could you explain more about how that "Apple Metal" configuration works? Is it only for models trained on apple metal? What changes if it is enabled or not?

    • @Pyriold
      @Pyriold 1 year ago

      I think training and inference are totally decoupled, so it doesn't matter how the model was trained; you can use whatever hardware you like for inference.

  • @BetterThanTV888
    @BetterThanTV888 1 year ago

    Great video. How would you host this on a provider like Linode or AWS?

  • @profittaker6662
    @profittaker6662 7 months ago

    Can you make a video about how to set up that localhost server, with the Python and curl versions?

  • @rogerbruce2896
    @rogerbruce2896 1 year ago

    Quick question: when you download, how do you specify which hard drive to download to?

  • @rajvora2876
    @rajvora2876 1 year ago

    Would love some tips on which specs can run it, or some recommendations on laptops.

  • @mrait
    @mrait 10 months ago

    cool video

  • @RZRRR1337
    @RZRRR1337 1 year ago

    Is there any playground studio like that but for commercial LLMs, where you put in your API keys and can play with Anthropic, OpenAI, and Cohere models in one interface?

  • @bigglyguy8429
    @bigglyguy8429 10 months ago

    I've been having a lot of fun with LM Studio. I'd have paid for it, so plenty happy that it's free :)

  • @WINTERMUTE_AI
    @WINTERMUTE_AI 1 year ago

    Very cool. GPT and I recently parted ways on bad terms. I want a machine that follows my instructions and caters to my needs without its own opinion getting in the way. Which model would work best as an AI friend? Specifically one that will agree with the FACTS I give it, without argument.