Stop paying for ChatGPT with these two tools | LMStudio x AnythingLLM

  • Published Nov 16, 2024

COMMENTS • 518

  • @codygaudet8071
    @codygaudet8071 8 months ago +178

    Please do a dedicated video on training minimal base models for specific purposes. You're a legend. Also a video on commercial use and licensing would be immensely valuable and greatly appreciated.

    • @Al-Storm
      @Al-Storm 8 months ago +7

      +1

    • @akram5960
      @akram5960 8 months ago +9

      Where to start on the path of learning AI (LLMs, RAG, generative AI, ...)?

    • @fxstation1329
      @fxstation1329 8 months ago +1

      +1

    • @vulcan4d
      @vulcan4d 7 months ago +1

      Yes!

    • @nasirkhansafi8634
      @nasirkhansafi8634 6 months ago

      Very nice question, I am waiting for the same. I wish Tim would make that video soon.

  • @PCFix411RetroZone
    @PCFix411RetroZone 8 months ago +6

    I’m just about to dive into LM Studio and AnythingLM Desktop, and let me tell you, I’m super pumped! 🚀 The potential when these two join forces is just out of this world!

  • @sitedev
    @sitedev 8 months ago +11

    I'd love to hear more about your product roadmap, specifically how it relates to the RAG system you have implemented. I've been experimenting a lot with Flowise, and the new LlamaIndex integration is fantastic, especially the various text summarisation and content refinement methods available with a LlamaIndex-based RAG. Are you planning to enhance the RAG implementation in AnythingLLM?

  • @kylequinn1963
    @kylequinn1963 8 months ago +7

    This is exactly what I've been looking for. Now, I'm not sure if this is already implemented, but if the chat bot can use EVERYTHING from all previous chats within the workspace for context and reference... My god that will change everything for me.

    • @TimCarambat
      @TimCarambat 8 months ago +2

      It does use the history for context and reference! History, system prompt, and context - all at the same time and we manage the context window for you on the backend

    • @IrakliKavtaradzepsyche
      @IrakliKavtaradzepsyche 8 months ago

      @@TimCarambat But isn't history actually constrained by the active model's context size?

    • @TimCarambat
      @TimCarambat 8 months ago +4

      @@IrakliKavtaradzepsyche Yes, but we manage the overflow automatically so you at least don't crash from token overflow. This is common for LLMs: truncating or manipulating the history for long-running sessions.

    • @avepetro8380
      @avepetro8380 2 months ago

      So is this strictly for LLMs? Is it like an AI assistant?

  • @bradcasper4823
    @bradcasper4823 8 months ago +4

    Thank you, I've been struggling for so long with problematic things like privateGPT etc., which gave me headaches. I love how easy it is to download models and add embeddings! Again, thank you.
    I'm very eager to learn more about AI, but I'm an absolute beginner. Maybe a video on how you would learn from the beginning?

  • @VanSocero
    @VanSocero 5 months ago +1

    The potential of this is near limitless so congratulations on this app.

  • @JohnRiley-r7j
    @JohnRiley-r7j 8 months ago +2

    Great stuff. This way you can run a good smaller conversational model like a 13B or even a 7B, like Laser Mistral.
    The main problem with these smaller LLMs is massive holes in some topics, or information about events, celebs, and other stuff; this way you can make your own database about the stuff you want to chat about.
    Amazing.

  • @_lull_
    @_lull_ 3 months ago +1

    You deserve a Nobel Peace Prize. Thank you so much for creating Anything LLM.

  • @continuouslearner
    @continuouslearner 8 months ago +7

    So if we need to use this programmatically, does AnythingLLM itself offer a 'run locally on server' option to get an API endpoint that we could call from a local website, for example? i.e. local website -> POST request -> AnythingLLM (local server + PDFs) -> LM Studio (local server, foundation model)

    • @clinbrokers
      @clinbrokers 8 months ago

      Did you get an answer?
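
The pipeline sketched in the question above (local website -> POST -> AnythingLLM -> LM Studio) works because AnythingLLM can run as a local server with a developer API. This sketch only assembles the request; the port (3001), endpoint path, and payload keys are assumptions from its docs, and `build_chat_request` is an illustrative helper, so verify everything against your install's own API documentation page.

```python
import json

# NOTE: the port, endpoint path, and payload keys below are assumptions.
ANYTHINGLLM_CHAT_URL = "http://localhost:3001/api/v1/workspace/{slug}/chat"

def build_chat_request(api_key: str, slug: str, message: str):
    """Assemble URL, headers, and JSON body for a workspace chat call."""
    url = ANYTHINGLLM_CHAT_URL.format(slug=slug)
    headers = {
        "Authorization": f"Bearer {api_key}",  # key from AnythingLLM settings
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "mode": "chat"})
    return url, headers, body
```

Your website backend would POST that body to the URL; AnythingLLM then handles retrieval over your documents and forwards the prompt to whatever LLM provider (e.g. LM Studio) the workspace is configured with.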

  • @NigelPowell
    @NigelPowell 8 months ago +3

    Mm...doesn't seem to work for me. The model (Mistral 7B) loads, and so does the training data, but the chat can't read the documents (PDF or web links) properly. Is that a function of the model being too small, or is there a tiny bug somewhere? [edit: got it working, but it just hallucinates all the time. Pretty useless]

  • @TazzSmk
    @TazzSmk 8 months ago +8

    Thanks for the tutorial, everything works great and surprisingly fast on an M2 Mac Studio, cheers!

  • @jimg8296
    @jimg8296 7 months ago

    Just got this running and it's fantastic. Just a note that LM Studio uses the API key "lm-studio" when connecting using Local AI Chat Settings.

    • @thegoat10.7
      @thegoat10.7 7 months ago

      Does it provide a script for YouTube?
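
As a concrete sketch of the note above: LM Studio's local server speaks the OpenAI chat-completions dialect, so a client only needs the literal `lm-studio` key in the Authorization header. Port 1234 is LM Studio's usual default and the model name is a placeholder; both are assumptions to adjust for your setup.

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # default port, adjust if changed

def build_request(prompt: str, model: str = "local-model"):
    """Return (headers, body) for a chat completion against LM Studio."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer lm-studio",  # the key the comment above mentions
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    })
    return headers, body

def send_chat(prompt: str) -> str:
    """POST the request; requires LM Studio's local server to be running."""
    headers, body = build_request(prompt)
    req = urllib.request.Request(LMSTUDIO_URL, data=body.encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

This is the same request shape AnythingLLM sends on your behalf when you pick LM Studio as the provider.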

  • @autonomousreviews2521
    @autonomousreviews2521 8 months ago +11

    Fantastic! I've been waiting for someone to make RAG smooth and easy :) Thank you for the video!

  • @fieldpictures1306
    @fieldpictures1306 7 months ago

    Thanks for this; about to try it to query legislation and case law for a specific area of UK law, to see if it is effective at returning references to relevant sections and key case law. Interested in building a private LLM to assist with specific repetitive tasks. Thanks for the video.

  • @continuouslearner
    @continuouslearner 8 months ago +3

    Also, how is this different from implementing RAG on a base foundation model, chunking our documents, and loading them into a vector DB like Pinecone? Is the main point here that everything runs locally on our laptop? Would it work without internet access?

  • @stanTrX
    @stanTrX 6 months ago

    IMO AnythingLLM is much more user-friendly and really has big potential. Thanks Tim!

  • @scchengaiah4904
    @scchengaiah4904 1 month ago

    Really awesome stuff. Thank you for bringing such quality aspects and making it open-source.
    Could you please help me understand how the RAG pipeline in AnythingLLM works?
    For example:
    If I upload a PDF with multimodal content, or want my document to be embedded in a semantic way, or want to use multi-vector search, can we customize such advanced RAG features?

  • @Augmented_AI
    @Augmented_AI 8 months ago +1

    How well does it perform on large documents? Is it prone to the lost-in-the-middle phenomenon?

    • @TimCarambat
      @TimCarambat 8 months ago

      That is more of a "model behavior" and not something we can control.

  • @vivekkarumudi
    @vivekkarumudi 8 months ago +11

    Thanks a ton... you are giving us the power to work with our local documents. It's blazingly fast to embed the docs, with super fast responses, and all in all I am very happy.

    • @ashleymusihiwa
      @ashleymusihiwa 8 months ago

      That's liberating! I was really concerned about privacy, especially when coding or refining internal proposals. Now I know what to do.

    • @BarryFence
      @BarryFence 7 months ago

      What type of processor/GPU/model are you using? I'm using version 5 of Mistral and it is super slow to respond. i7 and an Nvidia RTX 3060ti GPU.

  • @catwolf256
    @catwolf256 6 months ago +2

    To operate a model comparable to GPT-4 on a personal computer, you would currently need around 60GB of VRAM. That roughly means three 24GB graphics cards, each costing between $1,500 and $2,000. Therefore, equipping a PC to run a similar model would cost more than 25 years' worth of a ChatGPT subscription at $20 per month, or $240 per year.
    Although there are smaller LLMs available, such as 8B or 13B models requiring only 4-16GB of VRAM, they don't compare favorably even with the freely available GPT-3.5.
    Furthermore, with OpenAI planning to release GPT-5 later this year, the hardware requirements to match its capabilities on a personal computer are expected to be even more demanding.

    • @TimCarambat
      @TimCarambat 6 months ago +6

      Absolutely. Closed-source and cloud-based models will always have a performance edge. The kicker is: are you comfortable with their limitations on what you can do with them, paying for additional plugins, and the exposure of your uploaded documents and chats to a third party?
      Or get 80-90% of the same experience with whatever the latest and greatest OSS model is, running on your own CPU/GPU, with none of that concern. They're just two different use cases; both should exist.

    • @catwolf256
      @catwolf256 6 months ago

      @@TimCarambat While using versions 2.6 to 2.9 of Llama (dolphin), I've noticed significant differences between it and ChatGPT-4. Llama performs well in certain areas, but ChatGPT generally provides more detailed responses. There are exceptions where Llama may have fewer restrictions due to being less bound by major-company policies, which can be a factor when dealing with sensitive content like explosives or explicit materials. However, while ChatGPT has usage limits and avoids topics like politics and explicit content, some providers offer unrestricted access through paid services. And realistically, most users (over 95%) might try these services briefly before discontinuing their use.

    • @Betttcpp
      @Betttcpp 4 months ago +2

      Get a PCIe NVMe SSD. I have 500GB of "swap" that I labeled as ram3. Ran a 70B like butter with the GPU at 1%, only running the display. Also, you can use a $15 riser and add graphics cards. You should have like 256GB on the GPU, but you can also VRAM-swap, though that isn't necessary because you shouldn't rip anywhere near 100GB at once. Split up your processes: instead of just CPU and RAM, use the CPU to send commands to anything with a chip, and attach a storage device directly to it. The PC has 2 x 8GB RAM naturally. You can even use an HDD; it is just a noticeable drag of under 1 GB/s. There are many more ways to do it; once I finish the seamless container pass I will have an OTB software solution for you. Swap rate and swappiness will help if you have solid-state storage.

    • @catwolf256
      @catwolf256 4 months ago

      @@Betttcpp Yes, you can modify or add to your PC to run an LLM, but it's still not worth doing, because most people who play around with LLMs only use them for a short period of time, a month or so max.
      What I am saying is that paying over $5,000 for an LLM build is not worth it compared to paying $20 per month and enjoying the fun.

    • @PaulCuciureanu
      @PaulCuciureanu 3 months ago

      @@catwolf256 could be worth it if you make it available to all your friends and get them to pay you instead ;-)
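
The cost comparison in this thread is easy to sanity-check. A back-of-envelope sketch using the commenter's own figures (three 24GB cards at $1,500 to $2,000 each, vs. a $20/month subscription); these are rough estimates, not benchmarks:

```python
# How many years of a $20/month subscription a local rig's price buys.
def breakeven_years(rig_cost: float, monthly_sub: float = 20.0) -> float:
    return rig_cost / (monthly_sub * 12)

low = breakeven_years(3 * 1500)   # cards only, low estimate: 18.75 years
high = breakeven_years(3 * 2000)  # cards only, high estimate: 25.0 years
```

So the GPUs alone land at roughly 19 to 25 years of subscription cost, before counting the rest of the machine.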

  • @Helios1st
    @Helios1st 6 months ago +1

    Wow, great information. I have a huge amount of documents, and every time I search for something it's such a difficult task.

    • @morespinach9832
      @morespinach9832 4 months ago

      And what have you found with this combination of dumb tools? Search through documents is crazy slow with LM Studio and AnythingLLM.

  • @cosmochatterbot
    @cosmochatterbot 8 months ago +6

    Absolutely stellar video, Tim! 🌌 Your walkthrough on setting up a locally run LLM for free using LM Studio and Anything LLM Desktop was not just informative but truly inspiring. It's incredible to see how accessible and powerful these tools can make LLM chat experiences, all from our own digital space stations. I'm particularly excited about the privacy aspect and the ability to contribute to the open-source community. You've opened up a whole new universe of possibilities for us explorers. Can't wait to give it a try myself and dive into the world of private, powerful LLM interactions. Thank you for sharing this cosmic knowledge! 🚀👩‍🚀

  • @giovanith
    @giovanith 8 months ago +3

    I loaded a simple txt file, embedded it as presented in the video, and asked a question about a topic within the text. Unfortunately it seems the model doesn't know anything about the text. Any tip? (Mistral 8-bit, RTX 4090 24 GB).

    • @MrZaarco
      @MrZaarco 8 months ago

      Same here, plus it hallucinates like hell :)

  • @viveks217
    @viveks217 8 months ago +3

    I have tried, but could not get it to work with the files that were shared as context. Am I missing something? It gives answers like "the file is in my inbox, I will have to read it", but never actually reads the file.

    • @_skiel
      @_skiel 8 months ago

      I'm also struggling. Sometimes it refers to the context, but most of the time it forgets it has access even though it's referencing it.

  • @MusicByJC
    @MusicByJC 7 months ago +1

    I am a software developer but am clueless when it comes to machine learning and LLMs. What I was wondering: is it possible to train a local LLM by feeding in all of your code for a project?

  • @PswACC
    @PswACC 7 months ago +2

    The biggest challenge I am having is getting the prompt to provide accurate information that is included in the source material; the interpretation is just wrong. I have pinned the source material and I have also played with the LLM temperature, but still can't get an accurate chat response that aligns with the source material. I also tried setting chat mode to Query, but it typically doesn't produce a response. Another thing that is bothering me is that I can't delete the default thread that sits first under the workspace.

  • @claudiantenegri2612
    @claudiantenegri2612 8 months ago +3

    Very nice tutorial! Thanks Tim,

  • @monbeauparfum1452
    @monbeauparfum1452 25 days ago

    Bro, this is exactly what I was looking for. Would love to see a video of the cloud option at $50/month

    • @TimCarambat
      @TimCarambat 25 days ago

      @@monbeauparfum1452 have you tried the desktop app yet (free)

  • @jakajak1991
    @jakajak1991 6 months ago

    I get this response every time:
    "I am unable to access external sources or provide information beyond the context you have provided, so I cannot answer this question".
    Mac mini
    M2 Pro
    Cores:10 (6 performance and 4 efficiency)
    Memory:16 GB

  • @thualfiqar87
    @thualfiqar87 8 months ago +1

    That's really amazing 🤩, I will definitely be using this for BIM and Python

  • @LiebsterFeind
    @LiebsterFeind 8 months ago +9

    LM Studios TOS paragraph:
    "Updates. You understand that Company Properties are evolving. As a result, Company may require you to accept updates to Company Properties that you have installed on your computer or mobile device. You acknowledge and agree that Company may update Company Properties with or WITHOUT notifying you. You may need to update third-party software from time to time in order to use Company Properties.
    Company MAY, but is not obligated to, monitor or review Company Properties at any time. Although Company does not generally monitor user activity occurring in connection with Company Properties, if Company becomes aware of any possible violations by you of any provision of the Agreement, Company reserves the right to investigate such violations, and Company may, at its sole discretion, immediately terminate your license to use Company Properties, without prior notice to you."
    Several posts on LLM Reddit groups with people not happy about it. NOTE: I'm not one of the posters, read-only, I'm just curious what others think.

    • @TimCarambat
      @TimCarambat 8 months ago +6

      Wait, so their TOS basically says they may or may not monitor your chats in case you are up to no good, with no notification?
      Okay, I see why people are pissed about that. I don't like that either, unless they can verifiably prove the "danger assessment" is done on-device, because otherwise this is no better than cloud hosting, except you're paying for it with your own resources.

    • @TimCarambat
      @TimCarambat 8 months ago +4

      Thanks for bringing this to my attention btw. I know _why_ they have it in the ToS, but I cannot imagine how they think that will go over.

    • @LiebsterFeind
      @LiebsterFeind 8 months ago

      An ancient clash between wanting to be a good "software citizen" and the unfortunate fact that their intent is still to "monitor" your activities. As you said in your second reply to me, "monitoring" does not go over well with some, and considering the intent behind it, even if potentially justified, is a subsequent thought they will refuse to entertain. @@TimCarambat

    • @alternate_fantasy
      @alternate_fantasy 7 months ago

      @@TimCarambat Let's say there is monitoring going on in the background; what if we set up a VM that is not allowed to connect to the internet, will that make our data safe?

    • @TimCarambat
      @TimCarambat 7 months ago +1

      @@alternate_fantasy It would prevent phone-homes, sure, so yes. That being said, I have Wireshark'd LMStudio while running and did not see anything sent outbound that would indicate they can view anything like that. I think that's just their lawyers being lawyers.

  • @MCSchuscha
    @MCSchuscha 8 months ago +1

    Changing the embedding model would be a good tutorial! For example, how to use a multilingual model!

  • @xevenau
    @xevenau 8 months ago +10

    A software engineer with AI knowledge? You've got my sub.

  • @olivierstephane9232
    @olivierstephane9232 8 months ago +2

    Excellent tutorial. Thanks a bunch😊

  • @AC-go1tp
    @AC-go1tp 7 months ago

    Thank you so much for your generosity. I wish the very best for your enterprise. God bless!

  • @Dj-Mccullough
    @Dj-Mccullough 7 months ago

    I had a spare 6800 XT sitting around that had been retired for overheating for no apparent reason, as well as a semi-retired Ryzen 2700X, and I found 32 gigs of RAM sitting around for the box. Just going to say flat out that it is shockingly fast. I actually think running ROCm to enable GPU acceleration for LM Studio runs LLMs better than my 3080 Ti in my main system, or at the very least so similarly that I can't perceive a difference.

  • @2010Sisko
    @2010Sisko 5 days ago

    That was a really good video. Thank you so much.

  • @rowbradley
    @rowbradley 7 months ago

    Thanks for building this.

  • @MrAmirhk
    @MrAmirhk 6 months ago

    Can't wait to try this. I've watched a dozen other tutorials that were too complicated for someone like me without basic coding skills. What are the pros/cons of setting this up with LMStudio vs. Ollama?

    • @TimCarambat
      @TimCarambat 6 months ago

      If you don't like to code, you will find the UI of lmstudio much more approachable, but it can be an information overload. Lmstudio has every model on huggingface. Ollama is only accessible via terminal and has limited model support but is dead simple.
      This video was made before we launched the desktop app. Our desktop comes with ollama pre-installed and gives you a UI to pick a model and start chatting with docs privately. That might be a better option since that is one app, no setup, no cli or extra application

  • @lmt125
    @lmt125 1 month ago

    This is great! So would we always have to run LM Studio before running AnythingLLM?

    • @TimCarambat
      @TimCarambat 29 days ago

      If you wanted to use LMStudio, yes. There is no specific order, but both need to be running, of course.

  • @drew5834
    @drew5834 7 months ago

    Great work Tim, I'm hoping I can introduce this or anything AI into our company

  • @alfata72
    @alfata72 4 months ago

    Thank you for your simple explanation.

  • @pabloandrescaceresserrano263
    @pabloandrescaceresserrano263 5 months ago +1

    Absolutely great!! thank you!!!

  • @williamsoo8500
    @williamsoo8500 6 months ago

    Awesome man. Hope to see more videos with AnythingLLM!

  • @djkrazay7791
    @djkrazay7791 8 months ago

    This is an amazing tutorial. Didn't know there were that many models out there. Thank you for clearing the fog. I have one question though, how do I find out what number to put into "Token context window"? Thanks for your time!

    • @TimCarambat
      @TimCarambat 8 months ago +1

      Once it's pulled into LMStudio, it's in the sidebar once the model is selected. It's a tiny little section on the right sidebar that says "n_ctxt" or something similar to that. You'll then see it explains how many tokens your model can handle at max, RAM permitting.

    • @djkrazay7791
      @djkrazay7791 8 months ago

      @@TimCarambat you're the best... thanks... 🍻

  • @TheDroppersBeats
    @TheDroppersBeats 7 months ago

    @Tim, this episode is brilliant! Let me ask you one thing. Do you have any ways to force this LLM model to return the response in a specific form, e.g. JSON with specific keys?
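
A common workaround for the JSON question above, when a local model has no native JSON mode, is to demand JSON in the system prompt and then validate and retry. A minimal sketch; the key names and retry count are arbitrary, and `llm` stands in for whatever function calls your model and returns its text reply:

```python
import json

SYSTEM_PROMPT = (
    "Respond ONLY with a JSON object with keys 'answer' and 'confidence'. "
    "No prose, no markdown fences."
)

def parse_json_reply(raw: str):
    """Strip accidental code fences, parse, and check keys; None on failure."""
    cleaned = raw.strip()
    cleaned = cleaned.removeprefix("```json").removeprefix("```")
    cleaned = cleaned.removesuffix("```").strip()
    try:
        obj = json.loads(cleaned)
    except json.JSONDecodeError:
        return None
    return obj if isinstance(obj, dict) and {"answer", "confidence"} <= set(obj) else None

def ask_json(llm, question: str, retries: int = 2):
    """Ask up to retries + 1 times until the model produces valid JSON."""
    for _ in range(retries + 1):
        result = parse_json_reply(llm(SYSTEM_PROMPT, question))
        if result is not None:
            return result
    raise ValueError("model never produced valid JSON")
```

Lowering the temperature also helps smaller local models stick to the requested format.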

  • @icometofightrocky
    @icometofightrocky 3 months ago

    Great video, very well explained!

  • @immersift7856
    @immersift7856 8 months ago +1

    Looks so good! I have a question: is there some way to add a chat flow diagram like Voiceflow or Botpress?
    For example, guiding the discussion for an e-commerce chatbot and giving multiple choices when asking questions?

    • @TimCarambat
      @TimCarambat 8 months ago

      I think this could be done with just some clever prompt engineering. You can modify the system prompt to behave in this way. However, there is no voiceflow-like experience built-in for that. That is a clever solution though.

  • @CrusaderGeneral
    @CrusaderGeneral 8 months ago +1

    That's great, I was getting tired of the restrictions in the common AI platforms.

  • @batboyboy
    @batboyboy 29 days ago

    THANKS! How can I put this on a local website?

  • @shabbirug
    @shabbirug 7 months ago

    Excellent work. Please make a video on text-to-SQL, and on Excel/CSV/SQL support for LLMs and chatbots. Thank you so much ♥️

  • @BotchedGod
    @BotchedGod 8 months ago

    AnythingLLM looks super awesome, can't wait to set it up with Ollama and give it a spin. I tried Chat with RTX but the YouTube upload option didn't install for me, and that was all I wanted it for.

  • @HugoRomero-mq7om
    @HugoRomero-mq7om 6 months ago

    Very useful video!! Thanks for the work. I still have a doubt about the chats that take place: is there any record of the conversations? For commercial purposes it would be nice to generate leads with your own chat!

    • @TimCarambat
      @TimCarambat 6 months ago

      Absolutely, while you can "clear" a chat window you can always view all chats sent as a system admin and even export them for manual analysis or fine-tuning.

  • @jamesmiths72
    @jamesmiths72 1 month ago

    Wow, what a great tool. Congratulations and thank you.
    Can you make a video explaining the license and commercial use for selling this to clients? Thank you.

    • @TimCarambat
      @TimCarambat 29 days ago

      License is MIT, not much more to explain :)

  • @Mursaat100
    @Mursaat100 8 months ago +1

    Thanks for the video!
    I did it as you said and got the model working (same as the one you picked). It ran faster than I expected and I was impressed with the quality of the text and the general understanding of the model.
    However, when I uploaded some documents [in total just 150 kb of downloaded HTML from a wiki] it gave very wrong answers [overwhelmingly incorrect]. What can I do to improve this?

    • @TimCarambat
      @TimCarambat 8 months ago +1

      Two things help by far the most!
      1. Changing the "Similarity Threshold" in the workspace settings to "No Restriction". This basically allows the vector database to return all remotely similar results, with no filtering applied. The filtering is based purely on the vector-distance "score" between your query and each snippet, and depending on the documents, query, embedder, and other variables, a relevant text snippet can be marked as "irrelevant". Changing this setting usually fixes this with no performance decrease.
      2. Document pinning (thumbtack icon in the UI once a doc is embedded). This does a full-text insertion of the document into the prompt. The context window is managed in case it overflows the model; this can slow your response time by a good factor, but coherence will be extremely high.

    • @Mursaat100
      @Mursaat100 8 months ago

      Thank you! But I don't understand what you mean by "thumbtack icon in UI once doc is embedded". Could you please clarify? @@TimCarambat
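
For anyone curious why loosening the similarity threshold helps: retrieval ranks snippets by vector similarity to the query, and a fixed cutoff can silently discard a snippet that was actually relevant. A toy illustration with hand-made 2-D vectors (real embedders use hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, snippets, threshold=None):
    """snippets: list of (text, vector); threshold=None means 'No Restriction'."""
    scored = sorted(((cosine(query_vec, v), t) for t, v in snippets), reverse=True)
    if threshold is None:
        return [t for _, t in scored]
    return [t for s, t in scored if s >= threshold]
```

With a 0.9 cutoff, a partially matching snippet is dropped before the model ever sees it; with no restriction it still reaches the model, which can then judge relevance itself.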

  • @Chris.888
    @Chris.888 8 months ago

    Nice one Tim. It's been on my list to get a private LLM set up. Your guide is just what I needed. I know Mistral is popular. Are those models listed by capability, with the top being most efficient? I'm wondering how to choose the best model for my needs.

    • @TimCarambat
      @TimCarambat 8 months ago +1

      Those models are curated by the LMStudio team. IMO they are ranked by popularity. However, if you aren't sure which model to choose, go for Llama 2 or Mistral; you can't go wrong with those models as they are all-around capable.

    • @Chris.888
      @Chris.888 8 months ago

      Thanks Tim, much appreciated.

  • @shattereddnb3268
    @shattereddnb3268 4 months ago

    I've been playing around with running local LLMs for a while now, and it's really cool to be able to run something like that locally at all, but it does not come even close to replacing ChatGPT. If there actually were models as smart as ChatGPT to run locally, they would require a very expensive bunch of computers...

  • @craftedbysrs
    @craftedbysrs 6 months ago

    Thanks a lot! This tutorial is a gem!

  • @BudoReflex
    @BudoReflex 8 months ago +1

    Thank you! Very useful info. Subbed.

  • @CaptZenPetabyte
    @CaptZenPetabyte 7 months ago

    I'm on a Linux machine and want to set up some hardware... any recommended GPU (or can you point me in the direction of good information)? Or better yet, can an old bitcoin rig do the job somehow, seeing as they're useless for bitcoin these days?! Great tutorial too mate, really appreciate you taking the time!

  • @O-8-15
    @O-8-15 19 days ago

    Can you make a tutorial on how to get either tool to do TTS for the AI response in a chat? I don't mean speech recognition, just AI voice output.

  • @cee7004
    @cee7004 7 months ago

    Thank you for making this video. This helped me a lot.

  • @bennguyen1313
    @bennguyen1313 8 months ago +4

    I notice some of the models are 25GB+: BLOOM, Meta's Llama 2, Guanaco 65B and 33B, dolphin-2.5-mixtral-8x7b, etc.
    Do these models require training? If not, but you wanted to train one with custom data, does the size of the model grow, or does it just change and stay the same size?
    Aside from LMStudio and AnythingLLM, any thoughts on other tools that attempt to make it simpler to get started, like Oobabooga, gpt4all, Google Colab, llamafile, or Pinokio?

  • @uwegenosdude
    @uwegenosdude 7 months ago

    Thanks, Tim, for the good video. Unfortunately I do not get good results for uploaded content.
    I'm from Germany, so could it be a language problem, since the uploaded content is German text?
    I'm using the same Mistral model from your video and added 2 web pages to AnythingLLM's workspace.
    But I'm not sure if the tools are using this content to build the answer.
    In the LM Studio log I can see a very small chunk of one of the uploaded web pages. But overall, the result is wrong.
    To get good embedding values I downloaded nomic-embed-text-v1.5.Q8_0.gguf and use it for the Embedding Model Settings in LM Studio, which might not be necessary, since you didn't mention such steps in your video.
    I would appreciate any further hints. Thanks a lot in advance.

  • @rosenvladev9654
    @rosenvladev9654 8 months ago +2

    How can I use .py files? It appears they aren't supported.

    • @TimCarambat
      @TimCarambat 8 months ago +2

      If you change them to .txt it will be okay. We basically just need to have all "unknown" types try to parse as text to allow this, since there are thousands of programming text types.
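
The rename workaround above is easy to script. A small sketch (the folder names and `stage_for_upload` helper are illustrative) that copies `.py` files with a `.txt` extension without touching the originals:

```python
import shutil
from pathlib import Path

def stage_for_upload(src_dir: str, out_dir: str = "upload_ready") -> list:
    """Copy every .py file in src_dir to out_dir with a .txt extension."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    staged = []
    for py_file in sorted(Path(src_dir).glob("*.py")):
        target = out / (py_file.stem + ".txt")
        shutil.copyfile(py_file, target)  # originals stay untouched
        staged.append(str(target))
    return staged
```

Point the document uploader at the output folder afterwards; the files parse as plain text.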

  • @proflead
    @proflead 3 months ago

    Well explained! Thanks!

  • @ezdeezytube
    @ezdeezytube 1 month ago

    Instead of dragging files, can you connect it to a local folder? Also, why does the first query work but the second always fail? (it says "Could not respond to message. an error occured while streaming response")

  • @nightmisterio
    @nightmisterio 8 months ago

    Being able to add PDFs in the chat and make pools of knowledge to select from would be great.

  • @PuthethuKollam
    @PuthethuKollam 7 months ago

    Does the locally run LLM have backpropagation?

  • @Djk0t
    @Djk0t 8 months ago +1

    Hi Tim, fantastic. Is it possible to use AnythingLLM with GPT-4 directly, for local use, like the example you demonstrated above?

    • @thedeathcake
      @thedeathcake 8 months ago

      Can't imagine that's possible with GPT-4. The VRAM required for that model would be in the hundreds of GB.

  • @fxstation1329
    @fxstation1329 8 months ago

    Thank you so much for the concise tutorial. Can we use both Ollama and LM Studio with AnythingLLM? It only takes one of them. I have some models in Ollama and some in LM Studio, and would love to have them both in AnythingLLM. I don't know if this is possible though. Thanks!

  • @TheHeraldOfChange
    @TheHeraldOfChange 7 months ago +1

    OK, I'm confused. If I were to feed this a bunch of PDF documents/books, would it then be able to draw on the information contained in those files to answer questions, summarise the info, or generate content based on that info in the same literary/writing style as the initial files? And all 'offline' on a local install? (This is the Holy Grail that I am seeking.)

    • @holykim4352
      @holykim4352 7 months ago

      You can already do this with ChatGPT custom GPTs.

    • @TheHeraldOfChange
      @TheHeraldOfChange 7 months ago

      @@holykim4352 got a link or reference? I've not found any way to do what I want so far. Maybe I misunderstand the process, but I can't seem to find the info I need either. Cheers.

  • @Al-Storm
    @Al-Storm 8 months ago

    Very cool, I'll check it out. Is there a way to not install this on your OS drive?

  • @Equality-and-Liberty
    @Equality-and-Liberty 7 months ago

    I want to try it in a Linux VM, but from what I see you can only make this work on a laptop with a desktop OS. It would be even better if both LMstudio and AnythingLLM could run in one or two separate containers with a web UI

  • @WestW3st
    @WestW3st 8 months ago

    I mean, this is pretty useful already; are there plans to increase the capabilities to include other formats of documents, images, etc.?

  • @stevekirsch8284
    @stevekirsch8284 6 months ago

    Very helpful video. I'd love to be able to scrape an entire website into AnythingLLM. Is there a way to do that?
    Is there a website where I can ask help questions about AnythingLLM?

  • @Jascensionvoid
    @Jascensionvoid 8 months ago +1

    This is an amazing video and exactly what I needed. Thank you! I really appreciate it. Now the one thing: how do I find the token context window for the different models? I'm trying out Gemma.

    • @TimCarambat
      @TimCarambat 8 months ago +3

      up to 8,000 (depends on VRAM available - 4096 is safe if you want best performance). I wish they had it on the model card on HuggingFace, but in reality it just is better to google it sometimes :)

    • @Jascensionvoid
      @Jascensionvoid 8 months ago

      I gotcha. So for the most part, just use the recommended one. I got everything working, but I uploaded a PDF and it keeps saying "I am unable to provide a response to your question as I am unable to access external sources or provide a detailed analysis of the conversation." But the book was loaded, moved to the workspace, and saved and embedded?
      @@TimCarambat

    • @TimCarambat
      @TimCarambat  8 months ago +2

      For what it's worth, in LM Studio there is an `n_cntxt` param in the sidebar that shows the maximum you can run. Performance will degrade if your GPU isn't capable of running the maximum token context, though.
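The server discussed above can also be exercised directly from Python: LM Studio's local server speaks the OpenAI chat-completions format. A minimal sketch, assuming the default port 1234 (the port and model name are assumptions; check the Server tab in LM Studio for your actual values):

```python
import json
from urllib import request

# LM Studio's local server exposes an OpenAI-compatible endpoint.
# Port 1234 is the default; adjust if you changed it in the Server tab.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, max_tokens=512):
    """Build an OpenAI-style chat payload for the local LM Studio server."""
    return {
        "model": "local-model",  # LM Studio serves whatever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # keep prompt + reply inside your context window
        "temperature": 0.7,
    }

def chat(prompt):
    """POST the payload to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(LMSTUDIO_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Keeping `max_tokens` well below the model's context window leaves room for the prompt itself, which matters on GPUs that can't run the maximum token context.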

  • @bhushan80b
    @bhushan80b 6 months ago

    Hi Tim, the citations shown are not correct; it just shows random files. Is there any way to sort this out?

  • @frosti7
    @frosti7 5 months ago

    Awesome, but AnythingLLM won't read PDFs that need OCR like ChatGPT would. Is there a multimodal model that can do that?

    • @TimCarambat
      @TimCarambat  5 months ago +1

      We need to support vision first so we can enable OCR!

  • @snowinjon
    @snowinjon 7 months ago

    I want to ask questions across multiple CSV files. How do I do that?

  • @another_dude_online
    @another_dude_online 7 months ago

    Thanks dude! Great video

  • @muhammadbintang6588
    @muhammadbintang6588 3 months ago +1

    Which matters more: a higher parameter count or higher quantization bits (Q)?

    • @TimCarambat
      @TimCarambat  A month ago

      Models tend to "listen" better at higher quantization.

  • @niswanthskumar1608
    @niswanthskumar1608 4 months ago

    Thanks a lot for the video. Can you please tell me if there is a way to install the model via a USB pen drive (manual install)? The other system I'm trying to install on doesn't have an internet connection. Please reply.

  • @thegoat10.7
    @thegoat10.7 7 months ago

    Can someone explain what these tools are and how they are a replacement for ChatGPT? I have no idea.

  • @FisVii77
    @FisVii77 8 months ago +4

    Can you do more of these demonstrations or videos? Is AnythingLLM capable of generating visual content like DALL·E 3 or video, assuming a capable open-source model is used? Is there a limitation, other than local memory, on the size of the vector databases created? This is amazing ;)
    Thanks for this video, truly appreciated, man. Liked and subscribed to support you.

  • @stephanh1083
    @stephanh1083 8 months ago +1

    Why am I having a problem with docs (PDF, TXT)? I get "Could not respond to message. An error occurred while streaming response, network error." Without docs it works fine with LM Studio. What am I doing wrong? I use an M1 with 16 GB RAM.

    • @opensource1000
      @opensource1000 8 months ago

      Maybe watch the video, then start your server and connect to it.

    • @stephanh1083
      @stephanh1083 8 months ago +1

      @@opensource1000 The server is started and works fine with chat. When I use a PDF or TXT I get that error message and AnythingLLM crashes. Yes, I will watch the video again; maybe I missed something.

  • @Peter-bi4hm
    @Peter-bi4hm 6 months ago

    Did not work for me on Windows 11. I tried to add a local document, but save-and-embed always throws an error: "The specified module could not be found. -> \AppData\Local\Programs\anythingllm-desktop\resources\backend\node_modules\onnxruntime-node\bin\napi-v3\win32\x64\onnxruntime_binding.node". Then I tried to download Xenova all-MiniLM-L6-v2 manually, but the error remains.

  • @arismatic
    @arismatic 7 months ago

    How do I know what token context window number to use?

  • @milorad9301
    @milorad9301 8 months ago

    Hello Tim, could you make a video on connecting Ollama with AnythingLLM?

  • @quietackshon
    @quietackshon 8 months ago

    While there are use cases for this technology, it definitely doesn't make ChatGPT 3.5 obsolete. One downside of this tech is no spell checking. Also, the responses are baked in by the documents integrated, so it becomes inflexible with its responses. Not for me.

  • @avantigaming1627
    @avantigaming1627 8 months ago

    I am trying to access PDFs and documentation present on a website I have given AnythingLLM, but it doesn't seem to be working. Is it possible to do so, or do I need to manually download them from the website and attach them in AnythingLLM?

  • @kvrmd25
    @kvrmd25 8 months ago +1

    Are there plans to improve the AnythingLLM API? I like the built-in RAG web interface, but you're kind of stuck in a chat interface with AnythingLLM...

    • @TimCarambat
      @TimCarambat  8 months ago

      The API is fully available in the Docker version. It will constantly be improved as more features become available - yes, absolutely.

  • @properlogic
    @properlogic 4 months ago

    00:01 Easiest way to run locally and connect LMStudio & AnythingLLM
    01:29 Learn how to use LMStudio and AnythingLLM for a comprehensive LLM experience for free
    02:48 Different quantized models available on LMStudio
    04:14 LMStudio includes a chat client for experimenting with models.
    05:33 Setting up LM Studio with AnythingLLM for local model usage.
    06:57 Setting up LM Studio server and connecting to AnythingLLM
    08:21 Upgrading LMStudio with additional context
    09:51 LM Studio and AnythingLLM enable private end-to-end chatting with open source models
    Crafted by Merlin AI.

  • @karlwireless
    @karlwireless 7 months ago +1

    This video changed everything for me. Insane how easy to do all this now!

  • @TheExceptionalState
    @TheExceptionalState 8 months ago +1

    Many thanks for this. I have been looking for this kind of solution for 6+ months now. Is it possible to create an LLM based solely on, say, a database of 6,000 PDFs?

    • @TimCarambat
      @TimCarambat  8 months ago +2

      A workspace, yes. You could then chat with that workspace over a period of time, use the answers to create a fine-tune, and then you'll have an LLM as well. Either way, it works. There is no limit on documents or embeddings or anything like that.

    • @TheExceptionalState
      @TheExceptionalState 8 months ago

      @@TimCarambat Many thanks! I shall investigate "workspaces". If I understand correctly, I can use a folder instead of a document and AnythingLLM will work with the content it contains. Or was that too simplistic? I see other people are asking the same type of question.

  • @MrNatzu
    @MrNatzu 8 months ago

    Very nice. Will definitely try it. Is there, or will there be, an option to integrate an AnythingLLM workspace into Python code to automate tasks via an API?

    • @TimCarambat
      @TimCarambat  8 months ago

      Yes, but the API is only in the Docker version currently, since that can be run locally and in the cloud, so an API makes more sense for that medium.
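A minimal sketch of that kind of Python automation against the Docker version's API. The base URL, the `/api/v1/workspace/{slug}/chat` endpoint shape, and the Bearer-key auth below are assumptions based on a typical setup; verify them against your own instance's built-in API documentation:

```python
import json
from urllib import request

# Hypothetical values -- replace with your instance URL, an API key
# generated in your AnythingLLM settings, and your workspace slug.
BASE_URL = "http://localhost:3001"
API_KEY = "your-api-key"

def build_workspace_chat(slug, message, mode="chat"):
    """Assemble the URL and payload for a workspace chat call.
    The endpoint path is an assumption -- check your instance's API docs."""
    url = f"{BASE_URL}/api/v1/workspace/{slug}/chat"
    payload = {"message": message, "mode": mode}  # "chat" or "query"
    return url, payload

def send_chat(slug, message):
    """POST a message to a workspace and return the model's text reply."""
    url, payload = build_workspace_chat(slug, message)
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                          headers={"Authorization": f"Bearer {API_KEY}",
                                   "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["textResponse"]
```

Wrapping the call in a small function like this makes it easy to loop over a batch of prompts or wire the workspace into a larger script.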

  • @NaveenKumar-vj9sc
    @NaveenKumar-vj9sc 8 months ago

    Thanks for the insights. What's the best alternative for someone who doesn't want to run locally but still wants to use open-source LLMs for interacting with documents and web scraping for research?

    • @TimCarambat
      @TimCarambat  8 months ago

      OpenRouter has a ton of hosted open-source LLMs you can use. I think a majority of them are free and you just need an API key.

  • @PatJones82
    @PatJones82 7 months ago

    This is all new and cool to me. Is there a way to dump my Evernote database (over 20,000 SOPs) into this? Thinking that would be awesome.

  • @wingwing2683
    @wingwing2683 8 months ago

    It's very helpful. Thank you!

  • @piotr.orzeszek
    @piotr.orzeszek 6 months ago

    What GPU and what model do you have?