Please do a dedicated video on training minimal base models for specific purposes. You're a legend. Also a video on commercial use and licensing would be immensely valuable and greatly appreciated.
+1
Where to start with in the path of learning AI (llm, rag, generative Ai..,)
+1
Yes!
Very nice question i am waiting for the same. Wish Tim make that video soon
I'm just about to dive into LM Studio and AnythingLLM Desktop, and let me tell you, I'm super pumped! 🚀 The potential when these two join forces is just out of this world!
I'd love to hear more about your product roadmap - specifically how it relates to the RAG system you have implemented. I've been experimenting a lot with Flowise, and the new LlamaIndex integration is fantastic - especially the various text summarisation and content refinement methods available with a LlamaIndex-based RAG. Are you planning to enhance the RAG implementation in AnythingLLM?
This is exactly what I've been looking for. Now, I'm not sure if this is already implemented, but if the chat bot can use EVERYTHING from all previous chats within the workspace for context and reference... My god that will change everything for me.
It does use the history for context and reference! History, system prompt, and context - all at the same time and we manage the context window for you on the backend
@@TimCarambat but isn't history actually constrained by the active model's context size?
@@IrakliKavtaradzepsyche yes, but we manage the overflow automatically so you at least don't crash from token overflow. This is common for LLMs: truncating or manipulating the history for long-running sessions.
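For anyone curious what that overflow management looks like in principle, here is a minimal sketch (an illustration only, not AnythingLLM's actual implementation): drop the oldest turns until the remaining history fits a token budget.

```python
# Illustrative sketch only - not AnythingLLM's actual code.
# Keep the most recent messages whose combined size fits the token budget.

def truncate_history(messages, max_tokens, count_tokens):
    """messages: list of {"role", "content"} dicts, oldest first.
    count_tokens: callable estimating the token count of a string."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest -> oldest
        cost = count_tokens(msg["content"])
        if used + cost > max_tokens:
            break  # anything older than this would overflow the window
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

# Crude estimate: English averages roughly 4 characters per token.
history = [{"role": "user", "content": "hello " * 500},
           {"role": "assistant", "content": "hi there"}]
trimmed = truncate_history(history, max_tokens=200,
                           count_tokens=lambda s: len(s) // 4)
print(len(trimmed), "of", len(history), "messages kept")
```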
So is this strictly for LLMs? Is this like an AI assistant?
Thank you, I've been struggling for so long with problematic tools like privateGPT, which gave me headaches. I love how easy it is to download models and add embeddings! Again, thank you.
I'm very eager to learn more about AI, but I'm an absolute beginner. Maybe a video on how you would learn from the beginning?
The potential of this is near limitless so congratulations on this app.
Great stuff. This way you can run a good smaller conversational model like a 13B or even a 7B, like Laser Mistral.
The main problem with these smaller LLMs is massive holes in some topics, or information about events, celebrities, and other things. This way you can build your own database about the stuff you want to chat about.
Amazing.
You deserve a Nobel Peace Prize. Thank you so much for creating Anything LLM.
So if we need to use this programmatically, does AnythingLLM itself offer a 'run locally on server' option to get an API endpoint that we could call from a local website, for example? i.e. local website -> POST request -> AnythingLLM (local server + PDFs) -> LM Studio (local server - foundation model)
Did you get an answer?
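That chain does work in practice: AnythingLLM exposes a developer API (in the Docker version, per Tim's replies further down) that you can POST to, while LM Studio serves the model behind it. A rough sketch of such a call; the endpoint path, payload shape, and port are assumptions to verify against your version's API docs:

```python
import requests

# Hedged sketch: the workspace-chat endpoint, payload, and port below are
# assumptions based on AnythingLLM's developer API - check your version.
ANYTHINGLLM_URL = "http://localhost:3001/api/v1/workspace/my-workspace/chat"
API_KEY = "YOUR-ANYTHINGLLM-API-KEY"  # generated in AnythingLLM's settings

resp = requests.post(
    ANYTHINGLLM_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"message": "Summarize the uploaded PDFs", "mode": "chat"},
    timeout=120,
)
print(resp.json())  # AnythingLLM does the RAG; LM Studio serves the model
```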
Mm...doesn't seem to work for me. The model (Mistral 7B) loads, and so does the training data, but the chat can't read the documents (PDF or web links) properly. Is that a function of the model being too small, or is there a tiny bug somewhere? [edit: got it working, but it just hallucinates all the time. Pretty useless]
thanks for the tutorial, everything works great and surprisingly fast on M2 Mac Studio, cheers!
Just got this running and it's fantastic. Just a note that LM Studio uses the API key "lm-studio" when connecting using Local AI Chat Settings.
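Because LM Studio's local server is OpenAI-compatible, any OpenAI client can talk to it with that key. A minimal sketch, assuming the default port 1234:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; "lm-studio" is the
# placeholder key it accepts. Port 1234 is the default - adjust if changed.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="local-model",  # LM Studio answers with whichever model is loaded
    messages=[{"role": "user", "content": "Hello from a local client!"}],
)
print(reply.choices[0].message.content)
```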
Does it provide a script for YouTube?
Fantastic! I've been waiting for someone to make RAG smooth and easy :) Thank you for the video!
Thanks for this; about to try it to query legislation and case law for a specific area of UK law, to see if it is effective in returning references to relevant sections and key case law. Interested in building a private LLM to assist with specific repetitive tasks. Thanks for the video.
Also, how is this different from implementing RAG on a base foundation model, chunking our documents, and loading them into a vector DB like Pinecone? Is the main point here that everything is run locally on our laptop? Would it work without internet access?
IMO AnythingLLM is much more user-friendly and really has big potential. Thanks, Tim!
Really awesome stuff. Thank you for bringing such quality aspects and making it open-source.
Could you please help me understand how the RAG pipeline in AnythingLLM works, and how efficient it is?
For example:
If I upload a PDF with multimodal content, or if I want my document to be embedded in a semantic way or to use multi-vector search, can we customize such advanced RAG features?
How well does it perform on large documents? Is it prone to the lost-in-the-middle phenomenon?
That is more of a "model behavior" and not something we can control.
Thanks a ton... you are giving us the power to work with our local documents. It's blazingly fast to embed the docs, the responses are super fast, and all in all I am very happy.
That's liberating! I was really concerned about privacy, especially when coding or refining internal proposals. Now I know what to do.
What type of processor/GPU/model are you using? I'm using version 5 of Mistral and it is super slow to respond: an i7 and an Nvidia RTX 3060 Ti GPU.
To operate a model comparable to GPT-4 on a personal computer, you would currently need around 60GB of VRAM. That roughly means three 24GB graphics cards, each costing between $1,500 and $2,000. Equipping a PC to run a similar model would therefore cost roughly 19 to 25 years' worth of a ChatGPT subscription at $20 per month, or $240 per year.
Although there are smaller LLMs (large language models) available, such as 8B or 13B models requiring only 4-16GB of VRAM, they don't compare favorably even with the freely available GPT-3.5.
Furthermore, with OpenAI planning to release GPT-5 later this year, the hardware requirements to match its capabilities on a personal computer are expected to be even more demanding.
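The back-of-the-envelope arithmetic behind that comparison, for anyone who wants to check it:

```python
# Back-of-the-envelope check of the hardware-vs-subscription comparison.
gpus, price_low, price_high = 3, 1_500, 2_000
build_low, build_high = gpus * price_low, gpus * price_high  # $4,500-$6,000

subscription_per_year = 20 * 12  # $240/year for a ChatGPT subscription
years_low = build_low / subscription_per_year    # 18.75 years
years_high = build_high / subscription_per_year  # 25.0 years
print(f"${build_low:,}-${build_high:,} ~= {years_low:.1f}-{years_high:.0f} "
      "years of subscription")
```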
Absolutely. Closed-source and cloud-based models will always have a performance edge. The kicker is: are you comfortable with their limitations on what you can do with them, paying for additional plugins, and the exposure of your uploaded documents and chats to a third party?
Or you can get 80-90% of the same experience with whatever the latest and greatest OSS model is, running on your CPU/GPU, with none of that concern. It's just two different use cases; both should exist.
@@TimCarambat While using versions 2.6 to 2.9 of Llama (Dolphin), I've noticed significant differences between it and ChatGPT-4. Llama performs well in certain areas, but ChatGPT generally provides more detailed responses. There are exceptions where Llama may have fewer restrictions due to being less bound by major company policies, which can be a factor when dealing with sensitive content like explosives or explicit materials. However, while ChatGPT has usage limits and avoids topics like politics and explicit content, some providers offer unrestricted access through paid services. And realistically, most users (over 95%) might try these services briefly before discontinuing their use.
Get a PCIe NVMe SSD. I have 500GB of "swap" that I labeled as ram3. Ran a 70B like butter with the GPU at 1%, only running the display. Also, you can use a $15 riser and add graphics cards. You should have something like 256GB on the GPU, but you can also VRAM-swap, though that isn't necessary because you shouldn't pull anywhere near 100GB at once. Split up your processes. Instead of just CPU and RAM, use the CPU to send commands to anything with a chip, and attach a storage device directly to it. The PC has 2 x 8GB of RAM natively. You can even use an HDD; it's just a noticeable drag, at under 1 GB/s. There are many more ways to do it; once I finish the seamless container pass I will have an out-of-the-box software solution for you. -- Swap rate and swappiness will help if you have solid-state storage.
@@Betttcpp Yes, you can modify or add to your PC to run an LLM locally, but it's still not worth doing, because most people who play around with LLMs only use them for a short period of time. A month or so, max.
What I am saying is that paying over $5,000 for an LLM build is not worth it compared to paying $20 per month and enjoying the fun.
@@catwolf256 could be worth it if you make it available to all your friends and get them to pay you instead ;-)
Wow, great information. I have a huge number of documents, and every time I search for something it becomes such a difficult task.
And what have you found with this combination of dumb tools? Searching through documents is crazy slow with LM Studio and AnythingLLM.
Absolutely stellar video, Tim! 🌌 Your walkthrough on setting up a locally run LLM for free using LM Studio and Anything LLM Desktop was not just informative but truly inspiring. It's incredible to see how accessible and powerful these tools can make LLM chat experiences, all from our own digital space stations. I'm particularly excited about the privacy aspect and the ability to contribute to the open-source community. You've opened up a whole new universe of possibilities for us explorers. Can't wait to give it a try myself and dive into the world of private, powerful LLM interactions. Thank you for sharing this cosmic knowledge! 🚀👩🚀
I loaded a simple txt file, embedded it as presented in the video, and asked a question about a topic within the text. Unfortunately, it seems the model doesn't know anything about the text. Any tips? (Mistral 8-bit, RTX 4090 24GB.)
Same here, plus it hallucinates like hell :)
I have tried, but could not get it to work with the files that were shared as context. Am I missing something? It gives answers like "the file is in my inbox, I will have to read it", but it never actually reads the file.
I'm also struggling. Sometimes it refers to the context, and most of the time it forgets it has access even though it's referencing it.
I am a software developer but am clueless when it comes to machine learning and LLMs. What I was wondering is: is it possible to train a local LLM by feeding in all of the code for a project?
The biggest challenge I am having is getting the prompt to provide accurate information that is included in the source material. The interpretation is just wrong. I have pinned the source material and have also played with the LLM temperature, to no avail; the chat response still doesn't align with the source material. I also tried setting the chat mode to Query, but it typically doesn't produce a response. Another thing that is bothering me is that I can't delete the default thread that sits first under the workspace.
Very nice tutorial! Thanks, Tim.
Bro, this is exactly what I was looking for. Would love to see a video of the cloud option at $50/month
@@monbeauparfum1452 Have you tried the desktop app yet? (It's free.)
I get this response every time:
"I am unable to access external sources or provide information beyond the context you have provided, so I cannot answer this question".
Mac mini, M2 Pro, Cores: 10 (6 performance and 4 efficiency), Memory: 16 GB
That's really amazing 🤩, I will definitely be using this for BIM and Python
LM Studio's TOS paragraph:
"Updates. You understand that Company Properties are evolving. As a result, Company may require you to accept updates to Company Properties that you have installed on your computer or mobile device. You acknowledge and agree that Company may update Company Properties with or WITHOUT notifying you. You may need to update third-party software from time to time in order to use Company Properties.
Company MAY, but is not obligated to, monitor or review Company Properties at any time. Although Company does not generally monitor user activity occurring in connection with Company Properties, if Company becomes aware of any possible violations by you of any provision of the Agreement, Company reserves the right to investigate such violations, and Company may, at its sole discretion, immediately terminate your license to use Company Properties, without prior notice to you."
Several posts on LLM Reddit groups with people not happy about it. NOTE: I'm not one of the posters, read-only, I'm just curious what others think.
Wait, so their TOS basically says they may or may not monitor your chats, in case you are up to no good, with no notification?
Okay, I see why people are pissed about that. I don't like that either, unless they can verifiably prove the "danger assessment" is done on-device, because otherwise this is no better than cloud hosting, except you're paying for it with your own resources.
Thanks for bringing this to my attention btw. I know _why_ they have it in the ToS, but I cannot imagine how they think that will go over.
An ancient clash between wanting to be a good "software citizen" and the unfortunate fact that their intent is still to "monitor" your activities. As you said in your second reply to me, "monitoring" does not go over well with some, and consideration of the intent behind it, even if potentially justified, is a subsequent thought they will refuse to entertain. @@TimCarambat
@@TimCarambat Let's say there is monitoring going on in the background. What if we set up a VM that is not allowed to connect to the internet? Will that make our data safe?
@@alternate_fantasy It would prevent phone-homes, sure, so yes. That being said, I have Wireshark'd LM Studio while it was running and did not see anything sent outbound that would indicate they can view anything like that. I think that's just their lawyers being lawyers.
Changing the embedding model would be a good tutorial! For example, how to use a multilingual model!
A software engineer with AI knowledge? You got my sub.
Excellent tutorial. Thanks a bunch😊
Thank you so much for your generosity. I wish the very best for your enterprise . God Bless!
I had a spare 6800 XT sitting around that had been retired due to overheating for no apparent reason, as well as a semi-retired Ryzen 2700X, and I found 32 gigs of RAM sitting around for the box. Just going to say flat out that it is shockingly fast. I actually think running ROCm to enable GPU acceleration for LM Studio runs LLMs better than the 3080 Ti in my main system, or at the very least so similarly that I can't perceive a difference.
That was a really good video. Thank you so much.
Thanks for building this.
Can't wait to try this. I've watched a dozen other tutorials that were too complicated for someone like me without basic coding skills. What are the pros/cons of setting this up with LMStudio vs. Ollama?
If you don't like to code, you will find the UI of LM Studio much more approachable, but it can be an information overload. LM Studio has every model on Hugging Face. Ollama is only accessible via the terminal and has limited model support, but it is dead simple.
This video was made before we launched the desktop app. Our desktop app comes with Ollama pre-installed and gives you a UI to pick a model and start chatting with docs privately. That might be a better option, since it is one app: no setup, no CLI, no extra application.
This is great! So we would always have to run LM Studio before running AnythingLLM?
If you wanted to use LM Studio, yes. There is no specific order, but both need to be running, of course.
Great work Tim, I'm hoping I can introduce this or anything AI into our company
thank you for your simple explanation
Absolutely great!! thank you!!!
Awesome, man. Hope to see more videos with AnythingLLM!
This is an amazing tutorial. Didn't know there were that many models out there. Thank you for clearing the fog. I have one question though, how do I find out what number to put into "Token context window"? Thanks for your time!
Once it's pulled into LM Studio, it's in the sidebar once the model is selected. It's a tiny little section on the right sidebar that says "n_ctx" or something similar. You'll then see it explain how many tokens your model can handle at max, RAM permitting.
@@TimCarambat you're the best... thanks... 🍻
@Tim, this episode is brilliant! Let me ask you one thing. Do you have any ways to force this LLM model to return the response in a specific form, e.g. JSON with specific keys?
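There's no built-in JSON mode mentioned in this thread, but a common approach with local models is strict prompting plus validation on the caller's side. A hedged sketch, assuming LM Studio's OpenAI-compatible server on its default port; the key names are just examples:

```python
import json
from openai import OpenAI

# Hedged sketch: coax structured output via the system prompt and validate.
# There is no guaranteed JSON mode for arbitrary local models, so in real
# use you would retry on parse failure.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

system = ('Respond ONLY with a JSON object with keys "answer" (string) '
          'and "confidence" (number 0-1). No prose outside the JSON.')
raw = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "system", "content": system},
              {"role": "user", "content": "Is Mistral 7B open weight?"}],
).choices[0].message.content

data = json.loads(raw)  # raises ValueError if the model ignored the format
print(data["answer"], data["confidence"])
```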
Great video, very well explained!
Looks so good! I have a question: is there some way to add a chat flow diagram like Voiceflow or Botpress?
For example, guiding the discussion for an e-commerce chatbot and giving multiple choices when asking questions?
I think this could be done with just some clever prompt engineering. You can modify the system prompt to behave in this way. However, there is no voiceflow-like experience built-in for that. That is a clever solution though.
That's great. I was getting tired of the restrictions on the common AI platforms.
THANKS! How can I host this on a local website?
Excellent work. Please make a video on text-to-SQL and on Excel/CSV/SQL support for LLMs and chatbots. Thank you so much ♥️
AnythingLLM looks super awesome; can't wait to set it up with Ollama and give it a spin. I tried Chat with RTX, but the YouTube upload option didn't install for me, and that was all I wanted it for.
Very useful video!! Thanks for the work. I still have a doubt about the chats that take place: is there any record of the conversations? For commercial purposes it would be nice to generate leads with your own chat!
Absolutely, while you can "clear" a chat window you can always view all chats sent as a system admin and even export them for manual analysis or fine-tuning.
Wow, what a great tool. Congratulations and thank you.
Can you make a video explaining licence and commercial use to sell this to clients? Thank you.
License is MIT, not much more to explain :)
Thanks for the video!
I did it as you said and got the model working (same one you picked). It ran faster than I expected, and I was impressed with the quality of the text and the general understanding of the model.
However, when I uploaded some documents [in total just 150 KB of downloaded HTML from a wiki] it gave very wrong answers [overwhelmingly incorrect]. What can I do to improve this?
Two things help by far the most!
1. Changing the "Similarity Threshold" in the workspace settings to "No Restriction". This basically allows the vector database to return all remotely similar results, with no filtering applied. The filtering is based purely on the vector-database distance between your query and each snippet, and the "score" being filtered on; depending on the documents, query, embedder, and other variables, a relevant text snippet can be marked as "irrelevant". Changing this setting usually fixes this with no performance decrease.
2. Document pinning (the thumbtack icon in the UI once a doc is embedded). This does a full-text insertion of the document into the prompt. The context window is managed in case it overflows the model; this can slow your response time by a good factor, but coherence will be extremely high.
Thank you! But I don't understand what you mean by "thumbtack icon in UI once doc is embedded". Could you please clarify? @@TimCarambat
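To make the similarity-threshold point above concrete: retrieval ranks chunks by a similarity score, and a hard cutoff can silently discard text a human would consider relevant. A toy illustration, not AnythingLLM's internals:

```python
import numpy as np

# Toy illustration of why a similarity threshold can hide relevant text.
# Scores stand in for cosine similarities between a query and chunk embeddings.
def retrieve(scores, threshold=None, top_k=4):
    order = np.argsort(scores)[::-1][:top_k]  # best matches first
    if threshold is None:                     # "No Restriction"
        return order.tolist()
    return [i for i in order if scores[i] >= threshold]

chunk_scores = np.array([0.31, 0.74, 0.18, 0.42, 0.69])
print(retrieve(chunk_scores, threshold=0.5))   # [1, 4] - moderate matches dropped
print(retrieve(chunk_scores, threshold=None))  # [1, 4, 3, 0] - all considered
```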
Nice one, Tim. It's been on my list to get a private LLM set up, and your guide is just what I needed. I know Mistral is popular. Are those models listed by capability, with the top being the most efficient? I'm wondering how to choose the best model for my needs.
Those models are curated by the LM Studio team. IMO they are based on popularity. However, if you aren't sure which model to choose, go for Llama 2 or Mistral; you can't go wrong with those models, as they are capable all-rounders.
Thanks Tim, much appreciated.
I've been playing around with running local LLMs for a while now, and it's really cool to be able to run something like that locally at all, but it does not come even close to replacing ChatGPT. If there actually were models as smart as ChatGPT to run locally, they would require a very expensive bunch of computers...
Thanks a lot! This tutorial is a gem!
Thank you! Very useful info. Subbed.
I'm on a Linux machine and want to set up some hardware... a recommended GPU (or can you point me in the direction of good information)? Or better yet, can an old Bitcoin rig do the job somehow, seeing as they're useless for Bitcoin these days?! Great tutorial too, mate; really appreciate you taking the time!
Can you make a tutorial on how to add TTS for the AI response in a chat? I don't mean speech recognition, just AI voice output.
Thank you for making this video. This helped me a lot.
I notice some of the models are 25GB+... BLOOM, Meta's Llama 2, Guanaco 65B and 33B, dolphin-2.5-mixtral-8x7b, etc.
Do these models require training? If not, but you wanted to train one with custom data, does the size of the model grow, or does it just change and stay the same size?
Aside from LM Studio and AnythingLLM, any thoughts on other tools that attempt to make it simpler to get started, like Oobabooga, GPT4All, Google Colab, llamafile, or Pinokio?
Thanks, Tim, for the good video. Unfortunately I do not get good results for uploaded content.
I'm from Germany, so could it be a language problem, since the uploaded content is German text?
I'm using the same Mistral model from your video and added 2 web pages to AnythingLLM's workspace.
But I'm not sure if the tools are using this content to build the answer.
In the LM Studio log I can see a very small chunk of one of the uploaded web pages, but overall the result is wrong.
To get good embedding values I downloaded nomic-embed-text-v1.5.Q8_0.gguf and use it in the Embedding Model Settings in LM Studio, which might not be necessary, since you didn't mention such steps in your video.
I would appreciate any further hints. Thanks a lot in advance.
How can I use .py files? It appears they aren't supported.
If you change them to .txt it will be okay. We basically just need to have all "unknown" types attempt to parse as text to allow this, since there are thousands of programming text file types.
Well explained! Thanks!
Instead of dragging files, can you connect it to a local folder? Also, why does the first query work but the second always fail? (it says "Could not respond to message. an error occured while streaming response")
Being able to add PDFs in the chat and make pools of knowledge to select from would be great.
Does the locally run LLM have backpropagation?
Hi Tim, fantastic. Is it possible to use AnythingLLM with GPT-4 directly, for local use, like the example you demonstrated above?
Can't imagine that's possible with GPT-4. The VRAM required for that model would be in the hundreds of GB.
Thank you so much for the concise tutorial. Can we use both Ollama and LM Studio with AnythingLLM? It only takes one of them. I have some models in Ollama and some in LM Studio, and would love to have them both in AnythingLLM. I don't know if this is possible, though. Thanks!
OK, I'm confused. If I were to feed this a bunch of PDF documents/books, would it then be able to draw on the information contained in those files to answer questions, summarise the info, or generate content based on that info in the same literary/writing style as the initial files? And all "offline" on a local install? (This is the Holy Grail that I am seeking.)
You can already do this with ChatGPT custom GPTs.
@@holykim4352 got a link or reference? I've not found any way to do what I want so far. Maybe I misunderstand the process, but I can't seem to find the info I need either. Cheers.
Very cool, I'll check it out. Is there a way to not install this on your OS drive?
I want to try it in a Linux VM, but from what I see you can only make this work on a laptop with a desktop OS. It would be even better if both LMstudio and AnythingLLM could run in one or two separate containers with a web UI
I mean, this is pretty useful already; are there plans to increase the capabilities to include other formats of documents, images, etc.?
Very helpful video. I'd love to be able to scrape an entire website in AnythingLLM. Is there a way to do that?
Is there a website where I can ask help questions about Anything LLM?
This is an amazing video and exactly what I needed. Thank you! I really appreciate it. Now the one thing: how do I find the token context window for the different models? I'm trying out Gemma.
Up to 8,000 (depends on VRAM available - 4096 is safe if you want the best performance). I wish they had it on the model card on Hugging Face, but in reality it's just better to Google it sometimes :)
I gotcha. So for the most part, just use the recommended one. I got everything working, but I uploaded a PDF and it keeps saying "I am unable to provide a response to your question as I am unable to access external sources or provide a detailed analysis of the conversation." But the book was loaded, moved to the workspace, and saved and embedded? @@TimCarambat
For what it's worth, in LM Studio there is an `n_ctx` param on the sidebar that shows the maximum you can run. Performance will degrade, though, if your GPU is not capable of running the max token context.
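A quick way to sanity-check whether a prompt plus pinned documents fits a given context window is a rough token estimate. A crude sketch - real tokenizers vary by model, so the 4-characters-per-token figure is only a rule of thumb:

```python
def estimated_tokens(text: str) -> int:
    # English text averages roughly 4 characters per token; real tokenizers
    # vary by model, so treat this as an estimate only.
    return max(1, len(text) // 4)

n_ctx = 4096                                   # context size set in LM Studio
prompt = "Summarize: " + "lorem ipsum " * 800  # stand-in for a pinned document
used = estimated_tokens(prompt)
print(f"~{used} tokens used, ~{n_ctx - used} left for history and the reply")
```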
Hi Tim, the citations shown are not correct; it's just showing random files... is there any way to sort this out?
Awesome, but AnythingLLM won't read PDFs that need OCR like ChatGPT would. Is there a multimodal model that can do that?
We need to support vision first so we can enable OCR!
I want to ask questions across multiple CSV files; how do I do that?
Thanks dude! Great video
Which matters more: higher parameter count or higher quantization bits (Q)?
Models tend to "listen" better at higher quantization.
Thanks a lot for the video. Can you please tell me if there is a way to install the model via USB pen drive (manual installation)? The other system I'm trying to install on doesn't have an internet connection. Please reply.
Can someone explain to me what these tools are? I have no idea what they do or how they are a replacement for ChatGPT.
Can you do more of these demonstrations or videos? Is AnythingLLM capable of generating visual content like DALL·E 3 or video, assuming a capable open-source model is used? Is there a limitation, other than local memory, on the size of the vector databases created? This is amazing ;)
Thanks for this video, truly appreciated, man. Liked and subscribed to support you.
Why am I having a problem with docs (PDF, txt)? I get "Could not respond to message. An error occurred while streaming response, network error." Without docs it works fine with LM Studio. What am I doing wrong? I use an M1 with 16 GB RAM.
maybe watch the video and start your server and connect to it
@@opensource1000 The server is started and works fine with chat. When I use a PDF or txt I get that error message and AnythingLLM crashes. Yes, I will watch the video again; maybe I missed something.
Did not work for me on Windows 11. Tried to add a local document, but save and embed always throws an error: "The specified module could not be found. -> \AppData\Local\Programs\anythingllm-desktop\resources\backend\node_modules\onnxruntime-node\bin\napi-v3\win32\x64\onnxruntime_binding.node". Then I tried to download the Xenova all-MiniLM-L6-v2 manually, but the error remains.
How do I know what token context window number to use?
Hello Tim, can you make a video connecting Ollama with AnythingLLM?
While there are use cases for this technology, it definitely doesn't make ChatGPT 3.5 obsolete. Just one of the downsides of this tech is no spell checking. Also, with the responses baked in by the documents integrated, it becomes inflexible in its responses. Not for me.
I am trying to access PDFs and documentation on a website I have given AnythingLLM, but it doesn't seem to be working. Is it possible to do so, or do I need to manually download them from the website and attach them in AnythingLLM?
Are there plans to improve the Anything LLM API? I like the built-in RAG web interface, but you're kind of stuck in a chat-interface with Anything LLM...
The API is fully available in the Docker version. It will constantly be improved as more features become available - yes, absolutely.
00:01 Easiest way to run locally and connect LMStudio & AnythingLLM
01:29 Learn how to use LMStudio and AnythingLLM for a comprehensive LLM experience for free
02:48 Different quantized models available on LMStudio
04:14 LMStudio includes a chat client for experimenting with models.
05:33 Setting up LM Studio with AnythingLLM for local model usage.
06:57 Setting up LM Studio server and connecting to AnythingLLM
08:21 Upgrading LMStudio with additional context
09:51 LM Studio and AnythingLLM enable private end-to-end chatting with open source models
Crafted by Merlin AI.
This video changed everything for me. Insane how easy to do all this now!
Many thanks for this. I have been looking for this kind of solution for 6+ months now. Is it possible to create an LLM based uniquely on, say, a database of 6,000 PDFs?
A workspace, yes. You could then chat with that workspace over a period of time and use the answers to create a fine-tune, and then you'll have an LLM as well. Either way, it works. No limit on documents or embeddings or anything like that.
@@TimCarambat Many thanks! I shall investigate "workspaces". If I understand correctly, I can use a folder instead of a document and AnythingLLM will work with the content it contains. Or was that too simplistic? I see other people asking the same type of question.
Very nice. Will definitely try it. Is there, or will there be, an option to integrate an AnythingLLM workspace into Python code to automate tasks via an API?
Yes, but the API is only in the Docker version currently, since that can be run locally and in the cloud, so an API makes more sense for that medium.
Thanks for the insights. What's the best alternative for a person who doesn't want to run locally but still wants to use open-source LLMs for interacting with documents and web scraping for research?
OpenRouter has a ton of hosted open-source LLMs you can use. I think a majority of them are free, and you just need an API key.
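Since OpenRouter exposes an OpenAI-style API, pointing a client at it is mostly a base-URL change. A brief sketch; the model name is illustrative, so check their catalog:

```python
from openai import OpenAI

# OpenRouter speaks the OpenAI API. The base URL is their documented one;
# the model name below is illustrative - browse their catalog for options.
client = OpenAI(base_url="https://openrouter.ai/api/v1",
                api_key="YOUR-OPENROUTER-KEY")

reply = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[{"role": "user", "content": "Summarize this document: ..."}],
)
print(reply.choices[0].message.content)
```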
This is all new and cool to me. Is there a way to dump my Evernote database (over 20,000 SOPs) into this? Thinking that would be awesome.
It's very helpful. Thank you!
What GPU and what model do you have?