Wow, I have a lot of saved documents, articles, and even e-books on my computer. The idea of my own local chatbot being able to reference all of this and carry on a conversation with me about it is almost like having a friend who shares all my interests and has read all the same things I've read! Amazing how the technology is advancing! I can't wait for this!
In its current state, I think you will be underwhelmed by its performance unless you have a pretty powerful GPU.
@@lynngeek5191 In simple English, what are you even talking about, dude?
@@Raylightsen I think what he meant is that his GPU is not good enough, since he couldn't use the 8k token version.
@@lynngeek5191 You said the desktop version is limited to 2,000, but that's not true; you have the option for 8,000. However, you need a GPU that can handle it (like an RTX 4090, RTX A6000, or Quadro RTX 8000 card).
@@Raylightsen ok dude, here it is for you: he's trying to say that this shit ain't free, homie. Apparently far from it, according to @lynngeek5191
I didn't know that you were working for H2O, but I am happy for you all. You're doing great work making open source LLMs more accessible and friendly!
Thanks! I just started there. I'll still make my normal YouTube content, but this open source project is exactly the sort of stuff I would normally cover. Glad you liked the video.
I’ve always liked H2O, I used to use their deep learning framework a lot. Will definitely check this out.
@@robmulla You work for this?... Sad.
@@sylver369 get off his back
@@sylver369if you worked for anyone "better" then why are you here? 🤔
Great video. I hope this gets a lot of views, because it is relevant to so many different use cases that need to protect source data. Love the demo of how easy it is to load and vectorize your own local docs with LangChain.
Thanks for the feedback! Glad you liked it. Please share it anywhere you think other people might like it.
Awesome!!! Here I was losing hope about AI/GPT ever being transparent about the biases getting trained ("baked") into popular chatbots, and about the lack of real privacy users have in what they discuss ("there is no privacy"). And blammo, you guys already had this out in just a few months. Super cool!! Thanks to all who went in on this!
The last thing they want is AI “noticing” patterns in modern western society.
This is amazing and very well put together! You have one of my favorite channels on all of YouTube, and I've been able to follow what you teach. It's an honor to learn from you! Thank you!!
Wow. Thanks for such kind words. I appreciate the positive feedback.
YES PLEASE make another video where you set all of this up in a cloud environment instead of locally. Excellent video, thank you very much.
This is awesome! Definitely going down this rabbit hole
The best explanation so far. I have experience using GPT4All, self-hosted Whisper, Wav2Lip, and Stable Diffusion, and I also tried a few others that I failed to run successfully. The AI community is growing so fast, and it's fascinating. I'm using an RTX 3060 12GB, and the speed is okay for the chatbot use case, but for real-time AI game-engine character responses it's slow. I recently got hold of an RTX 3080 10GB, and in this video I see you are using an RTX 3080 Ti, which has 10240 CUDA cores vs. my 8960. It's the first time I've seen that you can use additional cards (in your case a GTX 1080, in mine a GTX 1060) to run the LLM. Very informative video!
You should refit the model with LoRA to get a smaller, more narrow model for in-game usage; that way it's more optimized.
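Something like this with HuggingFace PEFT, as a rough sketch (the model id and target modules here are my assumptions and depend on whichever base model you actually pick):

```python
# Rough LoRA sketch with HuggingFace PEFT. The model id and target_modules
# are assumptions; adjust them for whichever base model you actually use.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-oig-oasst1-512-6.9b")

config = LoraConfig(
    r=8,                                 # low-rank dimension of the adapter
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights get trained
# ...then fine-tune on your in-game dialogue data with the usual Trainer loop.
```

(Strictly speaking, a LoRA adapter narrows the model's behavior rather than shrinking it; the base weights stay the same size, so quantization is what actually cuts the memory.)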
Would AMD cards work or is it a headache?
@@CollosalTrollge Currently it uses CUDA technology, which requires Nvidia cards.
@@hottincup Good suggestion, I will try it.
If you’re trying to shoe-horn a full-fledged LLM into powering some NPCs, then you’re doing it wrong.
All you need is basic chat personalities and the ability to train based on in-game events; this requires very little processing power!
This is one of the best discussions of building an AI locally I have seen. Bravo!! BTW, the tutorial is excellent: it's clearly enunciated, the print is very big and readable for old fogies like me, and he goes slowly enough to follow, describing and showing what he is doing so noobs like me can follow. Also, don't forget the transcription button gives details about every minute and a half. Very well done; anybody who is patient will like this. Thank you, Rob Mulla.
Thanks!
Thanks a ton! Glad you liked it.
For all the searches I've done on YouTube, your channel only came up today. I'm really impressed. Great job!
I love the content, even though hardly anyone knows about this. Very, very useful content; we are expecting a cloud version demo also. Thank you!
This was great! I'm in the process of setting up LangChain locally with OpenLLM as a backend, but I think I'll try this as a next step. Thanks for sharing!
Glad you enjoyed the video! Thanks for the feedback.
I've been using chatbots to write a tabletop RPG campaign for my friends, but having the main story in separate files has been a problem. If I can use the material I already have as my own training material, it might be way easier! This chatbot might be exactly what I need! Cool, I will give it a go!
update?
This is exactly what I was looking for to develop a model for internal use at my company. Thank you!
any specific use-case?
@@tacom6 I write inspection and test plans, inspection test reports, and standard operating procedures for industrial engineering and industrial construction. For instance, if we need to write a procedure and checks for the correct way to flush HDPE or terminate an electrical panel, etc. Currently I can paste SOPs or our checklists and ask if anything was missed, ask about new ideas to add, or new things entirely. It's great for asking about ASTM codes instead of looking them up or buying the PDF. I'm currently using Claude and Perplexity. My company does everything internally; it doesn't want the data hosted with another company. I'd like to make something for us to use internally. I believe using AI language models has sped up our procedure drafting and checksheet drafting by about 40% so far. It's been game-changing. But I'm using 3rd-party sites, and I have to scrub out client details, names, etc. If I had an in-house model, I could let it retain all client and confidential data, and others could run requests against it too. I have a bot running that I've made through Poe using Claude as the base, but I can't make it public for colleagues to use.
@@BryanEnsign Sounds fantastic. My interest is similar, but with a focus on cybersecurity. Thanks for sharing!
@@tacom6 That's awesome. So many possibilities. Luck to you brother!
Dude, you are not even being biased. THIS IS THE BEST INVENTION EVER!!!
Open source??? AND it runs locally???? Even without the file access feature this would've been the coolest piece of software I've ever encountered!
THIS IS A GAME CHANGER!! FOSS FOR THE WIN!
5:30 Yes Rob, yes. Please. It will be a well-rounded approach if you start teaching Python in a cloud environment. Much awaited, and thanks for everything your channel has offered me to date.
Especially love your YT Shorts.
Really enjoyed the video.
Thanks for the video! I really appreciate and support the open source community!
This is really cool. I just installed it and tried it. It actually runs pretty fast on my CPU.
How did you get it to work with your CPU? I keep getting token limitations on the answers. Did you follow the documentation?
I would also like to see another video from you about setting it all up in a cloud environment. Thanks for sharing your knowledge.
Amazing! Thanks for the detailed guide. Will definitely be using this for future projects!
This was a really transformative experience and I really appreciate that you did this video!
Very interesting. I stood it up on a VPS with 10 CPUs; it's painfully slow, but it works!
Great work. I was looking for a tutorial like this for a long time.
Thank you so much. This is a clear guide for us to begin experimenting with our vision for a new application, and the last 4 minutes is a great executive summary for me to show to my management.
Thanks for all your efforts teaching brotha
Fascinating stuff. So important to figure out ways of using these tools in a way that allows us to retain some privacy. Subscribed.
Fantastic tutorial and superb framework! Congratulations to you and the H2O team! 🔥🔥🔥
Hello Rob,
I liked your video very much. I wanted to suggest that you consider making a video on how a translator or voice-to-text transformation can become a tool for everyone based on an open language model. It would be an interesting topic to explore and could benefit many people. Thank you for your great content!
There are plugins for this; find an open source one and use that.
@@pirateben What's it called, Ben? Linky pls.. :)
Look for "Faster WhisperAI", maybe it could help you in creating transcriptions and translations from audio-to-text, I've had great success in using it to transcribe youtube videos and create subtitles for them.
Yes, as requested, I am letting you know that I am interested in any of the potential future videos you mention in this video! You are giving gold away for free!
Great video! I'm a fan of H2O. Really impressed with the Driverless AI performance; it helps me benchmark my own code! Gonna try this out later this week, thanks Rob!
This is the first video I've noticed that highlighted the actual subscribe button. It looked really clean.
Just checked the thing out as soon as it was mentioned (luckily, this video was suggested to me by YouTube).
Being a tech enthusiast and a translator, I ended up spending an hour discussing the technical dimensions of my profession. Loved this!
I will definitely look into this in more detail later. The thing is also pleasantly polite. 😊
Wow, an open source GPT model. This is freaking awesome. I am working on building some AI products; this is a lifesaver. I am excited to play with this big time. Throwing in some vector memory databases to add context on top of this, and I can get my first AI product out real soon. I can easily build some text-to-speech and computer vision models of my own on TensorFlow to make something big happen. Man, Christmas has really come early for me this year.
It does have many limitations, such as requiring fairly beefy hardware (or accepting really restrictive limits on questions) and a ton of storage space.
@@Runefrag Do you think an Nvidia RTX 4090 can handle this? When you say beefy hardware, what do you have in mind?
You got the like just for including the last part, 14:15 and after.
The whole video is decently good.
Keep up the good work; the last part is really the info people should get in their heads.
BRAVO!!!
Thank you for saying it.
Thanks
Thank you!! I’ve been stumped on building a model to generate an entire script for a Seinfeld episode based on my personal scores of each episode and I think this video just unlocked it for me!!
You are now master of your domain!
Finally a video that gives me a "hello world" for an attainable local GPT-like chatbot. Now I can actually LEARN how to fine-tune a model.
Thank you very much; finally some reasonably decent documentation. The whole CUDA topic is also a real struggle. I wanted to set the whole thing up as a Docker container to keep the system more scalable, but that was a struggle too.
Mighty nice video, very useful! Giving local context is interesting. The question about roller coasters was a clever way to demo the feature. Thanks! 😊
I will be trying this, can't wait!!
Nice! It's working... I'm excited!
Very informative video on how to create your own private chatbot and have it learn from your context. Genius! I look forward to further development.
Excellent content, especially the LangChain part, thank you!
This is extremely helpful, Thank you for sharing this.
Nice work buddy. Keep it up
Awesome, knowledgeable video; this is so useful. Keep making videos like this.
Great content, I enjoyed your video.
Great work, thanks for the video 🎉
You explained everything very clearly, Thanks
Rob, the video is awesome! Great content as usual 🤩
Would love to watch a version utilizing a spun-up instance from a cloud provider too (for those of us without a GPU 😊)
Thanks for watching! Will def look into a video with steps to set up on the cloud.
@@robmulla definitely interested in the cloud provider video too
Absolutely brilliant. Thank you.
You got me to subscribe!!! Wow, thanks for explaining things step by step in all of your content. Keep it up!
Crickin SCARY! I love it! Hail our new overlords!
Great video, your voice and pace are perfect. Thank you.
Extremely excellent explanations
That was good presentation.
Thanks!
Thanks!
🙏 I appreciate it!
Great, I have learned so much about how to use open source AI and AI modules. I am glad to build the project myself on my local computer. The PDF-reading ability is so good; I will try it!
This is cool and brilliant. Great tutorial
Very helpful video. Thank you!
Awesome content! I noticed the audio dropouts from time to time. I had a similar issue this week when recording some videos myself, and the culprit for me was the Nvidia Noise Removal filter in OBS. I changed it back to RNNoise and it worked like a charm. Don't know if yours is related, but if it helps, then happy days! Cheers!
These language models are improving so fast that by the time you have one installed and working, there are 3 better ones.
Can't imagine what sort of AI we could build if we had all them Ethereum miners ^^
Amazing work!
Thank you! Cheers!
I can't say THANKS enough. This is exactly what my team is looking for!
So glad you found it helpful.
Man, you're just awesome ❤
Thank you very much for providing the beautiful resource.
Could you do a video on the “fine tuning” you talk about near the end? I like the privacy attribute of running open source locally and the fine tuning would be really beneficial.
You should explain up front that it's an ad.
One suggestion here: this would be more popular with everyone if there were an installer like GPT4All has, so those who have no command-line experience can still use it.
WONDERFUL!
Thank you.
Great work. Thank you
Amazing video! Thanks
Glad you liked it! Thanks for watching.
On the "why would you want this" question: I actually think one of the most compelling answers is "because it works without the internet". Most of the interesting potential applications I can think of for an LLM are not copacetic with mandated internet access... e.g., using it for dynamic NPC dialogue in a video game; people don't like always-on connections, for very good reasons.
Man! Having your own version of GPT will definitely help with personal research or study, by providing it accurate resources. Since we already know that online ChatGPT often gives inaccurate results, maybe this problem can be tackled by giving this offline GPT accurate resources, like at the end of this video.
Thanks for the video =)
Brilliant. Thank you for sharing. I am now looking for a faster machine to reproduce what you have demonstrated 🙂
This works great on a Colab notebook!
Thank you ❤️
Amazing video.
Thank you so much.
I would also love to see a video about the setup on a cloud provider.
You are consistently inspiring and educating me with your content. Thanks for this!
You are so welcome! Thanks for watching and commenting.
Thanks for the video.
It has potential. I hate all the subscription models coming at us all the time. Hopefully more of this will come about.
Rob, thanks a lot for this video. Please make a video on how to get a GPU in the cloud.
Thanks for asking. Others have been asking for this too so I am planning to work on it.
I decided to try this out, and I don't feel like the document feature really works? I uploaded a few smaller markdown files, and I wanted a summary of everything that was discussed in those documents - instead, it picks two of those documents and ignores everything else. It's not clear to me how this was implemented, or even how this would work? Best guess, it creates a *separate* index (using BERT or something similar) and queries that for relevant files - then includes the contents of those files in the actual query? Or however much it can, based on the max context length and the length of the query? Even after explicitly selecting all my documents, it only picks two of them. What I was hoping for was some kind of deep integration with the LLM itself - but I don't suppose that's really possible at this time? While this feature is probably useful for some things, it doesn't really help solve my problem, which is trying to distill a lot of long conversations into conclusions. I'm still waiting for an LLM that can actually handle a larger context. It sounds like they're on the horizon with something like LongLLaMA? But it doesn't look like they're here yet? In the meantime, this is better than nothing, I suppose. But the real killer feature would be a very large context, enabling it to read and ingest larger volumes of content and then answer queries. Maybe I'm too impatient. 😅
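For what it's worth, here's my best guess in code: a minimal sketch of the retrieve-then-stuff pattern with LangChain. Every name, path, and parameter below is an assumption about the general technique, not how h2oGPT actually implements it:

```python
# Minimal retrieval-augmented sketch (an assumed pattern, not h2oGPT's code):
# chunk the docs, embed them into a vector index, and stuff only the top-k
# most similar chunks into the prompt.
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

docs = DirectoryLoader("notes/", glob="**/*.md").load()  # hypothetical path
chunks = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=50
).split_documents(docs)

db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())

# Only the k nearest chunks make it into the final prompt, which would explain
# why a broad "summarize everything" query surfaces just a couple of documents.
hits = db.similarity_search("summarize the discussion", k=4)
context = "\n\n".join(h.page_content for h in hits)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: summarize the discussion"
```

If it works like this, the retriever, not the LLM, decides what the model ever sees, so no amount of prompt wording will make it read all the files at once.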
Keep up the good work. I'm very interested.
I am more interested in how this can turn into features for 3D model making, music making, 3D/2D game making, or even software programming. Some tests of what it can generate and other stuff would be nice.
Awesome! 💯
What GPU is big enough to fit the 40B model? Is there a commercial GPU capable of such? What's the highest model that I can most likely get with, say, a $1000-2000 budget GPU? Thanks. And great content!
Just scaling from the 7B model at 9 GiB when the weights are truncated to 8 bits: so at least 60 GB. Maybe 30 if you cut them to 4 bits, but at least 2-3 GPUs with 24 GB of memory either way.
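Back-of-envelope, that scaling is just linear in parameter count and bits per weight. A quick sketch (the 9 GiB baseline comes from the figure above; real usage adds activations and KV cache on top):

```python
# Linear VRAM scaling from the observed 7B footprint (~9 GiB at 8-bit).
# Rough numbers only: activations and KV cache add overhead on top.
OBSERVED_7B_GIB = 9.0

def scaled_footprint_gib(params_billion: float, bits: int = 8) -> float:
    return OBSERVED_7B_GIB * (params_billion / 7.0) * (bits / 8.0)

print(scaled_footprint_gib(40, bits=8))  # ~51 GiB -> roughly 3x 24 GB cards
print(scaled_footprint_gib(40, bits=4))  # ~26 GiB -> maybe 2x 24 GB cards
```

The "at least 60 GB" above just adds headroom on top of the raw ~51 GiB estimate.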
I agree with you on the LLMs… eventually you will probably even have certifications, like AI solutions engineer… different flavors of LLMs, just like the different flavors of Linux… Every small to medium company will want its own private AI setup when they see the benefits.
Yes Rob, would love to see your implementation of these models in the cloud.
Thank you! Nice. What is the difference from GPT4All? Where are the strengths and weaknesses? Can h2oGPT understand associations? Or can it be trained accordingly?
Will there be more videos with how-to’s for basics, using your own files, training and corrections?
Hello Rob, do you have any suggestions on what kind of local machine can run an LLM: CPU, RAM, GPU?
Great video! Thank you! I do agree that it's better to have your own local model running open source software, if your machine can run it. What GPU do you need??!!! lol. The biggest issue I have with ChatGPT, open source or otherwise, is incorrect responses. That makes it next to worthless, because you can't trust the responses 100% of the time. Can it also respond incorrectly if you train it with your own data?? And how much of your own data do you need to train it? So if I train it on all the PDFs on Raman microscopy, what's the percent likelihood that a response will be incorrect? Thanks in advance. Cheers!
Very clear explanation of the program. Great video. I wonder how people create these open source programs and can still put food on the table. They must have day jobs.
Thanks for the information, Rob. I just subscribed to you too.
Excellent stuff, thank you. I just followed your instructions, plus the README in the Git repo, and spun up an instance on a cloud VM, as my notebook has no GPU. It is fun... and I wish you could further teach us how to do the LangChain things when we have a lot of documents that we want to feed in.
Thank you once again.