NVIDIA ChatRTX: Private Chatbot for Your Files, Image Search via Voice | How to get started
- Published 7 Jun 2024
- This video provides an in-depth review and tutorial of NVIDIA's ChatRTX, a new tool for Windows PCs with RTX GPUs. ChatRTX uses Retrieval-Augmented Generation (RAG) and TensorRT-LLM with RTX acceleration to let you chat with your documents and interact by voice, and it now also supports local photo and image search. The application requires an RTX 30 or 40 series GPU. The video covers installation, downloading models such as Mistral 7B and Whisper (for speech-to-text), and shows the application retrieving information from documents and images stored locally. It demonstrates the chat functionality via text and voice, search within documents and images, and the tool's potential utility for users with compatible NVIDIA hardware.
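For context, the retrieval step at the heart of RAG tools like ChatRTX can be sketched in a few lines of Python. This is a toy illustration only: the bag-of-letters "embedding" below is a made-up stand-in for a real embedding model, and ChatRTX's actual pipeline uses TensorRT-LLM with proper embeddings and document chunking.

```python
from math import sqrt

def embed(text):
    # Hypothetical letter-frequency "embedding" (a stand-in for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Rank stored document chunks by similarity to the query and return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = [
    "NVIDIA RTX GPUs accelerate inference",
    "Whisper converts speech to text",
]
print(retrieve("speech to text model", docs))
```

The retrieved chunks are then prepended to the prompt so the LLM can answer from your local files rather than from its training data alone.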
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
|🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Advanced RAG:
tally.so/r/3y9bb0
LINKS:
nvda.ws/3WmXEiy
TIMESTAMPS
00:00 Introduction to NVIDIA ChatRTX
01:16 Setting Up NVIDIA ChatRTX
01:59 Exploring Chat Features with AI Models
04:00 Chatting with Documents Using ChatRTX
06:25 Testing Different AI Models and Image Search
09:04 Final Thoughts and Future Possibilities
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag
Nvidia should also provide an option to select custom models from Hugging Face, so we can always pick the latest and best one according to our preference.
I agree, that would be really helpful
I am actually surprised that they didn't give Llama 3 as an option.
It would have been nice to test this on more than one text file, and also on various file types, e.g. PDF and PDF with tables, and then report on strengths and weaknesses.
Good point, will need to do that.
This is kinda cool, even if it's a bit limited right now. I really like the idea of being able to run a powerful AI model locally on my PC, making it literally mine to do whatever with, and also not sharing any of my data with a big server. I'd love to see how this progresses. I am downloading it as we speak, but I'm not too sure whether my lowly 3060 will run it well.
Good review, thanks. A tool to experiment with, but I kind of feel that sooner rather than later this is going to be integrated into the file manager. Maybe the challenge is making this run without needing so much hardware and, consequently, energy.
Agree, it's a good tool if you have the hardware to run it. I haven't experimented with Windows Copilot, but very similar functionality might be coming there.
Now we're getting somewhere with AI. I assume ChatRTX is 100% _Secure and Private?_ Some other public LLMs are not secretly "learning" from an individual's local datasets? (Is ChatRTX making external API calls somewhere?) If so, is there OCR-type functionality so ChatRTX can ingest scanned docs? Can agents interact via API with the included LLMs? Can custom LLMs be used, either natively or via API? User-owned security and privacy make this a real "Personal Assistant".
Requests go through the company's servers and they are censored. There is no privacy involved.
I hope it will come soon to the RTX 20 series.
When you say it's faster than ollama, do you mean you ran this same model with same parameters on ollama too, and timed it, and it is indeed slower? How much slower?
It's running the 4-bit quantized model, same as the default quantization in Ollama. I haven't done a quantitative comparison in terms of tokens per second, since neither ChatRTX nor Ollama seems to provide that info, but based on empirical evidence I "feel" it's faster.
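For rough intuition on why 4-bit quantization matters for running models like Mistral 7B locally, here is back-of-the-envelope arithmetic for weight storage only (illustrative; it ignores activations, the KV cache, and other runtime overhead):

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB: billions of params times bytes per param."""
    return params_billions * bits_per_weight / 8

# A 7B-parameter model such as Mistral 7B:
print(model_size_gb(7, 16))  # fp16  -> 14.0 GB
print(model_size_gb(7, 4))   # 4-bit ->  3.5 GB
```

At fp16 the weights alone exceed the VRAM of most consumer cards, while the 4-bit version fits comfortably on an 8 GB GPU, which is why local tools default to heavily quantized models.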
@@engineerprompt ollama provides that info.
@@engineerprompt ollama run --verbose shows you tokens/sec
How big is the context window? Can we search a PDF with 80 pages?
It seems to be the first version, so I would think it will be 8k.
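Assuming an 8k-token window and a rough average of 500 tokens per dense page (both figures are assumptions for illustration), a quick check shows why an 80-page PDF can't fit in the context directly and has to go through RAG-style chunking and retrieval instead:

```python
TOKENS_PER_PAGE = 500     # rough average for dense text (assumption)
CONTEXT_WINDOW = 8192     # assumed 8k-token context window

def fits_in_context(pages: int,
                    tokens_per_page: int = TOKENS_PER_PAGE,
                    window: int = CONTEXT_WINDOW) -> bool:
    """True if the whole document would fit in the model's context window."""
    return pages * tokens_per_page <= window

print(fits_in_context(80))  # False: 40,000 tokens >> 8,192
print(fits_in_context(16))  # True:   8,000 tokens <= 8,192
```

So searching an 80-page PDF still works, but only because the tool retrieves the most relevant chunks rather than feeding the whole document to the model.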
It doesn't work on my Quadro RTX 8000. I can run Llama 3 models, so it's pretty disappointing that my 48GB of VRAM and lots of CUDA and Tensor cores aren't enough to run this.
Seems to only support RTX 30 and 40 series. Hopefully they will add support.
I think the speed here is just a function of a small and heavily quantized model and not the TensorRT magic.
It could be. Would be interesting if they would let you bring other models
Trying with 6GB VRAM on my laptop's RTX 3060.
please update me if that works
@@im-notai it doesn't unfortunately, won't even allow me to install
It's basic RAG, and frankly speaking, with the lobotomized quantized models it makes no sense to use it.
So true. I'm looking for a non-woke LLM. Which one should I use, please?
@@wmsone Depends on your needs, man. I usually use the fp16 models, and for other things I'm using Meta-Llama-3-8B-Instruct-correct-pre-tokenizer-and-EOS-token-Q8_0.gguf
@@wmsone Non-woke? What do you mean? You want a heavily censored LLM?
@@wmsone Bing is the most censored.
How to use Llama 3 with it?
In the demo they show Llama 3 in the list. Might be coming in a new update, probably.
One of the best options? That is one of the worst ones. It is soooo behind.
LM Studio is much better; you can also offload all compute to your RTX GPU.
I'm doing that. Is there a model that can examine local text files? I'm a bit new to all this.
Nope, I've tried both and a lot of local LLM versions. ChatRTX answers pretty fast compared to the others, but the worst thing is you can't 🚫 choose your own models, only Nvidia's.