NVIDIA ChatRTX: Private Chatbot for Your Files, Image Search via Voice | How to get started

  • Published 7 Jun 2024
  • This video provides an in-depth review and tutorial of NVIDIA's ChatRTX, a new tool for Windows PCs with RTX GPUs. The tool leverages Retrieval-Augmented Generation (RAG) and TensorRT-LLM alongside RTX acceleration to let users chat with their documents and interact by voice. It now also supports local photo and image search, with improvements to its features. The application requires a specific system configuration, including an RTX 30- or 40-series GPU. The video covers installation, downloading models such as Mistral 7B and Whisper for speech-to-text, and showcases the application's ability to retrieve information from documents and images stored locally. It demonstrates the chat functionality, interaction via text and voice, search capabilities within documents and images, and the tool's potential utility for users with the corresponding NVIDIA hardware.
    🦾 Discord: / discord
    ☕ Buy me a Coffee: ko-fi.com/promptengineering
    |🔴 Patreon: / promptengineering
    💼Consulting: calendly.com/engineerprompt/c...
    📧 Business Contact: engineerprompt@gmail.com
    Become Member: tinyurl.com/y5h28s6h
    💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
    Signup for Advanced RAG:
    tally.so/r/3y9bb0
    LINKS:
    nvda.ws/3WmXEiy
    TIMESTAMPS
    00:00 Introduction to NVIDIA ChatRTX
    01:16 Setting Up NVIDIA ChatRTX
    01:59 Exploring Chat Features with AI Models
    04:00 Chatting with Documents Using ChatRTX
    06:25 Testing Different AI Models and Image Search
    09:04 Final Thoughts and Future Possibilities
    All Interesting Videos:
    Everything LangChain: • LangChain
    Everything LLM: • Large Language Models
    Everything Midjourney: • MidJourney Tutorials
    AI Image Generation: • AI Image Generation Tu...
  • Science & Technology

COMMENTS • 37

  • @engineerprompt
    @engineerprompt  8 days ago

    If you are interested in learning more about how to build robust RAG applications, check out this course: prompt-s-site.thinkific.com/courses/rag

  • @im-notai
    @im-notai 28 days ago +8

    Nvidia should also provide an option to select a custom model from Hugging Face, so we can always pick the latest and best one according to our preference.

    • @engineerprompt
      @engineerprompt  28 days ago +2

      I agree, that would be really helpful

    • @victorfsaaa
      @victorfsaaa 26 days ago

      I am actually surprised that they didn't give Llama 3 as an option

  • @neilomalley9887
    @neilomalley9887 28 days ago +3

    It would have been nice to test this on more than one text file, and also on files of various types, e.g. PDF and PDF with tables, and then report on strengths and weaknesses.

  • @RedVRCC
    @RedVRCC 22 days ago

    This is kinda cool even if it's a bit limited right now. I really like the idea of being able to run a powerful AI model locally on my PC, making it literally mine to do whatever with, and also not sharing any of my data with a big server. I'd love to see how this progresses. I'm downloading it as we speak, but I'm not sure whether my lowly 3060 will run it well.

  • @kaistriban
    @kaistriban 26 days ago

    Good review, thanks. A tool to experiment with, but I kind of feel that sooner rather than later this is going to be integrated into the file manager. Maybe the challenge is making this run without needing so much hardware, and consequently energy.

    • @engineerprompt
      @engineerprompt  26 days ago

      Agree, it's a good tool if you have the hardware to run it. I haven't experimented with Windows Copilot, but very similar functionality might be coming there.

  • @shotelco
    @shotelco 28 days ago +3

    Now we're getting somewhere with AI. I assume ChatRTX is 100% _Secure and Private?_ No other public LLM(s) are secretly "learning" from an individual's local datasets? (Is ChatRTX making external API calls anywhere?) Also, is there OCR-type functionality so ChatRTX can ingest scanned docs? Can agents interact via API with the included LLMs? Can custom LLMs be used, either natively or via API? User-owned security and privacy would make this a real "Personal Assistant".

    • @global_dominance
      @global_dominance 24 days ago

      Requests go through the company's servers and they are censored. There is no privacy involved.

  • @vertigoz
    @vertigoz 28 days ago +1

    I hope it will come soon to the RTX 20 series

  • @irrelevantdata
    @irrelevantdata 28 days ago +2

    When you say it's faster than ollama, do you mean you ran the same model with the same parameters on ollama too, and timed it, and it is indeed slower? How much slower?

    • @engineerprompt
      @engineerprompt  28 days ago

      It's running the 4-bit quantized model, the same as the default quantization in ollama. I haven't done a quantitative comparison in terms of tokens per second, since neither ChatRTX nor ollama provides that info, but based on empirical evidence I "feel" it's faster.

    • @Ginto_O
      @Ginto_O 25 days ago

      @@engineerprompt ollama provides that info.

    • @Ginto_O
      @Ginto_O 25 days ago

      @@engineerprompt ollama run --verbose shows you tokens/sec

  • @hamidmohamadzade1920
    @hamidmohamadzade1920 28 days ago

    How big is the context window? Can we search a PDF with 80 pages?

    • @engineerprompt
      @engineerprompt  28 days ago

      It seems to be the first version, so I would think it will be 8k.

  • @longboardfella5306
    @longboardfella5306 26 days ago

    It doesn't work on my Quadro RTX 8000. I can run Llama 3 models, so it's pretty disappointing that my large VRAM of 48 GB and lots of CUDA and Tensor cores can't run this.

    • @engineerprompt
      @engineerprompt  26 days ago

      It seems to only support the RTX 30 and 40 series. Hopefully they will add support.

  • @alx8439
    @alx8439 27 days ago

    I think the speed here is just a function of a small and heavily quantized model, not TensorRT magic.

    • @engineerprompt
      @engineerprompt  27 days ago

      It could be. It would be interesting if they let you bring other models.

  • @hobologna
    @hobologna 28 days ago +1

    Trying with 6 GB VRAM on my laptop's RTX 3060

    • @im-notai
      @im-notai 28 days ago +1

      Please update me if that works

    • @hobologna
      @hobologna 27 days ago

      @@im-notai it doesn't, unfortunately; it won't even let me install

  • @adriangpuiu
    @adriangpuiu 28 days ago +6

    It's basic RAG, and frankly speaking, using the lobotomized quantized models makes no sense.

    • @wmsone
      @wmsone 27 days ago

      So true. I'm looking for a non-woke LLM. Which one should I use, please?

    • @adriangpuiu
      @adriangpuiu 27 days ago

      @@wmsone depends on your needs, man. I usually use the fp16 models, and for other things I'm using Meta-Llama-3-8B-Instruct-correct-pre-tokenizer-and-EOS-token-Q8_0.gguf

    • @Ginto_O
      @Ginto_O 25 days ago

      @@wmsone non-woke? What do you mean? Do you want a heavily censored LLM?

    • @Ginto_O
      @Ginto_O 25 days ago

      @@wmsone Bing is the most censored

  • @Ginto_O
    @Ginto_O 27 days ago

    How do you use Llama 3 with it?

    • @engineerprompt
      @engineerprompt  26 days ago

      In the demo they show Llama 3 in the list. It might be coming in a new update.

  • @HassanHammoud
    @HassanHammoud 24 days ago

    One of the best options? That is one of the worst ones. It is so far behind.

  • @michai333
    @michai333 27 days ago +1

    LM Studio is much better; you can also offload all compute to your RTX GPU.

    • @longboardfella5306
      @longboardfella5306 26 days ago

      I'm doing that. Is there a model that can examine local text files? I'm a bit new to all this.

    • @bajerra9517
      @bajerra9517 22 days ago

      Nope, I've tried both and a lot of local LLM versions. ChatRTX answers faster than the others, but the worst thing is you can 🚫 choose your models; only Nvidia's.