NVIDIA'S NEW OFFLINE GPT! Chat with RTX | Crash Course Guide

  • Published 25 Jul 2024
  • Nvidia has released their new private GPT chatbot, called Chat with RTX. This quick video shows you how to download, install and use it. It's very simple and super powerful. You can ask the AI about documents, folders, PDFs, Docs, videos and more! By the end you should know how it works and how to use it.
    Download Chat with RTX: www.nvidia.com/en-us/ai-on-rt...
    ======== Related AI videos ========
    Chat with RTX isn't the only powerful, offline, free software (Some don't even need GPUs!) See these videos for more info:
    [CHAT] Oobabooga Desktop: • NEW POWERFUL Local Cha...
    [IMAGE] Stable Diffusion: • AUTOMATIC1111 SDUI One...
    [VOICE] Applio: • BEST FREE TTS AI Voice...
    Timestamps:
    0:00 - Intro/Explanation
    0:40 - Requirements to use Chat with RTX
    1:30 - Downloading Chat with RTX
    1:40 - Installing Chat with RTX
    2:50 - Opening Chat with RTX
    3:12 - AI with Documents, PDFs and MORE!
    4:14 - AI with YouTube videos
    5:27 - AI Model Default
    5:37 - Is Nvidia Chat with RTX worth downloading?
    #Nvidia #RTX #AI
    -----------------------------
    💸 Found this useful? Help me make more! Support me by becoming a member: / @troublechute
    -----------------------------
    💸 Support me on Patreon: / troublechute
    💸 Direct donations via Ko-Fi: ko-fi.com/TCNOco
    💬 Discuss the video & Suggest (Discord): s.tcno.co/Discord
    👉 Game guides & Simple tips: / troublechutebasics
    🌐 Website: tcno.co
    📧 Need voiceovers done? Business query? Contact my business email: TroubleChute (at) tcno.co
    -----------------------------
    🎨 My Themes & Windows Skins: hub.tcno.co/faq/my-windows/
    👨💻 Software I use: hub.tcno.co/faq/my-software/
    ➡️ My Setup: hub.tcno.co/faq/my-hardware/
    🖥️ My Current Hardware (Links here are affiliate links. If you click one, I'll receive a small commission at no extra cost to you):
    Intel i9-13900k - amzn.to/42xQuI1
    GIGABYTE Z790 AORUS Master - amzn.to/3nHuBHx
    G.Skill RipJaws 2x(2x32G) [128GB] - amzn.to/42cilxN
    Corsair H150i 360mm AIO - amzn.to/42cznvP
    MSI 3080Ti Gaming X Trio - amzn.to/3pdnLdb
    Corsair 1000W RM1000i - amzn.to/42gOTGY
    Corsair MP600 PRO XT 2TB - amzn.to/3NSvwzx
    🎙️ My Current Mic/Recording Gear:
    Shure SM7B - amzn.to/3nDGYo1
    Audient iD14 - amzn.to/3pgf2XK
    dbx 286s - amzn.to/3VNaq7O
    Triton Audio FetHead - amzn.to/3pdjIgZ
    Everything in this video is my personal opinion and experience and should not be considered professional advice. Always do your own research and ensure what you're doing is safe.

COMMENTS • 274

  • @Tarangot
    @Tarangot 5 months ago +252

    Just used Chat with RTX to summarize your video in about a minute's worth of reading. What a crazy time to be alive. I'll leave your video running in a tab so you're credited for the view and watch time.

    • @BabySisZ_VR
      @BabySisZ_VR 5 months ago +4

      lol

    • @GumboRyan
      @GumboRyan 5 months ago +18

      Efficient AND considerate.

    • @looseman
      @looseman 5 months ago +1

      It's reading from the subtitles, not from the video.

    • @KIaKlaa
      @KIaKlaa 5 months ago +3

      just used chat with rtx to create a thingmabob to make yo wife bald and yo dog fat, watch out m blud

    • @ekot0419
      @ekot0419 4 months ago

      I have been doing that with ChatGPT for a long time already.

  • @tbarczyk1
    @tbarczyk1 3 months ago

    Awesome tutorial! This is the first one of yours that I've watched, but between this one and a few others I've looked at since, your tutorials are the best I've seen anywhere. Thanks for getting into all the interesting details and dumbing it down like your viewers are idiots.

  • @MrErick1160
    @MrErick1160 5 months ago +72

    Wow, this is AMAZING. A non-cloud chat that we can use with our local documents!!! Freaking cool and very useful product; NVIDIA definitely knows what people need.

    • @DrakeStardragon
      @DrakeStardragon 5 months ago +2

      Uhh, they are not the first, but ok.

    • @merlinwarage
      @merlinwarage 5 months ago

      LM Studio has been out for almost 8 months; it does the same and 10x more.

    • @KillFrenzy96
      @KillFrenzy96 5 months ago

      Well we already have many solutions for this. It's running Mistral 7B which has been available for many months now. It's nowhere near ChatGPT quality though.
      However if you have a 24GB GPU, I would suggest running the more powerful Mixtral 8x7B model using EXL2 3.5 bpw quantization. I use the oobabooga WebUI for this. It's about as powerful as ChatGPT free, but is much less restrictive.

    • @adrianzockt5347
      @adrianzockt5347 5 months ago +1

      GPT4All also exists and supports multiple chats, like ChatGPT does. However, it crashes when reading large documents and doesn't have the YouTube feature.

    • @chromefuture5561
      @chromefuture5561 5 months ago +2

      And it finally adds another real reason to get the 40-series RTX cards.

  • @SB-KNIGHT
    @SB-KNIGHT 5 months ago +8

    This is really cool and one of the biggest missing pieces in the whole equation. Being able to run these models locally and be able to highly curate your own will be very valuable. GPT4All is really neat, does a decent job with this as well, so I am glad to see something similar from Nvidia who makes the GPUs. Crazy times!

  • @ashw1nsharma
    @ashw1nsharma 5 months ago

    Thanks for this new discovery! Hope you're having a nice day! 🌻

  • @no_the_other_ariksquad
    @no_the_other_ariksquad 5 months ago +11

    It's really useful when you have a folder full of documentation for different APIs and all sorts of things; very good for that.

  • @19mitch54
    @19mitch54 5 months ago +27

    After exhausting the free trials of DALL-E and Midjourney, I bought my new computer with the RTX3070 to run Stable Diffusion. I love this AI stuff. Chat with RTX was a LONG download and it downloaded more dependencies during install but was worth it. I didn’t bother exploring the included dataset and started with my own documents. This works great! I want to build a big library of references and put this thing to work.

    • @jimmydesouza4375
      @jimmydesouza4375 5 months ago

      How good is it for automatically generating things? For example, if you feed it a bunch of PDFs for a roleplaying game ruleset and setting and then ask it to generate DM prompts from that, can it do it?

    • @19mitch54
      @19mitch54 5 months ago +8

      I don’t know much about role playing games. The program is good at answering questions. I pointed it to some manuals including my car’s owners’ manual and it was able to answer technical questions like “how do I reset the service interval?” I want to test it with some microcontroller programming manuals next.

    • @Vysair
      @Vysair 5 months ago

      @@19mitch54 This is wicked. Your use case is perfect for programmers and the like.

    • @AvtarSingh1122
      @AvtarSingh1122 4 months ago

      Nice👌🏻

    • @amumuisalivedatcom8567
      @amumuisalivedatcom8567 2 months ago

      @@jimmydesouza4375 I'm late, but yup: consider using RAG (Retrieval-Augmented Generation) to pass docs to the LLM.
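The RAG idea mentioned in the reply above can be sketched in a few lines: retrieve the document chunks most relevant to the question, then prepend them to the prompt. This is an illustrative toy, not Chat with RTX's actual pipeline; real systems score chunks with embedding similarity rather than the word-overlap stand-in used here, and the function names are made up for the example.

```python
# Toy retrieval-augmented generation (RAG) sketch. Word overlap stands in
# for real embedding similarity; names are illustrative, not any real API.
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Prepend the retrieved context so the model answers from your documents."""
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```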

  • @minty87
    @minty87 5 months ago

    Would love to see a photo generator on it; I'd definitely get it in that case. Nice video.

  • @elpideus
    @elpideus 5 months ago +1

    Definitely much easier to set up than your average text-generation-webui; however, it still has a long way to go when it comes to features and control.

  • @RedVRCC
    @RedVRCC 2 months ago

    Thanks! I just downloaded and installed it, but I'm not too sure how to get it running. Working with these complex LLMs is still new to me, but I really want my own AI, so your video really helps. I hope this runs well enough on my entry-level 3060. This seems simple enough. Will it at least remember everything it learned so I can keep training it more and more?

  • @IIHydraII
    @IIHydraII 5 months ago

    Can you make a video about different presentation modes and how to set them? I’m trying to get my games to run in Hardware Composed: Independent flip, but I’ve only been successful when running games in non native resolutions and also forcing windows to use that resolution. If I try to run native, I end up with Hardware: Independent Flip. I’m aware the only difference between HWCF and HWI is that the former uses DirectFlip optimisations, but I can’t figure out why they’re not working at native resolution. Kinda stumped here. 😅

  • @EuropaeusOrigo
    @EuropaeusOrigo 5 months ago

    Very cool!

  • @_B.C_
    @_B.C_ 5 months ago +1

    Will it do this for YouTube videos in another language?

  • @LaminarRainbow
    @LaminarRainbow 5 months ago

    Thank you!!

  • @girinathprthi
    @girinathprthi 5 months ago

    Interesting. Started downloading this app.

  • @yuro1337
    @yuro1337 5 months ago +3

    It looks like Whisper AI with a chat interface and some additional models.

  • @user-uw9ir7fl8l
    @user-uw9ir7fl8l 2 months ago

    Yup, a solid demo for an intro to your PC with a local AI model.

  • @johncollins9263
    @johncollins9263 1 month ago +1

    I'm having an issue installing this: it comes up with "Chat with RTX failed to install". Hardware isn't the issue, as everything I have is new, but it still refuses to work?

  • @invisisolation
    @invisisolation 5 months ago +15

    I’m curious… If you’re comparing between models with the same amount of VRAM (e.g. 3050, 3060 8GB, 4060) will the quality of the outputs improve if the card is better or will it only just have a faster/slower response time?

    • @ahmetemin08
      @ahmetemin08 5 months ago +1

      no, only the interference speed will differ.

    • @Embassy_of_Jupiter
      @Embassy_of_Jupiter 5 months ago +5

      If it's the same model, not running at lower precision, it shouldn't make a difference in quality.

    • @Unknown-xm8ll
      @Unknown-xm8ll 5 months ago +3

      See, the weights in a neural network are preset by Nvidia, so there's no change in responses: the model ships with fixed weights, which determine its accuracy and precision. A better, faster GPU like a 4070, 4080 or 4090 can improve the speed of the results, but the jump up to the 4080 isn't significant; only the 4090 is noticeably faster than the other GPUs. And fun fact: you can run Chat with RTX on an AMD GPU 😂 with slight tweaks, or just copy the model data and paste it into the llama interface.

    • @PrintScreen.
      @PrintScreen. 5 months ago +1

      @@ahmetemin08 isn't it "inference" ?

    • @ahmetemin08
      @ahmetemin08 5 months ago

      @@PrintScreen. you are correct

  • @IzanamiNoMikotoo
    @IzanamiNoMikotoo 5 months ago +10

    The reason Llama 2 doesn't show is that it "requires" 16GB of VRAM. It will only let you install it if your card has at least 16GB... Unless you change the setting in the llama13b.nvi file. If you set the value to, say, 10GB then you can run it on a 3080 10GB. Idk if it will work perfectly but you can try.

    • @codeblue6925
      @codeblue6925 5 months ago

      where is that file located?

    • @codeblue6925
      @codeblue6925 5 months ago

      nvm i found it

    • @crobinso2010
      @crobinso2010 5 months ago

      @@codeblue6925 Did it work? I have a 12GB 3060

    • @rockcrystal3277
      @rockcrystal3277 4 months ago

      how do you change the setting in the llama13b.nvi file to 10gb for it to work?

    • @IzanamiNoMikotoo
      @IzanamiNoMikotoo 4 months ago

      @@rockcrystal3277 Go to the llama13b.nvi file located in the installation directory
      “\NVIDIA_ChatWithRTX_Demo\ChatWithRTX_Offline_2_11_mistral_Llama\RAG”. Then change the "MinSupportedVRAMSize" value to however many GB of VRAM your card has.
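The edit described in the reply above can also be scripted. A hedged sketch: the .nvi files are plain text, but the exact `value="NN"` layout around `MinSupportedVRAMSize` is an assumption here, so open the file in a text editor and confirm it before patching; the function name is made up for the example.

```python
# Sketch: lower the MinSupportedVRAMSize value in llama13b.nvi so the
# Llama 13B option appears on cards with less than 16 GB of VRAM.
# The value="NN" attribute layout is an assumption; verify it first.
import re

def lower_vram_requirement(nvi_text: str, new_gb: int) -> str:
    """Rewrite the MinSupportedVRAMSize entry in the .nvi file text."""
    return re.sub(r'(MinSupportedVRAMSize"\s+value=")\d+', rf'\g<1>{new_gb}', nvi_text)

# Usage (path per the comment above; the drive letter depends on your install):
# path = r"...\NVIDIA_ChatWithRTX_Demo\ChatWithRTX_Offline_2_11_mistral_Llama\RAG\llama13b.nvi"
# text = open(path, encoding="utf-8").read()
# open(path, "w", encoding="utf-8").write(lower_vram_requirement(text, 10))
```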

  • @leeishere7448
    @leeishere7448 5 months ago

    How can I get the Llama 13B model? I don't have it.

  • @hairy7653
    @hairy7653 4 months ago +2

    The YouTube option isn't showing up in my Chat with RTX.

  • @handsonlabssoftwareacademy594
    @handsonlabssoftwareacademy594 1 month ago

    Man, I really like your analysis; great work. So can ChatRTX be used with any CPU and graphics card, including Intel HD Graphics, as long as there's sufficient RAM, like 16GB?

    • @christerjohanzzon
      @christerjohanzzon 1 month ago +1

      No, you need an RTX card from at least the 3000 series. It's the tensor cores that are important. Luckily these cards aren't expensive.

  • @arsalanganjeh198
    @arsalanganjeh198 5 months ago +2

    Nice

  • @siddharthmishra8283
    @siddharthmishra8283 4 months ago

    Waiting for your 12gb SUPIR version installation guide for A1111 Sdxl 😊

  • @ubaidfayaz1989
    @ubaidfayaz1989 2 months ago

    Sir, how can we bypass the Nvidia check that occurs prior to installation?

  • @Jascensionvoid
    @Jascensionvoid 5 months ago +1

    I keep getting this error when trying to upload some PDFs into my dataset:
    [02/23/2024-19:42:28] could not convert string to float: '98.-85' : Float Object (b'98.-85') invalid; use 0.0 instead

    • @MTX1699
      @MTX1699 4 months ago

      So, is there a solution to this?

  • @Tore_Lund
    @Tore_Lund 5 months ago +4

    Are the system requirements minimums? Is Win11 needed, or does Win10 work?

    • @Vysair
      @Vysair 5 months ago

      Isn't Win11 just Win10 under the hood? Why wouldn't it work?

  • @shadowcaster111
    @shadowcaster111 5 months ago +9

    Is the non-C-drive install fixed yet?
    I tried it on my P drive and it failed to install.

    • @Green_Toast
      @Green_Toast 5 months ago

      No, sadly not; they talked about it on the Nvidia forum.

    • @jackflash6377
      @jackflash6377 4 months ago +1

      I just installed it to my F: drive under a folder named RTXChat and it's working as normal.

  • @KenZync.
    @KenZync. 4 months ago

    I just downloaded this and it won't run. Can you try removing and redownloading it? I think Nvidia broke something.

  • @TheMangese
    @TheMangese 3 months ago

    I'm interested in having an interactive AI chatbot in my chat channel on Twitch. Can this do that?

  • @JoyKazuhira
    @JoyKazuhira 5 months ago

    Wow, maybe in the future this will be added to a game. I'd definitely use it instead of turning on ray tracing.

  • @faa-
    @faa- 5 months ago

    this is so cool

  • @monkshee
    @monkshee 5 months ago +4

    Hey man, I don't see the Llama option when installing. I already have an install; how would I add it to the list of models?

    • @haseef
      @haseef 5 months ago +1

      same issue here even though I ticked clean install

    • @N1h1L3
      @N1h1L3 5 months ago

      Win 10?

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 4 months ago

      Llama 2 needs 16GB of VRAM unquantized, so if you have 8GB it doesn't get installed.

  • @moonduckmaximus6404
    @moonduckmaximus6404 4 months ago +1

    THE YOUTUBE OPTION DOES NOT EXIST IN THE DROP-DOWN MENU

  • @TonTheCreator
    @TonTheCreator 3 months ago

    I installed and used it, but after I closed it I can't open it again. I mean, I don't know how to.

  • @elgodric
    @elgodric 5 months ago +3

    How many pages of the document can Mistral 7B handle?

  • @abdiel_hd
    @abdiel_hd 3 months ago

    Mine didn't come with YouTube as a dataset/source... can someone help me? I have a laptop with a 3070.

  • @Vimal_S_Thomas
    @Vimal_S_Thomas 3 months ago

    Will it work on my laptop with an RTX 2050?

  • @lolxgaming7993
    @lolxgaming7993 1 month ago

    I tried downloading it, but the download is really slow; is this normal?

  • @user-sl9op3gy5e
    @user-sl9op3gy5e 3 months ago +1

    I don't have the YouTube URL option.

  • @blitzguitar
    @blitzguitar 5 months ago

    Can I use it to overclock my 3070?

  • @LaminarRainbow
    @LaminarRainbow 5 months ago

    Originally I thought it didn't work, but turns out I just have to wait.. :P

  • @thanksfernuthin
    @thanksfernuthin 5 months ago +4

    You finally made another video I'm interested in! 😃I was just on the verge of letting you go. My main interest is the AI stuff.

    • @TroubleChute
      @TroubleChute 5 months ago +6

      Always happy to cover new stuff when I hear about it ~ A friend let me know of this. I also saw the new OpenAI video stuff... but nobody has access to that yet...

    • @RentaEric
      @RentaEric 5 months ago +3

      You do know a subscribe is free. If you leave 10 others will replace you 😅

    • @thanksfernuthin
      @thanksfernuthin 5 months ago +2

      @@RentaEric Unless he doesn't create content they want. You understand how consensual interactions work, right? Or do you have ten thousand subscriptions and you can't pick out what you want to see from all the crap?

    • @RentaEric
      @RentaEric 5 months ago +5

      @@thanksfernuthin You act like you support him financially, or even through liking every video and commenting. Do you? If not, your opinion is irrelevant, because you're talking about leaving if he doesn't give you what you want. Have you given him anything besides taking his free content?

    • @thanksfernuthin
      @thanksfernuthin 5 months ago +3

      @@RentaEric So it's a bad thing to give feedback in your mind? You think he doesn't want to know when people like what he does or doesn't like what he does? Have you ever produced something of value for another human being in your life?

  • @mayorc
    @mayorc 5 months ago

    Does it support custom models, like local servers using the OpenAI API endpoint?

    • @JA_BRE
      @JA_BRE 5 months ago +2

      It's only a demo; no way it supports that yet...

  • @banabana4691
    @banabana4691 5 months ago

    I think it makes Nvidia graphics cards more valuable.

  • @dioghane231
    @dioghane231 2 months ago

    I have an RTX 3050 and it won't let me install it. Why?

  • @arooman3194
    @arooman3194 5 months ago +1

    At 6:56 I can't understand the tools you suggest; would you mind posting a link to those tools?

    • @carlossalgado9075
      @carlossalgado9075 5 months ago

      Same issue.

    • @sky37blue
      @sky37blue 3 months ago

      It is in the video description
      [CHAT] Oobabooga Desktop:
      • NEW POWERFUL Local ChatGPT 🤯 Mindblow...

  • @IndieAuthorX
    @IndieAuthorX 5 months ago +11

    I was excited to use this, but when I got it up and running things didn't work so well. I realized that it technically wasn't made to run on Windows 10, according to the requirements page, and I think that might be why. I think this kind of thing has potential, but I want a chatbot that is fully released for commercial use before getting too comfy with it.

    • @acllhes
      @acllhes 5 months ago +2

      Windows 11 is one of the requirements listed

    • @IndieAuthorX
      @IndieAuthorX 5 months ago

      @@acllhes yeah, I saw that after. I could have sworn I'd seen both systems. I might have read a non Nvidia page first and then just installed.

    • @fontende
      @fontende 5 months ago

      I'm not sure what you mean by "commercial"; none of this is allowed commercially by the license. It's only allowed for research use by the original Llama license (except where it's based on Llama 2, where some commercial use is allowed but limited by installations). If you just want a chatbot right away, the easiest way is Llamafile by Mozilla: just click and it works. Their small model container is around 1.5 GB but can analyze images.

  • @Jcorella
    @Jcorella 5 months ago

    6:57 What was that model? I couldn't understand you.

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 4 months ago

      Oobabooga desktop, which is itself a GUI similar to this, but it lets you use custom models. It's more complicated to set up, though, with Python 3.10.9.

  • @SpudHead42
    @SpudHead42 5 months ago

    Does it support other models, like Mixtral?

    • @zslayerlpsfmandminecraftan367
      @zslayerlpsfmandminecraftan367 4 months ago

      At the current time, no. For that you need a GUI like oobabooga or KoboldCpp, which support custom models.

  • @erkinox1391
    @erkinox1391 5 months ago +2

    I really don't get it. I meet all of the requirements (VRAM, RAM, OS, latest driver, plenty of storage), but whenever I launch the installation it stops and says "Chat with RTX Failed" and "Mistral Not Installed".

    • @jaderey467
      @jaderey467 5 months ago

      Are you on Windows 11? It doesn't work on 10.

    • @ben9262
      @ben9262 5 months ago

      I'm getting the same thing

    • @AlecksSubtil
      @AlecksSubtil 4 months ago

      Completely disable your antivirus, and check the tray icon to disable it from there too; Avast, for example, has to be disabled via the tray icon, as the GUI alone is not enough. Also install it to the default folder. It may be necessary to run it with admin privileges. It's safe to install, btw.

  • @glucapav
    @glucapav 4 months ago

    It says I don't have 8 GB of GPU memory. Is it checking my integrated GPU instead of my Nvidia card? How do I fix this? I'm using an Asus Pro Duo, so the BIOS isn't letting me change it.

    • @queless
      @queless 4 months ago

      What card do you have?

  • @MiNombreEsEscanor
    @MiNombreEsEscanor 5 months ago +1

    I downloaded this and it works pretty well locally, but I want to create a web application and use this chatbot in it. Currently Chat with RTX doesn't offer an API to send questions and retrieve answers. Is there any way to achieve this? Or maybe they will add an API feature in the future? What do you guys think?

    • @Hypersniper05
      @Hypersniper05 5 months ago +1

      Text generation webui

    • @voidsh4man
      @voidsh4man 5 months ago +1

      At scale it would cost you more to run an AI chatbot on your own hardware than to use OpenAI's API.

    • @anispinner
      @anispinner 5 months ago +1

      Considering it runs a local node, I suppose one of the folders should contain plain .js files; otherwise it might be packed as an Electron app, which you can unpack and inject your API into.

    • @fontende
      @fontende 5 months ago

      Nvidia has never made any great software; they're hardware-only. Don't count on it. Why do you think we use Afterburner, made by MSI? (Why Nvidia can't make such a tool is a puzzle.) They could have made this a year ago by hiring any student from an AI faculty.

    • @anispinner
      @anispinner 5 months ago

      Puzzle? Why would you make overclocking software that goes against your business model? Your goal (as a business) should be to sell the product, not to extend its lifespan.

  • @kathiravan_vj
    @kathiravan_vj 5 months ago

    Does the RTX 2060 Super support this with 16GB of RAM?

    • @xXXEnderCraftXXx
      @xXXEnderCraftXXx 5 months ago +1

      Well, no. At least not without some bypass programs.

  • @TazzSmk
    @TazzSmk 5 months ago

    Is the RTX A4000 supported? It should be an Ampere-generation card, I believe.

    • @skym1nt
      @skym1nt 5 months ago +1

      yes, it can.

  • @vulcan4d
    @vulcan4d 5 months ago +7

    This is a demo, which clearly means Nvidia wants to see how many people will use it so they can release a subscription-based service later for your offline AI needs.

    • @ozz3549
      @ozz3549 5 months ago +1

      That's only a UI for the Llama 2 model; you can find another UI and it will work the same.

    • @gavinderulo12
      @gavinderulo12 5 months ago

      ​@@ozz3549it's also something you can build in a week.

  • @Subarashi77
    @Subarashi77 3 months ago +2

    They removed the YouTube URL option.

  • @violentvincentplus
    @violentvincentplus 5 months ago +21

    35GB goes crazy

    • @Flashback_Jack
      @Flashback_Jack 5 months ago +9

      About the same size as a triple A game.

    • @pedro.alcatra
      @pedro.alcatra 5 months ago +3

      Exactly. The size is absolutely fine; the problem is having to download it through the browser instead of a download manager.

    • @arsalanganjeh198
      @arsalanganjeh198 5 months ago +4

      Lighter than Cities: Skylines 2 😂

    • @gamingballsgaming
      @gamingballsgaming 5 months ago +2

      @pedro.alcatra I'm fine with that for archival purposes. If I want to install it in the future, I can, as long as I have the exe, even if the Nvidia servers shut down.

    • @Javier64691
      @Javier64691 5 months ago +5

      @@Flashback_Jack An old triple-A; most nowadays are 60GB plus.

  • @cmdr.o7
    @cmdr.o7 5 months ago +15

    I hope this software doesn't just snoop around your file system and documents, scraping it all back to Nvidia with telemetry.
    Wouldn't be surprised at all if it did; people have little respect left for privacy.
    If it turns out it does, well, I just hope the video author has done his research and isn't just blindly enabling Nvidia.
    That said, we are each responsible for our own security and for fighting back against invasive big tech, malware, root kits, etc.

    • @Jet_Set_Go
      @Jet_Set_Go 5 months ago +8

      They have Nvidia Experience for that already

    • @jordanturner7821
      @jordanturner7821 5 months ago

      @@jeffmccloud905 They already do that with telemetry data. He absolutely does know what he is talking about.

    • @cmdr.o7
      @cmdr.o7 5 months ago +3

      @@jeffmccloud905 That's right, that is the troubling part.
      Clearly you don't know either, or you would have enlightened us; but you are a man of few words.
      Scraping user data is not a big mystery; it happens everywhere, and I think most people have a pretty good idea about that.
      And I do actually know quite a lot about AI systems, and Nvidia xD

    • @AndrewTSq
      @AndrewTSq 5 months ago +2

      I think Microsoft's AI already does that in Win11.

    • @goldmund22
      @goldmund22 3 months ago

      I'm glad I finally found someone commenting on the privacy aspect of this. Since you mentioned you are experienced with AI and Nvidia, do you think there is a good chance this is happening, even though it is "local"?
      I am considering using it for analyzing specific folders and PDFs related to my work. I guess the only way to be sure it doesn't also have access to everything else is to literally use it on a different PC on a different network. I don't know. Then I think about Microsoft OneDrive, which is already connected to most everything we have on our PCs by default. Just insane.

  • @ahmetrefikeryilmaz4432
    @ahmetrefikeryilmaz4432 5 months ago

    One question: is that an HHKB I've been hearing?

  • @jomymatthews
    @jomymatthews 5 months ago

    What is Ub boo boogie desktop ?

  • @jonmichaelgalindo
    @jonmichaelgalindo 5 months ago +10

    Thanks for the video. Very informative. GPT4All and LMStudio are probably easier for most users though, and they support more models, more OSs, and more features. I wonder what NVidia thought was so special about this...

    • @NippieMan
      @NippieMan 5 months ago +4

      Offline AIs can be useful, since companies such as OpenAI impose very restrictive rules. While there are already programs that can do what NVIDIA is offering, most consumers are too stupid to set them up themselves.

    • @AntonChekhoff
      @AntonChekhoff 5 months ago

      Which GPU-accelerated model would you recommend? For translation for instance?

    • @bigglyguy8429
      @bigglyguy8429 5 months ago +2

      Well, I love Faraday and LM Studio, but getting them to understand my own docs is hard.

    • @jonmichaelgalindo
      @jonmichaelgalindo 5 months ago +1

      @@AntonChekhoff I haven't done any translation. I use Mistral raw for my D&D solver system, and for creative writing (mostly for generating large lists, like a thesaurus but for abstract topics).

    • @crobinso2010
      @crobinso2010 5 months ago

      I'm hoping for that too: a comparison between LM Studio and Chat with RTX, which do the same things.

  • @juanb0609
    @juanb0609 4 months ago +2

    I don't have the option for YouTube videos.

  • @Lp-ze1tg
    @Lp-ze1tg 5 months ago

    How slow will it be if I run it with 4GB or even 2GB of VRAM?
    Will it even run with less than 8GB of VRAM?

    • @Baconator119
      @Baconator119 5 months ago +1

      It requires a 30 or 40 Series GPU, the weakest of which iirc is a 3050 with 6GB of VRAM. So, will it run with less than 8? Yeah. It might be slow, though.

    • @MARProduction24434
      @MARProduction24434 5 months ago

      Tried it. The installer just blocks it if the requirements aren't met ;(

  • @KrishnVallabhDas
    @KrishnVallabhDas 5 months ago

    I am getting this error:
    ModuleNotFoundError: No module named 'torch'
    How do I fix this??

    • @CindyHuskyGirl
      @CindyHuskyGirl 4 months ago +1

      pip install torch (put this into your terminal)

    • @OpenAITutor
      @OpenAITutor 4 months ago

      You should go through the installer; it has all the stuff built in. It also creates its own virtual Python environment in a folder called env_vnd_rag.

  • @rockcrystal3277
    @rockcrystal3277 5 months ago

    I noticed Llama didn't install for you either; have you found any way to install it?

    • @queless
      @queless 4 months ago

      It requires an RTX card with 16GB of VRAM or more.

    • @rockcrystal3277
      @rockcrystal3277 4 months ago

      @@queless how do you change the setting in the llama13b.nvi file to 10gb for it to work?

    • @queless
      @queless 4 months ago

      @@rockcrystal3277 Don't know. I have a 4070 Ti Super OC 16GB; it worked for me without anything extra. Uninstalled it an hour later because the AI is super basic, like ChatGPT 1 but dumber.

  • @rionix88
    @rionix88 5 months ago +1

    Gemini will use this technology; you'll be able to chat with a 1-hour video.

  • @GKGames2018
    @GKGames2018 4 months ago

    Mine does not have YouTube.

  • @muruganmurugan507
    @muruganmurugan507 5 months ago

    It's cool. Does it support a single 2GB PDF with 4000 pages? 😂

  • @OpenSourceGuyYT
    @OpenSourceGuyYT 5 months ago +2

    Yea. With Ollama, you don't need to have an RTX GPU. And it's offline too.

  • @bensoos
    @bensoos 5 months ago

    Now real intelligent bots in games.

  • @MaiderGoku
    @MaiderGoku 5 months ago +1

    Answer this properly: what's the download size, and how much space does it take on your hard drive?

    • @IMABADKITTY
      @IMABADKITTY 5 months ago +1

      35gb download size

    • @MaiderGoku
      @MaiderGoku 5 months ago +1

      @@IMABADKITTY how much for rtx remix?

  • @buttpub
    @buttpub 5 months ago +2

    So why on earth would anyone choose this over, for example, Ollama through WSL on Windows, or even easier, GPT4All? With this you only get one model, Mistral, which is a good model, but at 35GB of download how could that possibly be the model file, considering the minimum requirement is 8GB? So what other bloatware is in there? The Mistral model is only 7.4GB through any of the freeware model-query tools mentioned above, or by just downloading the model and weights yourself. Nvidia is once again late to the party, and they forgot the drinks.

    • @anispinner
      @anispinner 5 months ago +1

      Most of those you mentioned use the CPU for that easier setup, especially GPT4All. As for the size, I'd guess it's the dependencies, plus the convenience that you can uninstall everything with one click, since most of it should be within one folder. Otherwise the user has to deal with Pythons, condas, and other reptiles. Hmm, maybe it also contains a portable CUDA? I'd have to give it a closer look as well.

    • @buttpub
      @buttpub 5 months ago +1

      @@anispinner Most of what I mentioned? GPT4All AND Ollama BOTH have options to use CPU or GPU depending on your setup. If you have gotten to the point of trying to mess with LLMs on your local PC, then you know how to open a terminal window.

    • @anispinner
      @anispinner 5 months ago +1

      There is quite a difference between opening a console and clicking an install button.

    • @buttpub
      @buttpub 5 months ago +1

      @@anispinner Indeed, without context there is. But with context, and the fact that these are LLMs, you need some basic understanding before you even embark on this. And people without any are rarely at this point yet, and if they are, then learn.

  • @arsalanganjeh198
    @arsalanganjeh198 5 months ago +1

    Is there any chance of using this with a 4GB graphics card?

    • @VGHOST008
      @VGHOST008 5 months ago +1

      You can install oobabooga locally and use a relatively small model like Tiny-Llama 1B or some other 3B~ model. NVidia uses a 7B model (requires exactly 8Gb of VRAM at medium~ accuracy settings) as a low end solution so there is no way you'd be able to run it with decent performance on 4Gb of VRAM.

    • @galaxymariosuper
      @galaxymariosuper 5 months ago +1

      A much better option is LM Studio. There you can offload layers from the NN to the GPU as you wish, and the installation and usage are even easier than this RTX stuff.

    • @VGHOST008
      @VGHOST008 5 months ago

      @@galaxymariosuper Yeah, stability is also an issue with LM Studio. It often crashes and the results it produces are very shallow. Same with GPT4All and any other relatively small client (Kobold UI would be the only exception; it just crashes often).

    • @fontende
      @fontende 5 months ago

      An even easier solution is the Llamafile container by Mozilla; it runs on Win 8 on very old hardware. I personally use oobabooga, but it's annoying how every new update breaks previously working functions that then aren't fixed for months. Always back these up before updates.

    • @mayday2011
      @mayday2011 5 months ago +1

      I have a 3060 with 6GB of VRAM.

  • @083-cse-sameerkhan3
    @083-cse-sameerkhan3 5 months ago

    Will it work on a GTX 1650?

  • @spicymaggi1853
    @spicymaggi1853 5 months ago +1

    I only have 4GB VRAM (dedicated). Is there any workaround for this?

    • @mascot4950
      @mascot4950 5 months ago

      If you are not aware of LM Studio, you might want to check that out, as it doesn't require a GPU (but it does support using one, and you can partially offload however many layers the GPU has VRAM to hold). Assuming sufficient RAM+VRAM, you can download and use the same model. But there's no ability to ingest local files, as far as I am aware.
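The partial offload mentioned here comes down to simple budgeting: keep as many layers on the GPU as VRAM allows and run the rest on the CPU. A toy sketch of that calculation, with made-up layer sizes and overhead (tools like LM Studio expose this as a "GPU layers" setting rather than this exact formula):

```python
# Hypothetical offload budget: how many of a model's layers fit on the GPU?
# All numbers here are illustrative, not measured values for any real model.
def layers_on_gpu(vram_gb: float, n_layers: int, layer_size_gb: float,
                  overhead_gb: float = 1.0) -> int:
    """Return how many layers fit after reserving some VRAM for overhead."""
    usable = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable // layer_size_gb))

# A 32-layer model at ~0.22 GB/layer on a 4 GB card: only a partial offload.
print(layers_on_gpu(4.0, 32, 0.22))   # 13 of 32 layers
```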

  • @mr.bekfast9744
    @mr.bekfast9744 5 months ago +1

    Am I the only one downloading this and finding that Setup.exe is not in the zip file?

    • @victornpb
      @victornpb 5 months ago

      same problem, zip seems corrupted

    • @pillowism
      @pillowism 5 months ago

      Same issue here

    • @0AThijs
      @0AThijs 5 months ago

      For many 😔

    • @mr.bekfast9744
      @mr.bekfast9744 5 months ago

      @@victornpb Okay, good to know that I'm not the only one. Is there any way for us to report it, or to get an older version where the zip isn't messed up?

  • @Xandercorp
    @Xandercorp 5 months ago

    So how private is it?

    • @notram249
      @notram249 5 months ago

      Very
      Since it runs on your pc

  • @flurit
    @flurit 5 months ago

    Nvidia's really making me regret getting an AMD card.

  • @boro057
    @boro057 5 months ago +4

    Pretty cool that the setup is so simple. I wonder if there's any telemetry going on in the background. GeForce Experience has loads, which is why I avoid it.

  • @Waldherz
    @Waldherz 5 months ago

    Downloading dependencies for hours and hours and hours.
    Zero network activity. Anti virus checked, admin mode checked, network checked. No user error.

  • @XiangWeiHuang
    @XiangWeiHuang 5 months ago

    Can we make an erotic roleplay chatbot with this? I use the OpenAI API solely for those.

  • @Spengas
    @Spengas 5 months ago

    That sucks that it is windows 11 only... never upgrading from 10

  • @heyguyslolGAMING
    @heyguyslolGAMING 5 months ago +2

    What is the fastest animal on the planet?

    • @DeepThinker193
      @DeepThinker193 5 months ago +2

      The slug.

    • @Spectrulight
      @Spectrulight 5 months ago +1

      Idk probably a falcon

    • @N1h1L3
      @N1h1L3 5 months ago +1

      @@Spectrulight The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over 300 km/h (190 mph).

    • @TenOfClub
      @TenOfClub 5 months ago

      airborne Microbes👌👌

    • @bgill7475
      @bgill7475 5 months ago

      Me when I need to pee

  • @carsfan9648
    @carsfan9648 5 months ago +3

    zip corrupted?

    • @0AThijs
      @0AThijs 5 months ago +2

      It seems... 😢 35GB!

    • @aalejanddro2328
      @aalejanddro2328 5 months ago +1

      Is there a fix?

    • @carsfan9648
      @carsfan9648 5 months ago

      Is it because I have windows 10?

    • @0AThijs
      @0AThijs 5 months ago

      @@carsfan9648 no, should be fixed, I haven't tried it, redownload 🥲

  • @nosinfantasia
    @nosinfantasia 5 months ago

    Anyone else getting "installer failed" with no reason given?

    • @OpenAITutor
      @OpenAITutor 4 months ago

      This only works for RTX 4000 series min with 8GB of VRAM.

  • @_vr
    @_vr 5 months ago

    Llama is Facebook's chat model

  • @CrudelyMade
    @CrudelyMade 5 months ago

    6:57 using WHAT kind of desktop? lol

  • @sriaakashsrikanth8622
    @sriaakashsrikanth8622 4 months ago

    Can an Nvidia GeForce GTX 1650 be used?

  • @blueyf22
    @blueyf22 3 months ago

    my teachers will never know what hit em

  • @im_Dafox
    @im_Dafox 5 months ago

    everything was fine until "windows 11" 😄
    Shame, looks really cool and useful

  • @MousePotato
    @MousePotato 2 months ago

    AI voice. Us Brits never say "anyway" with a plural.

  • @andyone7616
    @andyone7616 4 months ago +1

    Can you make a video on how to uninstall chat with rtx?

  • @mhvdm
    @mhvdm 5 months ago

    Very buggy, tested it myself and I must say I'm impressed, but darn they need to fix bugs. It was very bad at responding to stuff in general.

  • @NarbsWorldTV
    @NarbsWorldTV 5 months ago

    It didn't chat.

  • @Ortagonation
    @Ortagonation 5 months ago

    Has dedicated Tensor cores for AI, but uses RT cores instead. Kinda funny.

  • @paulocoelho558
    @paulocoelho558 4 months ago

    File Size 35 GB? Why? 💀💀

    • @OpenAITutor
      @OpenAITutor 4 months ago

      The two LLMs are 14 GB and 8 GB. Then NVIDIA installs Miniconda and all the Python libraries in a separate 16 GB environment called env_nvd_rag, plus TensorRT-LLM for building the engines that work with your GPU.

  • @itxaddict7503
    @itxaddict7503 5 months ago

    C'mon Skynet. You need us to hand you the world on a silver platter?

  • @TsukikoKiri
    @TsukikoKiri 5 months ago

    No rtx 20 series? Yikes.

    • @TheMidnightGoose
      @TheMidnightGoose 5 months ago

      If you're technically inclined, look up "oobabooga Text Generation WebUI". Running LLMs locally has been possible for a long time now, and it supports any graphics card that can run the models. It also has far more features than Chat with RTX. Sad to see another mega-corporation attempting to stick their grubby fingers into the open source scene.