NEW POWERFUL Local ChatGPT 🤯 Mindblowing Unrestricted GPT-4 | Vicuna

  • Published 29 Sep 2024

COMMENTS • 488

  • @TroubleChute
    @TroubleChute  1 year ago +19

    UPDATED VIDEO: If you don't have a GPU, want to use the CPU version, or get a llama_inference_offload or CUDA error, see the new video here: ua-cam.com/video/d4dk_7FptXk/v-deo.html
    You can download the CPU model if you have the GPU version already installed, or reinstall the entire thing from scratch.
    Huge improvements, and I'm surprised the solution in other videos and articles is "use something else" or "don't use the webui"... This is so much better...

    • @HostileRespite
      @HostileRespite 1 year ago +1

      Quick question. Can this recruit other AI to complete tasks like HuggingGPT and AutoGPT? I'm looking for an AI that I can personalize, and even run as a satellite on Colab for my own needs. I'm a bit of a mad scientist with my hands in many cookie jars, so to speak, so this multi-functional ability is very exciting! If it doesn't have this ability, do you think it can be trained to?

    • @scorpio2921
      @scorpio2921 1 year ago

      Thanks, I will try that. I followed everything, but when opening the web UI it wasn't working, getting no answer or whatever! I'll try this video, thanks a lot! :)

    • @bluejazzman
      @bluejazzman 1 year ago

      Thanks for the video! I tried to use the one-line PowerShell installation script, but it's saying that the file is not found. Would you mind sharing the script again? Thank you!

    • @thewizardsofthezoo5376
      @thewizardsofthezoo5376 1 year ago

      @@HostileRespite I would guess nice-to-have features would be: blowjobs, cooking, cleaning around, and back rubs too. I am looking for the GitHub repo, please help!!

    • @RomboDawg
      @RomboDawg 1 year ago +1

      OP, your one-click installer is broken

  • @TroubleChute
    @TroubleChute  1 year ago +65

    The llama_inference_offload error a lot of people are having sounds like an issue with the CPU-only option. I'll look into this in a few hours and probably have info on fixing it; otherwise it's up to the developers to fix it.

    • @vuo7ng
      @vuo7ng 1 year ago +6

      I love you

    • @FURYWOLF
      @FURYWOLF 1 year ago +4

      It says out of memory when I type in the chat. I've got a 3070 Ti.

    • @DoubsGaming
      @DoubsGaming 1 year ago +4

      There is also an issue with picking the GPU option, something about a missing file path.

    • @jamesmaine8438
      @jamesmaine8438 1 year ago +2

      @@FURYWOLF Same issue. Did you figure this out?

    • @sonic7digiteyes209
      @sonic7digiteyes209 1 year ago +6

      Same here: "No module named 'llama_inference_offload'"

  • @Serifinity
    @Serifinity 1 year ago

    Thanks for the great walkthrough video. Very useful.

  • @newpersia88
    @newpersia88 1 year ago +1

    I tried this but the responses are gibberish. Characters from many different languages appear in one word 🤔

  • @MaximilianPs
    @MaximilianPs 1 year ago

    Can I ask what those small emoticons are?
    While you're typing, I notice that they pop up like they're analyzing the text that you enter 🤔

  • @null69420
    @null69420 1 year ago +18

    There is an error; it reads: "import llama_inference_offload
    ModuleNotFoundError: No module named 'llama_inference_offload'"

    • @Karvan420
      @Karvan420 1 year ago +4

      I got the same error. I've retried the same installation three times; it doesn't work. I used CPU mode during installation.

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @zeiko3834
    @zeiko3834 1 year ago +14

    I got an error that said: "import llama_inference_offload
    ModuleNotFoundError: No module named 'llama_inference_offload'"

    • @madakuse
      @madakuse 1 year ago +1

      Same, I got this error

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

    • @zeiko3834
      @zeiko3834 1 year ago

      @@TroubleChute thank youuuu

  • @franckoABO
    @franckoABO 1 year ago +11

    ModuleNotFoundError: No module named 'llama_inference_offload'
    Please help

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @glassslack
    @glassslack 1 year ago +13

    Thanks! I want to learn more from you! I love the one-click since I don't know how to code. I'm trying to learn and this is really helpful!

  • @ed3m1r
    @ed3m1r 1 year ago +2

    How do I fix this error? "FileNotFoundError: [Errno 2] No such file or directory: 'models\\anon8231489123_vicuna-13b-GPTQ-4bit-128g\\pytorch_model-00001-of-00003.bin'"

    • @stratgia
      @stratgia 1 year ago

      I have the same issue. It does not start and that's the error. Thank you!

    • @TroubleChute
      @TroubleChute  1 year ago +1

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

    • @stratgia
      @stratgia 1 year ago

      @@TroubleChute Thank you!!

  • @Reimoto42
    @Reimoto42 1 year ago +25

    Great video! I'm really impressed by the capabilities of Vicuna, it seems like such a powerful tool for natural language processing. I was wondering if there's any way to integrate it with Wolfram Alpha similar to how GPT-4 works? It would be amazing to see what kind of insights we could uncover with that level of computational power at our fingertips. Keep up the great work!

    • @jessem2176
      @jessem2176 1 year ago

      That should be pretty easy, actually, with their extension tutorial... or if they integrate it with LlamaHub or LangChain, this web UI would be unstoppable.

    • @RossPfeiffer
      @RossPfeiffer 1 year ago +2

      Is it censored? Asking for a friend

  • @marcocinalli755
    @marcocinalli755 1 year ago +9

    I had "out of memory" errors on my 3060 Ti (8 GB VRAM) and I solved it by adding these parameters to the launch command in start-webui.bat: --gpu-memory 5 --pre_layer 25
    I am not sure it is the best and most efficient setup. If somebody has suggestions, please let me know.

    • @peterpoulsen4794
      @peterpoulsen4794 1 year ago +2

      Thank you! This is pretty amazing. It's the first working solution I have found online so far for those of us with only 8 GB of VRAM. This should be mentioned in the video description, optimized or not. Hopefully they will fix the issue with running this on the CPU shortly for those without expensive Nvidia graphics cards.

    • @madreric
      @madreric 1 year ago +3

      Thanks for this. I used --gpu-memory 6 --pre_layer 32 and it works! It is very slow but solid. I wonder if there is any more optimization that can be done to make it a bit faster.

    • @procheathepitai
      @procheathepitai 1 year ago

      How did you add these parameters?

    • @TroubleChute
      @TroubleChute  1 year ago +2

      Edit start-webui.bat and modify the line near the end that starts with "call python" (see the example after this thread).

    • @marcocinalli755
      @marcocinalli755 1 year ago

      @@madreric Thanks Raul, on a 3060 Ti I keep having some instability unless I set --pre_layer to something around 25. It is very slow for me too. I am not sure the --gpu-memory parameter really does anything, but the --pre_layer parameter is surely a must to change in case of low VRAM (it sets the number of layers to allocate to the GPU). Hopefully they will make this model more efficient in future updates.
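
      For reference, a minimal sketch of what the edited line in start-webui.bat could look like with the flags from this thread. The script name (server.py) and the model/quantization flags are assumptions about a typical text-generation-webui install, not the exact line from the video; keep whatever arguments your own file already passes and only append the two memory flags:

          REM hypothetical example line; everything except --gpu-memory and --pre_layer is a placeholder from a typical install
          call python server.py --model anon8231489123_vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128 --gpu-memory 5 --pre_layer 25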

  • @NakedTrashPanda
    @NakedTrashPanda 1 year ago +78

    Honestly, I was not going to do this because I know how annoying setting up AI can be (Stable Diffusion), but since you made it so simple, I might as well. Thank you!
    Edit #2: It's only a matter of time before self-hosted AI gets better than ChatGPT. Most Stable Diffusion models are already much better than OpenAI's.

    • @TomFairhall
      @TomFairhall 1 year ago +4

      Hey NTP, how easy was it to set up? If this is now, next month is going to be crazy (I usually say next year, but things are moving so fast). OK, so there are multiple characters, and next the AI will be talking between its own characters.

    • @xiaojinyusaudiobookswebnov4951
      @xiaojinyusaudiobookswebnov4951 1 year ago +2

      It's sad that I ran into a lot of errors while setting this up on Ubuntu (held packages, GPU not discovered, etc.), but I hope it works on Windows.

    • @kaziahscats
      @kaziahscats 1 year ago +1

      @@TomFairhall Character AI already allows AI characters to talk to each other. After about 50 or so messages it just devolves into insanity, though.

    • @JonnyCrackers
      @JonnyCrackers 1 year ago +1

      @@kaziahscats In my experience it can ruin your chatbots. Last time I tried it, one of my bots who never used emojis started spamming random emojis after everything it said because the other bot it was talking to used them a lot.

    • @johannesdolch
      @johannesdolch 1 year ago +2

      Especially because OpenAI is now more interested in how to monetize it and prevent it from giving answers that contradict the one true narrative instead of just making the model better. Sorry to say it, but when the goal is to create fascist AIs, OpenAI is on the right track.

  • @sleetible
    @sleetible 1 year ago +12

    Ok, I'm adding ANOTHER thanks just because this PowerShell is so well written I feel like you've inspired how I'm going to do some things in the future... 💜

    • @TroubleChute
      @TroubleChute  1 year ago +4

      Well, I'm just getting started; I hope you don't learn bad practices from me, haha. iex (irm ...) is a very short way of running things, but far from recommended.

  • @NapalmCandy
    @NapalmCandy 1 year ago +2

    I had an error:
    ModuleNotFoundError: No module named 'wrapt'
    [SOLVED] -> with C:\AI\Texto\Oobabooga\oobabooga-windows\installer_files\env>python -m pip install wrapt

  • @DeGameBox_SRBT
    @DeGameBox_SRBT 1 year ago +2

    Why can't ChatGPT have the same functionality as this open AI?

  • @Leyline-l1f
    @Leyline-l1f 1 year ago +3

    Is a GTX 1070 good enough to run this?

  • @captainyossarian388
    @captainyossarian388 1 year ago +4

    8 GB 3070, but the responses are coming along at about one word per second. Any advice on speeding that up?
    Would I have better luck installing for CPU (i7, 4-core, 2.8 GHz)?

    • @lovol2
      @lovol2 1 year ago

      Oh, that's sad to hear. That's my graphics card 😢. I'll give it a try anyway.

  • @miauzure3960
    @miauzure3960 1 year ago +1

    Honestly, in which way is this AI unrestricted? When it comes to filtering it is as pathetic as ChatGPT, not even giving me an answer about which painkiller is safer to use with alcohol! Other than that, pretty cool, and thanks for making the script.

  • @gavrielxd157
    @gavrielxd157 1 year ago +41

    ModuleNotFoundError: No module named 'llama_inference_offload'

    • @alexdunn4963
      @alexdunn4963 1 year ago +1

      Same

    • @UsamaKenway
      @UsamaKenway 1 year ago +1

      I've fixed it. Are you on Windows or Linux?

    • @octupusss
      @octupusss 1 year ago +3

      @@UsamaKenway Hi there, I'm on Windows... how did you fix it?

    • @TroubleChute
      @TroubleChute  1 year ago +2

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @thefantasyhero
    @thefantasyhero 1 year ago +2

    Hi guys! It gives me the error "This script needs to be run as an administrator.
    Process can try to continue, but will likely fail. Press Enter to continue..." when I try to run it in PowerShell inside the folder. How do I run PowerShell as admin inside a folder in Windows 11?

    • @TsunaXZ
      @TsunaXZ 1 year ago

      Here's my alternative way (see the sketch after this thread):
      1. Run PowerShell as admin.
      2. "Copy as path" the path of the folder you want it to be installed in (you can hold Shift + right-click the folder and find the "Copy as path" option).
      3. Type cd (paste folder path here) REMOVE THE PARENTHESES, then press Enter.
      4. Type iex (irm vicuna.tc.ht), then press Enter.
      EDIT: Just noticed that it will be installed at Desktop/new folder regardless of what you input.

    • @BRoyce69
      @BRoyce69 1 year ago

      Run PowerShell as admin, then:
      > cd C:\[folder path]
      C:\[folder path]> (run the commands from here)
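
      Putting the two replies above together, a minimal PowerShell sketch of the same steps, run from an elevated ("Run as administrator") PowerShell window. The folder path is just an example; the one-liner is the install command quoted in the replies:

          # change to the folder you want to work from (example path)
          cd C:\AI\vicuna
          # run the one-line installer from the video
          iex (irm vicuna.tc.ht)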

  • @Nekotico
    @Nekotico 1 year ago +2

    Me watching the beginning of the AGI... on my most powerful PC, from 2016

  • @jakkalsvibes
    @jakkalsvibes 1 year ago +1

    So far I've tried all your installations; this one has no model downloaded and no download-model.bat file 😞

  • @fabiomatias9088
    @fabiomatias9088 1 year ago +3

    Mine just can't find the Micromamba version after the download. I have a micromamba.exe in oobabooga-windows\installer_files\mamba, but after the download I can't progress. Can you help?

  • @zain1045
    @zain1045 1 year ago +4

    Any way to limit the memory? Or a low-RAM command-line setting? I have 24 GB of RAM and 4 GB of VRAM and still can't start it.

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @xX-DogSama-Xx
    @xX-DogSama-Xx 1 year ago +8

    I'm having a strange issue when trying to run the web UI: right at the line before it's supposed to say "done", it just hangs for a few seconds, then prompts "press any key to continue..." and closes. Anybody know how to solve this? I did a manual install as well and ran into the same issue.

    • @xgunxsh0txpr0x
      @xgunxsh0txpr0x 1 year ago +1

      Commenting to see if you get help; I have this also. I am no coder, so hell if I understand what the error is about, lol.

    • @tsuhao7767
      @tsuhao7767 1 year ago

      I have the same error. Did you find a way to fix it?

    • @xX-DogSama-Xx
      @xX-DogSama-Xx 1 year ago

      @@tsuhao7767 Not yet, sadly

    • @Eestimees125
      @Eestimees125 1 year ago

      I also have the same problem. I guess I'm in the waiting line for an answer. Also, everyone commenting here, please like the comment so it gets more visibility. Any help would be appreciated 🙏

    • @BigBoss-Pioneer
      @BigBoss-Pioneer 1 year ago

      Copy-paste it to ChatGPT

  • @Shakespeare1612
    @Shakespeare1612 1 year ago +6

    Thank you! I have it up and running on my local PC. I'm chatting with it. These are incredible times.

    • @danjones7561
      @danjones7561 1 year ago +2

      Compared to GPT-3.5 and 4, how does it rate? Can you give it long prompts, like 4K words?

  • @markdd4281
    @markdd4281 1 year ago +1

    Thanks for the text guide; this video is kinda confusing, tbh.

  • @Starius2
    @Starius2 1 year ago +3

    If we can't use the intended model, then I'm going to say the 90% claim is bunk.

  • @degagere
    @degagere 1 year ago +4

    Great video! Can you please also consider offering instructions for macOS? Many thanks.

    • @traknologist
      @traknologist 1 year ago

      I definitely want to run this on my Mac and especially utilize the neural engine!

  • @nickherr9338
    @nickherr9338 1 year ago +9

    No module named 'llama_inference_offload'

    • @oocube3183
      @oocube3183 1 year ago +4

      Same problem here, I need help too

    • @sonic7digiteyes209
      @sonic7digiteyes209 1 year ago

      @@oocube3183 Same here - No module named 'llama_inference_offload'

    • @TroubleChute
      @TroubleChute  1 year ago +1

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @unluck1396
    @unluck1396 1 year ago +6

    RuntimeError: [enforce fail at C:\cb\pytorch_1000000000000\work\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 52428800 bytes.
    Help, please?

  • @l3lackoutsMedia
    @l3lackoutsMedia 1 year ago +7

    Amazing format. Your one-click script is very convenient; it essentially delivers great value directly. Subbed.

  • @Fahama
    @Fahama 8 months ago

    Unfortunately I got this: ModuleNotFoundError: No module named 'yaml'

  • @小小綠小綠綠
    @小小綠小綠綠 1 year ago +2

    Micromamba version:
    Micromamba not found.
    Press any key to continue . . .
    Please help

  • @akhilldhilipkumarkalaiyara1601
    @akhilldhilipkumarkalaiyara1601 11 months ago +1

    It says it needs to be run as administrator

  • @Gome.o
    @Gome.o 1 year ago +2

    Hey man, any videos for macOS based AI stuff coming up in the near future?

  • @occultislux
    @occultislux 1 year ago

    After installing it, I try to open it and it just opens up my cmd. It does not work at all.

  • @pomi8299
    @pomi8299 1 year ago +1

    Does this model have limitations on the length of the prompt? I want to summarize long meeting transcriptions.

  • @RapPeriscope
    @RapPeriscope 1 year ago +1

    micromamba error

  • @CollinTowle
    @CollinTowle 1 year ago +1

    I don't understand why I am getting this error:
    Enter 1 to launch CPU version, or 2 to launch GPU version
    1 (CPU) or 2 (GPU): 2
    Start-Process : This command cannot be run due to the error: The system cannot find the file specified.
    At line:209 char:9
    + Start-Process ".\start-webui-vicuna-gpu.bat"
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidOperation: (:) [Start-Process], InvalidOperationException
    + FullyQualifiedErrorId : InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand

  • @TheOmnigularity
    @TheOmnigularity 1 year ago +2

    Does this work on Mac?

  • @ห้าวน้าา
    @ห้าวน้าา 1 year ago +1

    It says Python has stopped working. Is there any way to fix that?

  • @AICineVerseStudios
    @AICineVerseStudios 1 year ago +1

    I followed everything as per your video, but for some reason it's different from how you installed it. First, I didn't get any link for the model to be used, which you copied and pasted. Also, I got a weird error which says the Gradio module is missing; no idea what happened there. Then there was a file being downloaded and it didn't finish because my system said there was a virus in that file. So in the end, no success.

  • @ondrejkoutnik238
    @ondrejkoutnik238 1 year ago +2

    Hello, once I select A) Nvidia, B) None and press Enter, it should load Micromamba. But in my case it says: Micromamba not found and Micromamba hook not found. Do you know how to fix it? Thanks

    • @NakedTrashPanda
      @NakedTrashPanda 1 year ago

      Make sure you are installing it into the C: drive or somewhere that does not have spaces in the name of the install path. In my case, I have it set to "C:\Users\PCName\"

  • @beforedrrdpr
    @beforedrrdpr 1 year ago +5

    I really like the new thumbnail style, I hope you continue it

    • @TroubleChute
      @TroubleChute  1 year ago +3

      Indeed! Just figuring out things I need to do to improve it. Not too great having OpenAI as the image for all AI... But it is important to understand what it does.

    • @SepehrKiller
      @SepehrKiller 1 year ago +1

      @@TroubleChute I'm a graphic designer and I really like most of your thumbnails; you know what you are doing, for sure.

  • @jimg8296
    @jimg8296 1 year ago +6

    I'd love to see a follow-up on the training abilities.

    • @lovol2
      @lovol2 1 year ago

      Yes. This is where these locally hosted bots will work better.
      Like in Stable Diffusion, you can have really specific models for specific types of images (cats, cars, whatever).
      Being able to train this ourselves on a specific data set, you could have an expert in every Honda car, for example!
      We really need to know how to train this!

  • @B3D
    @B3D 1 year ago +1

    Hi, thank you for sharing,
    but I followed the install method, and every time I say anything it just refreshes and the error below comes out:
    CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 8.00 GiB total capacity; 7.07 GiB already allocated; 0 bytes free; 7.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Output generated in 2.75 seconds (0.00 tokens/s, 0 tokens, context 43)
    I'm using an i9 and an RTX 3070 Ti.
    May I know how to fix it? (A possible tweak is sketched after this thread.)

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html
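
      One thing worth trying before switching to the CPU model is the suggestion in the error message itself: setting the PyTorch allocator option in start-webui.bat before the "call python" line. A minimal sketch, assuming the default file layout; the value 128 is just an example, and the --gpu-memory/--pre_layer flags from the thread further up may also be needed on 8 GB cards:

          REM hypothetical addition near the top of start-webui.bat
          set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128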

  • @mik3lang3lo
    @mik3lang3lo 1 year ago +3

    Would love to see how the training tab works

  • @LookingForSomethingMissing
    @LookingForSomethingMissing 1 year ago +2

    Any Mac tutorial, please?

  • @dliedke
    @dliedke 1 year ago +1

    RuntimeError: shape '[32001, 5120]' is invalid for input of size 0, using an NVIDIA 3070 with 8 GB VRAM; probably not enough VRAM

  • @hqcart1
    @hqcart1 1 year ago +1

    I tried it, it sucks.

  • @whattha_huh
    @whattha_huh 1 year ago +2

    Does it remember everything?

    • @null69420
      @null69420 1 year ago +2

      Yes, it uses tokens you can set and save.

  • @lutang771
    @lutang771 1 year ago +1

    I got a bunch of errors running the one-line script; I needed to install oobabooga manually.

  • @IvanVazquezS
    @IvanVazquezS 1 year ago +2

    Does this model also have the capability to generate code, suggest recipe options to cook, give factual data, and reflect on its own answers to correct itself?

  • @jsadecki1
    @jsadecki1 1 year ago +1

    It doesn't open. Whenever I try to open it, it says:
    This script relies on Miniconda which can not be silently installed under a path with spaces.
    Press any key to continue . . .
    Please help

  • @seventus
    @seventus 1 year ago +2

    It's so annoying to have AMD cards...

  • @ItsBlueOfficial
    @ItsBlueOfficial 1 year ago +2

    How do I upload my own images for characters/create a new one?

  • @eemaanj
    @eemaanj 1 year ago +3

    This is awesome! You really worked hard to help people access AI locally on their computers. Thank you very much for your efforts and for making this so simple for those who are not as pro as you are, but who are interested in trying things out!!

  • @justin3571
    @justin3571 1 year ago +1

    AWESOME!! The script makes this possible for me! Please make a script for installing AutoGPT, as I'm having problems installing that. Thank you!

  • @FSchack
    @FSchack 1 year ago +1

    Thanks, nice video :-)
    It would be nice to have a video explaining the interplay between models, configuration, and loaders. It seems like the loader or configuration also affects the answer significantly, as I had a model answer the height of the Eiffel Tower and the World Trade Center perfectly, but then after some configuration changes it couldn't get one number right.

  • @linuxlinux6941
    @linuxlinux6941 1 year ago +1

    Impressive video and clear-cut instructions, bravo! I have three questions for you:
    1. I need to use it when I won't have internet access for a long time. Are there any restrictions or limitations to consider in such situations?
    2. How effective is it in helping with English corrections and creating professional email responses?
    3. If I need to purchase it, what are the limitations or drawbacks of not doing so?
    Thank you!

    • @BRoyce69
      @BRoyce69 1 year ago

      It's an open-source AI research project from top US students and professors, specifically a fork made to be uncensored and run locally.
      1. If you're running the local model, assuming it's downloaded, installed, and functional before you go offline, it should remain functional thereafter.
      2. It's less effective than GPT-3.5, but won't give any issues with privacy/security or subject matter. As with most LLMs it is multilingual and can serve as an okay translator in a pinch.
      3. It's a free and open-source project; read their guidelines around commercial use. You can likely use it and its outputs, but can't distribute its code for money.

  • @Kokujou5
    @Kokujou5 1 year ago

    Very cool. Now all you need is like a €30,000 computer to execute that shit... >.> Very, very good... It would've been too nice if normal people could afford this... *sigh*

  • @YatharthYx
    @YatharthYx 1 year ago +1

    @troublechute Can this model code like ChatGPT, and can it help with vulnerability coding? The GPT model is restricted so it won't help with vulnerability coding.
    Thanks

  • @BR-hi6yt
    @BR-hi6yt 1 year ago

    Oh arrr. I do love this step-by-step guide because most of my attempts get stuck somewhere. I can't even get Python working now on my Windows 11 i5 PC. It's trying to make Python an app or something. Can't even get my %PATH% right... doh

  • @Kyzik244
    @Kyzik244 6 months ago

    I think I did it right: Miniconda installed. I ran the one-line script through, but when the program opens, it just closes again. cuDNN is in the same folder, Miniconda downloaded and installed alright.
    "This script relies on Miniconda which can not be silently installed under a path with spaces.
    Press any key to continue . . ."

  • @shrekgaming124xdlol9
    @shrekgaming124xdlol9 1 year ago +1

    I have a 1660 and am getting CUDA errors. I am familiar with how to get it to offload weights to system RAM, but it doesn't seem to work that way here. Do you have a suggestion so I can run this model?

  • @enton9422
    @enton9422 1 year ago +2

    Totally awesome. I've subscribed to your channel.

  • @stratgia
    @stratgia 1 year ago

    Hello. I am getting this error and it's not working. Any idea how to solve it? Thank you. Error: Loading anon8231489123_vicuna-13b-GPTQ-4bit-128g...
    Could not find the quantized model in .pt or .safetensors format, exiting...

  • @salgadev
    @salgadev 1 year ago

    How can I tell which models work CPU-only? I have an AMD GPU and don't want to be switching to Linux + ROCm because it's such a pain in the ass.

  • @sheven18
    @sheven18 1 year ago

    Why is everything different? The Oobabooga folder does not appear, it's Y/N not A/B when it asks about the GPU, and it never asks to copy and paste anything.

  • @SikandarKhan-ys1ik
    @SikandarKhan-ys1ik 1 year ago +1

    Subbed because of the content and your interactiveness!

  • @tetraedri_1834
    @tetraedri_1834 1 year ago +2

    What are the hardware requirements to run this?

  • @RedCloudServices
    @RedCloudServices 1 year ago +1

    Are there instructions for macOS (Linux)?

    • @TroubleChute
      @TroubleChute  1 year ago +2

      Windows is the simplest, and I have automated it even further, but you can run it on Linux and Mac (the latter is untested, however) by following the steps on the oobabooga GitHub page: github.com/oobabooga/text-generation-webui/
      Manually install Conda, then all required PyTorch packages, and it should be able to run shortly after.
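
      For anyone going that route, a rough sketch of the manual Linux steps described above. The Python version and the PyTorch install command are assumptions; check the repository's README for the exact commands for your CUDA/CPU setup:

          # create and activate a Conda environment (Python version is an example)
          conda create -n textgen python=3.10
          conda activate textgen
          # install PyTorch (pick the exact command for your hardware from pytorch.org)
          pip3 install torch torchvision torchaudio
          # get the web UI and its requirements
          git clone https://github.com/oobabooga/text-generation-webui
          cd text-generation-webui
          pip install -r requirements.txt
          # place a model under models/ and then start the UI
          python server.py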

  • @RichardGetzPhotography
    @RichardGetzPhotography 1 year ago

    Anything specifically written for Apple Silicon, to use the GPU and the Neural Engine?

  • @povang
    @povang 1 year ago

    I don't know why everyone is making a big deal out of Vicuna; it's a crappy version of GPT-3.

  • @HarryWilson55896
    @HarryWilson55896 1 year ago

    Why is it saying this when I log in: "You tried signing in (as my email) using a password, which is not the authentication method you used during sign up. Try again using the authentication method you used during sign up. (error=identity_provider_mismatch)"? I tried signing in through Google and resetting my password, but it's not working. I tried signing in normally too. How do I fix this?

  • @Dan-oj4iq
    @Dan-oj4iq 1 year ago +1

    Although this is "plain and simple", it would help if you are already very articulate and intuitive with most things software-related.

  • @naytron210
    @naytron210 1 year ago +1

    Everything's working great until I try to use Whisper; it tries to record but then spits out an error. Any ideas, anyone?

  • @JustinCiriello
    @JustinCiriello 1 year ago

    Followed instructions precisely and it did not work. On a mid-high end machine. Too many errors to list. Thanks for the effort though!

  • @raphaelfncs
    @raphaelfncs 1 year ago +1

    An RTX 2060 6 GB is not enough? Not everyone is rich, bro...

  • @TheRealKidRed_
    @TheRealKidRed_ 1 year ago +1

    I'm getting the error "MambaHook not found", please help

  • @manuelhch
    @manuelhch 1 year ago

    I got up to this:
    warning libmamba draining failed An existing connection was forcibly closed by the remote host.

  • @marcelkuiper5474
    @marcelkuiper5474 1 year ago +1

    Democratization of A(G)I: I have compiled Stable Diffusion, it's awesome.
    But I also think decentralization of GPT-like AI will prevent or postpone a total AI takeover by one entity. This way the bots will fight/compete with each other instead of with us, we will take ownership of our instances, which will have different goals than other ones, and we will also keep ownership of our data. This pushback is very necessary and may be the only thing to save us from an A(I)pocalypse. Divide and conquer can also be employed by the people.

    • @jsadecki1
      @jsadecki1 1 year ago

      Very insightful. When I try to open it from the shortcut that it creates on the desktop, it says this:
      This script relies on Miniconda which can not be silently installed under a path with spaces.
      Press any key to continue . . .
      Any idea how to fix it?

  • @funday4748
    @funday4748 1 year ago

    Yeah, this ChatGPT is cool and all, but your voice is so hardcore and satisfying to my ear.

  • @Damianciu1995
    @Damianciu1995 9 months ago

    Full of errors now: no model selected, no download-model.bat, and the model is not loading properly once it is selected in the web UI.

  • @kubakuba1621
    @kubakuba1621 1 year ago +1

    How many GB is this freaking program? It's eating like 5 GB of my 25 GB; I don't know if it's a good idea to run it on 25 GB.

  • @msampson3d
    @msampson3d 1 year ago +2

    Love finding new high-quality UA-cam channels making this cutting-edge AI stuff more accessible to users. Easiest subscription of my life.

  • @null784
    @null784 1 year ago

    Is there an offline ChatGPT for weak computers, like a 7B model?

  • @morena-jackson
    @morena-jackson 6 months ago

    I can't get PowerShell to open up in the new folder. Do I need to be on Windows 11 to make this work?

  • @MrJBA79
    @MrJBA79 1 year ago +3

    Hi, thanks for the one-line installation code. I've installed it and I know it is running, as it flatlines my dedicated VRAM, but when I chat in the UI it appears to be generating a response, then my input and the response don't appear. Can you suggest what might be going on here?

    • @_DonMon_
      @_DonMon_ 1 year ago +1

      Yes, I am having the same issue. Did you ever get it fixed?

    • @MrJBA79
      @MrJBA79 1 year ago

      @@_DonMon_ No, nothing. We'll be very lucky to get some help, as content creators are under no obligation to support their content. I live in hope, though.

  • @furiaoprimida1282
    @furiaoprimida1282 1 year ago

    "Only 12 GB VRAM"? Do you want to trade yours for my 3 GB? =P

  • @Haburg
    @Haburg 1 year ago

    It's great, but I want to ask: what if I want real-time NSFW results?
    You said it's not connected to the internet!

  • @Jaymes400
    @Jaymes400 1 year ago +1

    I tried twice to install it on my Windows machine and both times I got the error "ModuleNotFoundError: No module named 'llama_inference_offload'"

    • @TroubleChute
      @TroubleChute  1 year ago

      Updated video is finally out. This should fix errors by allowing you to use the CPU model: See the new video here ua-cam.com/video/d4dk_7FptXk/v-deo.html

  • @Scardus412
    @Scardus412 1 year ago

    My chat with Vicuna disappears instantly after I send a message. Do you know why this happens?

  • @zekeriyaatilgan521
    @zekeriyaatilgan521 10 months ago

    Does it support different languages? Or is it just English?

  • @1Chitus
    @1Chitus 1 year ago +1

    This is awesome! You really worked hard to help people access AI locally on their computers. Thank you very much for your efforts and for making this so simple for those who are not as pro as you are, but who are interested in trying things out!!

  • @skandranon314
    @skandranon314 3 months ago

    Can I run this on a PowerEdge server using a Google Coral TPU? Is there a guide for that?

  • @zippytechnologies
    @zippytechnologies 1 year ago

    Now, how do we use it with AutoGPT, all local GPT resources, and access to Google and Pinecone? Or better, Vicuna: open source all the way and self-host as much as possible.

  • @adytech5788
    @adytech5788 1 year ago +1

    It is a very nice language model, but it is not able to visit a URL, unfortunately.