1 Command To Bring Home Your Very Own Robot Overlord - Text Generation Webui & LLaMa.cpp

  • Published 9 Sep 2024

COMMENTS • 23

  • @jameswubbolt7787 · A year ago +1

    Great stuff😊

  • @lochnessmonster4518 · A year ago +1

    This is awesome. Thanks for the tutorial Don. Hopefully you will do a more in depth video on the different features and extensions that are available. Speaking of extensions, any chance you could tell me where I can add elevenlabs? Would love to have TTS available.

  • @shrirangkahale · A year ago +3

    Awesome video :D

  • @ronm6585 · A year ago

    Thanks Don.

  • @HasanAYousef · A year ago +1

    Awesome Don, can you show us how to use it with our own data? I've got a couple of PDF and text files; how can I use them to answer my prompts?

  • @yerunski · A year ago +3

    Just installed it on my Ubuntu server and it seems to have gone well, except the web URL can't be reached. I can't use the localhost IP (since the server doesn't have a GUI), so I'm trying from a remote system connected to the same network. It says the site can't be reached. I have UFW turned off. What am I missing here?

    • @NovaspiritTech · A year ago +5

      Edit the webui.py file: on line 146, add "--listen" after "--chat" to enable listening on the local IP.

    • @yerunski · A year ago +1

      @@NovaspiritTech Awesome, thanks for the quick reply!
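The fix NovaspiritTech describes can also be scripted. A minimal Python sketch, assuming webui.py contains a literal "--chat" flag string as the comment suggests (the file layout is an assumption, not verified against any particular release of text-generation-webui):

```python
# Append "--listen" after "--chat" in webui.py so the Gradio server
# binds to 0.0.0.0 and is reachable from other machines on the LAN.
# The path and the exact '"--chat"' string are assumptions from the
# comment above, not a verified text-generation-webui API.
from pathlib import Path

def add_listen_flag(path="webui.py"):
    script = Path(path)
    text = script.read_text()
    if "--listen" in text:
        return False  # already patched, nothing to do
    # Insert the flag right after the existing "--chat" argument.
    script.write_text(text.replace('"--chat"', '"--chat", "--listen"', 1))
    return True
```

After patching and restarting, the UI should be reachable from the LAN at the server's IP on Gradio's default port 7860.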

  • @robertjamessteeleiii · A year ago +1

    Where can I go if I am having problems with this install script?

  • @ORCdot · A year ago

    I installed the oobabooga webui with docker-compose; the LLaMA 4-bit model is about 4 GB, and I'm using a GTX 1070 Ti GPU on Linux. What I noticed is that it is too slow, and when I try to run the Alpaca model I get an out-of-memory error. Maybe if I use the CPU it will be faster? I have an i5-8400 and 16 GB RAM.

  • @IronMechanic7110 · A year ago +1

    How much GPU VRAM do I need to run this? Maybe 16 GB?

    • @NovaspiritTech · A year ago +1

      Yeah, at least that; I was having trouble running 7B with my 8 GB 1070.

  • @ElectrogicsPH · A year ago

    Can we use a Coral AI accelerator with an RPi and run LLaMA?

  • @phasechange5053 · A year ago

    Sadly I'm running AMD on both my good rigs, so it's probably going to be a pain. Someone was telling me it will chug without decent GPU acceleration and a GPU with a fair amount of memory.

  • @shannonashley7224 · A year ago

    Yep, like all these chat AIs, it's broken; the chat AI community has a lot of catching up to do compared to art AI.

  • @schrodingersmechanic7622 · A year ago +2

    But will it run on a Pi? I'm thinking the answer will be "yes, eventually."

    • @NovaspiritTech · A year ago +4

      Yes, actually tested some stuff on an RPi and on a Khadas Edge2. Because of the limited RAM, you are stuck on 7B or lower.

    • @caibi_ · A year ago

      Wait, the RPi has VRAM?

    • @DerekJoinerPRESTONEZ · A year ago

      @@NovaspiritTech Which executable/zip file did you use for your Khadas? I'm attempting this on the Orange Pi 5B :b
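On the RAM point raised in this thread: a back-of-the-envelope estimate shows why 8 GB boards like the RPi top out around 7B. The figures below are rough assumptions (4-bit quantized weights at ~0.5 bytes per parameter plus ~20% allowance for KV cache and runtime buffers), not measured numbers:

```python
# Rough RAM needed to hold a 4-bit-quantized model's weights,
# plus a ~20% allowance for KV cache and runtime buffers.
# bytes_per_param and overhead are illustrative assumptions.
def approx_ram_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
    return params_billions * bytes_per_param * overhead

for b in (7, 13, 30):
    print(f"{b}B 4-bit: ~{approx_ram_gb(b):.1f} GB")
# 7B -> ~4.2 GB (fits in 8 GB), 13B -> ~7.8 GB (borderline), 30B -> ~18 GB
```

By this estimate a 7B model fits comfortably on an 8 GB board, which matches the "stuck on 7B or lower" observation above.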