Good choice for AI roleplay - Magnum 72B

  • Published 14 Nov 2024

COMMENTS • 31

  • @katykun1240 3 months ago +7

    Please review Magnum-Mini, I sincerely believe it is worth your time and worth it for people who cannot run the larger magnum. 💛

    • @featherlessai 3 months ago

      If your local hardware can't run Magnum 72B, there are plenty of hosting providers that can.

  • @David-z1c2x 3 months ago +1

    Thanks for this video, man, very good. 👍

  • @natsirhasan2288 3 months ago +5

    By the way, in my experience the new version of Lumimaid 70B is crazy good. Perhaps better than Magnum.

    • @chrishauer5106 3 months ago

      Excuse me sir, how much memory is needed for this one?

    • @natsirhasan2288 3 months ago

      @@chrishauer5106 I usually run the IQ3_XXS quant; it runs well on 30 GB of VRAM.

    • @mchaney2003 3 months ago +1

      Very excited to try Lumimaid once they make quants for it.

  • @natsirhasan2288 3 months ago +2

    I'm waiting for Llama 3.1 4x8 or 8x8... even better if they release an MoA...

  • @zeldars 3 months ago +4

    Try Llama 3.1 8B

  • @kakaynd4264 3 months ago

    We approve :)

  • @Renejay-ii7yb 1 month ago

    How do I run this model though?

  • @nekotoru 3 months ago

    Thanks

  • @eduardosilveira8685 3 months ago

    Can you tell me how it compares to CosmosRP? That's the one I currently use the most.

  • @keithgalesmusic 3 months ago

    Is it free?

    • @MustacheAI 3 months ago

      Yes, it's open source, but you'll need about 24 GB of VRAM to run it locally.

    • @H786... 3 months ago

      @@MustacheAI What do you mean "need"? Is it slower on 16 GB cards, or just completely unplayable?

    • @MustacheAI 3 months ago

      @@H786... You can offload some layers to RAM, but even with fast DDR5 it will be very slow, and as the context grows you'll be waiting 20 minutes or more for a response.
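
      (Editor's note: the VRAM figures in this thread come down to how much of the quantized weight file fits on the card. A minimal back-of-the-envelope sketch, assuming roughly ~4.5 bits per weight for a typical 4-bit quant and ~3.06 for an IQ3_XXS-style quant; real usage adds KV cache and runtime overhead on top.)

      ```python
      # Rough weight-footprint estimate for a quantized LLM:
      # parameters * bits-per-weight / 8 bytes.
      def model_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
          """Approximate size of the weights in GB (ignores KV cache and overhead)."""
          bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
          return bytes_total / 1e9

      # A 72B model at ~4.5 bpw vs ~3.06 bpw (bit-widths are assumptions):
      print(round(model_size_gb(72, 4.5), 1))   # 40.5 -> won't fit one 24 GB card
      print(round(model_size_gb(72, 3.06), 1))  # 27.5 -> near the 30 GB figure above
      ```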

  • @kinkanman2134 3 months ago

    Athene Llama 3 70b?!?!!?!?!??!!?!?

  • @Cashprt 3 months ago +9

    Man, your videos got so lazy. Almost as lazy as me.

    • @ElaraArale 3 months ago +2

      Exactly

    • @MustacheAI 3 months ago +18

      I haven't gotten lazy with my videos. I'm still fully committed to creating quality content while aiming for efficiency where appropriate. If my recent videos seemed lacking, that wasn't my intent. Let me know what specific improvements you'd like to see, and I'll work on incorporating them in future videos.

    • @veryseriousperson_ 3 months ago +11

      @@MustacheAI Nah, your videos are good, at least for me.

    • @themanwithblackhair4547 3 months ago +4

      He could be making a joke, tbh @@MustacheAI

    • @TheHasder01 3 months ago

      @@MustacheAI You could add how much VRAM each model needs, for example.