Let's test QwQ, the new open-source alternative to o1

  • Published 22 Dec 2024

COMMENTS • 11

  • @UCs6ktlulE5BEeb3vBBOu6DQ 10 days ago +1

    btw QwQ can totally do multi-turn. Set it to a 32k context and 16k output tokens so its thinking isn't cut off before it's done. llama.cpp has many more settings (see the sketch after this thread).

    • @volkovolko 10 days ago

      Oh okay, I didn't know that.
      I thought it couldn't do multi-turn because it's single-turn only in the QwQ Space ^^
      Thanks a lot for the clarification!
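
A minimal sketch of that setup using the llama-cpp-python bindings (the GGUF path, layer count, and prompts are placeholders, not values from the video; n_ctx and max_tokens correspond to the 32k context and 16k output tokens mentioned above):

    from llama_cpp import Llama

    # Load a local QwQ GGUF (placeholder path) with a 32k context window
    # and as many layers on the GPU as will fit (-1 = offload all layers).
    llm = Llama(
        model_path="./QwQ-32B-Preview-Q4_K_M.gguf",
        n_ctx=32768,
        n_gpu_layers=-1,
    )

    # Multi-turn chat: keep appending turns to the same message list so the
    # model sees the whole conversation each time.
    messages = [{"role": "user", "content": "Write a Tetris game in Python."}]
    reply = llm.create_chat_completion(messages=messages, max_tokens=16384)
    messages.append(reply["choices"][0]["message"])
    messages.append({"role": "user", "content": "Now add a score display."})
    reply = llm.create_chat_completion(messages=messages, max_tokens=16384)
    print(reply["choices"][0]["message"]["content"])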

  • @UCs6ktlulE5BEeb3vBBOu6DQ 10 days ago +1

    A Tetris game is often my coding test, and they all struggle with it.

    • @volkovolko 10 days ago

      Yes, Tetris is quite difficult for LLMs. Only Claude 3.5 Sonnet and Qwen2.5 Coder 32B got it right in my tests. Even GPT-4o didn't get it in my test (but I think that was more down to luck).

  • @SoM3KiK 13 days ago +1

    Hey! Would it work with a 3060 Ti and 32 GB of RAM?

    • @hatnis 12 days ago

      I mean, you can't fit the required 24 GB of VRAM on your graphics card, but hey, there's only one way to find out if it works, right?

    • @SoM3KiK 12 days ago +2

      @@hatnis well, it was free to ask 😅

    • @volkovolko 12 days ago

      Yes, but you will have to offload a lot to your CPU/RAM.
      It will run pretty slowly, but it will work 👍 (see the sketch at the end of this thread).

    • @volkovolko 12 days ago

      In the video, I ran it in my 24 GB of VRAM. I think it is Q4_K_M.

    • @Timely-ud4rm 11 days ago

      I was able to get it working on my new Mac mini, the base M4 Pro chip model, using bartowski's QwQ-32B-Preview-GGUF repo with the IQ3_XS quantization. It was the only one I could download, as it takes 13.71 GB of RAM. Note that because I am using a Mac mini, Apple's RAM is unified, so my 24 GB of RAM is shared between the GPU and CPU. If I had spent an extra $300 on top of the $1.4k I spent for the M4 Pro model, I could have loaded the max quantization model, but I don't really do AI locally, as I use online AI services more. I hope this helps!
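
Assuming llama-cpp-python again, a rough sketch of the CPU/RAM offload volkovolko describes for a smaller card such as a 3060 Ti (the path and layer count are illustrative guesses; lower n_gpu_layers until the model fits in VRAM):

    from llama_cpp import Llama

    # Put only part of the model on the GPU; the remaining layers stay in
    # system RAM, so generation is slower but still works on an 8 GB card.
    llm = Llama(
        model_path="./QwQ-32B-Preview-Q4_K_M.gguf",  # placeholder path
        n_ctx=32768,
        n_gpu_layers=20,  # illustrative guess, reduce if you run out of VRAM
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello, QwQ!"}],
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])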