DeepSeek debunked in 100 seconds.

  • Published 29 Jan 2025

COMMENTS • 38

  • @momenwadood1342 1 day ago +84

    I find it funny how Americans can't just accept defeat

  • @adrianovianawerneck472 1 day ago +57

    Just take the L, bro

  • @ThaRSGeek 1 day ago +39

    Communism 1
    Capitalism 0

  • @hopefulXime 1 day ago +15

    The level of cope from this guy when he criticized DeepSeek for being open source. Come on

  • @fntr 1 day ago +12

    I think DeepSeek hurt the feelings of the biggest OpenAI fanboy

  • @korbpw 1 day ago +25

    most effective murica propaganda

  • @bomba76 1 day ago +22

    Yes, it can totally sustain itself - it takes far fewer resources to keep running, and the full 671B model has been run locally on just a few M2 Ultras (back-of-envelope math below). If anything that's a counterpoint: it's OpenAI and Anthropic that carry the massive sustainability costs. This video is pure copium.
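
    Back-of-envelope math on that local-run claim (a sketch, assuming 4-bit
    quantized weights and the 192 GB unified-memory ceiling of an M2 Ultra;
    KV cache and runtime overhead are ignored):

        params = 671e9                  # full model parameter count
        gb = params * 0.5 / 1e9         # 4-bit quant = 0.5 bytes per parameter
        print(f"{gb:.0f} GB / 192 GB per M2 Ultra = {gb / 192:.1f} machines")
        # 336 GB / 192 GB per M2 Ultra = 1.7 machines -> "a few M2 Ultras"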

  • @emcell2 1 day ago +26

    We're not closer to AGI, correct. But those guys found ways to use AI tech more efficiently. And that's the amazing part.

    • @frwlkr 1 day ago +2

      And released to the public. With a permissive license.

  • @ThanasisKapelonis 1 day ago +11

    “I don’t like it” “it’s not American” so “it’s going to fail”.
    Bro, I’m sorry but you lost credibility with this video…

  • @amazingsoyuz873 1 day ago +13

    OK, so a lot is wrong here. For one, the inference cost is so low because, in part, they use a VERY sparse MoE model trained natively in 8-bit precision (rather than the standard 16 or 32). Not only are memory requirements lower (which is great for the VRAM-constrained cards in China), but so is the compute needed for a forward pass, since only ~37 of 671 billion parameters are active at a time. Compare this to models like GPT-4 that are likely running hundreds of billions of parameters at a time, and it makes sense how the model is so much cheaper and more scalable (a toy sketch of sparse routing follows below).
    As for the practicality of running locally, many people have made partial quants as low as 1.58-bit using ternary weights for many of the layers, requiring only ~80GB of RAM for the full model and runnable at OK-ish speed off CPU thanks to the small number of active parameters. Quants do, of course, come at the cost of lower performance, and running this would still be slow on most consumer hardware. Luckily, they also released a series of distills at sizes like 1.5B, 7B, 8B, 14B, 32B, and 70B, all of which are runnable on consumer hardware depending on your specs. I have a 3060 12GB and can run all but the 70B at decent speed (and the 70B slowly), and the 14B and 32B actually aren't that far off the full model in terms of benchmark performance (although obviously they'll have less world knowledge at such a small size).
    Overall this family of models is a big win for open source and the fight against big tech. I've fully switched my workflow over to open-source models for the first time (only occasionally using Gemini when I need big context lengths for a problem, since that is hard to achieve on consumer hardware for now. Looking forward to RWKV models next!)
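
    A toy sketch of the sparse-routing idea above (illustrative PyTorch, not
    DeepSeek's actual architecture; the dimensions, expert count, and top_k
    here are made up for clarity):

        import torch
        import torch.nn as nn

        class SparseMoE(nn.Module):
            """Only the top-k experts run per token, so active parameters
            (and forward-pass compute) are a small fraction of the total."""
            def __init__(self, dim=64, n_experts=16, top_k=2):
                super().__init__()
                self.router = nn.Linear(dim, n_experts)   # scores experts per token
                self.experts = nn.ModuleList(
                    nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                  nn.Linear(4 * dim, dim))
                    for _ in range(n_experts)
                )
                self.top_k = top_k

            def forward(self, x):                          # x: (tokens, dim)
                weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
                out = torch.zeros_like(x)
                for k in range(self.top_k):                # top_k << n_experts
                    for e, expert in enumerate(self.experts):
                        mask = idx[:, k] == e              # tokens routed to expert e
                        if mask.any():
                            out[mask] += weights[mask, k, None] * expert(x[mask])
                return out

        moe = SparseMoE()
        print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])

    Scaled up, total parameter count sets the memory bill while only the
    routed slice sets the per-token compute bill, which is the asymmetry the
    comment is pointing at.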

  • @michaelblum4557 1 day ago +7

    Why even make this video? These are all non-arguments. You note that while DeepSeek is cheaper, being open source is no silver bullet because of backend complexity and barriers to entry. But the price of offloading that complexity is, as you noted seconds earlier, still lower than what OpenAI is asking. Would love to know what motivated you to write, edit, and upload this.

  • @corvusprojects 1 day ago +17

    Lol the turbocope on channels like this is hilarious.

  • @miguel900030 1 day ago +13

    Just accept the fact that the United States is no longer leading the AI race 😂🎉

  • @burnem2166 1 day ago +8

    It's not that deep bro
    The US may have lost the lead🐋

  • @redwan_lmati 1 day ago +7

    title: debunked, video: I'm skeptical... lol

  • @martinsanchez-hw4fi 1 day ago +16

    Meh. What did you even debunk?

  • @toasty4000000 1 day ago +12

    I wonder how much time you've cost humanity creating and watching this stupid video

  • @alsto8298 1 day ago +2

    Overhyped or not, it IS free and it IS better than GPT-4. Everything else is mere speculation.

  • @tapesteer 1 day ago +14

    All your arguments about sustainability are the exact questions you should be asking about OpenAI and the corporate approach Americans have. The fact that you bring these up about DeepSeek, while ignoring the amount of money OpenAI pours into its work and into monopolizing it, says a lot about your horse blinders. And no, I fucking hate the CCP and am deeply skeptical of them, but at least it's open sourced and 100 times cheaper so far. Nah, you fucked up on this one.

  • @ItsD3vil 1 day ago +6

    bruh

  • @art_of_deception4074 14 hours ago

    This video was great. Not many people are willing to break things down to the lowest levels.

  • @kokop1107 1 day ago

    Just as Google said at the beginning of this hype. Big tech has no moat.

  • @youdeservecriticism 1 day ago

    your country would be unstoppable if you knew how to learn from your mistakes

  • @Trashdarkrunner 21 hours ago

    OpenAI sicced a hydra on DeepSeek and flew him out

  • @kokop1107 1 day ago

    This seems like absolute nonsense, especially since you can run the model locally. Server maintenance is thus a non-issue. Also, how difficult do you think self-hosting can be?? It's extremely trivial for anyone who can follow an extremely basic tutorial (see the sketch below).
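
    For reference, a minimal self-hosting sketch of the kind such a tutorial
    walks through, assuming the Ollama runtime and its Python client are
    installed (pip install ollama) and a distilled model has been pulled;
    the model tag below is illustrative:

        import ollama

        # Query a locally hosted distill; no external server is involved.
        response = ollama.chat(
            model="deepseek-r1:14b",  # assumed tag for a distilled model
            messages=[{"role": "user",
                       "content": "Summarize mixture-of-experts in two sentences."}],
        )
        print(response["message"]["content"])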

  • @mertcoz 14 hours ago

    unsubbed.

  • @WagaTouso 6 hours ago

    Cope harder