New nVidia 5090 Leaks! GB202 is MASSIVE Leap for Local AI!

  • Published 12 Sep 2024
  • The future of graphics is here! Get the scoop on all the leaked specs for the next-gen NVIDIA GeForce RTX 50 series GPUs. This video breaks down the rumored memory size and bus bandwidth for every card in the lineup, from the RTX 5050 to the monstrous RTX 5090. Find out if these leaks hold true and what kind of performance gains we might be looking at. Is it time to upgrade your graphics card? Watch to find out!
    Tell us what you think in the comments below!
    VideoCardz Article: videocardz.com...

COMMENTS • 61

  • @jonmichaelgalindo
    @jonmichaelgalindo 3 months ago +17

    If the 5090 is 24GB, there's no AI reason to upgrade.

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      I agree, the bus bandwidth will only offer incremental performance gains over the 5080 (if these leaks are accurate). Which GPU are you potentially upgrading from?

    • @jonmichaelgalindo
      @jonmichaelgalindo 3 months ago +2

      @@aifluxchannel 4090, so it'll be a tough sell.

    • @Wobbothe3rd
      @Wobbothe3rd 3 months ago

      Bullshit. If it's twice as fast as a 4090 or more, it will still be a great improvement.

    • @jonmichaelgalindo
      @jonmichaelgalindo 3 months ago +5

      @@Wobbothe3rd Even if it can train at 2x speed, if it can't handle any of the recent bigger models there's no way it's worth the price. Just use Lambda or something. You'll get much more time for your money.

    • @yahm0n
      @yahm0n 3 months ago +1

      The VRAM squeeze probably meets all the criteria necessary for a price-fixing lawsuit. I'm not saying NVidia, AMD, and Intel got together and had a meeting where they agreed to limit consumer cards to 24GB of VRAM, but they are all in silent agreement over it to maximize profits. The NVidia A6000 is basically half as strong as an RTX 4090 but it sells for 3x as much, and the only reason it sells for so much is the extra VRAM. The commodity price of that difference in RAM is less than $100.

  • @mirek190
    @mirek190 3 months ago +5

    For LLMs we need more VRAM, not better bandwidth... the 5090 should have at least 48 GB of VRAM.

    • @aifluxchannel
      @aifluxchannel 3 months ago +5

      I've been praying daily for 48GB of VRAM. We can have hope!

    • @TomM-p3o
      @TomM-p3o 3 months ago +2

      @@aifluxchannel The only way 48GB happens is if Nvidia's competition forces them to do it.

    • @IntentStore
      @IntentStore 2 months ago

      @@TomM-p3o The A100 80GB is 3 years old. The H100 goes up to 96GB. The H200 is releasing soon with 141GB. Nvidia is infantilizing consumers with this 24 gig crap. This will be the first time ever Nvidia released the same VRAM for 3 generations in a row. The 3090 came out in 2020. They need to give us 32GB if they want any sales, and 48GB if they want us to think they respect us at all. If 24 was good enough in 2020 while their workstation GPUs went up to 48, why is 48 not good enough now when their workstation GPUs go up to 96GB?

    • @pearce_nz
      @pearce_nz 2 months ago +2

      @@TomM-p3o Which is never happening, considering the next AMD GPU releases will be more mid-tier cards; they'll wait until the AMD 9000 series for better performance, but the 50 series will be well out by then. The 5090 is most likely gonna be 28-32GB of VRAM, and the Ti edition will probably be 32GB too.

  • @azero79
    @azero79 3 months ago +5

    How is 28 GB of memory a huge leap for anything, let alone AI? 😒

    • @aifluxchannel
      @aifluxchannel 3 months ago +4

      22GB on the 2080 Ti is a really interesting sweet spot. Ideally we'll see 32GB of GDDR7 on the 5090.

    • @mirek190
      @mirek190 3 months ago +1

      @@aifluxchannel 32 for the 5090 is not enough of a difference... it needs at least 48GB+.

    • @TomM-p3o
      @TomM-p3o 3 months ago

      @@mirek190 At 48GB Nvidia would be competing with their own higher-end commercial offerings, so it's not happening.

    • @mirek190
      @mirek190 3 months ago

      @@TomM-p3o How?
      Professional cards have NVLink; home cards can't be used to train LLMs at larger scale because you can't connect them together to run a single model across more than one card.
      Your argument is faulty.

    • @efexzium
      @efexzium 2 months ago +1

      They just don’t care about the working class

  • @TheNolok3428
    @TheNolok3428 3 months ago +2

    Nice video. Nvidia is always very silent about new GPUs. But considering that we are at the end of the 2-year cycle between generations, it seems very possible they'll release new GPUs later this year. I'm not sure if 24GB is enough VRAM, because it is difficult to predict the capabilities of GDDR7 and the use cases they are aiming for.
    I'm personally still a bit sad that they never made a Quadro from the 4090, only from the 3090. Would be fun to get one used, but now I have to go to the 4090 when the 5090 releases.
    Edit: sorry for my bad spelling. Still working on my English. Greetings from Germany.

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      The RTX 6000 Ada is about as close as anything gets in the current 4000-series era of "enterprise class" nVidia GPUs based on the AD102 die. That said, more of what you're paying for is more reliable and fault-tolerant drivers that can do quite a bit more with ECC GDDR6, etc.

  • @southcoastinventors6583
    @southcoastinventors6583 3 months ago +2

    They really don't have enough troops to do it, since joining the military in China is considered a desperation job, plus they have no combat experience. Seems like it's better to wait for some of these to drop so we know if it's worth dropping $2k+ or not. Not to mention the capability of the open-source models in 6 months; whatever the case, sloppy seconds is the best we can hope for.

  • @MacS7n
    @MacS7n 3 months ago +1

    To everyone complaining about the 24GB of RAM: please remember that this is a gaming card capable of handling AI tasks. Nvidia wants to make money, so they'll give us a powerful card but with less RAM, aimed at gaming. If you need more RAM to offload local LLMs, you'll need 2 of the same card, 2x RTX 5090.

  • @loflog
    @loflog 3 months ago +3

    Is the 5090 really worth it for AI if I can get three 3090s for the same money?

    • @aifluxchannel
      @aifluxchannel 3 months ago

      Yeah, in terms of speed and memory bandwidth this will obliterate 3x 3090. If you don't care about speed, or don't mind relying on more complex inference setups to use mixed GPUs, the 5090 probably isn't the best choice. What do you currently have? Curious if you fine-tune / train or just run inference?

    • @antaishizuku
      @antaishizuku 3 months ago

      @@aifluxchannel Does that speed and memory bandwidth matter if you can't even run the model in the first place? Genuinely curious, because if the model needs 40GB and you only have 24, it's gonna run really slowly or not at all, right?

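The back-of-envelope math behind this thread can be sketched in a few lines of Python. The model sizes and the 20% overhead factor for KV cache and activations are illustrative assumptions, not measured values:

```python
def vram_needed_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough inference VRAM estimate: weight bytes plus ~20% for
    KV cache and activations (illustrative assumption)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8-bit ~= 1 GB
    return weight_gb * overhead

def fits(params_billion, bits_per_weight, card_gb):
    """True if the model should fit entirely in the card's VRAM."""
    return vram_needed_gb(params_billion, bits_per_weight) <= card_gb

# A 70B model at 4-bit needs roughly 70 * 4/8 * 1.2 = 42 GB:
print(fits(70, 4, 24))  # rumored 24GB card: False
print(fits(70, 4, 48))  # hoped-for 48GB card: True
```

Once a model spills out of VRAM, every token waits on PCIe or system RAM, which is why "really slowly or not at all" is about right.
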
  • @zbeast
    @zbeast 3 months ago +2

    I'm just going to slam my card down and buy the 5090 when it hits...
    I see more power. Will it have an NPU core? You remember when nvidia added PhysX to their cards; I want to see an NPU there.

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      I also just want nVidia to take all of my money :)

    • @SlyNine
      @SlyNine 3 months ago

      I don't think nvidia changed anything for PhysX. It was all CUDA. Ageia was building hardware, but after nvidia bought them it was all software.

  • @reinerheiner1148
    @reinerheiner1148 3 months ago +1

    24-32GB would be a joke. But I would expect nothing less of Nvidia: always shipping less VRAM, with AMD having to be the first to use more so Nvidia needs to react. But this time I am not so sure it will work that way unless AMD gets its AI act together... I remember back in 2017, when I bought my 1080 Ti, that the lack of a proper machine-learning ecosystem, or even compatibility with CUDA, was driving me towards Nvidia. Practically nothing has changed since then. Last time I checked, ROCm was still subpar to CUDA, and anyone serious about AI has to get an Nvidia GPU (or rent Nvidia GPU time). Unfortunately it seems that Nvidia will keep VRAM low to not create competition for their more profitable server AI products. Yes, I know, a 5090 and an H200 are not even remotely comparable when it comes to performance, but the fewer big LLMs get executed on their servers, and the more they get executed on local hardware, the fewer servers they will sell. In addition, people could start to build 5090 GPU clusters, again decreasing demand for their expensive GPUs somewhat. It all comes down to this: some company needs to combine fast enough chips with massive RAM capacity and memory bandwidth so that there starts to be some progress and competition in the lowish-budget market. For now, we basically only have the option to connect multiple GPUs to one system (and even then VRAM size will be limited), run larger LLMs on CPUs, or use Apple hardware like the M3 Max...
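
The bandwidth point above can be made concrete: single-stream LLM decoding is memory-bound, so an upper bound on generation speed is memory bandwidth divided by the bytes read per token (roughly the model's size in memory). A minimal sketch, where the bandwidth figures are approximate published peak numbers used purely for illustration:

```python
def max_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    """Bandwidth-bound ceiling for single-stream decoding: each
    generated token streams the full weight set from memory once."""
    return bandwidth_gb_s / model_size_gb

# Approximate peak memory bandwidths (GB/s), illustration only:
cards = {"RTX 3090": 936, "RTX 4090": 1008, "M3 Max": 400}
for name, bw in cards.items():
    ceiling = max_tokens_per_sec(bw, 20)  # e.g. a ~20 GB quantized model
    print(f"{name}: ~{ceiling:.0f} tok/s ceiling")
```

Real throughput lands well below this ceiling, but it shows why capacity and bandwidth matter together: capacity decides whether the model fits at all, bandwidth decides how fast it runs once it does.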

  • @Sirfrummel
    @Sirfrummel 2 months ago +1

    My 4090 is almost 2 years old, but it's more than I need for gaming. Unless the 5090 enables something with LLMs that I couldn't do previously, the 5xxx generation will be a pass.
    If I could get GPT-4 locally, I would buy it 🤣

  • @TomM-p3o
    @TomM-p3o 3 months ago +2

    If a certain large country absorbs Taiwan, another large country has already said that they will take out the semiconductor fabs.
    If anybody is wondering why: it's good deterrence.
    But yeah, if that happens, forget about advanced chips for 5+ years. Don't sell that 4090 prematurely 😂

  • @NickAubert
    @NickAubert 3 months ago +1

    I'm just a hobbyist. I'm hoping the 50x0 launch will put downward pressure on older GPUs so I can justify buying a second 3090.

    • @fixelheimer3726
      @fixelheimer3726 3 months ago +1

      Or prices stay where they are, we sell the 3090 and buy the 50 series 😁

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      IMO the 3090 is such a great performance value that it's going to be in the $700-900 range used for some time! But also, people just love the last generation of EVGA GPUs :(

  • @user-cl7vn1eg3u
    @user-cl7vn1eg3u 3 months ago +1

    What is a good, affordable Nvidia GPU for a standard gaming computer from 2023?

    • @aifluxchannel
      @aifluxchannel 3 months ago

      See my video coming out tomorrow. (whatever comes between an RTX 4060 and 4080 ;) )

    • @Wobbothe3rd
      @Wobbothe3rd 3 months ago +1

      Obviously the 4060 is great, but the 4070ti is the best overall value imo. But really if you want great value, buy used. You can find used 3080s for the same price as a 4060 new.

  • @alcocat
    @alcocat 2 months ago +1

    ~ Same VRAM as 4090 = no buy

  • @InnocentiusLacrimosa
    @InnocentiusLacrimosa 3 months ago +1

    The drop between the rumored 5090 and 5080 is massive. The 5080 looks really bad even compared to the 4000 gen. I hope these rumors don't pan out.

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      The simple solution is to just buy the 5090 ;)

    • @Wobbothe3rd
      @Wobbothe3rd 3 months ago

      You could say the same about the 4080 vs. the 4090, and this generation turned out fine. 4080s have still sold almost as well as 4090s.

  • @GerryPrompt
    @GerryPrompt 3 months ago +1

    I want at least seven 😂

    • @aifluxchannel
      @aifluxchannel 3 months ago +2

      What are you planning to use them for?? At least get 8 (even numbers are always better with GPUs)

    • @betterthantrash111
      @betterthantrash111 3 months ago

      @@aifluxchannel get 10

    • @GerryPrompt
      @GerryPrompt 3 months ago

      Local coding and fine-tuning models for auto-generating blog posts. 😊

  • @cacogenicist
    @cacogenicist 3 months ago +2

    Zzzzz. More RAM.

    • @aifluxchannel
      @aifluxchannel 3 months ago +1

      More is always better! What GPU are you currently using?