AMD Reveals MI300X AI Chip (Watch It Here)

Share
Embed
  • Published Jun 12, 2023
  • At AMD’s most recent product reveal event, the company’s CEO, Lisa Su, unveils a generative AI chip built for the data center: the new MI300X.
    Never miss a deal again! See CNET’s browser extension 👉 bit.ly/39Ub3bv
    Subscribe to our channel: / @cnethighlights
  • Science & Technology

COMMENTS • 293

  • @jbob34345
    @jbob34345 Рік тому +229

    Lisa Su shows how much better it is to have a CEO with an engineering background, as opposed to some turkey with an MBA.

  • @brakmaster
    @brakmaster Рік тому +91

    I always supported AMD as the first desktop I built was running on an Athlon and an AMD graphics card. Keep going Lisa you are killing it.

    • @Shannon-ul5re
      @Shannon-ul5re Рік тому +3

      Me too, all the way back to the K6 😂

    • @awesomegmg956
      @awesomegmg956 Рік тому +1

      Me too. Although my first CPU was Cyrix 486, my first DIY upgrade was AMD 5X86 133. Also my first CPU with fan.

    • @Pixel_FX
      @Pixel_FX Рік тому +8

      @@istealpopularnamesforlikes3340 King? yeah king of power consumption lmao. almost 300W at stock. slight OC will make it guzzle 400W+

    • @BenedictPenguin
      @BenedictPenguin Рік тому +3

      @@Pixel_FX Nah, Intel is way more efficient in real-world daily use cases, unless you're forcing that chip to run at its turbo for most of your workload; then yes, AMD is better there

    • @lunascomments3024
      @lunascomments3024 Рік тому

      @@Pixel_FX haha yes. I just can't cool down this thing called an Intel CPU.

  • @DoctorJack16
    @DoctorJack16 Рік тому +111

    I love watching the competition in the AI hardware and software space heat up the way it has. We are truly living during a special moment in the industry and in our species’ history.

    • @Learna_Hydralis
      @Learna_Hydralis Рік тому +6

      Let's hope it's not our last moment.

    • @DoctorJack16
      @DoctorJack16 Рік тому +2

      @@Learna_Hydralis exactly!

    • @petrushka2
      @petrushka2 Рік тому +1

      I wonder where Intel is in all these domains.

    • @parkout95
      @parkout95 Рік тому

      There have been so many movies that show otherwise… 😢

    • @duladrop4252
      @duladrop4252 Рік тому

      His point isn't about AI itself, but about how competition steers technology toward greater advancement. Monopoly can stagnate technology; we have to thank AMD for continuing to strive for technological advancement.

  • @edw.y2451
    @edw.y2451 Рік тому +6

    AMD is doing well. I trained some DL and RL models during my studies, and memory was always a big issue. I believe APUs would work well for training RL models.

  • @BladerSoul9375
    @BladerSoul9375 Рік тому +37

    Subtitle Error M1300X(X) ☞ Instinct MI300X GPU

  • @AI-xi4jk
    @AI-xi4jk Рік тому +35

    This is great, but the message is missing the software part: how easy it is to integrate with frameworks, etc. Nvidia spent a lot of effort on CUDA, and AMD needs to show something similar and make it clear to developers that the barrier to entry is low. That’s the key to adoption.

    • @DanOneOne
      @DanOneOne Рік тому +2

      I am pretty sure they will write all the python libraries for their chips. With so much money involved there, it will not be an issue.

    • @blakjedi
      @blakjedi Рік тому +1

      They write software models for their clients like MS and Sony

    • @robjamo1441
      @robjamo1441 Рік тому +1

      ROCm, HIP: it's all there

    • @mdzaid5925
      @mdzaid5925 11 місяців тому +1

      They are working on ROCm, but it will take some time to be picked up.

    • @robjamo1441
      @robjamo1441 11 місяців тому

      Redshift, Blender Cycles, and Boris FX Sapphire tools are nearly done, for a start @@mdzaid5925

  • @ThePipton
    @ThePipton Рік тому +7

    Them showing inference with falcon-40b in bfloat16 gets me a little worried. Int8 inference is quickly gaining ground and I would have really liked seeing the actual performance compared using this. Moreover, if you bought this thing, you would probably be more interested in actual finetuning or training models yourself. Yet no information on training performance was given.

    • @kusumayogi7956
      @kusumayogi7956 Рік тому +1

      What's the difference between using bfloat16 and int4/int8 in deep learning?
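The difference is mainly memory footprint (and bandwidth): bfloat16 stores each weight in 2 bytes, int8 in 1 byte, and int4 in half a byte, usually at some cost in accuracy. A rough sketch of what that means for a Falcon-40B-sized model (illustrative arithmetic only; it ignores activation and KV-cache overhead):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory needed for the model weights alone, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Falcon-40B at different precisions:
for name, bits in [("bfloat16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name:8s}: {weight_memory_gb(40, bits):5.1f} GB")
# bfloat16 -> 80.0 GB, int8 -> 40.0 GB, int4 -> 20.0 GB
```

That 80 GB bfloat16 figure is why the model overflows a 24 GB RTX 4090 but fits on a single 192 GB MI300X, and why int8/int4 inference is gaining ground.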

  • @-WandreW-
    @-WandreW- Рік тому +19

    😂😂😂It's funny how Intel is so behind. AMD and Nvidia are already into AI stuff, while Intel has just started playing with GPUs.

    • @tluangasailo3663
      @tluangasailo3663 Рік тому +1

      Intel instead is now focusing on manufacturing/fabrication, which is equally important

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      FYI, Intel just recently launched their quantum CPU, if you didn't know. Don't worry, they won't be very far behind; sooner or later they'll build their very own AI. Just wait and see, bruh. Lol😁😉😄😅🤗🤗🤗

  • @novemberalpha6023
    @novemberalpha6023 Рік тому +46

    The difference between the AMD chip and the Nvidia chip is smaller than the difference between their CEOs. Practically, the AMD CEO looks like the female version of the Nvidia CEO.

    • @digranes1976
      @digranes1976 Рік тому +19

      They are related. Found this online “Lisa Su's own grandfather is actually Jen-Hsun Huang's uncle”

    • @teekanne15
      @teekanne15 Рік тому +1

      @@digranes1976 if thats true, this is somehow hilarious.

    • @Eleganttf2
      @Eleganttf2 Рік тому +5

      @@teekanne15 it is in fact confirmed and true xD, The Verge fact-checked it in a 2020 article haha

    • @Pixelsplasher
      @Pixelsplasher Рік тому

      Let's prompt AI to generate them dancing together.

    • @yasunakaikumi
      @yasunakaikumi Рік тому +1

      at least no long pause with mama sue

  • @wyattnoise
    @wyattnoise Рік тому +9

    How long until we find out all these AMD keynotes have been AI-generated?

  • @807800
    @807800 Рік тому +24

    Fix your title CNET! I really thought AMD was releasing a new chip called M"1300"X.

  • @nicyt7391
    @nicyt7391 Рік тому +42

    I suppose we should be thankful for any kind of competition at this point; we need more market pressure to keep NVIDIA from drifting toward more obscene pricing. But it looks like Jensen isn’t concerned about AMD, especially in the realm of AI. I’ll always bite the bullet and buy NVIDIA, mainly because I’ve never had an AMD card that didn’t have issues, but especially because CUDA is king.

    • @Humanaut.
      @Humanaut. Рік тому +3

      I never had an AMD card that had issues either.
      I've had both Nvidia and AMD, and never had problems with either one of them.
      You're just buying into the marketing, and that's fine as long as you know you're doing it.
      The best thing would be to actually be informed enough to know the details of your purchasing decision, though.
      The only reasons to buy Nvidia over AMD are top-of-the-line cards for ray tracing (4090, maybe 4080), specific productivity tools, or if you really place a lot of emphasis on DLSS 3 / frame generation.
      In pure rasterized gaming performance at the low to mid tier, buying Nvidia is nothing more than a dumb decision.

    • @xjohnny1000
      @xjohnny1000 Рік тому +1

      My experience too. I've bought and managed dozens if not hundreds of systems and I can always count on AMD having a higher level of required support. Their CPUs like Threadripper are amazing but chipsets are not reliable and GPUs even less. My brother thought I was biased so he swapped over to an AMD GPU and had nothing but problems. He switched back to GeForce shortly afterwards and it's been rock steady ever since.

    • @Humanaut.
      @Humanaut. Рік тому +1

      @@xjohnny1000 which gen was he on? Never had any problems with my rx 480 whatsoever and based on tech reviewers RDNA3 is their most solid and reliable gen yet.

    • @xjohnny1000
      @xjohnny1000 Рік тому +1

      @@Humanaut. I can't speak for the latest cards but I think it's really a software/driver problem rather than hardware. AMD also has a long history of poor thermal management which probably doesn't help either.

    • @lunascomments3024
      @lunascomments3024 Рік тому +1

      @@xjohnny1000 there's this DDU software that you must run after swapping to an AMD GPU, otherwise the GeForce and Adrenalin drivers will battle each other out.
      Seems like Nvidia has good drivers, but actually it's the users that are just dumb.

  • @KrK-EST
    @KrK-EST Рік тому +1

    Someone should make drivers for it to support gaming and build a GPU card with it that's capable of gaming (not only AI, but that too)

  • @kwon-illee6362
    @kwon-illee6362 Рік тому +13

    Lisa Su seems to be getting younger and younger.

    • @verlax8956
      @verlax8956 Рік тому +4

      it's that next gen amd liquid coolant she be drinking

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      In the near future she'll be immortal by the help of her own AI. lol😁😄😉🤖🤖🤖🤖

    • @pneumonoultramicroscopicsi4065
      @pneumonoultramicroscopicsi4065 3 місяці тому

      She uploaded herself entirely onto an MI300X chip and she has 60 GB left

  • @andrewchen8125
    @andrewchen8125 Рік тому +7

    CNET title in error - it's MI300X, not M1300X - I, not 1

    • @HarryXiao88
      @HarryXiao88 Рік тому

      yeah, and it's also very clear why they made such a mistake...

  • @kyber.octopus
    @kyber.octopus Рік тому +49

    From the lack of numeric values it can be assumed that everything except memory is worse than on H100.
    But still nice that someone is actually trying to compete with Nvidia.

    • @samyy974
      @samyy974 Рік тому +6

      Maybe on price/performance ratio they are better positioned

    • @mikeb3172
      @mikeb3172 Рік тому +1

      CPU & GPU are more closely connected as well

    • @Dayta
      @Dayta Рік тому

      Haha, omg, your comment just saved me. While I was just listening to the audio of this clip I totally forgot that it's AMD. Until I read your comment I was like, what? Compete with Nvidia? I thought this WAS Nvidia. Oh, it was not, haha, omg :D

    • @R4K1B-
      @R4K1B- Рік тому +2

      @@samyy974 People buying these chips don't care about price. They care about speed and efficiency (efficiency not to save money, but because of power limits in a factory)

    • @boltfixer24
      @boltfixer24 Рік тому

      Ye 7900xtx vs 4090 shows that already

  • @cadetsparklez3300
    @cadetsparklez3300 Рік тому +2

    This is the MI300X; the MI300A is the one with a CPU on it for AI inferencing

  • @NoidoDev
    @NoidoDev Рік тому

    Consumer and Prosumer version? Why not? 😢 Or when?

  • @MrDebranjandutta
    @MrDebranjandutta Рік тому +5

    No matter who wins between them and Nvidia, the real winners laughing all the way to the bank will be TSMC and ASML

  • @Swiss4.2
    @Swiss4.2 Рік тому +1

    Definitely an easier keynote to sit through than Jensen's ramblings about which computer is the heaviest. I don't know how these new chips will stack up against Hopper, although even if they're not as good companies are going to rush to buy them up because of the AI boom.

  • @BlueRice
    @BlueRice Рік тому

    This is competition for companies, not for regular gamers. Although AI servers can be connected to online AI bots and such, so it could enable future online gaming like MMOs.

    • @fuckkatuas2837
      @fuckkatuas2837 Рік тому +1

      Might be added like an addon processor like physx to drive AI in games to accelerate VR use. A subdued population distracted by games is the best population. WEF/Blackrock maybe behind all this?

  • @kusumayogi7956
    @kusumayogi7956 Рік тому +3

    02:47 ChatGPT on Nvidia is still faster when generating language

    • @littlelostchild6767
      @littlelostchild6767 Рік тому +1

      Quoting Piyush Arora:
      They ran the Falcon 40B model, which is an open-source competitor to ChatGPT.
      For perspective, it's heavy and doesn't run on my 4090 GPU (24GB VRAM, with 80GB DDR5 system RAM). The system goes out of memory after a few commands.

    • @kusumayogi7956
      @kusumayogi7956 Рік тому

      @@littlelostchild6767 maybe that's because ChatGPT is better

    • @kusumayogi7956
      @kusumayogi7956 Рік тому

      @@littlelostchild6767 ChatGPT uses 175 billion parameters and Falcon only 40 billion

  • @antdx316
    @antdx316 Рік тому +1

    Yes, bringing AI LLMs to local use, just in case ChatGPT shuts down.

  • @jnjnqy
    @jnjnqy Рік тому

    Correct the typo in the title first, please.

  • @zingg7203
    @zingg7203 Рік тому +2

    But can it work with PyTorch or TF?

    • @tehehe5929
      @tehehe5929 Рік тому

      AMD's ROCm stack works with these quite well.

  • @incription
    @incription Рік тому

    how much ? under $1000?

  • @timothyzhuk2847
    @timothyzhuk2847 Рік тому +2

    Damn these rocks that they tricked into thinking are getting pretty good.

  • @vidyavathi3969
    @vidyavathi3969 Рік тому

    Can it run Gollum though?

  • @guidosalescalvano9862
    @guidosalescalvano9862 Рік тому

    Does anyone have experience using amd hardware for deep learning?

  • @erb34
    @erb34 Рік тому +4

    The more you buy, the more you save.

    • @alteredcarbon3853
      @alteredcarbon3853 Рік тому +2

      You don't have to understand the technology, you don't have to understand the strategy!

  • @francisdelacruz6439
    @francisdelacruz6439 Рік тому

    Just in time, when smaller language models are starting to run circles around large models. In the demo it's the training that needs all the power.

  • @76ayoub76
    @76ayoub76 Рік тому

    Wow, very exciting times ahead

  • @garievolutionsoccer3218
    @garievolutionsoccer3218 Рік тому

    Hope the RX 7995 XTX 3D is like this: 88B transistors, 36GB VRAM or 24GB HBM3

  • @Wulfcry
    @Wulfcry Рік тому

    So how is this product going to work outside the data center?

  • @powerhouse884
    @powerhouse884 Рік тому +7

    We are excited to show you: SKYNET Mi300X

    • @SleepyRulu
      @SleepyRulu Рік тому

      Funny joke

    • @novemberalpha6023
      @novemberalpha6023 Рік тому +2

      @@SleepyRulu everything is funny until the T-101 rolls out of the factory.

    • @SleepyRulu
      @SleepyRulu Рік тому

      @@novemberalpha6023 sure

  • @solidkundi
    @solidkundi Рік тому

    did someone just do image to image of Nvidia's Presentation

  • @MARKXHWANG
    @MARKXHWANG Рік тому +1

    "This is going to work out great for us, or terribly, because we are all in." - Jensen Huang, 2017

  • @RealShinpin
    @RealShinpin Рік тому +1

    how much will one of these cost?

    • @RealShinpin
      @RealShinpin Рік тому

      @n n :( Maybe I'll sell my kidney for one.

    • @RealShinpin
      @RealShinpin Рік тому

      @n n I want to be able to run high level LLMs. Currently I can only run low level stuff, even on mid to mid-high tier hardware. I wouldn't mind setting it up as a server if I had to.

    • @RealShinpin
      @RealShinpin Рік тому

      @n n Do you know of any old server grade hardware that might be useful to consumers to get now? Like maybe the stuff they're replacing?
      I heard something about the p40 accelerator being useful for llms.

  • @kurtdewittphoto
    @kurtdewittphoto Рік тому +7

    But does the generative AI know how to play Crysis?

  • @whitefata104
    @whitefata104 Рік тому

    Miner seeing 5.2tb bandwidth like🤯

  • @fallguyjames
    @fallguyjames Рік тому +1

    GPT3 has 175B parameters

  • @powray
    @powray Рік тому +1

    Would be really nice to have 1.5TB dedicated to training models
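That 1.5 TB figure lines up with the 8-GPU Instinct platform AMD showed alongside the chip: eight MI300X parts, each carrying 192 GB of HBM3 (the per-card number cited in the keynote). A quick sanity check of the arithmetic:

```python
# AMD's Instinct platform pairs eight MI300X GPUs, each with 192 GB of HBM3.
gpus = 8
hbm3_per_gpu_gb = 192
pool_gb = gpus * hbm3_per_gpu_gb
print(f"{pool_gb} GB total, i.e. {pool_gb / 1024:.1f} TiB of HBM3 across the platform")
```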

  • @luizconrado
    @luizconrado Рік тому +31

    AMD needs something as good as, or better than, what CUDA is today.

    • @duladrop4252
      @duladrop4252 Рік тому +27

      CUDA is already on its way out. PyTorch is targeting ROCm's LLVM backend directly, and other AI frameworks are doing the same, meaning AMD's ROCm is supported by them without remapping through CUDA. Frameworks and developers are already moving to get rid of the CUDA dependency...

    • @FrozzenFreak
      @FrozzenFreak Рік тому +1

      Modular AI is solving the software issues around AI. So NVIDIA will likely lose its moat there

    • @cadetsparklez3300
      @cadetsparklez3300 Рік тому +2

      CUDA is irrelevant for AI; it's just a BS lock-in Nvidia used to control the design industry

    • @adaml.5355
      @adaml.5355 Рік тому +1

      AI will not have anything to do with GPU compute or CUDA within the next several years.

    • @luizconrado
      @luizconrado Рік тому +3

      However, for now, having an NVIDIA GPU makes learning and working with AI on a small scale much easier, correct?
      I believe AMD would get a lot of traction if they made it extremely easy to set up a computer using Linux or Windows to learn and work with small/medium AI tasks.
      Do you disagree?

  • @Pixel_FX
    @Pixel_FX Рік тому

    Correct your title and description to MI300X not M1300X lol.

  • @chrisrogers1092
    @chrisrogers1092 Рік тому

    Wow the title of the video is wrong lmao

  • @MARKXHWANG
    @MARKXHWANG Рік тому

    NVLink-C2C is the future; it will address AMD chiplets' latency and power issues

  • @NisseOhlsen
    @NisseOhlsen Рік тому

    Yes, but does it run Tensorflow 2 ?

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      I guess, it could run tho. Why u not just try it?! Lol😁😄😅👍👍👍

    • @NisseOhlsen
      @NisseOhlsen Рік тому

      @@DarkWizardGG because I'd have to buy the card to find out ?

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      @@NisseOhlsen what card are u talkin about then, GPU?! And u said "TensorFlow 2" is theres version 2 already?! 😁

  • @zakariahan3900
    @zakariahan3900 Рік тому

    Cnet... its MI300X not M1300X 😅

  • @aidenkim6629
    @aidenkim6629 Рік тому +2

    Why do I keep thinking that she must be related to Jensen from Nvidia? Honestly, I ship it

  • @vivek-1318
    @vivek-1318 Рік тому

    Awesome

  • @HarryXiao88
    @HarryXiao88 Рік тому +2

    All LLMs are capable of writing a poem. Try asking it to solve some differential equations...

  • @jinyingzhan927
    @jinyingzhan927 Рік тому

    How do quantum computers compete with these AI chips?

  • @soraaoixxthebluesky
    @soraaoixxthebluesky Рік тому

    How does it fare against the Nvidia GH200?

  • @jinmail3294
    @jinmail3294 Рік тому

    I believe the title should be "MI"300X, with the letter I instead of the digit one.

  • @nexovec
    @nexovec Рік тому +6

    Hey is it just me or is AMD killing it again. I thought they wouldn't be able to keep up with team green but hey, this sounds pretty reasonable at first glance. Another good call for Lisa perhaps?

  • @iammuzan
    @iammuzan Рік тому

    Title of video is Incorrect. It's MI300X

  • @orrbifold
    @orrbifold Рік тому

    everyone making GPUs for AI is screwed as more photonic systems come online

  • @JourneyStand
    @JourneyStand Рік тому

    But can it run Minesweeper?

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      I guess no. For now, only that BOT from NVidia could do that.😁😉😄🤖🤖🤖🤖

  • @NguyenAzn
    @NguyenAzn Рік тому

    It's M"I"300X, not M"1"300X

  • @AA-rc8uy
    @AA-rc8uy Рік тому

    Jeez... the chip name is MI300X, not M1300X, and I waited the entire video for them to introduce a more advanced version of the 300X lol. You caught me, CNET bot!

  • @002tar
    @002tar Рік тому

    does A.I say AMD is a good investment at this price?

  • @SkyCuration
    @SkyCuration Рік тому +18

    The Nvidia CEO and AMD CEO look like they are a couple.

    • @xelerator2398
      @xelerator2398 Рік тому +3

      lol seriously

    • @JaysenTC
      @JaysenTC Рік тому +5

      They need matching leather jackets

    • @DanishBashir-sz6vs
      @DanishBashir-sz6vs Рік тому +5

      I heard somewhere they are actually relatives

    • @Freshbott2
      @Freshbott2 Рік тому +3

      They’re cousins once removed!

    • @jbob34345
      @jbob34345 Рік тому +2

      Imagine if they merged and had a child, they'd be the ultimate CEO

  • @Darkyber
    @Darkyber Рік тому +2

    Mama Su (苏妈) announced that AMD won't try to shake the uniqueness of 老黄's (Jensen's) GPUs, but has taken another path instead.

    • @RuohongZhao
      @RuohongZhao Рік тому

      Mama Su is stacking HBM3 and aggressively expanding bandwidth; this is simply a divine tool for "alchemy" (model training). Though it would be perfect if the chip architecture were optimized specifically for large language models.

    • @cadetsparklez3300
      @cadetsparklez3300 Рік тому

      Does Chinese not have a word for green?

    • @GreenCappuccino
      @GreenCappuccino Рік тому +1

      @@cadetsparklez3300 Pretty sure 老黄 (lao huang) is just a nickname for Jensen Huang. It translates to "old yellow" due to Google Translate shenanigans (黄/Huang is his surname).

    • @Darkyber
      @Darkyber Рік тому

      @@cadetsparklez3300 🤣🤣 I think your interpretation isn't wrong either.

  • @sanesanyo
    @sanesanyo Рік тому +3

    But Nvidia's strength is the GPUs needed to train LLMs.

  • @vinvin8971
    @vinvin8971 Рік тому

    Sad, because the launch is slow and production wasn't ready...

  • @triynizzles
    @triynizzles Рік тому

    I dont like how she has the same haircut as Jensen. It throws me off XD

  • @derptweaker945
    @derptweaker945 Рік тому

    Lisa is COOL

  • @LeveragedFinance
    @LeveragedFinance Рік тому

    !price MI300

  • @QuestForVideoGames
    @QuestForVideoGames Рік тому +4

    Wait, so does the MI300X AI chip have ChatGPT installed on it? Because they only showed it writing a poem 😅

    • @jwhite1337
      @jwhite1337 Рік тому +4

      I think Microsoft has exclusive rights to the ChatGPT code base and data. If no one but Microsoft can run ChatGPT, a public demo using ChatGPT would be less useful to all other customers; I would think they ran a demo privately for Microsoft at some point. Hugging Face is an AI community that promotes open-source contributions, so all customers can run that LLM for their business and run their own benchmarks to confirm. I believe the newest versions of ChatGPT still perform best overall, but open source has closed the gap significantly. The amount of progress the open-source community has made lately is nothing short of remarkable; it will be interesting to see where we'll be in 6 months.

    • @thomasireland1770
      @thomasireland1770 Рік тому

      no

    • @Piyush.A
      @Piyush.A Рік тому +2

      They ran Falcon 40B model, which is an open source competitor to ChatGPT.
      For perspective, it's heavy and doesn't run on my 4090 GPU (24GB VRAM with 80GB DDR5 RAM). The system goes out of memory after a few commands.

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      @@Piyush.A I guess you need a 32GB VRAM GPU to run that Falcon 40B. Yes, it's pretty damn heavy for an LLM. Me, I'm using a smaller LLM, WizardVicuna 13B.😁😉😄🤖🤖🤖

    • @kazedcat
      @kazedcat Рік тому +3

      @@DarkWizardGG No, a 40B-parameter AI cannot run on 32GB of VRAM unless you're running the quantized 4-bit version. For the full 16-bit version you need 120GB of VRAM: 80GB for the weights alone and another 40GB of overhead to run the model itself.

  • @duladrop4252
    @duladrop4252 Рік тому +4

    Nvidia should have a reason to be concerned now that MI300X Instinct performance is out: while two Hopper H100s over NVLink only reach 3K+ TFLOPS, the MI300X can do 5,218 TFLOPS on a single GPU; even two H100s combined can't beat a single MI300X. Watch out Nvidia, AMD is coming...

  • @Nick_With_A_Stick
    @Nick_With_A_Stick Рік тому

    :( I want AMD to do well, but this is literally the same speed, if not significantly slower, than 2x 3090s, let alone 4090s.

  • @flargosa
    @flargosa Рік тому +2

    How do we know it isn’t her secretary behind stage typing the poem?

    • @QuestForVideoGames
      @QuestForVideoGames Рік тому

      It's probably a pre-recorded video of it writing the poem, sped up to make it look like it wasn't a human.

    • @hendrx
      @hendrx Рік тому +1

      facts, there's no way they wanted to deal with an outrage or wrong response

  • @valderith
    @valderith Рік тому

    always 2nd best as usual

  • @user-np6rn3jp3p
    @user-np6rn3jp3p Рік тому

    good

  • @Determinator21
    @Determinator21 10 місяців тому

    Can I use that chip as brain for my ai wife?

  • @Commenter9120
    @Commenter9120 Рік тому

    A time will come when everything, even mundane human tasks, can be done by AI. How will human civilization function? What will governments do? What will people do for jobs or to earn a living? Because you need money for everything.

  • @chimsgraphic
    @chimsgraphic Рік тому

    She sounds very articulate

    • @DarkWizardGG
      @DarkWizardGG Рік тому

      Shes been articulate ever since, bro. Lol😁😄👍👍👍

  • @randallcromer66
    @randallcromer66 Рік тому +2

    Not only is this an incredible piece of technology, it could quite literally be one of the biggest threats to millions of people's lives and jobs. Maybe we should stop and think about one simple thing: just because we can do a thing doesn't necessarily mean we should. I can't believe how many people have failed to realize the dangers of A.I. and the threats it brings along with each and every advancement in technology. It's truly terrifying to think what this world will be like in the next few decades. Yes, you should be afraid, very afraid, of the wrong people using this technology to control and manipulate all of humanity, or even destroy civilization as we have known it. I hope enough people wake up and realize the truth before it's too late to do anything about it. This is not a good thing, and I can hardly believe anyone doesn't recognize it...?...💯

  • @Jakwine
    @Jakwine Рік тому +2

    She’s Jensen with makeup

    • @Jakwine
      @Jakwine Рік тому

      @@N_N23296 I had completely forgotten about it

  • @Zeeves
    @Zeeves Рік тому

    ChatGPT / Language models are the least impressive part of AI. AI is not overhyped - the OpenAI company IS overhyped. They are a flash in the pan

  • @_Island_Boy
    @_Island_Boy 4 місяці тому

    Intel = CPU
    Nvidia = GPU
    BUT AMD = CPU + GPU lol

  • @MARKXHWANG
    @MARKXHWANG Рік тому

    chances are ChatGPT cannot run on any single server

    • @marshallmcluhan33
      @marshallmcluhan33 Рік тому

      GPT4ALL with some uncensored models are OK, no GPU needed.

  • @pauldannelachica2388
    @pauldannelachica2388 Рік тому +1

    A new AI chip from AMD is a win-win for everyone; hope Western democratic countries monopolize these things. For image recognition, astronomy, autonomous systems, cybersecurity, warfare simulation, mechanics, medical fields, weather, chemistry, materials science, satellite constellations, coding, big-data analysis, photo and video editing, games, speech and voice-to-text recognition, VR/AR education, robotic systems, logistics, etc.

  • @wzt9376
    @wzt9376 Рік тому

    Can it run crysis tho

  • @ndrattia
    @ndrattia Рік тому

    Salute to Lady Lisa Su, real awesome engineering with high leadership level

  • @typingcat
    @typingcat Рік тому +1

    Just add RAM slots to GPU instead of onboard RAM, so that users can add whatever amount of RAM they want.

    • @typingcat
      @typingcat Рік тому

      @@N_N23296 Well, about space, now that most graphics cards take 2 slots and some even 3 slots, they could make the cards double-layered to have the space for RAM slots. And about the speed, I don't know how much slower slotted RAM is compared to onboard RAM, but maybe, it could be a two-tier system. That is, a GPU has on-board RAM and slots for RAM. The GPU would use the onboard RAM first, and only if there is not enough onboard RAM, it then tries to use slotted RAM. I know it would be slower, but currently, if you run out of VRAM, the process just crashes or doesn't get executed. I think slower execution is better than not being able to run at all.

    • @lunascomments3024
      @lunascomments3024 Рік тому +1

      lol wut. 50,000 MB/s vs 5,300,000 MB/s for HBM3. There's a crystal-clear choice here.
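The gap matters because LLM token generation is largely memory-bandwidth-bound: each new token has to stream essentially every weight through the compute units once. A back-of-the-envelope sketch using the bandwidth figures from the thread (simplified; it ignores batching, caches, and compute limits):

```python
def decode_ceiling_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """Rough upper bound on single-stream decode speed for a memory-bound
    LLM: bandwidth divided by the bytes read per generated token."""
    return bandwidth_gb_s / model_gb

model_gb = 80.0  # e.g. a 40B-parameter model held in bfloat16

hbm3 = decode_ceiling_tokens_per_s(5300, model_gb)  # ~5.3 TB/s on-package HBM3
ddr5 = decode_ceiling_tokens_per_s(50, model_gb)    # ~50 GB/s socketed DDR5
print(f"HBM3 ceiling: {hbm3:.2f} tok/s, DDR5 ceiling: {ddr5:.3f} tok/s")
```

On these illustrative numbers, HBM3 allows roughly a 66 tokens/s ceiling while slotted DDR5 caps out below 1 token/s, which is why user-expandable RAM slots are a non-starter for this class of chip.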

  • @xnx304
    @xnx304 Рік тому

    go go RED team

  • @bhaskardeepak
    @bhaskardeepak Рік тому

    You want a better-engineered product?
    Put an engineer at the helm of it

  • @osumania1
    @osumania1 2 місяці тому

    She looks like the Nvidia CEO

  • @univera1111
    @univera1111 Рік тому +8

    I trust AMD; they should put this in a laptop or a handheld Windows or Linux PC.

    • @Eleganttf2
      @Eleganttf2 Рік тому +2

      ???

    • @fthishandleshit
      @fthishandleshit Рік тому +1

      How stupid. Laptop connection.

    • @vmafarah9473
      @vmafarah9473 Рік тому +1

      it is better to be fit in your smartwatch so your hand turn into AI.

    • @novemberalpha6023
      @novemberalpha6023 Рік тому +1

      Currently this chip will be used in a machine like Mainframe or series of servers that can handle a mountain of data. Laptops are not used for that purpose. If someone puts it in a laptop, he or she has to cram up a lot of current technologies in that laptop to keep up with that chip. That would be nearly impossible as the solution for the heating issue alone would take a considerable space which will be hard to make room in a small place like laptops. Even if it happens, that will not only alter the purpose of the laptop but also pump up the price of that laptop way outside the purchasing power of the potential buyers.

    • @univera1111
      @univera1111 Рік тому

      It will run best on personal LLMs in personal research. Listen to AMD's CEO: she's bringing LLMs to individuals.

  • @pinochioo5678
    @pinochioo5678 Рік тому

    lol for those of you who are paying attention

  • @magekweon8335
    @magekweon8335 Рік тому

    It's a relief that AMD preemptively did what Nvidia, drunk on money, wouldn't do. 192GB per card!

  • @antdx316
    @antdx316 Рік тому +2

    The AI world is getting better and better.

  • @nickhubble2838
    @nickhubble2838 Рік тому

    MI300X, not M1300X... It was literally the first thing out of her mouth........

  • @xsuploader
    @xsuploader Рік тому

    It's good, but it's not nearly as good as Nvidia's Hopper

  • @r4v1m5
    @r4v1m5 Рік тому

    Dr Dre writes better poetry

  • @nannan3347
    @nannan3347 Рік тому

    It doesn’t mean anything without CUDA support

  • @Jexep
    @Jexep Рік тому +1

    But can it run CUDA? 😢

    • @hondajacka2
      @hondajacka2 Рік тому +2

      Don’t use AMD for training. It’s not compatible most of the time and will waste endless hours of your life trying to debug.

    • @marshallmcluhan33
      @marshallmcluhan33 Рік тому +2

      @@clehaxze Isn't AMD notoriously bad for this? Their software support is so bad some are claiming the defects are in the hardware since the bugs continue. I hope AMD takes the software stack seriously and improves support.

    • @JackRoyL
      @JackRoyL Рік тому +1

      cuda is overhyped.

    • @mikelay5360
      @mikelay5360 Рік тому

      @@JackRoyL With very good reason. If you were in the industry you'd understand why

    • @clehaxze
      @clehaxze Рік тому

      @@marshallmcluhan33 Not on Linux. AMD on Linux is better than NVIDIA because both the kernel-space and userland drivers are open source. AMD is also much better at adopting open standards like GBM and Wayland, while NVIDIA spent 10 years on their own EGLStream, which the community simply hates.
      The compute stack is still behind NVIDIA's, but it works nevertheless, and there's sane PyTorch out-of-the-box support.

  • @freakinElvis
    @freakinElvis Рік тому

    Blah blah, when can I get one for my PC on Amazon?