AMD and Intel will definitely have their share of the market. TSMC is at max capacity, and investing in other semiconductor companies will be an absolute power move; I keep increasing my shares manageably. Different chips are good at different things, and Nvidia has been very specialised, which leaves other aspects of AI open.
This is the type of in-depth detail on the semiconductor market that investors need; it's also the right moment to focus on the rewards of AI.
Certainly. I bought NVDA shares at $300 and $475, cheap before the 10-for-1 split, and with huge interest I keep adding; I'm currently doing the same for PLTR and AMD. The best possible way to get ahead is to follow top experienced performers.
I'm compiling and picking stocks that I'd love to hold for a few years before retirement. Do you think these stocks will do well over the years?
My goal is to have at least $2 million saved for retirement.
You are buying a company to own it, not a piece of paper. The market is a zero-sum game (two sides). Know what you are buying, not just what's trending.
Amazingly, people are starting to get the uniqueness of Palantir.
As an old veteran who worked at ETA Systems and Convex Computer, and later as an apps engineer, I find this just incredible progress.
Lisa Su is only 54 years old.
May she provide us with many more years of awesome products
What a magnificent lady.
Amen.
And a horrible SP to go with it?
Bruh. AMD under Dr. Su is nothing short of breathtaking. They ate Intel's lunch and now they're going for Nvidia's.
They ate Intel's lunch when they were on a tight budget. They have a lot more resources now, so taking Nvidia's lunch will be a lot easier.
Literally, AMD wasn't even 5% of what it is now, and it still ate Intel's lunch in everything.
@@benjamintran5444 It doesn't quite work like that; by that logic, Intel should have no issue topping Nvidia, right? It's not just about money and resources. It's about getting the right people on board, making the right bets, etc.
@@MissMan666 I meant that on top of whatever AMD is doing well enough to beat Intel, it can now do the same against Nvidia with far fewer resource constraints than before.
Except bullshitting about performance and tampering with benchmarks always gets caught. It's a tradition for AMD...
Incredible processing power for AI/ML workloads. Finally a real alternative for customers instead of just Nvidia.
Yes, I am sick of being stuck with CUDA.
@@samueljett7807 The majority of networks run out of the box using ROCm. Lots of work has been done with PyTorch and TensorFlow.
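For what it's worth, that's easy to demo: PyTorch's ROCm builds reuse the torch.cuda namespace, so a typical CUDA-targeted script runs unchanged on an AMD card. A minimal sketch, assuming a ROCm (or CUDA) build of PyTorch is installed:

```python
# Minimal sketch: the same script runs on an Nvidia GPU (CUDA build of
# PyTorch) or an AMD GPU (ROCm build), because ROCm reuses the torch.cuda API.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).to(device)

x = torch.randn(64, 512, device=device)
print(model(x).shape)  # torch.Size([64, 10]) on either vendor's hardware
if device == "cuda":
    print(torch.cuda.get_device_name(0))  # reports the AMD or Nvidia GPU
```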
I foresee AMD pulling in an extra $4.5B-6B in the data center alone because of the MI300 family of products. They have the fastest AI chips in the world at the moment, with 400K orders for the MI300 already in for 2024. Don't forget they have already shipped millions of Ryzen AI chips.
The world's fastest supercomputer uses AMD CPUs and GPUs.
@@mitchjames9350 Yeah, but that's the MI250 with Zen 3 Epyc; this is the MI300, with almost 2x more performance and memory at lower power usage.
@@mitchjames9350 AMD CPU*
We'll need that memory bandwidth for the PC port of GTA VI. Especially for "population increase" mods
Yeah, imagine walking on a beach even more populated than the trailer, with 500% jiggle physics. So many calculations to do, so many results to store.
@@fruitcake4910 Dear god yes! Bless you Lisa Su.
@@fruitcake4910 that's gonna become a benchmark test
@ps3guy22 There is a dude who plays GTA4 with the traffic speed set to 9999999. I hope he gets to play GTA6 someday with jiggle physics set to 9999999, with mods so that twerking can cause damage.
@@fruitcake4910 Nah, 200 customized whips at a sideshow, each with four-layer iridescent paint jobs reflecting and refracting each other's headlights, all doing donuts with smoke effects, each with a glistening sweaty ho frantically twerking on the roof: that is really going to tax your PC. Start saving for 2025!
- Introduction to the Instinct MI300X and its performance capabilities for generative AI [0:10]
- Details on the CDNA 3 architecture and performance features of the MI300X [0:19]
- Memory capacity and bandwidth comparison with competitors [0:53]
- Explanation of the MI300X's structural components, including IO dies and chiplets [1:32]
- Introduction to ROCm 6, an open-source software platform for AI development [3:22]
- Launch announcement of the MI300A, the first data center APU for AI and HPC [5:59]
- Introduction to the Ryzen 8040 series mobile processors with improved AI performance [7:14]
I'm eagerly anticipating the opportunity to delve into the intricate workings of these groundbreaking processors.
ROCm isn't at the same level as the Nvidia software stack at all. However, I know a senior guy at Cray/HPE, and they're extremely impressed with these chips.
I hate the fact that Nvidia is super greedy, sells their products at an enormous margin, and screws their PC gaming customers. The prices are out of this world. I hope some competition destroys Nvidia's market share. I want a 24GB 4090, but oh god, I can't justify getting that card even though I can easily afford one.
I just wish all this high tech would translate into bigger sales and market share.
I would like to see a side by side of AMD and Nvidia and Dojo.
And Cerebras' "The entire semiconductor wafer is one chip!" design.
The hyperscale cloud compute and AI companies will evaluate these chips and choose the best one for their needs, although I'm sure the chip makers compete to give them special deals. Nvidia appears to have a huge lead in selling tens of thousands of H100s at $10,000+ each to any big AI company that wants to be taken seriously; I think AMD has been winning a lot of the top 500 supercomputer designs.
Yeah I was really interested to know how this stands up to Dojo
@@skierpage Not to mention that Nvidia has the H200, which is a lot faster than the H100, and also has the B100 coming out next year, which is even faster.
NVIDIA H100s are extremely hard to get hold of at retail pricing, and NVIDIA has huge (60%+) margins even at that pricing. Considering H100s are still being sold for $45k each, even performance parity is good, but NVIDIA also has an extensive AI software ecosystem that will be a hurdle for AMD going into companies using the NVIDIA software stack.
What I got out of this more than anything is that finally, finally, we can start saying "times" again instead of "X". 2.1 times more performance. I just hear breaking glass when someone says 1.6 "X" more blah blah blah. Thank you, Mrs. Su 🥳❤
I still call it "twitter"
2.1 twitter the performance!
For Laptop soon too?
What is the possible price for the midrange next gen APU? I'm sick of expensive and huge GPUs
Ryzen 9 8940HS (Hawk Point) miniPCs should cost about $500.
Ryzen 7 8700G (Hawk Point) desktop APUs should cost about $250-300.
Ryzen 9 8900G (Strix Point) desktop APUs should cost about $400-500.
Strix Halo/Sarlak laptops should cost about $1000-1500 (but won't arrive until near the end of 2024 or early 2025. It's also very unlikely that these APUs will ever be released for standard desktop PCs, because they require quad-channel RAM to get enough bandwidth for the iGPU. MiniPCs around $800 are possible, but unlikely IMO)
The Hawk Point iGPU is Radeon 780M (same as in Phoenix), about equivalent to a low-power RTX 3050 4GB laptop GPU.
The Strix Point iGPU is expected to be about equivalent to a high-power RTX 3050 6GB or low-power RTX 3060 or RTX 4050 laptop GPU.
The Strix Halo/Sarlak iGPU is expected to be about equivalent to an RTX 3070 Ti/4070 laptop GPU.
@@nathangamble125 if that's the case it's too good to be true
* meanwhile from intel *
intel: "its all snek oil"
and gamers would be like yeah but can it run crysis
AMD's HBM3 memory bandwidth is higher than Nvidia's HBM3e memory, interesting!!
I'm a huge fan of AMD and have great respect for the company too.
But… hasn’t NVIDIA released their H200 chips already? 🤔🧐
H200 will be available next year probably
H200 and B100 are set to be released next year
@@AndrewTSq ty
But keep in mind that H100s are extremely hard to get hold of right now; scalpers are selling them for 4x retail on eBay, and even at retail pricing NVIDIA currently has about a 60% or higher profit margin on them. NVIDIA is so far ahead of anyone else that just being a little behind them (while offering potentially better pricing and availability) is a meaningful accomplishment. NVIDIA's AI software ecosystem is extensive too, though. They do in fact have a moat here.
It's not just memory capacity and bandwidth but the whole vertical integration of software and hardware. You can't optimally utilize all the bandwidth and memory if the runtime is slow. That's where cuDNN and CUDA shine!
We'll see
Does the infinity fabric prevent side fumbling?
Terminator: Rise of the AI
I don't understand a single sentence she said 😢
Anyone know the name of the guy at 5:15? Thanks
Forrest Norrod
@@rafaelsalamanca6792 thanks!
Damn, ChatGPT got everyone going AI this, AI that.
She looks like a female version of the Nvidia CEO.
I think Jensen is the cooler one. How many CEOs can say they designed the company's GPUs? (the first GPUs that Nvidia released)
They're both from Taiwan.
Can’t wait for the trickle down.
Client-side AI NPCs.
Please
AMD's next SoC can be a beast.
Well, there you have it: you'll never see Pat Gelsinger from Intel come on stage and talk about these deep technical topics. He used to be an engineer, now turned businessman, just like the clown (Bob Swan) he replaced.
Only Jensen Huang has the balls to speak the way Lisa Su does.
Whatever happens, I think the next-gen consoles will make use of AI chipsets for sure.
And fully implement games as a service.
Lisa Su is impressive.
Choosing an AMD GPU was the biggest mistake I made, as I've literally been left out of two years of AI development opportunities. Not gonna make the same mistake again.
Me too... I had the RX 7900 XTX for half a year and it was the worst decision. Swapped to Nvidia because of the constantly crashing drivers and the missing MIOpen implementation on Windows.
How do folks migrate to AMD GPUs without CUDA?
They have to rewrite for ROCm (see the sketch below).
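To expand on that: for native CUDA C++ there are AMD's hipify tools, but framework users often need no rewrite at all. A hedged sketch of how you'd confirm which backend a PyTorch install actually targets:

```python
# Sketch: ROCm and CUDA builds of PyTorch expose the same "cuda" device;
# torch.version tells you which backend you got (hip is None on CUDA builds).
import torch

if torch.version.hip is not None:
    print(f"ROCm/HIP build: {torch.version.hip}")
elif torch.version.cuda is not None:
    print(f"CUDA build: {torch.version.cuda}")
else:
    print("CPU-only build")

# Either way the device string stays "cuda", so existing scripts need no edits.
x = torch.ones(3, device="cuda" if torch.cuda.is_available() else "cpu")
print(x * 2)
```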
Cool but can it run minesweeper?
Nah, you're gonna have to wait for the Nvidia H200 to play that.
Or 3D Chess and Solitaire
How does this compare to Nvidia?
It says on the screen? They are comparing to the H100; they're destroying Nvidia..
Don't sleep on Lisa!
As the no. 2 GPU and chip maker there is always an advantage, because they can break down the no. 1 competitor's product 😉, model it, improve on it, and scale it way faster... I believe AMD will be the real winner against Nvidia soon...
No. The lead times for these things are far too long for AMD to analyze and respond to the H100 with its own chips before the H200 comes out. AMD and Nvidia (and poor old Intel) adjust what they're doing in the future as they learn more about the other's product roadmap, but what matters is executing on their own roadmap. They're both doing a good job.
That's true for hardware, but sadly not for software. AMD hardware is amazing, but the bad driver implementation can't compete with Nvidia. Maybe in the future, with MIOpen on Windows, there will be a change, but will ROCm have a chance against TensorRT?
Imagine gaming on that thing 😂
They literally just say the word "AI" and the stock jumps 10% lmfaoooo
Yep, literally what I banked on.
Given that I'm not tech-savvy... I honestly didn't understand anything...
Translated: AMD tech goes brum brum fast
@@sgtnik4871 Ahhhh... So that's what she was trying to say...
Did she do a better presentation than Jensen?
So not only is AI software improving exponentially, but hardware is trying hard to keep up. There has never been a time when our world changed at such a rapid pace. Most of the people I know have absolutely no idea what is happening, and if I do talk to them about it, it means nothing to them; they will just wait and see what transpires. So for those who are on board with this ever-accelerating phenomenon, buckle up, because I think there's a lot more on the near horizon. The mega dollars involved and the massive competition are driving a risk-taking mindset to deliver both the software and the hardware, and to win at all costs.
Get 10k shares of Nvidia and 10k shares of AMD; after 15 years you're sorted for life.
Do they have a chance at catching up to Nvidia?
Can they run CUDA? A software ecosystem grown over more than a decade matters a lot in real-world applications right now.
Will these chips make it to PS5 Pro / XBox Pro / ahem Switch Pro…?
U for real?
Uhh, MS said they're not doing a pro model.. and that the Series X is the pro. 🤣🤣🤣 Sony might, though.
@@princephillip1481 you obviously didn't see their own slide leaks lmfao
@@UKKN516 Don't give a fk really.. because MS doesn't even know how to sell a damn console 🤣 and the Series X is proof of that. 🙄🙄
😁
The Switch is with Nvidia.
Intel, after seeing Nvidia and AMD, will be :-D I hope they bounce back too!!
Intel will lead the industry soon!
@@maximumoverload5134I'll believe it when I see it. At the moment it's not looking good for Intel.
For the good of competition, by all means, it would be great to have a less dominant Intel; they still have too much market share.
Their new chip looks promising!
I have an idea: install many neural-processing cards in many PCIe slots, turning a brainless computer into an AI computer.
I'm a layman, I don't understand.
I have no idea why AMD took so long to sail all ships towards AI. Nvidia has been doing this for ages. It's like they bet against AI. A true Steve Ballmer iPhone moment :)
The MI300X will support LLMs that need more than 128GB of HBM3. Such LLMs slow to a crawl on the H100 as it swaps between HBM3 and DDR5 (rough arithmetic below).
@@tringuyen7519 Lucky for Nvidia that they have H200 and B100 next year
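That capacity claim is easy to sanity-check with back-of-the-envelope math. The figures below are illustrative assumptions (a 70B-parameter model in fp16, the commonly quoted 192 GB of HBM3 on the MI300X and 80 GB on the H100), not numbers from the video:

```python
# Rough sketch: weight memory for fp16/bf16 inference (activations and
# KV cache need extra on top of this).
params = 70e9            # assumed: a 70B-parameter LLM
bytes_per_weight = 2     # fp16/bf16
need_gb = params * bytes_per_weight / 1e9
print(f"weights alone: {need_gb:.0f} GB")  # ~140 GB

for name, hbm_gb in [("MI300X", 192), ("H100", 80)]:
    verdict = "fits on one GPU" if need_gb <= hbm_gb else "needs multiple GPUs or host-RAM offload"
    print(f"{name} ({hbm_gb} GB HBM): {verdict}")
```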
She's been CEO for about 10 years, compared to Nvidia's CEO, who's been there from the start. This type of innovation takes many, many years. It's just really amazing what she's accomplished at AMD. IMO she's just getting started.
Half sure it's because they were on the verge of bankruptcy 'til they made Ryzen.
The MI300X has 16 matrix cores per CU (compute unit), compared with only 4 matrix cores per CU inside the MI250X.
It has a superior 5 petaOPS of INT8 compute, compared with only 4 petaOPS for Nvidia's H100 and H200. The MI300X also has higher FP16 and bfloat16 throughput than the competition.
Skynet is happening
She has all those numbers at her fingertips..
The marketing dept. needs way better naming for the products. There is no sizzle or excitement for brand-recognition purposes. Very poor creativity.
Too many acronyms and abbreviations for my feeble brain. But I'm happy for AMD though... or sorry that happened.
The only thing she revealed was a bunch of talk without any follow-through. She would make a great politician.
Are you a GAMER reading this? Don't watch the video; there's nothing here for you. Gaming benchmarks are the only thing that matters to people like you and me. Wait for gamers to get their hands on the new CPUs, benchmark them, and see actual gaming performance. Let the benchmarks do the talking.
KO nvidia
Whatever happened to the quantum computing hype from half a decade ago, about how it would take over the world?
IBM just announced that they released the first-ever 1000-qubit quantum chip...
It was _never_ going to take over the world. Quantum computers can solve certain classes of problems that are intractable for conventional digital computers. Current ones can solve convoluted test problems, such as simulating.... a quantum mechanical interaction, better than conventional computers; but the ability to do something useful like crack encryption or model a full drug-protein interaction, will require more stability plus a thousand times more qubits for error correction. That's going to take several more years, at least.
I love Lisa Su
Who cares if CUDA is closed source? At least it works out of the box.
ROCm barely works; everybody wants Nvidia because of stupid ROCm support.
In Su-bae I trust!
Not a coincidence that the CEOs of both AMD and Nvidia are Asian; they're related.
Good
AMD should just cancel all consumer end-user products and focus 100% on servers and AI for the enterprise market, since they don't care about the budget segment anymore.
So much jargon. 😢 Who is this for?
That's why Steve Jobs was always the best. He knew how to translate complicated technical terms into words a simple person could understand.
Simple people are not buying these monster chips.
Unrivaled, but many questions remain.
AMD, please stop Nvidia; their prices are insane and they are getting kinda greedy.
Frame rates for games? So-called "games", or degraded video galleries, we avoid by any means.
I will take these numbers with a big grain of salt. So they can't make faster GPUs for consumers than Nvidia, but they crush them completely here?.. No, I don't think so.
Gaming GPUs are a lot harder than AI and compute GPUs. AI processors are so simple even Tesla can make them.
@@kazedcat Strange, I always thought AI used matrix multiplications, and that is what 3D cards are good at. So it should not matter?
@@AndrewTSq Graphics processing is more than matrix multiplication. Yes, AI is 90% matrix operations, but the other 10% is nonlinear functions plus data-management overhead. The bottleneck in AI is data management: sparse-data optimization, data-graph analysis, all these techniques are AI optimizations that deal with data management. Even using lower-precision data formats is primarily done to reduce data movement (see the sketch after this thread).
They weren't trying to beat them. That's obvious at this point.
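The lower-precision point from the thread above can be illustrated in a few lines; NumPy on the CPU stands in for the GPU here, and the 5.3 TB/s figure is the MI300X's quoted peak bandwidth, used only as an assumption for the arithmetic:

```python
# Sketch: halving precision halves the bytes every kernel must stream,
# which for memory-bound ops buys more speed than extra FLOPs would.
import numpy as np

n = 4096
a_fp32 = np.random.rand(n, n).astype(np.float32)
a_fp16 = a_fp32.astype(np.float16)
print(f"fp32 operand: {a_fp32.nbytes / 2**20:.0f} MiB")  # 64 MiB
print(f"fp16 operand: {a_fp16.nbytes / 2**20:.0f} MiB")  # 32 MiB, half the traffic

# For a memory-bound op, time scales with bytes moved, not FLOPs:
bandwidth_bytes_s = 5.3e12  # assumed peak HBM bandwidth
for name, arr in [("fp32", a_fp32), ("fp16", a_fp16)]:
    print(f"{name} read time: {arr.nbytes / bandwidth_bytes_s * 1e6:.1f} us")
```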
They are showing Nvidia H100 in the presentation but she doesn't mention it.
All she is saying is that they are better than the competition.
So you can simply pick whatever competitor with a bad product you want.
Always be aware that _everything_ in the US business environment is fraud. Everything.
No clue what she just said.
Show us the real performance, not these theoretical numbers.
❤❤❤❤
Jai Hinduja. Indian origins are the best, but why are Indians still lagging behind in AI chip leadership, which is a national security concern?
Is she the twin of the Nvidia CEO?
IIRC they're cousins.
Wow ❤
I/O
Huh? 😊
And they say ladies and tech don't mix.
No one with a brain says that.
Lisa is special. Don't get it twisted. Just look at the male/female ratio in CS classes.
AI chips are going to be full of security vulnerabilities.. coding for these is just in its infancy... can you REALLY trust one on your desktop just yet?
❤
🙂💯
I don't want to mention my company. We were forced to use the MI300X in addition to the H100 for our GenAI tasks. It's been a horrible few weeks because of the MI300X. Nothing works. ROCm is horrible and buggy. The MI300X is nowhere near the H100. Using the H100 was a piece of cake; the MI300X is like walking on nails. Finally we gave up on the MI300X and decided to stick with the H100. I don't know how they make up these imaginary numbers. The MI300X is bad, very bad: at least 5x slower than the H100.
MI300s are sold out as soon as they are made. If a YouTuber can get a rack of MI200s going on YouTube with worse ROCm than today's, and megacorps can get them going, then what does that say about you? Probably FUD anyway.
"ooh AMD is this, AMD is that, YAY AMD!
BUT NOBODY WANTS TO USE THEM FR...
SMH
Hey it’s me
PS5 Pro Let's Go!! lol
Where are these chips going? Stop assuming we are all electrical engineers and talk to the common man, aka the buyers.
They are not meant for the common man. These are used by companies developing AI models.
…The language they are using is directed to the demographic buying the product.
Common people aren't buying these...they're for those with massive pockets.
I thought the CEO, Jensen Huang, went trans for a bit.
@@greengoblin9567 he did
A war against anything and anyone associated with AI.
The T-1000s will be visiting your home first.
They are still freaking clueless. It's all about software. CUDA owns the LLM/DL/ML world, and Nvidia earned that right. So when AMD says "open-source support", it sounds to me like they still haven't decided to fork out the money and resources to own that software layer. Instead, they want the community to contribute for free so they can make more money selling their hardware?
Nvidia, despite all its faults, paid up to create, promote, and evolve CUDA for more than 15 years already. AMD still seems clueless about what it takes. Downright pathetic if you ask me.
That's a little harsh. AMD is spending a lot of money to write drivers and libraries to support ROCm, the difference is it's part of an open source ecosystem. Despite lack of software support in the past, AMD has been dominating recent supercomputer installations; if you have the best performance, many customers will write their own software to take advantage of it, and that's a lot easier to do when the available software is open source.
CUDA is a dead end.
@@skierpage Supercomputer installations for the CPUs or for the ATI GPUs? Care to provide data to support your claim?
"Many customers will write their own software to take advantage of it, and that's a lot easier to do when the available software is open source"? Humor me, give me an example. Do you realize that we are talking about HAL/CAL, not applications, right?
You sound like a gamer who is in love with his AMD Ryzen but actually has no idea how things actually work.
Until AMD-accelerated PyTorch, LightGBM, and Hugging Face models are available, it's all just empty talk.
@@stefanx5470 Tom's Hardware May 2023: "The Top 500 list of the fastest supercomputers in the world was released today, and AMD continues its streak of impressive wins with 121 systems now powered by AMD's silicon - a year-over-year increase of 29%. Additionally, AMD continues to hold the #1 spot on the Top 500 with the Frontier supercomputer"
"PyTorch on ROCm includes full capability for mixed-precision and large-scale training using AMD’s MIOpen & RCCL libraries." "AMD Instinct™ MI200 Adopted for Large-Scale AI Training in Microsoft Azure". Etc.
There's no need to be rude.
@@stefanx5470 The fastest computer in the world, Frontier, uses Epyc CPUs and AMD MI250Xs. El Capitan will use Epyc CPUs and MI300As.
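The mixed-precision capability quoted above is PyTorch's stock torch.amp recipe; on a ROCm build it lowers to MIOpen/rocBLAS instead of cuDNN/cuBLAS with no source changes. A minimal hedged sketch, with a toy model and objective of my own choosing:

```python
# Sketch: standard mixed-precision training step; identical source runs
# on ROCm (MIOpen) and CUDA (cuDNN) builds of PyTorch.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(32, 1024, device=device)
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = model(x).square().mean()  # toy objective
    scaler.scale(loss).backward()        # loss scaling avoids fp16 underflow
    scaler.step(opt)
    scaler.update()
    opt.zero_grad()
print(f"final loss: {loss.item():.4f}")
```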
❤