AMD Reveals MI300X AI Chip (Watch It Here)
- Published Jun 12, 2023
- At AMD's most recent product reveal event, the company's CEO Lisa Su unveiled a generative AI chip for the data center, the new MI300X.
Lisa Su shows how much better it is to have a CEO with an engineering background as opposed to some turkey with a MBA.
true
an mba, not a mba
Bingo, that's why I love Elon's, Jensen's, and Lisa's companies much more than others...
facts
Not just an engineer, she’s a PhD engineer
I always supported AMD as the first desktop I built was running on an Athlon and an AMD graphics card. Keep going Lisa you are killing it.
Me too, all the way back to the K6 😂
Me too. Although my first CPU was Cyrix 486, my first DIY upgrade was AMD 5X86 133. Also my first CPU with fan.
@@istealpopularnamesforlikes3340 King? yeah king of power consumption lmao. almost 300W at stock. slight OC will make it guzzle 400W+
@@Pixel_FX naaah, Intel is way more efficient in real-world daily-driver use cases. Unless you're forcing that chip to run at its turbo for most of your workload; then yes, AMD is better there.
@@Pixel_FX haha yes. I just can't cool down this thing called an Intel CPU.
I love watching the competition in the AI hardware and software space heat up the way it has. We are truly living during a special moment in the industry and in our species’ history.
Let's hope it's not our last moment.
@@Learna_Hydralis exactly!
I wonder where Intel is in all these domains.
There have been so many movies that show otherwise… 😢
His intent is not about the AI itself, but about the competition that steers technology toward greater advancement. A monopoly can stagnate technology; we just have to thank AMD for continuing to strive for advancement.
AMD is doing well. I trained some DL and RL models during my studies; memory is always a big issue. I believe APUs would work well for training RL models.
Subtitle error: M1300X ☞ Instinct MI300X GPU
ok
Sure nvidia, u wish
Ridiculous error to make, and to not fix it four hours later! CNET, you're acting like some fly-by-night operation.
AMD error hahahahahahahaha
Probably an AI error
This is great, but the message is missing the software part: how easy it is to integrate with frameworks, etc. Nvidia spent a lot of effort on CUDA, and AMD needs to show something similar and make it clear to developers that the barrier to entry is low. That's the key to adoption.
I am pretty sure they will write all the python libraries for their chips. With so much money involved there, it will not be an issue.
They write software models for their clients, like MS and Sony.
ROCm, HIP, it's all there.
They are working on ROCm, but it will take some time for it to be picked up.
@@mdzaid5925 Redshift, Blender Cycles, and Boris FX Sapphire tools are nearly done, for a start.
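For anyone weighing that barrier to entry, here's a minimal sketch of what the PyTorch side looks like, assuming a ROCm build of torch is installed (illustrative, not AMD's official snippet). ROCm builds reuse PyTorch's CUDA API, so existing "cuda" code runs unchanged:

```python
# Minimal check sketch: assumes a ROCm build of PyTorch.
# ROCm builds expose the HIP version and reuse the torch.cuda API.
import torch

print("torch:", torch.__version__)
print("HIP/ROCm:", torch.version.hip)             # None on CUDA/CPU builds
print("GPU visible:", torch.cuda.is_available())  # True for ROCm GPUs too

if torch.cuda.is_available():
    device = torch.device("cuda")                 # same device string as on Nvidia
    x = torch.randn(1024, 1024, device=device)
    y = x @ x                                     # matmul runs on the AMD GPU
    print("checksum:", y.sum().item())
```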
Them showing inference with Falcon-40B in bfloat16 gets me a little worried. Int8 inference is quickly gaining ground, and I would have really liked to see actual performance compared using it. Moreover, if you bought this thing, you would probably be more interested in actually fine-tuning or training models yourself, yet no information on training performance was given.
What's the difference between using bfloat16 and int4/8 in deep learning?
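Roughly: bfloat16 keeps fp32's dynamic range in 2 bytes per weight, while int8/int4 store each weight as a small integer plus a scale factor, halving or quartering memory again at some accuracy cost. A toy sketch in plain PyTorch (naive per-tensor quantization, just to show the trade-off; real int8 inference stacks are more sophisticated):

```python
# Toy comparison: bfloat16 vs naive symmetric int8 quantization of one weight matrix.
import torch

w = torch.randn(4096, 4096)                                   # pretend fp32 weights

w_bf16 = w.to(torch.bfloat16)                                 # 2 bytes/weight
scale = w.abs().max() / 127                                   # one scale per tensor
w_int8 = (w / scale).round().clamp(-127, 127).to(torch.int8)  # 1 byte/weight

err = (w - w_int8.float() * scale).abs().mean()
print("bf16 MiB:", w_bf16.nelement() * 2 / 2**20)
print("int8 MiB:", w_int8.nelement() * 1 / 2**20)
print("int8 mean abs error:", err.item())                     # the accuracy cost
```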
😂😂😂It's funny how Intel is so behind. AMD and Nvidia are already into AI stuff, while Intel has just started playing with GPUs.
Intel is instead now focusing on manufacturing/fabrication, which is equally important.
FYI, Intel just recently launched their quantum CPU, if you didn't know. Don't worry, they won't be very far behind; sooner or later they'll build their very own AI. Just wait and see, bruh. Lol 😁😉
The difference between the AMD chip and the Nvidia chip is smaller than the difference between their CEOs. Practically, the AMD CEO looks like the female version of the Nvidia CEO.
They are related. Found this online “Lisa Su's own grandfather is actually Jen-Hsun Huang's uncle”
@@digranes1976 if thats true, this is somehow hilarious.
@@teekanne15 it is in fact confirmed and true xD The Verge fact-checked it in a 2020 article haha
Let's prompt AI to generate them dancing together.
at least no long pauses with Mama Su
How long until we find out all these AMD keynotes have been AI-generated?
Fix your title CNET! I really thought AMD was releasing a new chip called M"1300"X.
😂
I suppose we should be thankful for any kind of competition at this point, we need more market pressure to keep NVIDIA from drifting towards more obscene pricing. But it looks like Jensen isn’t concerned by AMD, especially in the realm of AI. I’ll always bite the bullet and buy NVIDIA mainly because I’ve never had an AMD card that didn’t have issues, but especially because CUDA is king
I never had an AMD card that had issues either.
I've had both, Nvidia and amd and never had problems with either one of them.
You're just buying into the marketing, and that's fine as long as you know you're doing it.
Best thing would be to actually be informed enough to know the details of your purchasing decision though.
The only reason to buy Nvidia over AMD is for top-of-the-line cards for ray tracing (4090, maybe 4080), if you need it for specific productivity tools, or if you really place a lot of emphasis on DLSS 3/frame generation.
In pure rasterized gaming performance at the low to mid tier, buying Nvidia is nothing more than a dumb decision.
My experience too. I've bought and managed dozens if not hundreds of systems and I can always count on AMD having a higher level of required support. Their CPUs like Threadripper are amazing but chipsets are not reliable and GPUs even less. My brother thought I was biased so he swapped over to an AMD GPU and had nothing but problems. He switched back to GeForce shortly afterwards and it's been rock steady ever since.
@@xjohnny1000 which gen was he on? Never had any problems with my rx 480 whatsoever and based on tech reviewers RDNA3 is their most solid and reliable gen yet.
@@Humanaut. I can't speak for the latest cards but I think it's really a software/driver problem rather than hardware. AMD also has a long history of poor thermal management which probably doesn't help either.
@@xjohnny1000 there's this DDU software that you must run after swapping to an AMD GPU, otherwise the GeForce and Adrenalin drivers are going to battle each other.
Seems like Nvidia has good drivers, but actually it's the users that are just dumb.
Someone should make drivers for it to support gaming, and build a GPU card with it that's capable of gaming (not only AI, but gaming too).
Lisa Su seems to be getting younger and younger.
it's that next gen amd liquid coolant she be drinking
In the near future she'll be immortal with the help of her own AI. lol 😁🤖
She uploaded herself entirely onto an MI300X chip and she has 60 GB left
CNET title in error - it's MI300X, not M1300X - I, not 1
yeah, and it's also very clear why they'd make such a mistake...
From the lack of numeric values, it can be assumed that everything except memory is worse than on the H100.
But still nice that someone is actually trying to compete with Nvidia.
maybe on price/performance ratio they are better positioned
CPU & GPU are more closely connected as well
haha omg, your comment just saved me... while I was just listening to the audio of this clip I totally forgot that it's AMD... until I read your comment I was like... what? Compete with Nvidia? I thought this was Nvidia. Oh... it was not... haha omg :D
@@samyy974 People buying these chips don't care about price. They care about speed and efficiency (efficiency not to save money, but because of power limits in a facility)
Ye 7900xtx vs 4090 shows that already
This is the MI300X; the MI300A is the one with a CPU on it for AI inferencing, not the MI300X.
Consumer and Prosumer version? Why not? 😢 Or when?
No matter who wins between them and Nvidia, the real winners laughing all the way to the bank will be TSMC and ASML.
Definitely an easier keynote to sit through than Jensen's ramblings about which computer is the heaviest. I don't know how these new chips will stack up against Hopper, although even if they're not as good, companies are going to rush to buy them up because of the AI boom.
This is competition for companies, not for regular gamers. Although an AI server can be connected to online AI bots and such, so it could enable future online gaming, like MMOs.
Might be added as an add-on processor, like PhysX, to drive AI in games and accelerate VR use. A subdued population distracted by games is the best population. WEF/BlackRock maybe behind all this?
02:47 ChatGPT with Nvidia is still faster when generating language
Piyush Arora (6 hours ago): They ran the Falcon 40B model, which is an open-source competitor to ChatGPT. For perspective, it's heavy and doesn't run on my 4090 GPU (24 GB VRAM with 80 GB DDR5 RAM). The system goes out of memory after a few commands.
@@littlelostchild6767 maybe because chatGPT is better
@@littlelostchild6767 ChatGPT uses 175 billion parameters and Falcon only 40 billion
yes, bringing LLMs to local use, just in case ChatGPT shuts down.
Correct the typo in the title first, please.
But can it work with PyTorch or TF?
The AMD ROCm stack works with these quite well.
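And for the TF half of the question, a quick sketch, assuming AMD's tensorflow-rocm build is installed (the PyTorch equivalent is further up the thread); ROCm devices just show up as GPUs:

```python
# Quick device check, assuming the tensorflow-rocm package is installed.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")   # ROCm GPUs are listed here
print("GPUs:", gpus)

if gpus:
    with tf.device("/GPU:0"):
        x = tf.random.normal((1024, 1024))
        y = tf.matmul(x, x)                     # executes on the AMD GPU
    print("checksum:", float(tf.reduce_sum(y)))
```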
how much ? under $1000?
Damn these rocks that they tricked into thinking are getting pretty good.
Can it run Gollum though?
Does anyone have experience using amd hardware for deep learning?
The more you buy, the more you save.
You don't have to understand the technology, you don't have to understand the strategy!
Just in time, when smaller language models are starting to run circles around large models. In the demo, it's the training that needs all the power.
Wow, very exciting times ahead
Hoping the RX 7995 XTX 3D is like this: 88B transistors, with 36 GB VRAM or 24 GB HBM3.
So how is this product gonna work outside the data center?
We are excited to show you: SKYNET Mi300X
Funny joke
@@SleepyRulu everything is funny until a T101 rolls out of the factory.
@@novemberalpha6023 sure
did someone just do image to image of Nvidia's Presentation
"This is going to work out great for us, or terribly, because we are all in." - Jesen Huang 2017
how much will one of these cost?
@n n :( Maybe I'll sell my kidney for one.
@n n I want to be able to run high level LLMs. Currently I can only run low level stuff, even on mid to mid-high tier hardware. I wouldn't mind setting it up as a server if I had to.
@n n Do you know of any old server grade hardware that might be useful to consumers to get now? Like maybe the stuff they're replacing?
I heard something about the p40 accelerator being useful for llms.
But does the generative AI know how to play Crysis?
You need the uncensored models for that
This Generative AI *IS* the Crisis.
@@novemberalpha6023 That's only the half of it.
Miners seeing 5.2 TB/s bandwidth like 🤯
GPT-3 has 175B parameters
Would be really nice to have 1.5TB dedicated to training models
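For scale, here's the usual back-of-the-envelope training math (a sketch using the common mixed-precision-plus-Adam rule of thumb of roughly 16 bytes of state per parameter; activations and real-world overhead come on top):

```python
# Rough training-state memory: fp16 weights (2 B) + fp32 master weights (4 B)
# + Adam moments (8 B) + fp16 gradients (2 B) ~= 16 bytes per parameter.
def training_state_gb(params, bytes_per_param=16):
    return params * bytes_per_param / 1e9

print(training_state_gb(175e9))   # GPT-3-sized: ~2800 GB of weight/optimizer state
print(training_state_gb(40e9))    # Falcon-40B-sized: ~640 GB
print(2800 / 192)                 # ~15 MI300X-class (192 GB) devices for state alone
```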
AMD needs something as good as or better than what CUDA is today.
CUDA is already nearing its end. PyTorch is targeting ROCm via LLVM directly, and other AI frameworks are doing the same, meaning AMD's ROCm is supported by them without any CUDA remapping. Frameworks and developers are already making a move to get rid of the CUDA dependency...
Modular AI is solving the software issues around AI. So NVIDIA will likely lose its moat there
CUDA is irrelevant for AI; it's just a BS lock-in Nvidia used to control the design industry.
AI will not have anything to do with GPU compute or CUDA within the next several years.
However, for now, having an NVIDIA GPU makes learning and working with AI on a small scale much easier, correct?
I believe AMD would get a lot of traction if they made it extremely easy to set up a computer using Linux or Windows to learn and work with small/medium AI tasks.
Do you disagree?
Correct your title and description to MI300X not M1300X lol.
Wow the title of the video is wrong lmao
NVLink-C2C is the future; it will address the latency and power issues of AMD's chiplet approach.
Yes, but does it run Tensorflow 2 ?
I guess it could run, though. Why not just try it?! Lol 😁👍
@@DarkWizardGG because I'd have to buy the card to find out ?
@@NisseOhlsen what card are you talking about then, a GPU?! And you said "TensorFlow 2"; is there a version 2 already?! 😁
Cnet... its MI300X not M1300X 😅
Why do I keep thinking that she must be related to Jensen from Nvidia? Honestly, I ship it
Because they are actually relatives
@@DanishBashir-sz6vs what that’s crazy
Awesome
All LLMs are capable of writing poems. Try asking it to solve some differential equations...
How do quantum computers compete with these AI chips?
How does it fare against the Nvidia GH200?
I believe the title should be "MI"300X, with an "I" (as in Instinct) instead of a one.
Hey is it just me or is AMD killing it again. I thought they wouldn't be able to keep up with team green but hey, this sounds pretty reasonable at first glance. Another good call for Lisa perhaps?
Title of video is Incorrect. It's MI300X
everyone making GPUs for AI is screwed as more photonic systems come online
But can it run Minesweeper?
I guess not. For now, only that bot from Nvidia could do that. 😁🤖
It's M"I"300X, not M"1"300X
Jeez... the chip name is MI300X, not M1300X, and I waited the entire video for them to introduce a more advanced version of the 300X lol. You caught me, CNET bot!
does A.I say AMD is a good investment at this price?
The Nvidia CEO and AMD CEO look like they are a couple.
lol seriously
They need matching leather jackets
I heard somewhere they are actually relatives
They’re cousins once removed!
Imagine if they merged and had a child, they'd be the ultimate CEO
Mama Su announced that AMD won't try to shake the uniqueness of 老黄's (Old Huang's, i.e. Jensen's) GPUs, but has taken a different path instead.
Mama Su is stacking HBM3 and aggressively expanding bandwidth; this is simply a godsend for "alchemy" (model training). Still, it would be perfect if the chip architecture were also optimized for large language models.
Does Chinese not have a word for green?
@@cadetsparklez3300 pretty sure 老黄 (lao huang) is just a nickname for Jensen Huang; it comes out as "old yellow" due to Google Translate shenanigans
@@cadetsparklez3300 🤣🤣 I think your reading of it isn't wrong either.
But Nvidia's strength is the GPUs needed to train LLMs.
Sad, because the launch is slow and production wasn't ready...
I dont like how she has the same haircut as Jensen. It throws me off XD
Lisa is COOL
!price MI300
Wait, so does the MI300X AI chip have ChatGPT installed on it? Because they only showed it writing a poem 😅
I think Microsoft has exclusive rights to the ChatGPT code base and data. If no one else can run ChatGPT but Microsoft, a public demo using ChatGPT would be less useful for all other customers; I would think they ran a demo privately for Microsoft at some point. Hugging Face is an AI community that promotes open-source contributions, so all customers can run that LLM for their business and run their own benchmarks to confirm. I believe ChatGPT's newest versions perform the best overall, but open source has closed the gap significantly. The amount of progress the open-source community has made lately is nothing short of remarkable; it will be interesting to see where we will be in 6 months.
no
They ran Falcon 40B model, which is an open source competitor to ChatGPT.
For perspective, it's heavy and doesn't run on my 4090 GPU (24GB VRAM with 80GB DDR5 RAM). The system goes out of memory after a few commands.
@@Piyush.A I guess you need a 32 GB VRAM GPU to run that Falcon 40B. Yes, it's pretty damn heavy for an LLM like that. Me, I'm using a lesser LLM like WizardVicuna 13B. 😁🤖
@@DarkWizardGG No, a 40B-parameter model cannot run on 32 GB of VRAM unless you're running the quantized 4-bit version. For the full 16-bit version you need about 120 GB of VRAM: 80 GB for the weights alone and another 40 GB of overhead to run the model itself.
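The arithmetic behind those numbers, as a tiny sketch (weights only; the KV cache and activations are the extra "overhead" mentioned above):

```python
# Weight memory for a 40B-parameter model at different numeric formats.
PARAMS = 40e9

for fmt, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{fmt:9s}: {PARAMS * bytes_per_param / 1e9:5.0f} GB of weights")
# fp16/bf16: 80 GB -> fits one 192 GB MI300X, nowhere near a 24 GB 4090
# int4:      20 GB -> why the 4-bit quantized version fits consumer cards
```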
Nvidia should have a reason to be concerned now that MI300X Instinct performance is out. While two Hopper H100s over NVLink only get 3K+ TFLOPS, the MI300X can do 5,218 TFLOPS on a single GPU; even two combined H100s can't beat a single MI300X. Watch out, Nvidia, AMD is coming...
:( I want AMD to do well, but this is literally the same speed as, if not significantly slower than, 2x 3090s, not even 4090s.
How do we know it isn’t her secretary behind stage typing the poem?
It's probably a pre-recorded video of it writing the poem, but sped up to make it look like it wasn't a human.
facts, there's no way they wanted to deal with an outage or a wrong response
always 2nd best as usual
good
Can I use that chip as the brain for my AI wife?
A time will come in the future when everything, even mundane human tasks, can be done by AI. How will human civilization function? What will governments do? What will people do for jobs or to earn a living? Because you need money for everything.
She sounds very articulate
She's always been articulate, bro. Lol 😁👍
Not only is this an incredible piece of technology, it could quite literally be one of the biggest threats to millions of people's lives and jobs. Maybe we should stop and think about one simple thing: just because we can do a thing doesn't necessarily mean we should. I can't believe how many people have failed to realize the dangers of A.I. and the threats it brings along with each and every advancement in technology. It's truly terrifying to think what this world is going to be like in the next few decades. Yes, you should be afraid, very afraid, of the wrong people using this technology to control and manipulate all of humanity, or even destroy civilization as we have known it. I hope enough people wake up and realize the truth before it's too late to do anything about it. This is not a good thing, and I can hardly believe anyone doesn't recognize it...💯
She’s Jensen with makeup
@@N_N23296 I had completely forgotten about it
ChatGPT / Language models are the least impressive part of AI. AI is not overhyped - the OpenAI company IS overhyped. They are a flash in the pan
Intel = CPU
Nvidia = GPU
BUT AMD = CPU + GPU lol
chances are ChatGPT cannot run on any single server
GPT4ALL with some uncensored models are OK, no GPU needed.
A new AI chip from AMD is a win-win for everyone; I hope Western democratic countries monopolize these things. For image recognition, astronomy, autonomous systems, cybersecurity, warfare, simulation mechanics, medical fields, weather, chemistry, materials science, satellite constellations, coding, big data analysis, photo and video editing, games, speech and voice-to-text recognition, VR/AR education, robotic systems, logistics, etc.
Can it run Crysis tho?
Salute to Lady Lisa Su, real awesome engineering with high leadership level
Just add RAM slots to GPU instead of onboard RAM, so that users can add whatever amount of RAM they want.
@@N_N23296 Well, about space, now that most graphics cards take 2 slots and some even 3 slots, they could make the cards double-layered to have the space for RAM slots. And about the speed, I don't know how much slower slotted RAM is compared to onboard RAM, but maybe, it could be a two-tier system. That is, a GPU has on-board RAM and slots for RAM. The GPU would use the onboard RAM first, and only if there is not enough onboard RAM, it then tries to use slotted RAM. I know it would be slower, but currently, if you run out of VRAM, the process just crashes or doesn't get executed. I think slower execution is better than not being able to run at all.
lol wut. 50,000 MB/s vs 5,300,000 MB/s for HBM3. There's a crystal clear choice here.
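That gap matters because single-stream LLM decoding is usually memory-bandwidth-bound: each generated token streams all the weights through memory once. A rough upper-bound sketch (ignoring batching, caches, and compute limits):

```python
# Bandwidth-bound decode speed: tokens/sec <= bandwidth / bytes read per token.
def max_tokens_per_sec(bandwidth_gb_s, weights_gb):
    return bandwidth_gb_s / weights_gb

print(max_tokens_per_sec(5300, 80))  # HBM3-class, 80 GB bf16 weights: ~66 tok/s
print(max_tokens_per_sec(50, 80))    # DDR5-DIMM-class bandwidth:      ~0.6 tok/s
```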
go go RED team
You want a better-engineered product?
Put an engineer at the helm.
She looks like the Nvidia CEO
I trust AMD; they should put this in a laptop or a handheld Windows or Linux PC.
???
How stupid. Laptop connection.
It would be better to fit it in your smartwatch so your hand turns into AI.
Currently this chip will be used in machines like mainframes or banks of servers that can handle a mountain of data; laptops are not used for that purpose. If someone put it in a laptop, they would have to cram a lot of current technology into that laptop to keep up with the chip. That would be nearly impossible, as the cooling solution alone would take considerable space, which is hard to find in something as small as a laptop. Even if it happened, it would not only defeat the purpose of the laptop but also push its price way beyond the purchasing power of potential buyers.
It will run best on personal LLMs for personal research. Listen to the AMD CEO: she's bringing LLMs to individuals.
lol for those of you who are paying attention
It's a relief that AMD preemptively did what Nvidia, drunk on money, wouldn't do. 192 GB/card!
The AI world is getting better and better.
It will mess you up completely
@@cybercrazy1059 for the good
MI300X, not M1300X... it was literally the first thing out of her mouth........
It's good, but it's not nearly as good as Nvidia's Hopper.
❤
Dr Dre writes better poetry
It doesn’t mean anything without CUDA support
But can it run CUDA? 😢
Don't use AMD for training. It's not compatible most of the time, and you'll waste endless hours trying to debug.
@@clehaxze Isn't AMD notoriously bad for this? Their software support is so bad some are claiming the defects are in the hardware since the bugs continue. I hope AMD takes the software stack seriously and improves support.
cuda is overhyped.
@@JackRoyLith very good reason. If you were in the industry you'd understand why
@@marshallmcluhan33 Not on Linux. AMD on Linux is better than NVIDIA because both the kernel-space and userland drivers are open source. AMD is also much better at adopting open standards like GBM and Wayland, while NVIDIA spent 10 years pushing their own EGLStream, which the community simply hates.
The compute stack is still behind NVIDIA's, but it works nevertheless, and there's sane out-of-the-box PyTorch support.
Blah blah when can I get one for my pc on Amazon?
Never; these are not sold on Amazon.