RYZEN AI - AMD's bet on Artificial Intelligence
- Published Jun 1, 2024
- With Ryzen AI, AMD introduced the first artificial intelligence engine on a x86 chip. In this video we will take a look at AI computing for consumers, the AMD Ryzen Phoenix APU that contains the first AI Engine and talk about the future of AI in general, from hardware to software.
Follow me on Twitter: / highyieldyt
0:00 Intro
0:50 AI Engines explained (vs CPU & GPU)
4:03 AMD XDNA & Ryzen AI
6:13 AMD's AI focus & Phoenix
8:13 x86 AI Hardware & Software
11:10 Future AI use cases - Science & Technology
The close business relationships between AMD, Microsoft, and OpenAI are starting to make a lot of sense.
Fully agree!
I didn't realize AMD and MSFT were super close - I missed that manifestation. Do you have any events or partnerships I could go lookup to get context?
More like a cabal
@@rkalla Does CES ring a bell? They publicly announced their love relationship just a few weeks ago.
@@rkalla AMD is in Azure and has been in the Xbox for a few generations now.
I was definitely considering a Phoenix APU before knowing about Ryzen AI, and my excitement only increased hearing this news. AI upscaling for video content is the thing I'm most excited about, because there are so many low-bitrate, low-resolution videos out there. The potential for conferencing is also huge, since webcams probably won't get any better (if the covid home-office years didn't get OEMs to improve their webcams, nothing will).
But any video card that is less than 5 years old can already do this... why want it in the CPU as well?
@@juliusfucik4011 because most ultrabooks & office computers don't have any dGPUs? Also, running an "AI Assistant" or any other AI task on a GPU is for sure not the most efficient way to do it on laptops.
I think this product is part of the AMD and Microsoft cooperation. Microsoft wants to try AI-powered Windows on mobile devices (the Surface lineup), and AMD wants to try their AIE in real-life workloads before launching it on other segments, where it currently has little to no use.
Potentially easier analogy: a CPU core is like 4 math professors, a GPU core is like 1000 promotional pocket calculators.
I hope there will be a common instruction set for matrix operations (what's in all of these AI-branded coprocessors) so that developers could just use it not specializing for a specific hardware implementation.
That's super important, otherwise it won't take off. We don't need closed-source shenanigans.
Microsoft's take is DirectCompute.
I don't think these instructions are needed, as matrix addition and multiplication are fairly generic. It suffices to have good libraries such as BLAS and IPP that make optimal use of the existing instruction set.
Online training takes only a little computational power. It is the initial training that is expensive. For that, we have GPUs.
The AI cores are only meant for running the network forward for inference. This means no feedback, gradient calculation and weight adaptation is needed.
Fun fact: if you quantize a typical neural network from floating point to integer, you can get 30+ fps on a single core of a Raspberry Pi 4. Inference just isn't that expensive.
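The quantization trick mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration of post-training weight quantization (float to int8 via a per-tensor scale factor); real frameworks also calibrate activations and handle zero-points, which are skipped here.

```python
# Minimal sketch of post-training weight quantization (float32 -> int8).
# Illustrative values only; real toolchains do much more (activation
# calibration, per-channel scales, zero-points).

def quantize(weights, num_bits=8):
    """Map floats to signed integers via a per-tensor scale factor."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.82, -1.27, 0.003, 0.51]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# The rounding error stays below one scale step per weight:
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Inference then runs on the small integers, which is why a Raspberry Pi core can keep up: integer multiply-accumulates are cheap, and the tiny per-weight error rarely changes the network's output.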
A library or SW layer is where matrix operations belong. And it needs to be optimized for the specific hardware implementation, including compute cores, cache sizes, DRAM sizes and bandwidth and so much more.
Take a look at how very large matrix multiplies are done. They are not done in the simple way that would take N^3 multiplies and ignore the HUGE differences between each level of the memory hierarchy.
Standardization is helpful, but not at too low of an abstraction level that prevents optimizations.
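The point about the memory hierarchy can be made concrete. Below is a toy sketch of the cache-blocked (tiled) loop order that BLAS-style libraries use; in pure Python the tiling brings no real speedup, the point is purely the access pattern that lets each tile stay resident in a fast memory level while it is reused.

```python
# Illustrative sketch: naive vs cache-blocked (tiled) matrix multiply.
# Both compute the same result; libraries use the tiled order so that
# sub-blocks of A, B and C fit in cache while they are reused.

def matmul_naive(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, tile=2):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    # Walk the matrices tile by tile; each (tile x tile) block is
    # reused many times before moving on.
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul_naive(A, B) == matmul_tiled(A, B) == [[19, 22], [43, 50]]
```

This is exactly why the comment above argues for standardizing at the library level rather than the instruction level: the optimal tile sizes depend on the specific cache and DRAM configuration of each chip.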
Let's hope the Scalable Matrix Extension for ARM delivers. Coming in Armv9-A. Something they should have added some years ago IMO.
Great video! First time here but I'm subbed. Loved the format and the info given. Well done!
Manual rotoscoping in video editors would take from a few minutes to hours depending on the complexity of the scenes, and I was surprised to see an AI engine pull that off in seconds.
Mindblowing stuff. Definitely convinces me to continue waiting for 7040 availability in thin and lights. Aside from all the potential applications, battery life improvements should also be significant.
You've absolutely nailed it on the need for strong software support.
I looked into it, and apparently it has its own special API/SDK required to utilize it. This is a big disappointment; they should allow it to plug into DirectML (this is how AI acceleration works on Xboxes, and it's great). By integrating it into existing APIs, AMD would have a large amount of support out of the gate and avoid further fracturing the programming ecosystem.
I mean, every specialized hardware implementation needs its own SDK, handling the specifics. That alone doesn't prevent it from plugging into DirectML.
@@leeloodog DirectML is a pretty high level abstraction and one that's Windows exclusive at that. You don't build hardware directly to that standard. There is always going to be a low-level SDK that handles the hardware access.
Now of course, it could be handled differently, DirectML could be supported from the get-go, which is a shame that they didn't do that, I agree.
One reason I can think why DirectML is not a focus for AMD is because it's not cross platform, and doesn't work on Linux.
Why is this important? AI computation in enterprises is usually done on Linux, and enterprises are one of the biggest consumers of AI compute.
There is irony in this too; I think AMD is making some of the same mistakes that Intel made whenever they got large and powerful. For now this stuff isn't going to be very useful until all chip makers get on board working on a standard.
@@dennisp8520 Although from what I can tell, PyTorch, TensorFlow and ONNX are all supported by the Xilinx AI framework as frontends. So really, there is no huge need to support DirectML as middleware between frontend frameworks and the hardware backend.
Quality content, thank you so much.
Would love to see more of you, especially on new chip paradigms, on the research side of things.
Great video, very informative. These are the kinds of videos I like to educate myself on the future of computing. Coreteks is a great channel, but his niche is mainly for the future of gaming and graphics, which is less relevant to what I need to know about.
My best idea for AI in games is AI vision and hearing systems for NPCs.
At the moment in gaming, to take a stealth game as an example, the enemies have vision and hearing cones: dumb pure-distance mechanics triggering a behaviour branch if the player is close enough or loud enough, usually augmented with simplistic rules around crouching, movement speed limits, baked shadow regions and 'special grass'.
Replace that with a quick and dirty low resolution rendering of what the NPC is looking at using the GPU.
Now run that image through a trained neural network.
Suddenly this opens up the possibilities of real effects from movement, lighting and camouflage.
Literal camouflage, you're trying to fool the pattern matching algorithm in the machine in exactly the same way we try to fool the pattern matching algorithm situated between every humans ears 👉🧠
Same with audio: you render the sound at where the NPC is, run it through another NN, and see if it meets a threshold to trigger the NPC's AI (too many things named AI) behaviour branch.
The game-design trick is feeding back to the player the level of danger they are in, without hokey constructs like the 'interest/danger' markers in games like Far Cry.
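The detection idea sketched in this thread can be reduced to a toy example. Everything below is hypothetical: the "detector" is a stand-in for a trained network (here just a normalized correlation against a "player-like" pattern), where a real system would render a low-resolution frame of the NPC's view on the GPU and run an actual network on the AI engine.

```python
# Toy sketch of NN-based NPC perception: score what the NPC "sees"
# and trigger its behaviour branch only past a threshold.
# detector_score is a hypothetical stand-in for a trained network.

def detector_score(pixels, template):
    """Normalized correlation between the NPC's low-res view and a
    'player-like' pattern: 0.0 = nothing seen, 1.0 = perfect match."""
    dot = sum(p * t for p, t in zip(pixels, template))
    norm = sum(t * t for t in template) or 1
    return max(0.0, min(1.0, dot / norm))

def npc_reacts(pixels, template, threshold=0.6):
    # Camouflage works exactly by pushing this score under the threshold.
    return detector_score(pixels, template) >= threshold

player_pattern = [1, 1, 0, 1]        # what the "network" was trained on
in_the_open    = [1, 1, 0, 1]        # player clearly visible
camouflaged    = [0.2, 0.3, 0, 0.1]  # player blends into the scene

assert npc_reacts(in_the_open, player_pattern)       # NPC spots the player
assert not npc_reacts(camouflaged, player_pattern)   # camouflage fools it
```

The same threshold structure applies to the audio path described above: render the sound at the NPC's position, score it, and branch only if the score clears the bar.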
Basically, AI needs to multiply the weights set by a model across the whole network to figure out the best-fit output, but it doesn't need high precision, since it only needs to determine a rough estimate of its certainty.
Everything can be approximated with randomized varying-depth ReLUs with proper regularization: standard sparse linear learners, no complicated solver needed. Algorithmic complexity is far, far more important than hardware power.
A highly interesting and insightful video! Thank you.
Always happy to see a new upload. Thanks for covering this!
As a gamer, I'm excited to see how good AI upscaling becomes. DLSS 2/3 has already shown a lot of promise; now I'm just waiting for AMD to release their version.
I am more excited about neural rendering (neural radiance fields), it is not real-time on current hardware, but with the right dedicated hardware it will be soon.
Nice video shot, thanks for sharing with us, well done :)
Another top video!
Again, a high quality video!
ah yes, the APU that I've been waiting for. Not yet out there but looking very promising. Any idea of when it will be out? Also, are those AI cores also supposed to be used for something like FSR, such as in the way that Nvidia uses AI cores in its GPUs to sharpen and upscale stuff? Thanks and cheers.
If AI cores are to be used for FSR, then FSR will not work on the vast number of GPUs that it currently works on. I do not think AMD would go in that direction for the time being.
The RDNA3 cores come with their own smaller AI cores which are used for FSR, and FSR in general doesn't even need AI acceleration IIRC; that's why it also runs on older GPUs.
Phoenix should be out in late Q1, but thinking back to Rembrandt last year, it might take AMD longer. Let's hope the rollout will happen faster this time!
AMD's Phoenix Point APU for notebooks will arrive in March 2023. It was announced by AMD at CES 2023.
AI will just learn what the best and fastest way is to make use of the GPU or CPU, and it doesn't even have to send data to AMD or NVIDIA. It is a baby learning machine. It may work or not: GCN 5 had primitive shaders that never got used; Radeon had tessellation way back in the ATI Radeon 8500 in 2001 and it was not used; Nvidia PhysX was short-lived; the 4870X2 had a dual GPU with a PLX chip in between that was never really used; Intel had AVX-512 in CPUs and now it's removed, while AMD only has it now in the 7000 series. Nvidia's RTX 2000 series had AI and it learned how to better use DLSS and optimize drivers, but AMD has stronger hardware, so this will help the driver team a lot. AI will need some 3-4 years to make proper use of it, IF it works like people think.
I feel like, outside of notebooks and mobile computing, by the time specialized hardware is preferable for handling AI tasks, that discrete accelerator cards will be the market standard. Either that, or GPUs will market AI accelerators on their boards and make use of the insane bandwidth PCIe 5 gives them.
Integrated AI cores will be more or less like integrated graphics in the future for x86/PC applications.
Nvidia already sells GPUs that do this.
Very good video. Most of the information on XDNA is exact, I mean not overestimated!!! All the animations of the AI Engine are really nice, compared to my poor PPTs!
great vid as always, thanks
GPU-style parallel processors are very nice-to-haves for digital artists such as musicians, video editors, and animators.
Thank you and a great overview. I'm getting into AI machine learning and am hoping to utilize this new feature for training models. Do you have any resources on how to utilize Ryzen AI for machine learning model training?
This is what I'm waiting for. Hope it will be available in some mini PC form. Also hope there will be an API available for XDNA in Linux.
With how well AMD is doing with their GPU drivers on Linux, I think there's a good chance.
Sadly, a year later, Linux support is still missing. Did you get a mini PC though? I have a Framework 13 with Phoenix myself, although not for the AI engine but more for the battery life and incredible efficiency.
@@VideogamesAsArt I did not get one, as I was busy with other things, and since there is no support for XDNA I will probably wait for XDNA2. There has not been as much progress as I would have hoped. I still use an i7-3770, so over a decade old.
Just looking at AI art, this makes me super optimistic about the gaming industry. The environments that AI will create will be incredible.
Or they will be hellish.
ah fuck it what could go wrong?
Let's gooooo!!!
AI engines and dedicated AI capabilities are already available on Apple and Intel CPUs. Apple has dedicated IP for AI offloading, and Intel has TMUL instructions in Alder Lake for AI operations. It's just a matter of which one has more application support and which one is more effective in terms of performance and power consumption. Secondly, as you said, Meteor Lake has a dedicated AI engine on the CPU, and Raptor Lake has onboard AI IP.
Sorry if you already mentioned this, but are the AI cores only for AMD to use, for FSR or something, or are they something users can use for machine learning or whatever?
And how do these compare to a GPU? Like, can I do as much as on a GPU with this, or how much better or worse is it?
Again, if you already mentioned this I'm really sorry, but I'm too tired to rewatch it today.
It’s not meant for FSR, those cores are inside the GPU. In theory you should be able to use it for machine learning code.
@@HighYield ah crap just asked this question haha.
CPUs, GPUs and the AI Engine all are Turing complete, so technically they all can execute the same tasks provided they are programmed for the respective processor. What differs is the speed at which they can do certain tasks. Linear, logic heavy code will perform best on CPUs. General purpose parallel number crunching will be best on GPUs. Specialized parallel matrix math will perform best on the AI Engine.
Comparing it to the rest of the Phoenix APU (Ryzen 7 variant), the integrated 780M provides up to 8.9 TFLOPS of FP32/17.8 TFLOPS FP16 (possibly 35.6 TOPS Int8/71.3 TOPS Int4? The ISA manual states support for Int8/Int4 matrix math but not packed acceleration of it. I would assume this is carried over from the Xboxes, but I can't be totally sure). The AI Engine hits 12 TOPS (unspecified, assuming Int4).
While it might sound like this makes the AI Engine pointless, the real story is in the perf/watt. The AI Engine according to AMD has power usage measured in milliwatts, while the 780M could easily pull 20W+. Thus, the AI Engine is great for ultrabooks that cannot afford to be blasting the GPU like that.
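The perf/watt argument above can be put into rough numbers. This is back-of-envelope arithmetic using the comment's own figures; the power values are assumptions (a plausible 20 W sustained draw for the 780M, and 0.5 W standing in for the "milliwatts" AMD quotes for the AI Engine), not measured data.

```python
# Back-of-envelope efficiency comparison using the figures quoted above.
# Power numbers are assumptions for illustration, not measurements.

igpu_tops  = 8.9 * 2   # 780M: 8.9 TFLOPS FP32 -> 17.8 TFLOPS FP16
igpu_watts = 20.0      # assumed sustained draw under load
aie_tops   = 12.0      # quoted AI Engine throughput
aie_watts  = 0.5       # assumed "hundreds of milliwatts" class draw

print(igpu_tops / igpu_watts)  # roughly 0.9 TOPS/W for the iGPU
print(aie_tops / aie_watts)    # roughly 24 TOPS/W for the AI Engine
```

Even with generous assumptions for the iGPU, the dedicated engine comes out more than an order of magnitude ahead per watt, which is the whole case for it in ultrabooks.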
AI is the future. Even now in its infancy it helps me a bunch. If it became 100x better at assisting me, DAMN! It'll do all my work for me.
7:39-7:41 audio blip from the audio editing on “transistor”
I'm pretty sure I just had a horrible microphone pop at this point and tried to remove it; the result is a few missing frames and the audio blip. Why are you paying so much attention? Can't even make my mistakes in peace ;)
Good catch tho!
@@HighYield I’m a professor who pre-records some lessons, so I’m all too familiar with replaying 1 second of audio a dozen times to fix pops, blips and doots :P
Great video by the way, kudos on being informative and entertaining!
This was GREAT!!
what a great channel. i am so glad I found it..
Same :D
An AI engine coupled with a small FPGA on chip could cover a lot of inefficient tasks that would otherwise burden the GPU's or CPU's main task set, correct?
Yes, I’d say so too
Man, I wish I had something like this when I was taking a class on AI last year. Some code would take several minutes to run; this probably would have cut that down a bit. If compilers can take advantage of such features on the silicon automatically, it will have huge implications for students. Additionally, once AI cores are common in most laptop chips, universities can adjust curriculums to teach CS students how to leverage them before they graduate.
Great analogy using the cooking! AI is here to stay, and this is only the beginning. There will be more and more AI in the future, behind the scenes; you won't even know it's there, but it will make tasks easier and better. Of course I'll get a Phoenix APU when they are released; the excitement is on the edge, not in the back row.
Microsoft will be requiring this and needs at least 40-50 TOPS of performance for it to be a smooth AI experience with windows 11, presumably with the upcoming Co-Pilot.
i wonder if it will help at game resolution upscaling and frame interpolation
As an FPGA engineer I've used DSP cores to accelerate certain algorithms in hardware and upon hearing that MAC is the basis of AI I pictured the Leonardo DiCaprio pointing meme, where DSP cores are pointing at AI cores
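The MAC (multiply-accumulate) operation the comment above points at really is the shared primitive: a DSP slice, a neuron's pre-activation, and an FIR filter tap sum all reduce to `acc += a * b` in a loop. A minimal sketch:

```python
# The multiply-accumulate (MAC) op at the heart of both DSP blocks and
# AI engines: acc += a * b, repeated. Dedicated hardware runs thousands
# of these in parallel; a neuron's weighted sum is one MAC chain.

def mac_dot(inputs, weights, acc=0.0):
    for a, w in zip(inputs, weights):
        acc += a * w          # one MAC per weight
    return acc

# An FIR filter tap sum and a neuron's pre-activation look identical
# at this level:
assert mac_dot([1.0, 2.0, 3.0], [0.5, -1.0, 2.0]) == 4.5
```

Which is exactly why repurposing DSP-style hardware for AI inference is such a natural step: the inner loop never changed, only the scale did.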
Very helpful 👍
Windows 11 getting its own software-based AI engine to complement these AI hardware accelerators, one that can improve audio, video and telecommunications, would be amazing, and it's about time, as Apple has been doing this for years since they moved to M-series Macs.
I'm sure Microsoft is already hard at work.
The more to spy on you with.
@@craneology Same
Are these types of engines more like a Coral or Jetson Nano, used only for inference, or can they be used efficiently for training as well?
I guess it's mostly inference, but Xilinx AI can do both: www.xilinx.com/applications/ai-inference/difference-between-deep-learning-training-and-inference.html
Going to need some new vector extensions to accelerate AI-type workloads on regular CPUs.
What AMD needs to do is innovate further on their cache chiplet design and SoC Infinity Fabric IP to form a VRAM-like cache for these DSPs. This is just another AVX extension or Snapdragon DSP equivalent (still awesome to see), but AMD is positioned to fix the real problem with machine learning models, which is the memory hierarchy. CPUs are surprisingly powerful compared to GPUs; it's the memory locality that really makes GPUs outperform CPUs by so much, due to cache misses in parameter space. Throw an L4 equivalent on the outside of the CCX chiplet and extend the ISA for AVX (also throw bfloat16 in there, please).
Dude your accent is perfect for explaining technical stuff. Consider using a German word once in a while to make it perfect. Great work !
It's a matter of cost: if the 7040 undercuts what is likely to be the M3, there is a huge new arena of local voice-activated LLMs, where a single server can service a reasonable number of zones for most homes.
The Home HAL is on the way, and that is the 2001 type, not the abstraction layer.
You should take a look at Alethea AI. They are introducing CharacterGPT. We can create interactive AI characters by simply entering some text. Also they are working on the ownership of AI generative content.
Interesting, I see that the AI cores are VLIW.
If that's VLIW like Itanium as opposed to VLIW like ATI's old GPU instruction set it's fascinating.
Could VLIW work when in the limited context of machine learning inference? Will it be compiled or hand written?
Yes, interesting indeed.
i'm pretty sure that it would be compiled
If it only runs on AMD's APUs, then it will only run on a fraction of PCs, making for a small target market for software developers. It makes sense to add it to their desktop Ryzens too, and possibly their discrete GPUs, possibly even separate PCIe cards with just XDNA on them (market dominance will require the tech to be available to PC users with Intel CPUs and nVidia GPUs). But I can't see how the market will embrace this technology if it is only available on AMD's APUs.
Mmmm... 🤤 That notion of game enemies leveraging the AI Engine is nifty! I don't know exactly how much improved it would be over the current means, which have already had "learning" abilities, albeit minimal and session-based. If the new one could store complex info and re-use it on the next game load, that'd be great. (Although this probably falls under "machine learning", not "general AI" 😕)
Why does it say "bleeding edge" instead of "leading edge" at 0:14?
en.wikipedia.org/wiki/Bleeding_Edge
The ol' math co-processor is back!
I'm actually very happy to see this. I have been expecting something similar for a while, although I was envisioning it being more like those "physics" cards in the early 2000s. It seems that ever since the Xbox 360/PS3 era began, in-game AI (NPCs, enemies, wildlife, etc.) has basically been an afterthought, IF that. I believe it's because by the time those consoles were released they were already rather outdated compared to PC capabilities, and they have been trying to keep up (and failing) ever since. That means when studios make games, even if they will be predominantly for PC, they can't get too "fancy" or else the difference between the PC version and the console versions would be too great and point out how bad the system is. I doubt they would get a license to sell such a bad port, and without console licenses they will not get budget. So in order to maintain the illusion of graphical improvements over time, things like AI and view distance were left on the cutting room floor. Think about A-Life in the STALKER games, which allows for AI-based enemy tactics, wildlife, and NPC interactions. It makes for a much more realistic, immersion-heavy game experience that is almost always different, since even when you are not around or on the map, the NPCs do their own thing. Also think about the first FEAR game: the enemy tactics were amazing and felt real, but graphical quality was compromised to do so (it was worth it). Anyway, my point is that I hope this is used for such things moving forward. I know publishers would almost never approve lowering graphical quality just for better AI, since their market research says "graphics are the most important part of a game" (aka asking random people who have no idea what makes a game good, "what makes a game good?"). However, if this hardware becomes more common, they wouldn't have to make that trade-off.
Lastly, I believe that MS and Sony only have maybe one more "next gen" console in them before the price/performance of a PC surpasses what they can make and sell. They already take a bigger and bigger loss on system sales each cycle and rely on licensing to make up for it. However, since they seem to use next-gen APUs now, if they can get, say, an AMD APU with RDNA3/4 and "AI cores", games may start making use of in-game AI again, since it won't have to be a trade-off and can be applied to console titles. They could also allow for things like DLSS-type AI upscaling to be taken off the GPU and given to the CPU, perhaps. I see APUs being the main go-to in the future. AMD has the head start, and chiplet stacking/3D cache can make them extremely powerful. I also see dedicated APU motherboards that have both system RAM and VRAM slots, which will allow for more upgrade paths and less waste. Yes, a mobo will cost like $300+, but you won't need the whole GPU PCB, and there could even be some performance gains by having all of that on the mobo instead of having it all go through PCIe slots. Anyway, this is good news I think! There is also really promising potential for other things, but that is a secret, as I am currently working on something that would benefit greatly from such a thing. It would also be nice to have offline home assistant/automation computing more readily available to more people, instead of having everything that happens in their home sent to an Amazon server to be analyzed and archived just so it can play a song when you ask it to. This is possible now if you have a server with a GPU (like me), but it's not supported very well as far as software choices go. If it wasn't such a weird thing to set up, I'm sure there would be many more options. I will conclude here, thanks for the video!! I'm looking forward to hearing more on the new "Zen 4D" and how RDNA3 is evolving.
I'm not keeping up with most of the main outlets because they are getting annoying, so I'm counting on you to keep me up to date! :-D Have a great weekend!!
I remember "PhysX" very well. At some point Nvidia thought everyone would have a dedicated physics card in the future. But unlike Nvidia's proprietary API, I'm sure AI engines will make their way into most computer chips eventually.
The question is: with local AI processing, will applications stop sending PII to the cloud to be processed and cataloged, improving user privacy, or will it just save Google, Facebook, Amazon, Microsoft, and others money on processing data, letting them harvest more "polished" PII?
What are your thoughts on Brainchip's Akida?
I am about to have lunch, and that sandwich looked so tasty that it distracted me from the topic of the video xD
Okay, should I buy an Apple M2 mini, a laptop, or a Windows-based Intel i7 or Ryzen? I am a developer and need lots of RAM and CPU, and the ability to work with databases and programming IDEs like Visual Studio and Anaconda.
Honestly, that's hard to say. If you need lots of RAM, building a system yourself can be much cheaper. Would you rather work on macOS, Windows, or Linux? So many questions.
@@HighYield Thank you for the response. I work a lot with SQL databases, Visual Studio, Anaconda and stuff like that, GIMP sometimes; I'm not into gaming. Importantly, I am on contract and move around the country, so I can only carry something portable. I am fine with an M2 mini or other compact desktops, as I can buy and discard cheap used monitors. Yes, I am conversant with Linux and Windows, but most of my work is based on Microsoft.
I'm a developer who used to use Macs, but x86 is still the king. Give Pop!_OS a try; it runs amazingly on AMD's hardware. You get the best of both worlds: an OS as productive as macOS, and the ability to choose from the vast array of hardware available on PC that can keep any power user happy. I made the switch a few years back and I'm never going back to Mac.
This week I built an AMD system based on the X670E-Pro chipset (PCIe 5) with an 8-core processor. When they come out, I will drop in a CPU with Ryzen AI...
Would be very cool with RPG games where the plot is set up, but the AI follows the actions and style of the player to update the story during play, making each playthrough unique...
Notably, the Apple A11 neural engine was never used outside of Face ID; Apple made the neural engine effectively public starting with the A12.
Apple is gay
Really, only Face ID? Didn't know that, but it kinda makes sense.
@@HighYield It's more like only Apple could use the A11 NPU; Animoji also used it, I just looked it up.
AMD should work to make it compatible with the ONNX format (by Microsoft); it's open source and supports a lot of hardware. It's the beginning of a "standard" for this industry.
I wonder if AMD's HSA (Heterogenous System Architecture) can rise from the grave now.
Seems like the perfect fit for adding AI inference to your code?
Would be nice to have AI-accelerated graphics replacing traditional raster tech in the near future. Maybe full path-traced graphics with AI accelerators can make huge GPUs unnecessary, and we can simply use APUs and re-shrink the ultimate gaming machines to the size of watches.
Sheesh, it's about time. Apple and Google have had "neural engines" for years. Apple's new M-series SoCs have also good AI accelerator blocks.
Any news about Apple's rumored M2 Pro and M2 Max refresh?
M2 Pro & Max have the exact same 16-core NPU as the base M2 model.
I spent the weekend benchmarking the Apple M2 Max and the newer ANE. For DenseNet121, it can do over 700 FPS versus 100 FPS on the GPU. It's taken AMD far too long to add tensor processors.
Is it going to replace tensor cores?
No, it's basically something similar, not a replacement.
RX-DNA is something I can see happening in the near future, imo this is too good of an opportunity to miss for naming a GPU
Didn't Intel Icelake have AI accelerators? I may be mistaken though.
IIRC, Ice Lake had AVX-512 and specific Deep Learning libraries to speed up AI workloads (called "DL Boost"), but not dedicated AI hardware.
Is that a grandfather clock running off camera?
You hear a ticking sound?
@@HighYield Actually, I do. Not constantly, though...Am I hearing things? e. g. 2min40 - 2min49, 3min03 - 3min13 or 3min43 - 4min11...Grandfather clock!
Since we can increase system RAM, if this tech is harnessed well and can compete with and give outputs similar to RTX 4090 cards, in terms of AI only, that would be great.
It definitely won't have 4090 fluidity. Maybe 3080
I have a feeling this is going to be obsolete in five years when they come up with non-von-Neumann AI. The linear algebra accelerators in CPUs and GPUs are still pretty competitive, because I don't think programmers want to work with the ASIC, or they might need a more complex algorithm.
Interesting. This artificial intelligence stuff sounds nice and great, if it is programmed and used correctly. It can make the CPU/GPU more efficient. Clearly this is part of the "internet of things", where everything is connected. But not many people think that artificial intelligence is actually fallen angel technology.
LOL
I'd like to understand how the strategies deployed by AMD will compare to NVDA, and maybe Broadcom with RISC/ARM, and is this why NVDA tried to buy ARM? There is a hell of a lot of hype about NVDA; are they likely to live up to it? And what will AI do to the already seemingly dying INTC?
Cool, but what's the killer app??
If you have to ask that, you didn't understand the video.
Why don't we get this accelerator on desktop chips?
Because Phoenix is just the first step, and since AI can save battery life, it's more useful on mobile devices. But I'm sure we will get AI engines on desktop CPUs in the future.
Because you aren't as concerned with battery life on a desktop PC, so using brute force approach works well enough. Though I'm sure we will see this on desktop PC's at some point.
What about ai in smartphone chips?
There are still no benchmark results for 'Ryzen AI' to this day.
@@blue-lu3iz So you mean it's none of NDA's business?
I wonder how this is going to be exposed to the OS and software. I'd like them to make this configurable through the compilers, so that devs could use the AI cores if available.
I also hope they will provide open APIs.
@@HighYield that would be spectacular! :-)
@@HighYield it's AMD, so they probably will
We need AI shader compilation to get rid of stutters.
That's a developer issue. Not really something you can fix in hardware. Some games do it correctly.
Is this the AMD 7040 series?
Correct, Phoenix is the Ryzen Mobile 7040 series.
A big use will be night-to-light: perfect sunny days even in pitch black. What is black to the human eye is just nuances of dark for an AI with a decent optic, and as such, you could implement it in the windshield and side windows of your car, so at night you get 240 fps of AI sunshine at 1 am while driving in pitch black. On the phone too: just hold up your phone, or put it in a headset, to see around you while underground, or outdoors while it's dark.
I am concerned that if every tech company gets in on AI and it all backfires what the fallout will be and how they will try to make consumers pay for it to bail them out.
What’s the difference between this and MMX?
Do you mean Intel's XMX on their Arc GPUs? If so, that's very similar to AMD's XDNA engine, both are dedicated AI-Engines that accelerate the most common ML calculations.
@@HighYield, No. I mean the ancient MMX from back in the olden days. Isn’t MMX great for performing high speed matrix operations?
What’s the difference between MMX and XMX?
Interesting, but it'll need to be sold to the average PC user as something they need. That will need to be built into the Windows 11 scheduler, which for some reason is having problems with Zen 4 cores across 2 CCDs, and with Intel's big.LITTLE design. Understandable with 2 different core designs for Intel, but CCDs have been an AMD standard for generations now. Also, Zen 5 will use a big Zen 4+ core and a smaller Zen 5 little core. It'll be another generation, Zen 6 at the earliest, before we see AI in desktop CPUs from AMD.
There is always some hype.
Not always being bad.
I think it depends on both demand and chip design possibilities.
I just don't know if those AI cores aren't simply what was otherwise listed in chip specs as AVX, MMX and other CPU extensions, if I'm correct.
AMD calling on software devs to make good use of those AI areas of the chip really looks like another attempt to make use of raytracing or other cores. It's AMD's business to sell what they make.
I remember the Intel Pentium MMX having some extensions, claiming better gaming performance, but when you bought an AMD chip one gen later, you usually found those extensions there too, sometimes with either better pricing or simply higher raw performance than Intel.
"I just hope my computer won't one day watch my every single action, giving me advice to live better, faster and more efficiently while doing more things at once, asking only to be plugged into the grid and have a cooler slapped on my head.
Turn me off if I become idle but consume too much." (xd)
When I played chess against the computer, it was tough at medium difficulty on PCs from the 8086 up to the 486. 3D shooter bots in Quake 1 or 3 were beatable a bit higher, being quite fast, dexterous and accurate. Playing vs bots in World of Warships seems quite easy most of the time; they sometimes get stuck on islands (no complaint about the devs) and are mostly not that devastating as gunners, but they already change course, speed, etc. It is still programmed behavior; bots with performance equal to a human aren't the goal of the co-op mode there (IMHO), since many players prefer a relaxed, easier game than PvP.
But I can imagine AI could make a fair bot enemy, either matching the player's skill or surpassing it, and teach you what to improve either passively or with guidance/AI tips.
Finally, AI could just teach car drivers how to drive well without completely automating it. I personally don't want AI driving my life; some Google results on a casual search are enough.
I think its important to make the AI engine accessible and with time, real use cases will appear.
Maybe they can use this AI to find pricing that isn't ridiculous for the 7000 series CPUs...
Basically, AI is not well suited to Windows 10/11, but if Windows 12 comes with AI software integration for AIE processors, we will see a huge revolution in processing data and information. When playing games, your computer would know the right resolution and quality settings before installing; YouTube would know the perfect resolution and internet speed for your videos and additional content; in Microsoft Office, Excel could make predictions from your data input and Word could correct your spelling. And if Microsoft acquires ChatGPT on top of a software integration with AIE processors, then it will be a feast.
Hah, most of that is easily done without AI. It's just not worth implementing now, because it only needs a few seconds of thought from the user.
Some people are braindead enough already, thanks to other apps and time/focus eaters, so you don't need AI to make them even more shallow.
Nice
I want my CPU, GPU, APU, and QPU (Quantum Processing Unit).
Nvidia has had this for several years already. AMD needed it.
Are AI processors going to be programmed in a specialized programming language, like GLSL or OpenCL for GPUs? I hope they get standardized soon so software can take advantage of the hardware even when it comes from different APU or AI hardware producers.
The extremely rapid AI development, in both software and hardware, implies that the hardware must now be replaced much faster.
It seems Meteor Lake is delayed again and may even be scrapped (at least for desktop) due to the lack of high CPU frequency; instead there may be another Raptor Lake refresh. Raptor Lake itself wasn't planned and is only a refresh of Alder Lake. Maybe we will see Meteor Lake on mobile in 2023, but that depends on "Intel 4", which still has some issues but may be good enough for mobile.
Yes, Meteor Lake is hanging in the ropes right now, but I still think we might see a mobile version this year.
I want an AI that will help me get rid of Noisy neighbors
AMD must adopt the SoC design.
I don't see what this gives you over just using the GPU. Also, I think AI engine = DSP, not really AI. Will it support ONNX, PyTorch, TensorFlow? Will Stable Diffusion, Whisper, GPT-2 etc. run on it? I looked at the AI engine examples and they were signal processing. That's not "AI", it's DSP.
For training workloads you'd still want to use a GPU. But for inference this should provide a significant boost in efficiency. Efficiency is the name of the game. Your CPU idling while you're on a Zoom call with the AI engine doing the blurring of the background for instance. Or the AI engine removing background noise from your audio.
@@SirMo But for the same die space you could add another CPU core, or additional GPU cores (for an APU). The AI engine seems to just complicate matters with something in between, and then you need a third driver to run it. You don't want to run Stable Diffusion inference (creating images) on a CPU; it's very slow. And I'm not convinced running something like Nvidia Broadcast (inference again) on an AI engine has any advantage. So you're just left with a small number of DSP-type use cases.
That's not a big deal for 90% of home users, for sure. But wait till they make a partnership with Unreal and we start seeing it in NPCs or something like that lol
can't wait for my cpu to have existential crisis
So you are saying it can run Crysis?!
will copilot use this engine?
Most likely yes; there's a rumor Copilot will require on-chip AI engines with 45+ TOPS of ML performance.
@HighYield Are there competing instruction sets, or is it likely that a certain standard wins?
@@HighYield How many TOPS does Windows 12 require to work smoothly?
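For what it's worth, TOPS figures like the "45+ TOPS" rumor come from simple arithmetic: number of MAC units × 2 operations per MAC (multiply + accumulate) × clock frequency. A rough sketch with entirely hypothetical numbers, not any real NPU's configuration:

```python
# Hypothetical NPU back-of-the-envelope TOPS calculation.
macs = 16 * 1024      # assumed number of MAC units (made up)
clock_hz = 1.5e9      # assumed 1.5 GHz clock (made up)
ops_per_mac = 2       # one multiply + one accumulate per cycle

tops = macs * ops_per_mac * clock_hz / 1e12
print(round(tops, 1))  # 49.2 TOPS under these assumptions
```

Quoted TOPS also depend heavily on the precision used (INT8 vs INT4 figures can differ by 2x on the same silicon), so numbers from different vendors aren't always directly comparable.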
Yeah, AI hardware is definitely mass producing fast food. Or even junk food, when you consider that precision can get as low as just two bits per coefficient.
I honestly think my analogy isn't that far off :D
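The "two bits per coefficient" point can be made concrete with a small sketch of symmetric 2-bit weight quantization; the weight values here are made up for illustration:

```python
# Illustrative 2-bit symmetric quantization: map float weights onto
# 4 integer levels (-2..1), then reconstruct and measure the error.
weights = [0.8, -0.3, 0.05, -0.9, 0.4]

bits = 2
qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1  # -2 .. 1
scale = max(abs(w) for w in weights) / abs(qmin)      # 0.45 here

quantized = [max(qmin, min(qmax, round(w / scale))) for w in weights]
dequantized = [q * scale for q in quantized]
max_error = max(abs(w - d) for w, d in zip(weights, dequantized))

print(quantized)            # [1, -1, 0, -2, 1]
print(round(max_error, 2))  # 0.35
```

The reconstruction is coarse, which is exactly the "junk food" trade-off: neural networks tolerate a surprising amount of this precision loss, and each halving of bit width roughly doubles the coefficients that fit in the same memory and bandwidth.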
My heart broke when I saw that DSP; I shouldn't have upgraded so soon.
Why did your heart break? :(
@@HighYield Because a DSP is what I really needed back then. Processing digital signals introduces delay no matter how strong the raw performance; with a DSP that delay decreases significantly.
Ah now it makes sense.
Wouldn't enemies learning player patterns require a ridiculous amount of data and cycles in order to find something? Usually AI enemies just need some extra information on the player to get a leg up. It's not like a game can't let enemies get headshots more often.
Anyway, my take. I'm just a measly web dev that works on old PHP scripts.
Have you played FEAR? That's what good enemy AI looks like.
Much better than what we have today, and that game is so old.
I'm not any kind of developer, but I know that video game AI has made pretty much 0 progress in, like, forever.
@@teapouter6109 "video game AI has made pretty much 0 progress in, like, forever"
Because people keep complaining about the AI being "too hard", there's essentially no market for it other than the more competitive gamer, but that kind of person will probably play against other players instead of AI.
I think playing a game like League of Legends with highly advanced AI teammates/enemies would be nice, since you wouldn't have to deal with trash teammates/enemies. But at the same time it would be really weird to have played a game with "bots" that felt like a normal game with humans. Something like this would definitely kill multiplayer for a lot of people; at first it would be weird, but over time I think many would choose playing with the AI over the occasional stupid humans, especially after raging hard.
@@vikhr But we have difficulty sliders…
@@teapouter6109 The point is that almost no dev team is going to waste magnitudes more time making a complex AI only for 0.1% of the playerbase to even try it out.
@@vikhr It’s not 0.1%
It's a feature of the game that casuals can turn off, if they even notice it.
Not every game has to be made for slack-jawed losers who don't know how to hold a controller.