RYZEN AI - AMD's bet on Artificial Intelligence

  • Published Jun 1, 2024
  • With Ryzen AI, AMD introduced the first artificial intelligence engine on an x86 chip. In this video we will take a look at AI computing for consumers and the AMD Ryzen Phoenix APU that contains the first AI engine, and talk about the future of AI in general, from hardware to software.
    Follow me on Twitter: / highyieldyt
    0:00 Intro
    0:50 AI Engines explained (vs CPU & GPU)
    4:03 AMD XDNA & Ryzen AI
    6:13 AMD's AI focus & Phoenix
    8:13 x86 AI Hardware & Software
    11:10 Future AI use cases
  • Science & Technology

COMMENTS • 332

  • @siliconalleyelectronics187
    @siliconalleyelectronics187 Рік тому +181

    The close business relationships between AMD, Microsoft, and OpenAI are starting to make a lot of sense.

    • @HighYield
      @HighYield  Рік тому +14

      Fully agree!

    • @rkalla
      @rkalla Рік тому +15

      I didn't realize AMD and MSFT were super close - I missed that development. Do you have any events or partnerships I could go look up to get context?

    • @GuinessOriginal
      @GuinessOriginal Рік тому +3

      More like a cabal

    • @thesolidsnek8096
      @thesolidsnek8096 Рік тому +6

      @@rkalla Does CES ring a bell? They publicly announced their love relationship just a few weeks ago.

    • @diamondlion47
      @diamondlion47 Рік тому +13

      @@rkalla AMD is in Azure and has been in the Xbox for a few gens now.

  • @RealLifeTech187
    @RealLifeTech187 Рік тому +27

    I was definitely considering a Phoenix APU before knowing about Ryzen AI, and my excitement only increased hearing this news. AI upscaling for video content is the thing I'm most excited about, because there are so many low-bitrate, low-resolution videos out there. The potential for conferencing is also huge, since webcams probably won't get any better (if the COVID home-office years didn't get OEMs to improve their webcams, nothing will).

    • @juliusfucik4011
      @juliusfucik4011 Рік тому +1

      But any video card that is less than 5 years old can already do this... why want it in the CPU as well?

    • @polystree_
      @polystree_ Рік тому +4

      @@juliusfucik4011 Because most ultrabooks & office computers don't have any dGPUs? Also, running an "AI assistant" or any other AI task on a GPU is certainly not the most efficient way to do it on laptops.
      I think this product is part of the AMD and Microsoft cooperation. Microsoft wants to try AI-powered Windows on mobile devices (the Surface lineup), and AMD wants to try its AIE in real-life workloads before launching it in other segments where it would see little to no use.

  • @alirobe
    @alirobe Рік тому +3

    Potentially easier analogy: a CPU is like 4 math professors, a GPU is like 1,000 promotional pocket calculators.

  • @Akveet
    @Akveet Рік тому +55

    I hope there will be a common instruction set for matrix operations (what's in all of these AI-branded coprocessors) so that developers could just use it without specializing for a specific hardware implementation.

    • @HighYield
      @HighYield  Рік тому +17

      That's super important, otherwise it won't take off. We don't need closed-source shenanigans.

    • @004307ec
      @004307ec Рік тому +5

      Microsoft's take is DirectCompute.

    • @juliusfucik4011
      @juliusfucik4011 Рік тому +5

      I don't think these instructions are needed, as matrix addition and multiplication are fairly generic. It suffices to have good libraries such as BLAS and IPP that make optimal use of the existing instruction set.
      Online training takes only a little computational power; it is the initial training that is expensive. For that, we have GPUs.
      The AI cores are only meant for running the network forward for inference. This means no feedback, gradient calculation or weight adaptation is needed.
      Fun fact: if you quantize a typical neural network from floating point to integer, you can get 30+ fps on a single core of a Raspberry Pi 4. Inference just isn't that expensive (rough sketch of the idea below).
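      A minimal sketch of that quantization point, assuming a made-up layer size and a simple symmetric int8 scheme (not any particular vendor's toolchain):

      ```python
      import numpy as np

      # Hypothetical FP32 weights and input for one dense layer (sizes made up).
      rng = np.random.default_rng(0)
      w_fp32 = rng.standard_normal((256, 128)).astype(np.float32)
      x_fp32 = rng.standard_normal(128).astype(np.float32)

      # Symmetric int8 quantization: scale each tensor by its max absolute value.
      def quantize(t):
          scale = np.abs(t).max() / 127.0
          return np.clip(np.round(t / scale), -127, 127).astype(np.int8), scale

      w_q, w_scale = quantize(w_fp32)
      x_q, x_scale = quantize(x_fp32)

      # Integer multiply-accumulate (the MAC work an AI engine accelerates),
      # rescaled back to float once per output element.
      acc = w_q.astype(np.int32) @ x_q.astype(np.int32)
      y_int8 = acc.astype(np.float32) * (w_scale * x_scale)

      y_fp32 = w_fp32 @ x_fp32
      print(np.max(np.abs(y_int8 - y_fp32)))  # small error, much cheaper arithmetic
      ```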

    • @ttb1513
      @ttb1513 11 місяців тому

      A library or SW layer is where matrix operations belong. And it needs to be optimized for the specific hardware implementation, including compute cores, cache sizes, DRAM sizes and bandwidth, and so much more.
      Take a look at how very large matrix multiplies are done. They are not done in the simple way that would take N^3 multiplies and ignore the HUGE differences in each level of the memory hierarchy; they are blocked into tiles that fit the caches (see the sketch below).
      Standardization is helpful, but not at so low an abstraction level that it prevents optimizations.
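      A rough illustration of that blocking idea in NumPy; the tile size here is arbitrary, whereas a tuned BLAS picks it to match the actual cache hierarchy:

      ```python
      import numpy as np

      def blocked_matmul(a, b, tile=64):
          """Same result as a @ b, but computed tile by tile so each block of
          a and b is reused many times while it is still resident in cache."""
          m, k = a.shape
          k2, n = b.shape
          assert k == k2
          c = np.zeros((m, n), dtype=a.dtype)
          for i in range(0, m, tile):
              for j in range(0, n, tile):
                  for p in range(0, k, tile):
                      c[i:i+tile, j:j+tile] += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
          return c

      a, b = np.random.rand(256, 256), np.random.rand(256, 256)
      assert np.allclose(blocked_matmul(a, b), a @ b)
      ```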

    • @mrrolandlawrence
      @mrrolandlawrence 8 місяців тому

      Let's hope the Scalable Matrix Extension for ARM delivers. Coming to Armv9-A. Something they should have added some years ago IMO.

  • @theultimatekehop
    @theultimatekehop Рік тому

    Great video! First time here but Im subbed. Loved the format and the info given. Well done!

  • @nicholassabai7284
    @nicholassabai7284 Рік тому +4

    Manual rotoscoping in video editors would take from a few minutes to hours depending on the complexity of the scenes, and I was surprised to see an AI engine pull that off in seconds.

  • @cameronquick1157
    @cameronquick1157 Рік тому +4

    Mindblowing stuff. Definitely convinces me to continue waiting for 7040 availability in thin and lights. Aside from all the potential applications, battery life improvements should also be significant.

  • @kirby0louise
    @kirby0louise Рік тому +58

    You've absolutely nailed it on the need for strong software support.
    I looked into it, and apparently it has its own special API/SDK required to utilize it. This is a big disappointment; they should allow it to plug into DirectML (this is how AI acceleration works on Xboxes, and it's great). By integrating it into existing APIs, AMD would have a large amount of support out of the gate and avoid further fracturing the programming ecosystem.

    • @RainbowDollyPng
      @RainbowDollyPng Рік тому +9

      I mean, every specialized hardware implementation needs its own SDK, handling the specifics. That alone doesn't prevent it from plugging into DirectML.

    • @RainbowDollyPng
      @RainbowDollyPng Рік тому +8

      ​@@leeloodog DirectML is a pretty high level abstraction and one that's Windows exclusive at that. You don't build hardware directly to that standard. There is always going to be a low-level SDK that handles the hardware access.
      Now of course, it could be handled differently; DirectML could be supported from the get-go, and it's a shame that they didn't do that, I agree.

    • @zhafranrama
      @zhafranrama Рік тому +6

      One reason I can think of why DirectML is not a focus for AMD is that it's not cross-platform and doesn't work on Linux.
      Why is this important? AI computation in enterprises is usually done on Linux, and enterprises are among the biggest consumers of AI compute.

    • @dennisp8520
      @dennisp8520 Рік тому

      There is irony in this too; I think AMD is making some of the same mistakes that Intel made whenever they got large and powerful. For now this stuff isn't going to be very useful until all chip makers get on board with a standard.

    • @RainbowDollyPng
      @RainbowDollyPng Рік тому +5

      @@dennisp8520 Although from what I can tell, PyTorch, TensorFlow and ONNX are all supported by the Xilinx AI framework as frontends. So really, there is no huge need to support DirectML as middleware between the frontend frameworks and the hardware backend (see the sketch below).
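      If the ONNX route is what actually ships, picking a backend could look roughly like this ONNX Runtime sketch; the "VitisAIExecutionProvider" name, the model path and the float32 input are assumptions for illustration, not confirmed details of the Ryzen AI stack:

      ```python
      import numpy as np
      import onnxruntime as ort

      # Hypothetical: prefer a Vitis AI / XDNA execution provider if this runtime
      # build exposes one, otherwise fall back to the default CPU provider.
      available = ort.get_available_providers()
      preferred = [p for p in ("VitisAIExecutionProvider", "CPUExecutionProvider") if p in available]

      # "model.onnx" is a placeholder for whatever network the frontend exported.
      session = ort.InferenceSession("model.onnx", providers=preferred)

      inp = session.get_inputs()[0]
      shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims with 1
      dummy = np.zeros(shape, dtype=np.float32)                    # assumes a float32 input
      print(session.run(None, {inp.name: dummy}))
      ```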

  • @kaemmili4590
    @kaemmili4590 Рік тому +1

    Quality content, thank you so much.
    Would love to see more from you, especially on new chip paradigms, on the research side of things.

  • @thebritishindian1
    @thebritishindian1 Рік тому +5

    Great video, very informative. These are the kinds of videos I like to educate myself on the future of computing. Coreteks is a great channel, but his niche is mainly for the future of gaming and graphics, which is less relevant to what I need to know about.

  • @MostlyPennyCat
    @MostlyPennyCat Рік тому +25

    My best idea for AI in games is AI vision and hearing systems for NPCs.
    At the moment in gaming, to take a stealth game example, the enemies have vision and hearing cones: dumb, pure distance mechanics triggering a behaviour branch if the player is close enough or loud enough, usually augmented with simplistic rules based around crouching, movement speed limits, baked shadow regions and 'special grass'.
    Replace that with a quick-and-dirty low-resolution rendering of what the NPC is looking at, using the GPU.
    Now run that image through a trained neural network.
    Suddenly this opens up the possibility of real effects from movement, lighting and camouflage.
    Literal camouflage: you're trying to fool the pattern-matching algorithm in the machine in exactly the same way we try to fool the pattern-matching algorithm situated between every human's ears 👉🧠
    Same with audio: you render the sound at where the NPC is, run it through another NN, and see if it meets a threshold to trigger the NPC AI's (too many things named AI) behaviour branch.
    The game design trick is feeding back to the player the level of danger they are in without hokey constructs like the 'interest/danger' markers in games like Far Cry. (A toy sketch of the perception loop is below.)
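    A toy sketch of that perception loop; the frame source, the stand-in "network" and the threshold are all hypothetical placeholders for a real renderer and a trained detector:

    ```python
    import numpy as np

    def npc_view_frame():
        """Placeholder for a quick, low-resolution render from the NPC's viewpoint."""
        return np.random.rand(32, 32)  # 32x32 grayscale frame, values in [0, 1]

    def player_visibility_score(frame):
        """Stand-in for a trained network: a logistic over mean brightness.
        A real game would run an actual detector, ideally on the AI engine."""
        return 1.0 / (1.0 + np.exp(-10.0 * (frame.mean() - 0.5)))

    ALERT_THRESHOLD = 0.7  # made-up tuning value

    if player_visibility_score(npc_view_frame()) > ALERT_THRESHOLD:
        print("NPC switches to its 'spotted the player' behaviour branch")
    else:
        print("NPC keeps patrolling")
    ```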

  • @Bubu567
    @Bubu567 Рік тому +3

    Basically, AI needs to multiply the weights set by a model across the whole network to figure out the best-fit output, but it doesn't need high precision, since it only needs to determine a rough estimate of its certainty.

    • @axe863
      @axe863 Рік тому

      Everything can be approximated with randomized varying-depth ReLUs with proper regularization... standard sparse linear learners, no complicated solver needed. Algorithmic complexity is far, far more important than hardware power.

  • @christheswiss390
    @christheswiss390 Рік тому +3

    A highly interesting and insightful video! Thank you.

  • @HazzyDevil
    @HazzyDevil Рік тому +2

    Always happy to see a new upload. Thanks for covering this!
    As a gamer, I’m excited to see how good AI upscaling become. DLSS 2/3 has already shown a lot of promise, now just waiting for AMD to release their version.

    • @adamrak7560
      @adamrak7560 Рік тому

      I am more excited about neural rendering (neural radiance fields), it is not real-time on current hardware, but with the right dedicated hardware it will be soon.

  • @Bianchi77
    @Bianchi77 8 місяців тому

    Nice video shot, thanks for sharing with us, well done :)

  • @danimatzevogelheim6913
    @danimatzevogelheim6913 Рік тому +1

    Again, a top-quality video!

  • @RM-el3gw
    @RM-el3gw Рік тому +17

    ah yes, the APU that I've been waiting for. Not yet out there but looking very promising. Any idea of when it will be out? Also, are those AI cores also supposed to be used for something like FSR, such as in the way that Nvidia uses AI cores in its GPUs to sharpen and upscale stuff? Thanks and cheers.

    • @teapouter6109
      @teapouter6109 Рік тому +3

      If AI cores are to be used for FSR, then FSR will not work on the vast number of GPUs that it currently works on. I do not think AMD would go in that direction for the time being.

    • @HighYield
      @HighYield  Рік тому +5

      The RDNA3 cores come with their own smaller AI cores which are used for FSR, and FSR in general doesn't even need AI acceleration IIRC; that's why it also runs on older GPUs.
      Phoenix should be out in late Q1, but thinking back to Rembrandt last year, it might take AMD longer. Let's hope the rollout will happen faster this time!

    • @fleurdewin7958
      @fleurdewin7958 Рік тому

      AMD's APU for notebooks, Phoenix Point, will arrive in March 2023. It was announced by AMD at CES 2023.

    • @zdenkakoren6660
      @zdenkakoren6660 Рік тому

      AI will just learn the best and fastest way to make use of the GPU or CPU, and it doesn't even have to send data to AMD or Nvidia. It is a baby learning machine. It may work or not: GCN 5 had primitive shaders that never got used... Radeon had tessellation way back in the ATI Radeon 8500 in 2001 and it was not used... Nvidia PhysX was short-lived... the 4870X2 had two GPUs with a PLX chip between them that was never really used... Intel had AVX-512 in its CPUs and now it's removed, while AMD only has it now in the 7000 series xD... The Nvidia RTX 2000 series had AI and it learned how to better use DLSS and optimize drivers, but AMD has stronger hardware, so this will help the driver team a lot. AI will need something like 3-4 years to make proper use of it, IF it works like people think.

  • @claymorexl
    @claymorexl Рік тому +6

    I feel like, outside of notebooks and mobile computing, by the time specialized hardware is preferable for handling AI tasks, discrete accelerator cards will be the market standard. Either that, or GPUs will market AI accelerators on their boards and make use of the insane bandwidth PCIe 5 gives them.
    Integrated AI cores will be more or less like integrated graphics in future x86/PC applications.

    • @redsnow846
      @redsnow846 Рік тому

      Nvidia already sells GPU's that do this.

  • @oliviertremois1500
    @oliviertremois1500 Рік тому +8

    Very good video. Most of the information on XDNA is exact, I mean not overestimated!!! All the animations on the AI Engine are really nice, compared to my poor PPTs !

  • @gamingscreen4059
    @gamingscreen4059 Рік тому

    great vid as always, thanks

  • @fakshen1973
    @fakshen1973 Рік тому +2

    GPU-style parallel processors are very nice-to-haves for digital artists such as musicians, video editors, and animators.

  • @matthewstewart7077
    @matthewstewart7077 6 місяців тому +1

    Thank you and a great overview. I'm getting into AI machine learning and am hoping to utilize this new feature for training models. Do you have any resources on how to utilize Ryzen AI for machine learning model training?

  • @electrodacus
    @electrodacus Рік тому +10

    This is what I'm waiting for. Hope it will be available in some mini PC form. Also hope there will be an API available for XDNA in Linux.

    • @HighYield
      @HighYield  Рік тому +8

      With how well AMD is doing with their GPU drivers on Linux, I think there's a good chance.

    • @VideogamesAsArt
      @VideogamesAsArt 4 місяці тому

      sadly a year later, Linux support is still missing. Did you get a mini PC though? I have a Framework 13 with Phoenix myself, although not for the AI engine but more for the battery life and incredible efficiency

    • @electrodacus
      @electrodacus 4 місяці тому

      @@VideogamesAsArt I did not get one as I was busy with other things, and since there is no support for XDNA I will probably wait for XDNA2. There has not been as much progress as I would have hoped. I still use an i7-3770, so over a decade old.

  • @bev8200
    @bev8200 Рік тому +2

    Just looking at AI art, this makes me super optimistic about the gaming industry. The environments that AI will create will be incredible.

    • @MWcrazyhorse
      @MWcrazyhorse Рік тому

      Or they will be hellish.
      ah fuck it what could go wrong?
      Let's gooooo!!!

  • @Yusufyusuf-lh3dw
    @Yusufyusuf-lh3dw Рік тому +1

    AI engines and dedicated AI capabilities are already available on Apple and Intel CPUs. Apple has dedicated IP for AI offloading, and Intel has TMUL instructions in Alder Lake for AI operations. It's just a matter of which one has more application support and which one is more effective in terms of performance and power consumption. Secondly, as you said, Meteor Lake has a dedicated AI engine on the CPU and Raptor Lake has onboard AI IP.

  • @zajlord2930
    @zajlord2930 Рік тому +2

    Sorry if you already mentioned this, but are the AI cores only for AMD to use for FSR or something, or are they something users can use for machine learning or whatever?
    And how do these compare to a GPU? Like, can I do as much as on a GPU with this, or how much better or worse is it?
    Again, if you already mentioned this I'm really sorry, but I'm too tired to rewatch it again today.

    • @HighYield
      @HighYield  Рік тому +3

      It’s not meant for FSR, those cores are inside the GPU. In theory you should be able to use it for machine learning code.

    • @RM-el3gw
      @RM-el3gw Рік тому +1

      @@HighYield ah crap just asked this question haha.

    • @kirby0louise
      @kirby0louise Рік тому +4

      CPUs, GPUs and the AI Engine all are Turing complete, so technically they all can execute the same tasks provided they are programmed for the respective processor. What differs is the speed at which they can do certain tasks. Linear, logic heavy code will perform best on CPUs. General purpose parallel number crunching will be best on GPUs. Specialized parallel matrix math will perform best on the AI Engine.
      Comparing it to the rest of the Phoenix APU (Ryzen 7 variant), the integrated 780M provides up to 8.9 TFLOPS of FP32/17.8 TFLOPS FP16 (possibly 35.6 TOPS Int8/71.3 TOPS Int4? The ISA manual states support for Int8/Int4 matrix math but not packed acceleration of it. I would assume this is carried over from Xboxes but I can't be totally sure). The AI Engine hits 12 TOPS (unspecified, assuming Int4).
      While it might sound like this makes the AI Engine pointless, the real story is in the perf/watt. The AI Engine, according to AMD, has power usage measured in milliwatts, while the 780M could easily pull 20W+. Thus, the AI Engine is great for ultrabooks that cannot afford to be blasting the GPU like that (rough numbers below).
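      Back-of-the-envelope numbers from the figures above, with an assumed sub-watt draw for the AI Engine, since "milliwatts" is all AMD gives:

      ```python
      # Figures from the comment above; the 0.5 W for the AI Engine is an assumption.
      gpu_tops, gpu_watts = 17.8, 20.0   # 780M, FP16 throughput treated as "TOPS", ~20 W
      aie_tops, aie_watts = 12.0, 0.5    # AI Engine, 12 TOPS at sub-watt power

      print(f"780M:      {gpu_tops / gpu_watts:.1f} TOPS/W")   # ~0.9 TOPS/W
      print(f"AI Engine: {aie_tops / aie_watts:.1f} TOPS/W")   # ~24 TOPS/W under this assumption
      ```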

  • @azurehydra
    @azurehydra Рік тому +1

    Ai is the future. Even now in its infancy it helps me a bunch. If it became 100x better in assisting me. DAMN! It'll do all my work for me.

  • @quinton1630
    @quinton1630 Рік тому +2

    7:39-7:41 audio blip from the audio editing on “transistor”

    • @HighYield
      @HighYield  Рік тому +1

      I'm pretty sure I just had a horrible microphone pop at this point and tried to remove it, the result is a few missing frames and the audio blip. Why are you paying so much attention? Cant even make my mistakes in peace ;)
      Good catch tho!

    • @quinton1630
      @quinton1630 Рік тому

      @@HighYield I’m a professor who pre-records some lessons, so I’m all too familiar with replaying 1 second of audio a dozen times to fix pops, blips and doots :P
      Great video by the way, kudos on being informative and entertaining!

  • @Silent1Majority
    @Silent1Majority Рік тому +1

    This was GREAT!!

  • @markvietti
    @markvietti Рік тому

    what a great channel. i am so glad I found it..

  • @EthelbertCoyote
    @EthelbertCoyote Рік тому +1

    An AI engine coupled with a small on-chip FPGA could cover a lot of inefficient tasks that would otherwise burden a GPU's or CPU's main task set, correct?

  • @brendanmeyler1641
    @brendanmeyler1641 Рік тому +1

    Man I wish I had something like this when I was taking a class on ai last year. Some code would take several minutes to run. This probably would have cut that down a bit. If compilers can take advantage of such features on the silicon automatically it will have huge implications for students. Additionally once ai cores are common to most laptop chips universities can adjust curriculums to teach cs students how to leverage them before they graduate.

  • @joehorecny7835
    @joehorecny7835 Рік тому

    Great analogy using the cooking! AI is here to stay, and this is only the beginning. There will be more and more AI in the future, behind the scenes; you won't even know it's there, but it will make tasks easier and better. Of course I'll get a Phoenix APU when they are released; the excitement is on the edge, not in the back row.

  • @leorickpccenter
    @leorickpccenter 7 місяців тому +2

    Microsoft will be requiring this and needs at least 40-50 TOPS of performance for a smooth AI experience with Windows 11, presumably with the upcoming Copilot.

  • @jktech2117
    @jktech2117 Рік тому

    i wonder if it will help at game resolution upscaling and frame interpolation

  • @DeadCatX2
    @DeadCatX2 6 місяців тому

    As an FPGA engineer I've used DSP cores to accelerate certain algorithms in hardware and upon hearing that MAC is the basis of AI I pictured the Leonardo DiCaprio pointing meme, where DSP cores are pointing at AI cores

  • @jrvgameplaytrailers8527
    @jrvgameplaytrailers8527 Рік тому

    Very helpful 👍

  • @erlienfrommars
    @erlienfrommars Рік тому +8

    Windows 11 getting its own software based AI-engine to complement these AI hardware accelerators that can improve Audio, video and telecommunications would be amazing, and about time as Apple has been doing this for years since they moved to M-series Macs.

    • @HighYield
      @HighYield  Рік тому +3

      I'm sure Microsoft is already hard at work.

    • @craneology
      @craneology Рік тому

      The more to spy on you with.

    • @kekkodance
      @kekkodance Рік тому

      ​@@craneologysame

  • @giu_spataro
    @giu_spataro Рік тому

    Are these types of engines more like a Coral or Jetson Nano, used only for inference, or can they also be used efficiently for training?

    • @HighYield
      @HighYield  Рік тому

      I guess its mostly inference, but Xilinx AI can do both: www.xilinx.com/applications/ai-inference/difference-between-deep-learning-training-and-inference.html

  • @Phil-D83
    @Phil-D83 Рік тому

    Going to need some new vector extensions to accelerate ai type workloads on regular cpus

  • @first-thoughtgiver-of-will2456

    What AMD needs to do is innovate further on their cache chiplet design and SoC Infinity Fabric IP to form a VRAM-like cache for these DSPs. This is just another AVX extension or Snapdragon DSP equivalent (still awesome to see), but AMD is positioned to fix the real problem with machine learning models, which is the memory hierarchy. CPUs are surprisingly powerful compared to GPUs; it's the memory locality that really makes GPUs outperform CPUs by so much, due to cache misses in parameter space. Throw an L4 equivalent on the outside of the CCX chiplet and extend the ISA for AVX (also throw bfloat16 in there, please).

  • @randomsam83
    @randomsam83 Рік тому

    Dude your accent is perfect for explaining technical stuff. Consider using a German word once in a while to make it perfect. Great work !

  • @rolyantrauts2304
    @rolyantrauts2304 Рік тому

    It's a matter of cost: if the 7040 undercuts what is likely to be the M3, then there is a huge new arena of local voice-activated LLMs, where a single server can service a reasonable number of zones for most homes.
    The Home HAL is on the way, and that is the 2001 kind, not the abstraction layer.

  • @alexx7643
    @alexx7643 Рік тому

    You should take a look at Alethea AI. They are introducing CharacterGPT. We can create interactive AI characters by simply entering some text. Also they are working on the ownership of AI generative content.

  • @MostlyPennyCat
    @MostlyPennyCat Рік тому

    Interesting, I see that the AI cores are VLIW.
    If that's VLIW like Itanium as opposed to VLIW like ATI's old GPU instruction set it's fascinating.
    Could VLIW work well in the limited context of machine learning inference? Will it be compiled or hand-written?
    Yes, interesting indeed.

  • @Chalisque
    @Chalisque Рік тому +1

    If it only runs on AMD's APUs, then it will only run on a fraction of PCs, making for a small target market for software developers. It makes sense to add it to their desktop Ryzens too, and possibly their discrete GPUs, possibly even separate PCIe cards with just XDNA on them (market dominance will require the tech to be available to PC users with Intel CPUs and Nvidia GPUs). But I can't see how the market will embrace this technology if it is only available on AMD's APUs.

  • @DUKE_of_RAMBLE
    @DUKE_of_RAMBLE Рік тому

    Mmmm... 🤤 That notion of game enemies leveraging the AI Engine, is nifty! I don't know exactly how improved it would be over the current means, which have already had "learning" abilities; albeit, minimal and session based. if the new one could store complex info and re-use it on the next game load, that'd be great. (although, this probably falls under "machine learning", not "general AI" 😕)

  • @Gorion103
    @Gorion103 Рік тому

    Why there is "bleeding edge" instead of leading at 0:14?

    • @HighYield
      @HighYield  Рік тому

      en.wikipedia.org/wiki/Bleeding_Edge

  • @em0jr
    @em0jr Рік тому +1

    The'ol Math Co-processor is back!

  • @theminer49erz
    @theminer49erz Рік тому +1

    I'm actually very happy to see this. I have been expecting something similar for a while, although I was envisioning it being more like those "physics" cards in the early 2000s. It seems that ever since the Xbox 360/PS3 era began, in-game AI (NPCs, enemies, wildlife, etc.) has basically been an afterthought, IF that. I believe it's because by the time those consoles were released they were already rather outdated compared to PC capabilities, and they have been trying to keep up (and failing) ever since. That means when studios make games, even if they will be predominantly for PC, they can't get too "fancy" or else the difference between the PC version and the console versions would be too great and point out how bad the system is. I doubt they would get a license to sell such a bad port, and without console licenses they will not get budget. So in order to maintain the illusion of graphical improvements over time, things like AI and view distance were left on the cutting room floor.
    Think about A-Life in the STALKER games, which allows for AI-based enemy tactics, wildlife, and NPC interactions. It makes for a much more realistic, immersion-heavy game experience that is almost always different, since even when you are not around or on the map, the NPCs etc. do their own thing. Also think about the first FEAR game: the enemy tactics were amazing and felt real, but graphical quality was compromised to do so (it was worth it). Anyway, my point is that I hope this is used for such things moving forward. I know publishers would almost never approve lowering graphical quality just for better AI, since their market research says "graphics are the most important part of a game" (aka asking random people who have no idea what makes a game good, "what makes a game good?"). However, if this hardware becomes more common, they wouldn't have to make that trade-off.
    Lastly, I believe that MS and Sony only have maybe one more "next gen" console in them before the price/performance of a PC surpasses what they can make and sell. They already take a bigger and bigger loss on system sales each cycle and rely on licensing to make up for it. However, since they seem to use next-gen APUs now, that means if they can get, say, an AMD APU with RDNA3/4 and "AI cores", games may start making use of in-game AI again, since it won't have to be a trade-off and can be applied to console titles. They could also allow things like DLSS-type AI upscaling to be taken off the GPU and given to the CPU, perhaps. I see APUs being the main go-to in the future. AMD has the head start, and chiplet stacking/3D cache can make them extremely powerful. I also see dedicated APU motherboards that have both system RAM and VRAM slots, which will allow for more upgrade paths and less waste. Yes, a mobo will cost like $300+, but you won't need the whole GPU PCB, and there could even be some performance gains by having all of that on the mobo instead of having it all go through PCIe slots.
    Anyway, this is good news I think! There is also really promising potential for other things, but that is a secret, as I am currently working on something that would benefit greatly from such a thing. It would also be nice to have offline home assistant/automation computing more readily available to more people, instead of having everything that happens in their home get sent to an Amazon server to be analyzed and archived just so it can play a song when you ask it to.
    This is possible now if you have a server with a GPU (like me), but it's not supported very well as far as software choices go. If it wasn't such a weird thing to set up, I'm sure there would be many more options. I will conclude here, thanks for the video!! I'm looking forward to hearing more on the new "Zen 4D" and how RDNA3 is evolving. I'm not keeping up with most of the main outlets because they are getting annoying, so I'm counting on you to keep me up to date! :-D Have a great weekend!!

    • @HighYield
      @HighYield  Рік тому +3

      I remember "PhysX" very well. At some point Nvidia thought everyone would have a dedicated physics card in the future. But unlike Nvidias proprietary API, I'm sure AI engines will make their way into most computer chips eventually.

  • @hellraserfleshlight
    @hellraserfleshlight Рік тому +1

    The question is, with local AI processing, will applications stop sending PII to the cloud to be processed and cataloged, improving user privacy, or will it just save Google, Facebook, Amazon, Microsoft, and others money on processing data, letting them harvest more "polished" PII?

  • @mapp0v0
    @mapp0v0 Рік тому

    What are your thoughts on BrainChip's Akida?

  • @6SoulHunter9
    @6SoulHunter9 Рік тому

    I am about to have lunch and that sandwich looked so tasty that it distracted me from the topic of the video xD

  • @granand
    @granand Рік тому

    Okay, should I be buying an Apple M2 mini or laptop, or a Windows-based Intel i7 or Ryzen? I am a developer and need lots of RAM, CPU, DDR memory and the ability to work with databases and programming IDEs like Visual Studio and Anaconda.

    • @HighYield
      @HighYield  Рік тому

      Honestly, that's hard to say. If you need lots of RAM, building a system yourself can be much cheaper. Would you rather work on macOS, Windows or Linux? So many questions.

    • @granand
      @granand Рік тому

      @@HighYield Thank you for the response. I work with a lot of SQL databases, Visual Studio, Anaconda and stuff like that, GIMP sometimes, and I'm not into gaming. Importantly, I am on contract and move around the country, so I can only carry something portable. I am fine with an M2 mini or other compact desktops, as I can buy and discard cheap used monitors. Yes, I am conversant with Linux and Windows, but most of my work is based on Microsoft.

    • @SirMo
      @SirMo Рік тому

      I'm a developer who used to use Macs. But x86 is still the king. Give Pop!_OS a try; it runs amazingly on AMD hardware. You get the best of both worlds: an OS as productive as macOS and the ability to choose from the vast array of hardware available on PC that can keep any power user happy. I made the switch a few years back and I'm never going back to Mac.

  • @petershaw1048
    @petershaw1048 Рік тому

    This week I built an AMD system based on the x670e-Pro chipset (Pcie5) with an 8-core processor. When they come out, I will drop in a CPU with Ryzen AI ...

  • @janivainola
    @janivainola Рік тому

    Would be very cool with rpg games where the plot is set up, but AI follows the actions and style of the player to update the story during play. Making each playthrough unique...

  • @mtunayucer
    @mtunayucer Рік тому

    Notably, the Apple A11 Neural Engine was never used outside of Face ID; Apple made the Neural Engine effectively public starting with the A12.

    • @wawaweewa9159
      @wawaweewa9159 Рік тому

      Apple is gay

    • @HighYield
      @HighYield  Рік тому

      Really, only Face ID? Didnt know that but it kinda makes sense.

    • @mtunayucer
      @mtunayucer Рік тому

      ​@@HighYield Its more like only Apple could use A11 NPU, Animoji also used it just looked it up.

  • @e.l809
    @e.l809 Рік тому +1

    AMD should work to make it compatible with the ONNX format (by Microsoft); it's open source and supports a lot of hardware. It's the beginning of a "standard" for this industry.
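    For context, exporting a model to ONNX from PyTorch is already a short step; the toy model below is only for illustration:

    ```python
    import torch
    import torch.nn as nn

    # Toy two-layer model standing in for whatever network you want to deploy.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
    dummy_input = torch.randn(1, 128)

    # Export to the vendor-neutral ONNX format; any runtime or backend that
    # speaks ONNX (CPU, GPU, or an AI engine) can then pick the model up.
    torch.onnx.export(model, dummy_input, "model.onnx",
                      input_names=["input"], output_names=["logits"])
    ```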

  • @MostlyPennyCat
    @MostlyPennyCat Рік тому +1

    I wonder if AMD's HSA (Heterogenous System Architecture) can rise from the grave now.
    Seems the perfect fit for adding AI inference to your code?

  • @Alauz
    @Alauz Рік тому

    Would be nice to have A.I. accelerated graphics replacing traditional raster tech in the near future. Maybe full Path-Traced graphics with A.I. accelerators can make huge GPUs unnecessary and we can simply use APUs and re-shrink the ultimate gaming machines to the size of watches.

  • @briancase6180
    @briancase6180 Рік тому

    Sheesh, it's about time. Apple and Google have had "neural engines" for years. Apple's new M-series SoCs have also good AI accelerator blocks.

  • @jitterrypokery1526
    @jitterrypokery1526 Рік тому

    Any news about Apple's rumored M2 Pro and M2 Max refresh?

    • @HighYield
      @HighYield  Рік тому

      M2 Pro & Max have the exact same 16-core NPU as the base M2 model.

  • @woolfel
    @woolfel Рік тому

    I spent the weekend benchmarking the Apple M2 Max and its newer ANE. For DenseNet-121, it can do over 700 FPS versus 100 FPS on the GPU. It's taken AMD far too long to add tensor processors.

  • @OnyxLee
    @OnyxLee Рік тому

    Is it going to replace tensor cores?

    • @HighYield
      @HighYield  Рік тому +1

      No, its basically something similar, not a replacement.

  • @theencore398
    @theencore398 Рік тому

    RX-DNA is something I can see happening in the near future, imo this is too good of an opportunity to miss for naming a GPU

  • @Powerman293
    @Powerman293 Рік тому

    Didn't Intel Icelake have AI accelerators? I may be mistaken though.

    • @HighYield
      @HighYield  Рік тому +1

      IIRC, Ice Lake had AVX-512 and specific Deep Learning libraries to speed up AI workloads (called "DL Boost"), but not dedicated AI hardware.

  • @pichan8841
    @pichan8841 Рік тому +1

    Is that a grandfather clock running off camera?

    • @HighYield
      @HighYield  Рік тому +1

      You hear a ticking sound?

    • @pichan8841
      @pichan8841 Рік тому +1

      @@HighYield Actually, I do. Not constantly, though... Am I hearing things? e.g. 2:40-2:49, 3:03-3:13 or 3:43-4:11... Grandfather clock!

  • @Endangereds
    @Endangereds Рік тому

    Since we can increase system RAM, if this tech is harnessed well and can compete and give outputs similar to RTX 4090 cards, in terms of AI only, that would be great.

    • @Le4end
      @Le4end 9 місяців тому

      It definitely won't have 4090 fluidity. Maybe 3080

  • @AndrewMellor-darkphoton
    @AndrewMellor-darkphoton Рік тому

    I have a feeling this is going to be obsolete in five years when they come up with non-von-Neumann AI. The linear algebra accelerators in CPUs and GPUs are still pretty competitive, because I don't think programmers want to work with the ASIC, or they might need a more complex algorithm.

  • @ShyFx8
    @ShyFx8 Рік тому

    Interesting. This with artificial intelligence sounds nice and great, if it is programmed and used correctly. Can make cpu/gpu more efficient. Clearly this is part of the "internet of things" where everything is connected. But not many people think that artificial intelligence is actually fallen angel technology.

  • @wilmarkjohnatty4924
    @wilmarkjohnatty4924 Рік тому

    I'd like to understand how the strategies deployed by AMD will compare to NVDA, and maybe Broadcom with RISC/ARM, and is this why NVDA tried to buy ARM? There is a hell of a lot of hype about NVDA; are they likely to live up to it? What will AI do to the already seemingly dying INTC?

  • @dakrawnik4208
    @dakrawnik4208 Рік тому

    Cool, but what's the killer app??

    • @samghost13
      @samghost13 Рік тому

      You did not understand it if you ask that.

  • @yogiwp_
    @yogiwp_ Рік тому

    Why don't we get this accelerator on desktop chips?

    • @HighYield
      @HighYield  Рік тому +3

      Because Phoenix is just the first step and since AI can save battery life, its more useful on mobile devices. But I'm sure we will get AI Engines on desktop CPUs in the future.

    • @SirMo
      @SirMo Рік тому +2

      Because you aren't as concerned with battery life on a desktop PC, so using brute force approach works well enough. Though I'm sure we will see this on desktop PC's at some point.

  • @flytie3861
    @flytie3861 Рік тому

    What about ai in smartphone chips?

  • @yuan.pingchen3056
    @yuan.pingchen3056 10 місяців тому

    There are still no benchmark results for 'Ryzen AI' to this moment.

    • @yuan.pingchen3056
      @yuan.pingchen3056 9 місяців тому

      @@blue-lu3iz So you mean it's none of NDA's business?

  • @NaumRusomarov
    @NaumRusomarov Рік тому +4

    I wonder how this is going to be exposed to the OS and software. I'd like them to make this configurable through the compilers so that devs could use the AI cores if available.

    • @HighYield
      @HighYield  Рік тому +4

      I also hope they will provide open APIs.

    • @NaumRusomarov
      @NaumRusomarov Рік тому

      @@HighYield that would be spectacular! :-)

    • @SundaraRamanR
      @SundaraRamanR Рік тому

      @@HighYield it's AMD, so they probably will

  • @TerraWare
    @TerraWare Рік тому +1

    We need ai shader compilation to get rid of stutters.

    • @SirMo
      @SirMo Рік тому +1

      That's a developer issue. Not really something you can fix in hardware. Some games do it correctly.

  • @JKTPila
    @JKTPila Рік тому

    Is this the AMD 7040 series?

    • @HighYield
      @HighYield  Рік тому

      Correct, Phoenix is the Ryzen Mobile 7040 series.

  • @larsb4572
    @larsb4572 Рік тому

    A big use will be night-to-light: perfect sunny days even in pitch black. What is black for the human eye is just nuances of dark for the AI with a decent optic. As such, you could implement it in the windshield and side windows of your car so that at night you get 240 fps AI sunshine at 1 am during pitch-black driving. On the phone too: just hold up your phone, or put it in a headset, to see around you while underground or outdoors while it's dark.

  • @RaidenKaiser
    @RaidenKaiser Рік тому

    I am concerned that if every tech company gets in on AI and it all backfires what the fallout will be and how they will try to make consumers pay for it to bail them out.

  • @terjeoseberg990
    @terjeoseberg990 Рік тому

    What’s the difference between this and MMX?

    • @HighYield
      @HighYield  Рік тому

      Do you mean Intel's XMX on their Arc GPUs? If so, that's very similar to AMD's XDNA engine, both are dedicated AI-Engines that accelerate the most common ML calculations.

    • @terjeoseberg990
      @terjeoseberg990 Рік тому

      @@HighYield, No. I mean the ancient MMX from back in the olden days. Isn’t MMX great for performing high speed matrix operations?
      What’s the difference between MMX and XMX?

  • @stevenwest1494
    @stevenwest1494 Рік тому

    Interesting, but it'll need to be sold to the average PC user as something they need. That will need to be built into the Windows 11 scheduler, which for some reason is having problems with Zen 4 cores across 2 CCDs, and with Intel's big.LITTLE design. That's understandable with 2 different core designs for Intel, but CCDs have been an AMD standard for generations now. Also, Zen 5 will use a big Zen 4+ core and a smaller Zen 5 little core. It'll be another generation, Zen 6 at the earliest, before we see AI in desktop CPUs from AMD.

  • @gstormcz
    @gstormcz Рік тому +2

    There is always some hype.
    Not always being bad.
    I think it depends on both demand and chip design possibilities.
    I just don't know if those AI chip cores aren't simply what was otherwise listed in chip specs as AVX, MMX and other CPU extensions, if I have that right.
    AMD calling on software devs to make good use of those AI areas of the chip really looks like another attempt to make use of ray-tracing or other cores. It's AMD's business to sell what they make.
    I remember the Intel Pentium MMX having some extensions, claiming better gaming performance, but when you looked at an AMD chip one gen later, you usually found those extensions there too, sometimes with either better pricing or simply higher raw performance over Intel.
    "I just hope my computer won't watch me for every single action one day in future, giving me advice to live better, faster, more efficient and doing more things at once, asking only plug me into grid and slap cooler on my head.
    Turn me off if I become idle but consuming too much." (xd)
    When I played chess with the computer, it was tough at medium difficulty on PCs from the 8086 up to the 486. 3D shooter bots in Quake 1 or 3 were beatable a bit higher, being quite fast, dexterous and accurate. Playing vs bots in World of Warships seems quite easy most of the time; they sometimes get stuck at islands (no complaint about the devs) and are mostly not that devastating as gunners, but they already change course, speed, etc. It is still programmed; having bots with performance equal to a human is not the goal of the co-op mode there (IMHO), as many players prefer a relaxed, easier game over PvP.
    But I can imagine AI could make a fair bot enemy, either matching or surpassing player skills, and teach what to improve in a passive way or with guidance/AI tips.
    Finally, AI could just teach car drivers how to drive well without complete automation of it. I personally don't seek AI driving my life; some Google results on my casual searches are enough.

    • @HighYield
      @HighYield  Рік тому +3

      I think its important to make the AI engine accessible and with time, real use cases will appear.

  • @Z0o0L
    @Z0o0L Рік тому

    Maybe they can use this AI for price-finding that isn't ridiculous for the 7000 series CPUs...

  • @flaviusradac4602
    @flaviusradac4602 Рік тому

    Basically AI is not well suited to Windows 10/11, but if Windows 12 comes with software integration between the OS and the AIE processors... we will see a huge revolution in processing data and information. When playing games, your computer will know straight away the right resolution and quality to play the game at before installing... or YouTube will know the perfect resolution and internet speed for your videos and additional content... and in Microsoft Office: Excel predicting from your data input, or Word correcting your spelling... and if Microsoft acquires ChatGPT, plus software integration with the AIE processors... then it will be a feast.

    • @zbigniew2628
      @zbigniew2628 Рік тому

      Hah, most of it is easily done without AI. It's just not worth implementing now, because it needs a few seconds of thought from a user.
      Some people are braindead enough, thanks to other apps and time or focus eaters... So you don't need AI to make them even more shallow.

  • @heinzbongwasser2715
    @heinzbongwasser2715 Рік тому

    Nice

  • @SmartK8
    @SmartK8 Рік тому

    I want my CPU, GPU, APU, and QPU (Quantum Processing Unit).

  • @jmtradbr
    @jmtradbr Рік тому

    Nvidia has had this for several years already. AMD needed it.

  • @IARRCSim
    @IARRCSim Рік тому +1

    Are AI processors going to be programmed by a specialized programming language like GLSL or OpenCL for GPU's? I hope they get standardized soon so software can take advantage of the hardware even if it is from various different APU or AI hardware producers.

  • @6XCcustom
    @6XCcustom 6 місяців тому

    The extremely rapid AI development, in the form of both software and hardware,
    implies that the hardware must be replaced much faster now.

  • @MK-xc9to
    @MK-xc9to Рік тому +1

    It seems Meteor Lake is delayed again and may even be scrapped (at least for desktop) due to the lack of high CPU frequency; instead there may be another Raptor Lake refresh. Raptor Lake itself wasn't planned and is only a refresh of Alder Lake. Maybe we will see Meteor Lake on mobile in 2023, but that depends on "Intel 4", which still has some issues but may be good enough for mobile.

    • @HighYield
      @HighYield  Рік тому

      Yes, Meteor Lake is hanging in the ropes right now, but I still think we might see a mobile version this year.

  • @icweener1636
    @icweener1636 Рік тому +1

    I want an AI that will help me get rid of Noisy neighbors

  • @ps3301
    @ps3301 Рік тому

    AMD must adopt the soc design.

  • @moozoowizard
    @moozoowizard Рік тому

    I don't see what this gives over just using the GPU. Also, I think AI engine = DSP, not really AI. Will it support ONNX, PyTorch, TensorFlow? Will Stable Diffusion, Whisper, GPT-2, etc. run on it? I looked at the AI Engine examples and they were signal processing. That's not "AI", it's DSP.

    • @SirMo
      @SirMo Рік тому

      For training workloads you'd still want to use a GPU. But for inference this should provide a significant boost in efficiency. Efficiency is the name of the game. Your CPU idling while you're on a Zoom call with the AI engine doing the blurring of the background for instance. Or the AI engine removing background noise from your audio.

    • @moozoowizard
      @moozoowizard Рік тому

      @@SirMo But for the die space you could add another CPU core or additional GPU cores (for an APU). The AI engine seems to just complicate matters with something in between, and then you need a third driver to run it. You don't want to run Stable Diffusion inference (creating images) on a CPU; it's very slow. And I'm not convinced running something like Nvidia Broadcast (inference again) on an AI engine has any advantage. So you're just left with a small number of DSP-type use cases.

  • @pedro.alcatra
    @pedro.alcatra Рік тому +1

    That's not a big deal for 90% of home users, for sure. But wait till they make a partnership with Unreal and we start seeing it on NPCs or something like that lol

  • @user-qr4jf4tv2x
    @user-qr4jf4tv2x Рік тому +1

    can't wait for my cpu to have existential crisis

    • @HighYield
      @HighYield  Рік тому +1

      So you are saying it can run crysis?!

  • @heinzbongwasser2715
    @heinzbongwasser2715 7 місяців тому

    will copilot use this engine?

    • @HighYield
      @HighYield  7 місяців тому +1

      Most likely yes, there's a rumor Co-pilot will require on-chip AI Engines with 45+ TOPS of ML performance.

    • @heinzbongwasser2715
      @heinzbongwasser2715 7 місяців тому

      @HighYield Are there competing instruction sets, or is it likely that a certain standard wins?

    • @6XCcustom
      @6XCcustom 6 місяців тому

      @@HighYield How many TOPS will Windows 12 require to work smoothly?

  • @procedupixel213
    @procedupixel213 Рік тому +1

    Yeah, AI hardware is definitely mass producing fast food. Or even junk food, when you consider that precision can get as low as just two bits per coefficient.

    • @HighYield
      @HighYield  Рік тому +1

      I honestly think my analogy isnt that far off :D

  • @alexamderhamiltom5238
    @alexamderhamiltom5238 Рік тому

    My heart broke when I saw that DSP; I shouldn't have upgraded so soon.

    • @HighYield
      @HighYield  Рік тому

      Why did your heart break? :(

    • @alexamderhamiltom5238
      @alexamderhamiltom5238 Рік тому

      @@HighYield Because a DSP is what I really needed back then; processing digital signals invites delay no matter how strong the raw performance, and with a DSP that delay would decrease significantly.

    • @HighYield
      @HighYield  Рік тому

      Ah now it makes sense.

  • @PurpleWarlock
    @PurpleWarlock Рік тому

    Wouldn't enemies learning player patterns require a ridiculous amount of data and cycles in order to find something? Usually AI enemies just need some extra information on the player to get a leg up. It's not like a game can't let enemies get headshots more often.
    Anyway, that's my take. I'm just a measly web dev who works on old PHP scripts.

    • @teapouter6109
      @teapouter6109 Рік тому +1

      Have you played FEAR? That's what good enemy AI looks like.
      Much better than what we have today, and that game is so old.
      I'm not any kind of developer, but I know that video game AI has made pretty much 0 progress in, like, forever.

    • @vikhr
      @vikhr Рік тому

      @@teapouter6109 "video game AI has made pretty much 0 progress in, like, forever"
      Because people keep complaining about the AI being "too hard", there's essentially no market for it other than the more competitive gamer, but that kind of person will probably play against other players instead of AI.
      I think that playing a game like League of Legends just with highly advanced AI teammates/enemies would be nice, since you wouldn't have to deal with trash teammates/enemies, but at the same time it would be really weird to have played a game with "bots" that felt like a normal game with humans. Something like this would definitely kill multiplayer for a lot of people, at first it would be weird but over time I think that many would choose playing with the AI over the occasional stupid humans, especially after raging hard.

    • @teapouter6109
      @teapouter6109 Рік тому +1

      @@vikhr But we have difficulty sliders…

    • @vikhr
      @vikhr Рік тому

      @@teapouter6109 The point is that almost no dev team is going to waste magnitudes more time making a complex AI only for 0.1% of the playerbase to even try it out.

    • @teapouter6109
      @teapouter6109 Рік тому

      @@vikhr It’s not 0.1%
      It’s a feature of the game that casuals can turn off if they even notice
      Not every game has to be made for slack jawed losers who don’t know how to hold a controller