Intrinsic Functions - Vector Processing Extensions

  • Published 29 Jun 2024
  • Ooof! Well you guys asked for it, and it's up there in complexity for this channel! XD In this video I demonstrate how CPU Extensions can be used in your C++ programs via Compiler Intrinsic Functions to perform SIMD parallel operations. First I demonstrate how these extensions look and feel, then I implement the Mandelbrot Fractal generation code from my previous video.
    Source: github.com/OneLoneCoder/Javid...
    Patreon: / javidx9
    UA-cam: / javidx9
    / javidx9extra
    Discord: / discord
    Twitter: / javidx9
    Twitch: / javidx9
    GitHub: www.github.com/onelonecoder
    Homepage: www.onelonecoder.com
  • Science & Technology

COMMENTS • 375

  • @javidx9
    @javidx9  4 years ago +179

    I will also add that branching can stall a CPU, particularly as processors attempt to "guess" which bit of code will be executed next. If it guesses wrong, it has to effectively "go back", so removing branching is a good strategy for optimisation.
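
    A minimal sketch of that idea (illustrative only, not code from the video): the branchy form asks the predictor to guess, while the branchless form computes both outcomes and selects with a mask, which compilers typically lower to a conditional move.

        #include <cstdint>

        // Branchy: the CPU must predict which way the 'if' goes on every call.
        int32_t clamp_branchy(int32_t x, int32_t limit)
        {
            if (x > limit) return limit;
            return x;
        }

        // Branchless: build an all-ones/all-zeros mask from the comparison and
        // blend the two candidates, so there is nothing to mispredict.
        int32_t clamp_branchless(int32_t x, int32_t limit)
        {
            int32_t mask = -static_cast<int32_t>(x > limit); // -1 if x > limit, else 0
            return (limit & mask) | (x & ~mask);
        }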

    • @Astravall
      @Astravall 4 years ago +6

      @javidx9 ... hmm, did you ever calculate _c in your code example? Well, it is likely in the git repository ;) but in your video I think that part is missing (e.g. at 54:36). I just see comments on what you want to achieve ... or did I overlook that part?
      Nevertheless a cool video. A long time ago I programmed in assembler, but nowadays I'm relying on the C# compiler ;).

    • @dieSpinnt
      @dieSpinnt 4 years ago +11

      C++ 20 brings us [[likely]] and [[unlikely]] that may help to fix a branching conflict.
      See Jason Turner on this topic at ua-cam.com/video/ew3wt0g99kg/v-deo.html
      Thank you for the educating video javidx9. Stay safe.
      P.S.: Isn't it nice that meat-bags (humans) are still useful for optimization work and making videos? :)

    • @notnullnotvoid
      @notnullnotvoid 4 years ago +10

      @@dieSpinnt It's worth noting that the [[likely]] and [[unlikely]] attributes (or the equivalent compiler-specific markup you would have used prior to C++20, such as __builtin_expect) can't really help the CPU predict branches better. They mainly help correctly-predicted branches perform better, by hinting the compiler to, for example, reorder branches to reduce the overall number of jumps in the expected code path, improve the cache locality of the expected code path by laying it out contiguously in memory, or decide whether to use a branch vs. a cmov.
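
      A small sketch of the attribute in use (my own example, not from the video): the hint changes how the compiler lays out the two paths, not how the hardware predictor behaves.

          // C++20: mark the error path as cold so the compiler keeps the hot
          // path contiguous and jump-free in the common case.
          int sum_or_fail(const int* data, int n)
          {
              int sum = 0;
              for (int i = 0; i < n; ++i)
              {
                  if (data[i] < 0) [[unlikely]]
                      return -1;          // rare path, laid out out of line
                  sum += data[i];         // hot path
              }
              return sum;
          }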

    • @Bvic3
      @Bvic3 4 years ago +5

      Why is it so hard to find resources about the incredible branch prediction of processors? I only saw it mentioned in a talk by former Intel/Tesla chief processor architect Jim Keller.
      It's not just predicting what will be used next, but parallelising automatically by finding independent pieces, like initialising variables before the function is even called!
      It seems that there is a processor inside the processor doing those predictions live, depending on the current run time and other threads from other programs. The firmware can optimise machine code live, not just follow the .exe's machine code.
      And Intel wants to use neural networks to predict branching. That's how they manage to make code run faster without increasing clock speed.
      Also, there are professional-grade Intel compilers with licence prices higher than consumer processors that do much more advanced optimisation than the generic GCC compiler.
      It seems such a fascinating topic, but surprisingly secret.

    • @jon9103
      @jon9103 4 years ago +3

      @@Bvic3 If you're interested in how branch prediction works, you might want to read the Wikipedia article en.m.wikipedia.org/wiki/Branch_predictor. If you look at the reference section you'll find that much of the theory is freely available; what's secret usually isn't how things work, rather it's all the work that goes into implementing something that can actually put it into practice and be competitive.
      As to the Intel compiler vs GCC, a lot of that is marketing. Sometimes Intel does better, sometimes GCC does; it really depends on specifics (i.e. what code is being compiled, how performance is being measured, what system it's running on, what version of the compiler, what compiler options were selected, etc.). Naturally it's easy for Intel marketing to cherry-pick scenarios that put their compiler in the best light, so it's important to understand that your results will vary.

  • @nikola7377
    @nikola7377 4 years ago +210

    The most handsome C++ guy that ever walked this planet

    • @DlCartof
      @DlCartof 4 years ago +7

      If you like javidx9, check out ChiliTomatoNoodle too, for some more sweet C++ 😃

    • @mjthebest7294
      @mjthebest7294 4 years ago +10

      Javidx9 and ChiliTomatoNoodle are surely the best C++ teachers I ever had. :)

    • @maddjhdhdhdhd6917
      @maddjhdhdhdhd6917 4 years ago +15

      The Cherno is also a great guy

    • @leocarvalho8051
      @leocarvalho8051 4 years ago +1

      There's also the Chinese guy whose name I don't remember, and Jason

    • @92309858
      @92309858 4 years ago

      leo carvalho Thomas Kim or Bo Qian?

  • @NeilRoy
    @NeilRoy 4 years ago +134

    *head explodes* - I see a lot of basic programming videos online with all the usual fare, and they are very nice. But it's refreshing to see more advanced topics like this covered, and covered so well.

  • @hu-ry
    @hu-ry 4 years ago +66

    OMG HE HEARD OUR BEGGING FOR MORE SIMD COVERAGE! Blessed shall you be, you immortal being :D

  • @richardbloemenkamp8532
    @richardbloemenkamp8532 4 years ago +28

    Both your C++ and your teaching skills are absolutely excellent! They should give you a Bjarne Stroustrup Award.

  • @whirvis
    @whirvis 4 years ago +80

    Quite the intrinsic video! I haven't even watched the video long enough to know what it means, but I wanted to use that adjective! :)

  • @RichBoud1
    @RichBoud1 3 years ago +5

    I was watching this when I couldn't get to sleep. It is so fascinating that I kept watching and watching. It didn't help me get to sleep at all ;-). Thanks for a great lesson.

  • @malstroemphi1096
    @malstroemphi1096 2 years ago +24

    I believe "pd" stands for "packed double" and not "parallel double"

  • @tusharsankhala9521
    @tusharsankhala9521 4 years ago +14

    Please keep this series explaining the parts used in C++ SIMD going; your way of explaining is awesome. Thanks for putting such high-quality content out in public.

  • @ademarsj
    @ademarsj 1 year ago +2

    Interesting. I watched the video and thought, "Wow, what an amazing teacher, full of content", then I subscribed, checked the channel's videos, and realised that back at the beginning of my degree I had visited this same channel for beginner-level content; now, almost finishing the course, here I am, watching something far more complex. Moral of the story: the channel and its creator are both incredible.
    Thank you !!!
    Sorry for my poor English....

  • @Schwuuuuup
    @Schwuuuuup 4 years ago +43

    That was great - and now CUDA ;-)

    • @Mozartenhimer
      @Mozartenhimer 3 years ago +1

      Then PTX assembly.

    • @guiorgy
      @guiorgy 3 years ago +2

      Recently I had some C# code that would take about 50 minutes to execute. Running it in parallel got it to about 5 minutes. Using OpenCL (kinda like CUDA) got it to a little under 10 seconds xd
      Edit: And yes, I did run the code for 50 minutes xd

    • @Schwuuuuup
      @Schwuuuuup 3 years ago +2

      @@guiorgy I wish I had the time to bring myself up to speed with CUDA or OpenCL, but besides a little bit of Arduino programming I'm not a C programmer, and I struggle with basic concepts like 'const * char const' etc.
      I have a project for a gamified genetic algorithm which I wrote in Java years ago, and sometime I'll have to recode it in C, GPU computing and a powerful graphics engine.
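
      For reference, a tiny sketch of the two placements that usually cause the 'const * char const' confusion (illustrative declarations, not from the thread): const applies to whatever sits to its left, or to the right if nothing is on its left.

          const char* p1 = "hi";        // pointer to const char: *p1 = 'x' is an error
          char buf[] = "hi";
          char* const p2 = buf;         // const pointer to char: p2 = nullptr is an error
          const char* const p3 = buf;   // const pointer to const char: neither changes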

    • @aamirpashah7159
      @aamirpashah7159 11 days ago

      @@Schwuuuuup write it like this: const char * data; this will make more sense

    • @Schwuuuuup
      @Schwuuuuup 11 days ago

      @@aamirpashah7159 dude, my post is over 3 years old

  • @londonbobby
    @londonbobby 4 years ago +13

    A bit late to the party, but here goes... This video has inspired me to try SIMD programming. I have long been a fan of Mandelbrots and many years ago wrote a program to plot and explore them. Eventually I got myself a PC with an i7 processor and explored making my Mandelbrot program multi-threaded, which worked well. Now is the time to upgrade it again with SIMD. My CPU is still the same i7, which does not support anything past SSE4, but my compiler of choice is Delphi 6 (don't judge), which does not support intrinsic functions at all. However, it does have a built-in assembler which supports up to SSE2. So my task has been to translate all this C++ code into Pascal/assembler. I have eventually got this to work - a few radical changes were required - e.g. I only have 8 x 128-bit xmm registers to play with, so only 2 pixels at a time, but the speed-up is amazing. My program is rendering full-screen images in just a few hundred milliseconds (sometimes much less) where it was taking multiple seconds before. The most complex image so far has only taken slightly over a second to process. Thank you so much for explaining this in such simple terms that I was able to do this and learn about SIMD.

    • @javidx9
      @javidx9  4 years ago +6

      Hey that's great Bobby! SSE4 is no slouch, and I'm pleased you got it working to your expectations. I must confess I'd not considered the availability of intrinsics in other languages before, so this is quite interesting.

    • @Dave_thenerd
      @Dave_thenerd 4 years ago +1

      @@javidx9 C# recently added intrinsics via the System.Numerics namespace and they work pretty well. See: devblogs.microsoft.com/dotnet/hardware-intrinsics-in-net-core/
      and: docs.microsoft.com/en-us/dotnet/api/system.numerics?view=netcore-3.1

  • @ddummer
    @ddummer 4 years ago +7

    Just watched Linus tech tips where Anthony mentioned "AVX 512" support on a new macbook and since I recently watched this video I could say "Oh yeah... I understand that... in depth." :)

    • @obinator9065
      @obinator9065 3 years ago

      Yeah thing is... AVX512 takes a way bigger CPU hit, not worth it.

  • @dorjderemnamsraijav5182
    @dorjderemnamsraijav5182 4 years ago +2

    Javidx9, my hero. Why? He reads every single comment I write on this channel, and I'm sure that applies to everyone else. If I become a successful person one day, the reason will be your videos. They are very well made and he explains every single step he takes in his videos. I can't help with the financial part right now, but I will make sure to pay you back for what you did for me in the future, after I get a job. You are a very cool man (I can't even describe it with words). And thinking about what you did for me makes me so emotional.

    • @javidx9
      @javidx9  4 years ago

      lol, thank you Dode XD

  • @valkarion9
    @valkarion9 4 years ago +4

    I will have a Computer Architecture exam next week and a significant chunk of the material is about SIMD extensions but since it's a university course it's all theory, so it's nice to see it in action.

  • @inon4037
    @inon4037 4 years ago +11

    Exactly when I needed it! The timing couldn't be more perfect

  • @dorjderemnamsraijav5182
    @dorjderemnamsraijav5182 4 years ago +2

    Can't get enough of your videos javidx9! Love your videos man

  • @qwedschy8285
    @qwedschy8285 4 years ago

    Spending my summer break learning more about coding, but what can I say, these videos are too good!
    Thank you.

  • @LevPleshkov
    @LevPleshkov 4 years ago +2

    Probably the most valuable video on UA-cam so far!

  • @Gabriel38196
    @Gabriel38196 3 years ago +2

    Thanks for what you are doing for the community javid.

  • @simonegiuliani4913
    @simonegiuliani4913 4 years ago +1

    You are very gifted at explaining things.

  • @mycotina6438
    @mycotina6438 3 years ago

    Loved it! Simple, easy to understand yet complete. Thank you!

  • @karma6746
    @karma6746 4 years ago

    Your ability to simplify complicated stuff borders on the divine - Thank You!

  • @toma.a7146
    @toma.a7146 3 years ago

    It is nice to see more complicated stuff like this on UA-cam!

  • @CrazyAssDrumma
    @CrazyAssDrumma 4 years ago +2

    This video was so cool, and you explained it so well! Thank you so much!

  • @gosnooky
    @gosnooky 4 years ago +4

    I'm tired and I need sleep. Oh! A new javidx video.

  • @hl2mukkel
    @hl2mukkel 4 years ago +1

    Thank you so much for this video, I learned so much! You truly are a blessing for the C++ UA-cam community :-)

  • @wowLinh
    @wowLinh 4 years ago +5

    Amazing as usual!! I am simply amazed by the quality of your videos, topics and explanations.

    • @javidx9
      @javidx9  4 years ago +3

      Thanks wowLinh - It always pleases me when I see you comment - you've been around a loooong time now XD

  • @will1am
    @will1am 4 years ago +4

    By far the best video about this topic on YouTube overall.
    I only found videos that were much less detailed, or way too detailed on some specific parts.
    Cheers :)

  • @Cyberspine
    @Cyberspine 4 years ago +4

    Thank you for this video. I took a CS course in parallel computing this semester, and it demystified a lot of what makes high-performance code tick. This video helped me to connect what I've learned with what is going on in an IDE like Visual Studio.

  • @pythagorasaurusrex9853
    @pythagorasaurusrex9853 3 years ago +5

    Hell yeah! I tried those functions myself. Amazing tutorial. The speed gain is insane combined with using threads :) Thank you!

  • @Drunkenkatana
    @Drunkenkatana 4 years ago +1

    Thanks for your videos! I love the way you explain things!

  • @jsflood
    @jsflood 4 years ago +4

    Great video, it went from totally cryptic gibberish code to understandable logical code thanks to your elite explaining. Thank you !

    • @javidx9
      @javidx9  4 years ago

      XD err thanks John!

  • @jordanclarke7283
    @jordanclarke7283 4 years ago +1

    Mind blown! 🤯 Excellent video!

  • @tmbarral664
    @tmbarral664 3 years ago

    Bow to you, Sir, for the quality of your explanation. I love how your mind works.

  • @Mrav79
    @Mrav79 4 years ago +1

    So this eases the old-school approach of having an __asm {} block to hand-optimize logic the compiler couldn't, like we find in some older open-sourced game engines, replacing it with organized intrinsic functions that expose modern CPU instructions via modern compilers. Nice.

  • @rperanen
    @rperanen 4 years ago +1

    Another great video and a little trip down memory lane. A few years ago I had to do image processing on older hardware which did not have any GPU acceleration, and some algorithms had to be written with SIMD. After getting my mind wrapped around working in a vector-oriented mode, the project was surprisingly pleasant to code.

  • @darkobakula5190
    @darkobakula5190 9 months ago

    As always, the best content one can find on UA-cam!

  • @spinthma
    @spinthma 4 years ago +2

    Thank you for the insights to programming with intrinsics!

  • @Z0MBUSTER
    @Z0MBUSTER 4 years ago +1

    I showed one of your videos to my father to make him believe you were me; we look exactly alike. It took him a good minute to realise it wasn't me!!! We laughed so hard, keep up the good work =)

    • @javidx9
      @javidx9  4 years ago +1

      a doppelganger eh?

  • @Kollegah9997
    @Kollegah9997 4 years ago +2

    You sir are a beast! I'm a senior developer, coding for 10 years, and your knowledge is serious :)

  • @hippzhipos2385
      @hippzhipos2385 3 years ago

    You are an absolute legend. I was wondering how much experience one needs to have to get that good

  • @miguel_franca
    @miguel_franca 4 years ago

    Loved it! Clear explanations, awesome video

  • @arcadely
    @arcadely 3 years ago +2

    Ha! And here it is: the SIMD video I asked for earlier today, along with plenty of others who asked before that, because I didn't check the post date on the brute forcing video. Great stuff!

    • @javidx9
      @javidx9  3 years ago

      lol thanks arcade, I was gonna say something earlier, but I figured you'd find it! XD

  • @jajwarehouse1
    @jajwarehouse1 4 years ago +46

    It would be very interesting to see this programmed for CUDA processing.

    • @judgeomega
      @judgeomega 4 years ago +29

      GPU optimization information is rare and valuable. I don't know if he'd be willing to expose such secrets of the dark arts.

    • @michelefaedi
      @michelefaedi 4 years ago +3

      SIMD is better than CUDA in some cases. It doesn't need to transfer the data to the GPU, and the loop is faster with SIMD (it's complicated to explain why).

    • @karma6746
      @karma6746 4 years ago

      @@michelefaedi Oh but you do need to transfer data to the GPU anyways. GPU is the one that actually does the drawing, isn't it?

    • @michelefaedi
      @michelefaedi 4 years ago +1

      @@karma6746 Only if you consider graphics calculations. CUDA can run any algorithm you want, even ones that don't involve the video output directly.

    • @achtsekundenfurz7876
      @achtsekundenfurz7876 3 years ago

      Fractals and similar iterations sound like a close second to me.
      There's very little to be transferred into the GPU, and very little back out. Moving the heavy lifting into the GPU could be very profitable, even more so since modern GPUs tend to have 100s of cores, even the better consumer-grade models.
      Not exactly your everyday algorithm, but even if you want to save the data to disk, it looks very promising. If you don't, real-time animation in full HD is definitely on the horizon thanks to Cuda.
      For other stuff, it can be the other way around. Instead of freeing CPU cores, it could tie cores down with management duties (or even worse: tie ONE core and block the others out), which is probably a workload for which most modern OSes are not optimized (unlike processing in the CPU or pure output generation in the GPU).

  • @lincolnsand5127
    @lincolnsand5127 4 years ago +9

    I used to heavily use SSE2. Excited to see you cover AVX256

    • @truboxl
      @truboxl 4 years ago +4

      ohhhh.... that's why its called avx2 for short...

    • @ilieschamkar6767
      @ilieschamkar6767 1 year ago +1

      @@truboxl Now it makes sense to me as well, even though I wouldn't shorten something that's already short

  • @Antagon666
    @Antagon666 3 years ago +1

    When I first looked at the intrinsic code, I thought how complicated it was...
    But you explained it perfectly, something clicked, and I realized how easy it really is.
    Thanks to AVX, I'm getting double the performance in my Mandelbrot set renderer. The best thing is, it even works across multiple cores with an OpenMP directive. The performance on the CPU is as good as, if not better than, on the GPU.
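
    A rough sketch of that combination (a toy example with made-up names, not the renderer from the video): OpenMP splits rows across cores while AVX handles four doubles per iteration inside each row.

        #include <immintrin.h>

        // Compile with e.g. -O2 -mavx -fopenmp. 'width' is assumed to be a
        // multiple of 4 to keep the sketch short.
        void fill_gradient(double* out, int width, int height)
        {
            #pragma omp parallel for schedule(dynamic)      // one row per task
            for (int y = 0; y < height; ++y)
            {
                for (int x = 0; x < width; x += 4)
                {
                    __m256d xs = _mm256_set_pd(x + 3.0, x + 2.0, x + 1.0, x + 0.0);
                    __m256d v  = _mm256_div_pd(xs, _mm256_set1_pd(double(width)));
                    _mm256_storeu_pd(&out[y * width + x], v);   // unaligned store
                }
            }
        }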

  • @Dr10na1995
    @Dr10na1995 4 years ago +2

    So that is why these AVX flags are used in GCC! Thank you for the explanation :)

  • @ZOMGWTFALLNAMESTAKEN
    @ZOMGWTFALLNAMESTAKEN 4 years ago +1

    I know nothing about coding and have 0 experience; I do like these videos and hope they continue

  • @adamodimattia
    @adamodimattia 4 years ago +2

    Incredibly informative, the most hardcore but so enjoyable. Personally, I found masking not the hardest thing in it, instead it was the x positions and offsets, especially 52:04 - 52:12, what a... Fantastic stuff, thanks to your channel I really got more and more interested in more low level coding. The way you present it makes it much less scary, even the assembly code :)
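
    A small sketch of that x-position/offset pattern (my own illustration with assumed names, not the video's exact code): the four lanes start at x, x+dx, x+2dx and x+3dx, and each loop iteration advances all of them by 4*dx.

        #include <immintrin.h>

        __m256d walk_row(double x_start, double dx, int width /* multiple of 4 */)
        {
            // Per-lane starting positions: x_start + {0, 1, 2, 3} * dx.
            __m256d x_pos = _mm256_add_pd(_mm256_set1_pd(x_start),
                                          _mm256_mul_pd(_mm256_set_pd(3, 2, 1, 0),
                                                        _mm256_set1_pd(dx)));
            const __m256d x_step = _mm256_set1_pd(4.0 * dx);

            for (int x = 0; x < width; x += 4)
            {
                // ... feed x_pos into the per-pixel maths for columns x .. x+3 ...
                x_pos = _mm256_add_pd(x_pos, x_step);   // jump to the next 4 columns
            }
            return x_pos;   // returned only so the sketch has an observable result
        }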

  • @nishantraj8391
    @nishantraj8391 4 years ago +4

    Are you a wizard? I was trying to learn about this just recently, and then your video comes out. Thank You

  • @laureven
    @laureven 4 years ago +1

    Is there a space where we can give ideas for new videos (so we have a list) and then vote on which subject gets selected? Obviously this is your channel and your vote is final, but one thing is certain: you have a gift, and your delivery and your voice are in perfect balance: a very, very good teacher. We are very lucky you have time for these videos.

    • @javidx9
      @javidx9  4 years ago

      Hi Marcin - kind of, but mostly no - on the Discord we have a requests board, though fundamentally it requires that I feel confident enough about the subject matter to demonstrate it. I simply won't make videos about subjects I don't have a good understanding of or experience with; they wouldn't help anybody! Also, I often disappoint people with the timing of videos. Since this is a hobby for me, it helps if the video I'm making is related to some project I'm working on at the time. In the case of intrinsics, for example, I've been using them a lot in a different project which isn't a video, so it's fresh in my mind. But always happy to see a comment from your good self, a long-time supporter, and I thank you for that!

  • @alexkval
    @alexkval 4 years ago +1

    Thank you very much for such a detailed explanation 👍

  • @rhutajoshi9288
    @rhutajoshi9288 1 year ago

    This is so well explained!!
    Thank you!

  • @leonbutlermusic
    @leonbutlermusic 4 years ago +1

    Excellent explanation

  • @yuushabio4529
    @yuushabio4529 4 years ago +1

    Finally, a video on UA-cam i can relate to 😆

  • @benjaminshinar9509
    @benjaminshinar9509 3 years ago

    I will need to watch this again in the future.

  • @jayasribhattacharya2048
    @jayasribhattacharya2048 4 years ago +3

    You are just awesome. I have learned many things from your videos. 😍😀 thank you so much 😊.

    • @javidx9
      @javidx9  4 years ago

      Thanks Jayasri!

  • @peterbonnema8913
    @peterbonnema8913 4 years ago

    Yes! This is great. More advanced topics please!!

  • @duality4y
    @duality4y 4 years ago +3

    need more of this

  • @motbus3
    @motbus3 4 years ago

    still the best c++ videos

  • @josedejesuslopezdiaz
    @josedejesuslopezdiaz 4 years ago +1

    thank u for your amazing content.

  • @zubble7144
    @zubble7144 4 years ago +2

    It might be instructional to add the benefit of using intrinsics by showing a side-by-side video of the fractal generations. IOW, a "what is there to gain" for all your extra coding efforts. Well done, I have recommended this on IDZ

    • @javidx9
      @javidx9  4 years ago

      Hi Zubble and thanks - in principle this was a follow-up to the previous video, which did show the difference with/without intrinsics; it's just that that one did not show the intrinsic code in detail.

  • @JackPunter2012
    @JackPunter2012 4 years ago +1

    Great video as always!
    For those who want a more detailed look at the difference in timings for cache vs memory vs hard drive, I recommend the talk "Going Nowhere Faster" by Chandler Carruth at CppCon 2017.

  • @NolePTR
    @NolePTR 4 years ago

    I'd love more technical videos like this in the future. It's hard to get tutorials for this type of stuff.

  • @zrodger2296
    @zrodger2296 1 year ago

    I think I found a really cool problem that could use intrinsics, so I'm excited. A couple of other optimizations and I'm aiming to solve out to 1 million instead of grinding it out to 50 thousand or so. Great video!

  • @akhial
    @akhial 4 years ago +1

    Awesome! Thanks for this!

  • @christophfriedrich5092
    @christophfriedrich5092 4 years ago +2

    Love your vids. Even if I don't understand them the first time I watch because I'm just a simple web developer (PHP, NodeJS) but the way you explain helps me to understand more of our computers and the way programs work (and I hope they make me a better programmer - even on simpler stuff ^^)

  • @passwordmaze5789
    @passwordmaze5789 3 years ago

    Great video!

  • @notnullnotvoid
    @notnullnotvoid 4 years ago +1

    I'm not quite sure why you talked about cache locality when you did, as it's unrelated to the loop unrolling optimization. The cache behavior of the loop is the same either way - the reason it gets unrolled is just to reduce loop overhead (fewer compare and branch instructions per iteration). Other than that, this seems like a great video for introducing people to SIMD programming. Your explanations of lanes vs. register width, masking, and the utility of intrinsics in general, are all very clear, concise, and thorough. Good stuff!

    • @javidx9
      @javidx9  4 years ago +1

      Thanks Not Null - I kind of agree with you. I wanted to fit in locality somewhere, and there is some truth to unrolling being advantageous to cache usage, for the reason you describe in fact - aside from branching having its own overhead which you want to reduce, and of course branch prediction being a factor, the branch test itself could potentially pollute the cache. SIMD stuff works best when streamed, and there are in fact cache organisation intrinsic functions to hint where the data should be moved to before the next set of instructions. Streaming of course works best with contiguous data in memory, and typically such memory is moved around "together". Once that extension pipeline is fired up, you want to cram as much data through it as possible, so I don't agree that it's unrelated, but I do concede it is secondary to the chaos branching can cause.
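
      The "cache organisation" intrinsics mentioned there include prefetch hints and non-temporal (streaming) stores; a small sketch, with the prefetch distance and alignment assumptions being illustrative only:

          #include <immintrin.h>
          #include <cstddef>

          // Scale a contiguous array: the prefetch pulls upcoming input towards
          // the core, and the streaming store writes results without filling the
          // cache with data we will not read back. 'out' must be 32-byte aligned
          // for _mm256_stream_pd, and n is assumed to be a multiple of 4.
          void scale(const double* in, double* out, std::size_t n, double k)
          {
              const __m256d vk = _mm256_set1_pd(k);
              for (std::size_t i = 0; i < n; i += 4)
              {
                  _mm_prefetch(reinterpret_cast<const char*>(in + i + 64), _MM_HINT_T0);
                  _mm256_stream_pd(out + i, _mm256_mul_pd(_mm256_loadu_pd(in + i), vk));
              }
              _mm_sfence();   // make the streamed stores visible before returning
          }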

    • @notnullnotvoid
      @notnullnotvoid 4 years ago

      @@javidx9 Doesn't the loop condition (at least in this case) just come down to a compare instruction and a conditional jump on the relevant flag bit? I don't see how that would pollute the cache, but I might be missing something.

    • @javidx9
      @javidx9  4 years ago

      @@notnullnotvoid On powerful processors such as desktop ones, it's not quite that simple. Yes, the condition is based off a single bit, but two things: firstly, the pipelined nature of the processor requires branch prediction, and flushing out the pipeline is undesirable for performance; secondly, the arguments for the condition itself may require memory to be read, thus potentially polluting the cache.

  • @darthxertor3617
    @darthxertor3617 4 years ago +3

    So THIS is an actual practical use of bit masks. Very good to know, thank you!

  • @TOMMYMAJORS
    @TOMMYMAJORS 1 year ago

    incredible video, thank you

  • @mido09z
    @mido09z 4 years ago +4

    Great video and amazing channel. I just want to point out a small note at 41:57 which is n < iterations is not the same as iterations > n because of the case where n = iterations

    • @javidx9
      @javidx9  4 years ago +3

      This is a good point Mohamed - combined with the way the loop is structured now, I think this approach always does one further iteration compared with the reference function.

    • @achtsekundenfurz7876
      @achtsekundenfurz7876 3 years ago +1

      (1) n < iterations
      (2) iterations > n
      If n = iterations , both expressions are false, since both comparators exclude equality. They are in fact the same.

  • @greob
    @greob 4 years ago +1

    Fascinating stuff! I was hooked the whole time. Thanks for sharing!

  •  4 years ago +1

    You’re such a smart dude.

  • @atrumluminarium
    @atrumluminarium 4 years ago +1

    Yes! Thank you for the video

  • @danielkrajnik3817
    @danielkrajnik3817 3 years ago +18

    31:15 just a detail, but I think 'p' in '_mm256_mul_pd' stands for 'packed' not 'parallel'

    • @axelanderson2030
      @axelanderson2030 1 year ago +1

      What does epi stand for? I assume 'something packed integer'

    • @orbik_fin
      @orbik_fin 1 year ago +1

      @@axelanderson2030 "extended packed integer", for 128+ bit registers, because _pi* was already taken for MMX.
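
      A quick illustration of those suffixes (my own snippet, not from the video):

          #include <immintrin.h>

          __m256d d = _mm256_set1_pd(1.0);   // _pd    : packed double, 4 x 64-bit floats
          __m256  s = _mm256_set1_ps(1.0f);  // _ps    : packed single, 8 x 32-bit floats
          __m256i n = _mm256_set1_epi32(1);  // _epi32 : extended packed 32-bit integers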

    • @axelanderson2030
      @axelanderson2030 1 year ago

      @@orbik_fin thanks

  • @GNARGNARHEAD
    @GNARGNARHEAD 3 years ago

    incredibly helpful, thanks :)

  • @danielkrajnik3817
    @danielkrajnik3817 3 years ago

    This is brilliant

  • @JanHorcicka
    @JanHorcicka 4 years ago

    Great video! Thank you very much.

  • @dozafixusa
    @dozafixusa 4 years ago +1

    At 49:40, it is also possible to use _mm256_extract_epi64 to get simple types out of a register again, which would get rid of the ifdef.
    Having done some intrinsic programming before, I think your video is an amazing resource on how to program with its quirks in mind.
    Well, all of your videos are an amazing resource - keep up the good work! :)
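
    A minimal sketch of that suggestion (my own example with an assumed lane index): the index must be a compile-time constant between 0 and 3.

        #include <immintrin.h>
        #include <cstdint>

        // Pull one 64-bit lane straight out of an AVX register instead of
        // bouncing it through memory or compiler-specific union members.
        std::int64_t third_lane(__m256i v)
        {
            return _mm256_extract_epi64(v, 2);
        }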

    • @javidx9
      @javidx9  4 years ago +2

      Cheers buddy. The problem I find with intrinsics is there are so many functions, but I've not found a sensible "high level" list of function categories XD so thanks!

  • @Spikehead777
    @Spikehead777 4 years ago +1

    Intrinsics look scary. They're not as scary now that I've seen this video!

  • @federicopanichi9874
    @federicopanichi9874 2 years ago

    nice, nice, nice !!!! More of those Hardcore videos. Pleeaaase :)

  • @Wayne-wo1wc
    @Wayne-wo1wc 4 years ago +1

    Thank you Dave

  • @47Mortuus
    @47Mortuus 3 years ago +4

    44:34 ++++
    You don't need to use the comparison mask to select/blend between '0' and '1'. Since 'all ones' is the two's complement representation of '-1', you can simply subtract the mask from your iteration counter (x + 1 == x - (-1) and x + 0 == x - 0).
    You could've explained the blend intrinsic with this code segment, going from where you were with your AND equivalent, but also showing off the trick I mentioned afterwards.
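
    A sketch of that trick in isolation (assumed names, AVX2 needed for the 64-bit integer subtract; not the video's code):

        #include <immintrin.h>

        // A true lane from the compare is all 1 bits, i.e. -1 as an integer, so
        // subtracting the mask increments exactly the lanes still iterating:
        // x - (-1) == x + 1 and x - 0 == x.
        __m256i bump_active_lanes(__m256i counters, __m256d z_mag2, __m256d limit)
        {
            __m256d lt   = _mm256_cmp_pd(z_mag2, limit, _CMP_LT_OQ);
            __m256i mask = _mm256_castpd_si256(lt);
            return _mm256_sub_epi64(counters, mask);
        }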

  • @leepro
    @leepro 1 year ago

    _c is missing in the video, but I found it in the GitHub repo. Thanks for the video!!!

  • @dieSpinnt
    @dieSpinnt 4 years ago +2

    !!!Beware!!! Don't ship into dangerous waters.
    Rule 16
    Do not use identifiers which begin with one or two underscores (`_' or `__').
    > The use of two underscores (`__') in identifiers is reserved for the compiler's internal use according to the ANSI-C standard.
    > Underscores (`_') are often used in names of library functions (such as "_main" and "_exit"). In order to avoid collisions, do not begin an identifier with an underscore.
    via www.doc.ic.ac.uk/lab/cplus/c++.rules/chap5.html
    Just my two nit-picky cents:P
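
    A quick illustration of the rule (my own example):

        int frame_count;     // fine: ordinary user identifier
        int _frame_count;    // avoid: reserved in the global namespace
        int __frame_count;   // avoid: a double underscore is reserved everywhere
        int _FrameCount;     // avoid: leading underscore + capital is reserved everywhere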

  • @JonnyRobbie
    @JonnyRobbie 4 years ago +1

    Jesus Christ, you've outdone yourself. But thank you, I like videos where I learn something new and this certainly exceeded that by a long shot.

  • @paulmoore7964
    @paulmoore7964 4 years ago +1

    One of the biggest issues today is that CPU % meters do not show stall time. So you can have a horrifically inefficient data layout and be running at 5% of CPU speed, but the CPU meter will show 100%. I am amazed that there is still no way in perfmon, VS, ... to see the real CPU load. I did not realize how truly huge the impact was.

  • @eopXD
    @eopXD 1 year ago

    Thank you for the video.

  • @Andrew90046zero
    @Andrew90046zero 3 years ago

    I think what there needs to be is a nice API that lets you use the SIMD extensions "agnostically", without needing to know which ones your CPU supports. The API would provide a way to manually leverage the registers in a more human-readable way without having to pay attention to choosing the right set for your CPU; it would just generate the right intrinsics for your system. You wouldn't need to think about whether the registers are 128, 256, or 512 bits wide. The system would pack in the data automatically and it's up to you to use it to process data in bulk.
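
    Something close to that already exists as std::experimental::simd from the Parallelism TS v2 (in libstdc++ since GCC 11); a rough sketch of how it reads, not the only option:

        #include <experimental/simd>
        #include <cstddef>

        namespace stdx = std::experimental;

        // native_simd<float> picks the widest width the target supports, so the
        // same source builds as SSE, AVX or AVX-512 without code changes.
        void scale(float* data, std::size_t n, float k)
        {
            using vf = stdx::native_simd<float>;
            std::size_t i = 0;
            for (; i + vf::size() <= n; i += vf::size())
            {
                vf v(&data[i], stdx::element_aligned);   // load one register's worth
                v *= k;
                v.copy_to(&data[i], stdx::element_aligned);
            }
            for (; i < n; ++i)
                data[i] *= k;                            // scalar tail
        }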

  • @rachelmaxwell4936
    @rachelmaxwell4936 4 years ago +2

    An excellent video! Thank you for taking the time to respond to user feedback. I appreciate the details about masks and how to use them to perform logical operations. I've been learning x64 programming via "Beginning x64 Assembly Programming: From Novice to AVX Professional" by Jo Van Hoey and this is an incredible supplement to the C/C++ side of things.

  • @epimenide9i
    @epimenide9i 3 years ago

    Amazing, thanks!!!

  • @nonchip
    @nonchip 4 years ago +1

    i like how VS shows a small "

  • @timcain1418
    @timcain1418 1 year ago

    That was a very interesting video - you have a rare knack for hitting a happy medium in the conflict between "informative" and "comprehensible". Your vids are usually pretty entertaining too, so double thanks.
    I was wondering - having taken the fractal rendering so far with intrinsics and multithreading, Could yoU Devise An even more hardcore strategy to get even higher performance?

    • @javidx9
      @javidx9  1 year ago +1

      Thanks Tim, the next stage would realistically be GPU computation; those SIMD devices can do this sort of thing orders of magnitude more quickly.

  • @dustfs
    @dustfs 2 months ago +1

    wonderful thank you!

  • @mika2666
    @mika2666 3 years ago +1

    Definitely liked and subscribed for this one, already had assembly and all the bitwise and mask stuff in school but this really helped me with how to convert complicated things into intrinsics 😄

  • @mikemontana7436
    @mikemontana7436 4 years ago +1

    EXCELLENT!!!!

  • @ristopaasivirta9770
    @ristopaasivirta9770 3 years ago +1

    The way to outsmart the compiler is to become the compiler!

  • @philtoa334
    @philtoa334 4 years ago +1

    So nice.