C++ Programming Tutorial 57 - Array Vs Vector Vs STL Array

  • Published 15 Sep 2024

COMMENTS • 38

  • @louvierejacques
    @louvierejacques 4 years ago +11

    Man, you are a stone-cold BOSS. Every single video of yours that I've seen is concise, informative, and well-delivered. Thanks!

  • @TheMazanec
    @TheMazanec 5 years ago +33

    Can't believe this video has so few views, it deserves much more. Thanks!

  • @videofountain
    @videofountain 4 years ago +1

    I heard what you said. Yet in the final minute, to help the student, you could say and write that passing std::vectors by reference is completely normal on many occasions, if not most. Passing by reference is commonplace and could be on the chalkboard.
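
    (A minimal sketch of the point above: passing a std::vector by const reference so no copy is made. The function name is illustrative, not from the video.)

    #include <iostream>
    #include <vector>

    // Read-only access: a const reference hands the vector to the function
    // without copying any of its elements.
    void printSum(const std::vector<int>& values) {
        int sum = 0;
        for (int v : values) sum += v;
        std::cout << "sum = " << sum << '\n';
    }

    int main() {
        std::vector<int> data{1, 2, 3, 4};
        printSum(data); // no copy of the elements is made
    }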

  • @princeofori-boateng6377
    @princeofori-boateng6377 2 years ago

    Hope I didn't miss 56 in the series. Very good series. Keep it up, Caleb!

  • @nilupulperera
    @nilupulperera 3 years ago +2

    Nicely summarized. Thank you very much.

  • @thedaranesianconfederation7221
    @thedaranesianconfederation7221 10 days ago

    For any beginner trying to optimise their code, I'd say:
    The fact that you're using C++ is enough optimization as it is.

  • @chimpspecialist
    @chimpspecialist 4 years ago +4

    Could just be me, but when you were talking there for a stretch, the info went in one ear and out the other.

  • @simonezuccarello6969
    @simonezuccarello6969 17 days ago

    Screenshot at 6:49: don't care about speed when choosing, as it doesn't really matter.

  • @hardikrajpal2410
    @hardikrajpal2410 3 years ago +2

    Here's a toast to the only code professor to use a blackboard ever.

  • @daxu9605
    @daxu9605 4 years ago +1

    C-style arrays are the easiest to copy: just memcpy. If you have a contiguous set of data and you know the # of elements, memcpy is extremely fast. No iteration involved.

    • @kaotony4068
      @kaotony4068 4 years ago

      memcpy does the iteration for you

    • @daxu9605
      @daxu9605 4 years ago +1

      Kao Tony, I'm not sure that's technically true. memcpy is unaware of your data type or structure; it views everything as bytes. If it iterates over anything, it iterates over the underlying bytes. I don't think that's iterating in the same sense as stepping through iterators.
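
      (A minimal sketch of the copy discussed in this thread; memcpy works on raw bytes and is only safe for trivially copyable element types.)

      #include <algorithm> // std::copy
      #include <cstring>   // std::memcpy

      int main() {
          int src[4] = {1, 2, 3, 4};
          int dst[4];

          // memcpy is byte-oriented: the size argument is in bytes, so we
          // multiply the element count by sizeof. Safe here because int is
          // trivially copyable.
          std::memcpy(dst, src, 4 * sizeof(int));

          // The typed alternative; for trivially copyable types, compilers
          // typically lower this to the same bulk byte copy.
          std::copy(src, src + 4, dst);
      }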

  • @harshdevmurari007
    @harshdevmurari007 1 year ago

    crystal clear

  • @user-cl8fz4wk8e
    @user-cl8fz4wk8e 8 months ago

    Hi @codebreakthrough, which of your classes would you recommend for someone moving from C and C# to C++? I have many years of experience with embedded C (10 years), many years with C#, and I understand OO.

  • @danishuddin9752
    @danishuddin9752 2 years ago

    Beautiful explanation! thank you man!

  • @knofi7052
    @knofi7052 1 year ago

    As a C developer I really never had such questions to answer...😉

  • @CodeWithPurpose4
    @CodeWithPurpose4 2 years ago

    This video is helpful, but how do I push_back a struct into a vector?
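
    (To answer the question above: a minimal sketch. The Student struct and its fields are made up for illustration.)

    #include <string>
    #include <vector>

    struct Student {
        std::string name;
        int score;
    };

    int main() {
        std::vector<Student> students;

        // Build the struct first, then push_back a copy of it...
        Student s{"Ada", 95};
        students.push_back(s);

        // ...or push back a temporary, which is moved into the vector.
        students.push_back(Student{"Grace", 98});
        // (From C++20, students.emplace_back("Grace", 98) also works
        // for aggregates like this one.)
    }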

  • @MorbidPasta
    @MorbidPasta 2 years ago

    goated series

  • @lonewolfp8802
    @lonewolfp8802 3 years ago +1

    Awesome 👍🏼

  • @sarthakgaba1583
    @sarthakgaba1583 4 years ago +1

    Brilliant explanation, Thank you!!

  • @guitarhax7412
    @guitarhax7412 5 years ago +7

    Like I said. Go vector or go home!!!

  • @xiaonaiheme
    @xiaonaiheme 2 years ago

    Done 🙃❤️

  • @codingwithanu9609
    @codingwithanu9609 4 years ago +1

    Nice 💖

  • @debapriyoray9522
    @debapriyoray9522 1 year ago

    Hello Caleb, nice video. One concern though: at one point in the video (6:00), the table says we cannot pass a vector by reference? We can pass a vector to a function by reference, right?
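
    (Yes, you can. A minimal sketch of both forms; the function names are illustrative.)

    #include <vector>

    // Read-only access: const reference, no copy of the elements.
    int firstElement(const std::vector<int>& v) { return v.front(); }

    // Mutating access: non-const reference, changes are visible to the caller.
    void appendZero(std::vector<int>& v) { v.push_back(0); }

    int main() {
        std::vector<int> v{1, 2, 3};
        appendZero(v);           // v is now {1, 2, 3, 0}
        int f = firstElement(v); // f == 1
        (void)f;                 // silence unused-variable warnings
    }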

  • @prakhar_pratyush
    @prakhar_pratyush 3 years ago +1

    1:14

  • @chrischoir3594
    @chrischoir3594 1 year ago

    @2:38 you are confused

  • @mygoodsir539
    @mygoodsir539 3 years ago

    9,999th viewer

  • @minRef
    @minRef 5 years ago +3

    The largest concentration of wrongness in the whole video:
    "Usually, problems with your application's speed are not going to come from what data type you used"
    Outdated "lessons" like this take YEARS to unlearn and cause untold amounts of ruin. This was forgivable in 1990's compsci textbooks, but not anymore.
    Meanwhile in reality:
    ua-cam.com/video/fHNmRkzxHWs/v-deo.html

    • @codebreakthrough
      @codebreakthrough 5 years ago +3

      I think you're taking this out of context. I understand the value of choosing the best algorithm or data structure. But choosing a standard array over a vector because it offers a minute potential speed or memory increase is an unnecessary "optimization".
      Yeah, if you're working with large-scale data or algorithms, shooting for something closer to log n than n! is an obvious best practice... but we are talking about 3 specific collections here (in case you didn't read the title). And I have a lot of beginners watching this stuff who worry about little things like using floats over doubles to save memory while they neglect various coding best practices that would make up for the difference.
      I'm working with data in this series that, regardless of the algorithm or data structure, is processed pretty much instantaneously. Apologies if I did not have the bigger picture in mind during the creation of this video. I talk about it more holistically in other videos and even have partnerships with companies helping people understand the importance of data structures and algorithms.
      Appreciate the input, just wish it were more relevant to the context.

    • @minRef
      @minRef 5 years ago +10

      I understand the argument, and I don't disagree with the intent to reduce the student's fear of just getting something started. However, the last 5 years have demonstrated that this approach is unfortunately counterproductive. I may be spending too much time trying to explain this, but I feel I have to.
      There is a fundamental difference between telling students
      "don't worry about whether you're using vector or std array to solve this particular problem"
      vs
      a spectacularly false statement like:
      "Usually, problems with your application's speed are not going to come from what data type you used"
      For a moment, forget about big-O analysis. If you test ANY real-world application on any processor, you'll find that nearly 100% of any delay is caused by various locality-hostile probing patterns; in other words, poorly chosen data structures for the problem at hand. These inevitably cause about 99% of CPU cycles to be wasted waiting on L2 cache misses.
      Every.
      Single.
      Piece of code.
      Out there.
      Don't take this statement on faith - you can try it for yourself. Request a 30-day sample of Intel's VTune (the free linux tool "perf" does not yet have reliable L2 measurement, only L1, which isn't as important but can demonstrate as well). Google's Carruth was very generous in the first vid I linked when he said at least "50%". In my experience I have only seen a handful of executables that don't burn almost every cycle on unnecessary L2 cache misses.
      If you're super time-constrained, maybe just watch part of this video from 31m10s to 41m50s
      ua-cam.com/video/rX0ItVEVjHc/v-deo.html
      It's the most viewed video on the entire CppCon channel for a reason.
      In the current decade, even most well-designed software modules waste most of their time doing nothing, and this is almost always caused by data structure locality, not algorithmic effects as some oversimplified textbooks would have us believe. (watch the 70 seconds from 25m55s to 27m09s of this Scott Meyers presentation: ua-cam.com/video/WDIkqP4JbkE/v-deo.html)
      So, Back to the statement that
      "Usually, problems with your application's speed are not going to come from what data type you used"
      It might reduce the student's fear of getting their feet wet. But what if some poor student remembers this line for years past this course and cements it into their brain? It's the equivalent of telling a scared child that "It's impossible to drown in water". It may serve a purpose and be kind of true in the moment, but it's clearly wrong to say, and can cause tragic results if taught at scale. It helps nobody to make statements that are unambiguously the complete opposite of reality, especially when these assumptions have such far-reaching implications.
      Re: "minute potential for speed increase"
      A 1.5-million-times speedup, as in a 150,000,000% increase in performance, is, sadly, not "minute potential" (referring to Mike Acton's example here ua-cam.com/video/GPpD4BBtA1Y/v-deo.html). This is not unusual in real-world code. A 1.5-million-times difference is the difference between a startup concept/product succeeding or being unusable. I would love to live in the world we were promised by crappy textbooks that said that only algorithm selection matters, and that data structure selection/optimization would only "squeeze out the last 20% of performance". That low number is a fantasy. If it were true, then people would be doing 3D simulations in MS Word, game engines would be written in Python, and Mike Acton and his friends wouldn't be called in to rewrite the burst compiler at Unity.
      (I could flesh this out here, but you'd be better served by keeping current with things like "CPU Caches and Why You Care" by Scott Meyers" ua-cam.com/video/WDIkqP4JbkE/v-deo.html)
      other youtube references
      ua-cam.com/video/yy8jQgmhbAU/v-deo.html
      ua-cam.com/video/rX0ItVEVjHc/v-deo.html
      As of 2019, most CS university courses, even at places like Stanford and MIT, are still quite behind the times in this regard. I'm not saying that traditional big-O should not be considered, but if you're looking into teaching this stuff (and among the best ways to learn is by teaching), then try this textbook. Read it while you still have time.
      csapp.cs.cmu.edu/
      and the accompanying CS213 class playlist
      ua-cam.com/video/4CpHpFu_KYM/v-deo.html
      Re: "I have a lot of beginners watching this stuff who are worrying about little things like using floats over double to save on memory"
      - good! Good programmers always play around and test hypotheses for themselves. (Do both and measure for themselves). At this stage, the curious students shouldn't be discouraged from occasionally measuring things, and if it makes less than a 100X difference, just continue. Big-O analysis is insufficient without real testing because intuition without data is almost always wrong!
      I don't mean to be super harsh, but I wish somebody had corrected my wrong assumptions when I was your age instead of me having to figure it out on my own.
      Best of luck!
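
      (A hedged illustration of the locality effect described above: both loops read the same elements and do the same arithmetic, but the row-major pass walks memory contiguously while the column-major pass strides across it, so it pays far more cache misses. Exact numbers vary by machine; measure for yourself.)

      #include <chrono>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      int main() {
          const std::size_t n = 4096;
          std::vector<int> m(n * n, 1); // one contiguous n x n matrix

          auto time = [](const char* label, auto body) {
              auto t0 = std::chrono::steady_clock::now();
              long long sum = body();
              auto t1 = std::chrono::steady_clock::now();
              std::cout << label << ": sum " << sum << ", "
                        << std::chrono::duration<double, std::milli>(t1 - t0).count()
                        << " ms\n";
          };

          // Row-major: consecutive iterations touch adjacent bytes (cache friendly).
          time("row-major", [&] {
              long long s = 0;
              for (std::size_t i = 0; i < n; ++i)
                  for (std::size_t j = 0; j < n; ++j)
                      s += m[i * n + j];
              return s;
          });

          // Column-major: each iteration jumps n * sizeof(int) bytes (cache hostile).
          time("column-major", [&] {
              long long s = 0;
              for (std::size_t j = 0; j < n; ++j)
                  for (std::size_t i = 0; i < n; ++i)
                      s += m[i * n + j];
              return s;
          });
      }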

    • @mytech6779
      @mytech6779 2 years ago

      @@minRef I think the short version is that memory latency, measured in CPU cycles, has grown massively over the last 3 decades. Memory bandwidth is very high, but random access is terrible. Recent-generation machines can do many computations in 0.5-5 cycles, but fetching data from main memory after a misprediction has a latency of over 3000 cycles; an algorithm would need to offer more than a 600x gain to cover this idle time. (A register move/copy is 1 cycle, L1 is about 3 cycles, L2 10 cycles, L3 50 cycles, very roughly and with variation. Storage drive latency is several million cycles, and anything network is continental drift.)
      Some may ask how a calculation can take less than one cycle on a single core. It is due to vector math units that can compute multiple items in a single operation (generally 64-byte vector registers), and separately to good use of pipelining, which allows an integer unit to operate at the same time as the floating-point unit. There are also numerous physical registers that get dynamically allocated to the externally named registers, such that there can be multiple copies of EAX, each preloaded with different values for upcoming calculations and to handle the needs of branch prediction.
      Just 10% misprediction by the CPU branch predictor can easily cause a 5x increase in overall computation time.
      Now it should be said that on modern general-purpose machines (x86-64, ARM, etc.) the programmer, even in bit-twiddling assembly, does not have any direct control over cache, pipelining, or branch prediction. (They did have direct hardware control into the mid 1980s, and still do on many low-cost microcontrollers.) However, the programmer has substantial indirect control by being "cache friendly", or by using algorithm structures that favor reliable predictions or that simply avoid frequent branching with traditional `if()`-like structures or loops over random inputs.
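
      (A hedged sketch of the branch-prediction point: the same loop over the same values runs far slower when the data is unsorted, because the `if` outcome becomes unpredictable. This is the classic sorted-vs-unsorted experiment; numbers vary by CPU.)

      #include <algorithm>
      #include <chrono>
      #include <iostream>
      #include <random>
      #include <vector>

      int main() {
          std::vector<int> data(1 << 24);
          std::mt19937 rng(42);
          std::uniform_int_distribution<int> dist(0, 255);
          for (int& x : data) x = dist(rng);

          auto run = [&](const char* label) {
              auto t0 = std::chrono::steady_clock::now();
              long long sum = 0;
              for (int x : data)
                  if (x >= 128) sum += x; // taken ~50% of the time
              auto t1 = std::chrono::steady_clock::now();
              std::cout << label << ": " << sum << ", "
                        << std::chrono::duration<double, std::milli>(t1 - t0).count()
                        << " ms\n";
          };

          run("unsorted"); // predictor guesses ~randomly: frequent mispredictions
          std::sort(data.begin(), data.end());
          run("sorted");   // predictable branch: mispredictions nearly vanish
      }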

    • @mytech6779
      @mytech6779 2 years ago

      @@codebreakthrough There is a big difference between teaching a person to automate a manual process with some quick scripting and teaching up-to-date computer science to a potential software engineer. Totally different cases that unfortunately are often lumped together, to the detriment of both.
      I agree that both benefit from being taught techniques for efficient trial-and-error testing; it is not like physical engineering, where experimenting is expensive.
      Scripting is about efficiency for the computer admin/programmer, not the machine, because the comparison there is with manual human calculations. Here the development costs are weighted higher than the execution costs.
      These scripting students are best served by a syntax- and algorithm-focused curriculum that ignores most computer science below the most basic abstracted machine concepts.
      Software engineering, on the other hand, is about well-designed, efficient-to-execute programs, and the performance requirements are set by reference to things much more demanding than completing some arithmetic faster than a human. And while time is money, generally in this category the economics weigh execution costs higher than development costs.
      For these students, a correct and up-to-date understanding of specific machines, theory, and build tools is needed: for speed, for battery consumption, for heat, for verification of safety-critical behavior, for reducing hardware capital.

  • @chrischoir3594
    @chrischoir3594 1 year ago

    you have no idea what you are talking about, you are wrong on virtually every topic here