Concurrency is not what you think it is...

  • Published 3 Jan 2025

COMMENTS •

  • @juanmacias5922
    @juanmacias5922 3 days ago +19

    1:41 I was a little surprised by the results, I expected the approach on the right to be faster lol

    • @Darkev77
      @Darkev77 3 days ago +4

      Maybe it has something to do with Python's GIL

    • @blanky_nap
      @blanky_nap 3 days ago +6

      My first assumption would be the difference in workers: with threads there are 16 and they're not limited (I mean by something like semaphores), whereas with processes the number of workers equals cpu_count(), which I assume is definitely less than 16.

    • @juanmacias5922
      @juanmacias5922 3 days ago

      @@blanky_nap oooooooooh, thanks for your input, I could see that being the case!

    • @marh122
      @marh122 3 days ago +2

      @@blanky_nap Very good point, so if we had multiple processes with multiple threads inside each of them, it would be a lot faster

    • @nottomention-h8w
      @nottomention-h8w 3 days ago +2

      I was also surprised after seeing the results but got the answer here
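[The worker-count theory from this thread is easy to check. Below is an illustrative sketch, not the video's actual code: `fake_fetch` simulates an I/O-bound request with `time.sleep` instead of hitting the network, and the URLs are made up.]

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an I/O-bound fetch: the thread just waits, which
# releases the GIL so other threads can start their own waits.
def fake_fetch(url):
    time.sleep(0.05)
    return f"data from {url}"

urls = [f"https://example.com/page/{i}" for i in range(16)]

# With 16 threads, all 16 waits overlap: total time is roughly one sleep.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(fake_fetch, urls))
elapsed = time.perf_counter() - start

# A process pool left at its defaults gets os.cpu_count() workers,
# which on many machines is fewer than 16, so some fetches get queued.
print(f"cpu_count: {os.cpu_count()}, 16 threads took {elapsed:.2f}s")
```

[On a machine with fewer than 16 cores, the default-sized process pool has to run the same fetches in waves, which is consistent with the thread version winning in the video.]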

  • @mourensun7775
    @mourensun7775 3 days ago +18

    I thought I was watching a Core Dumped channel video

    • @2MinutesPy
      @2MinutesPy 3 days ago +2

      Oh

    • @CEO_of_memes2
      @CEO_of_memes2 2 days ago +4

      The animation is either "heavily inspired" or straight up copied from Core Dumped's video.

  • @compositeboson123
    @compositeboson123 2 days ago +3

    Parallelism is good for CPU-bound tasks and concurrency is good for I/O-bound tasks, FYI
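[A minimal sketch of that split, with illustrative numbers: the CPU task below holds the GIL while it computes, so extra threads don't speed it up, whereas the I/O task releases the GIL during its wait, so many threads can wait at once.]

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n):
    # Pure computation: the GIL lets only one thread run this at a time,
    # so extra threads add overhead instead of speed.
    return sum(i * i for i in range(n))

def io_task(_):
    # Waiting releases the GIL, so many threads can wait simultaneously.
    time.sleep(0.05)
    return "done"

with ThreadPoolExecutor(max_workers=8) as pool:
    cpu_results = list(pool.map(cpu_task, [10_000] * 8))

    start = time.perf_counter()
    io_results = list(pool.map(io_task, range(8)))
    io_elapsed = time.perf_counter() - start
```

[For the CPU-bound case, a ProcessPoolExecutor, one interpreter per core, is the tool that actually buys a speedup.]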

  • @somnathbose7455
    @somnathbose7455 3 days ago +6

    I hope you also explain why concurrency is faster than parallelism... because, as I understood it, in parallelism there is always a possibility of multiple tasks running at the same instant, since dedicated cores have been assigned, whereas in concurrency only one task is being worked on at any instant

    • @justinnguyen523
      @justinnguyen523 3 days ago +2

      I think it could be because there are 16 threads, but the parallelism example is using one worker per core, which could be fewer. Thus the threads can fetch multiple pages first
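[One way to see how "only one task at a time" can still win for I/O is the single-threaded asyncio form of the same idea. This is an illustrative sketch, not the video's code: the sleep stands in for a network wait.]

```python
import asyncio
import time

async def fetch(i):
    # Only one coroutine runs at any instant, but every await hands
    # control back to the event loop, so all the waits overlap.
    await asyncio.sleep(0.05)
    return i

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(fetch(i) for i in range(10)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# Ten 0.05s waits overlap, so the batch takes roughly 0.05s, not 0.5s.
```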

  • @CodePhiles
    @CodePhiles 3 days ago +5

    Thank you for the illustration. I thought parallelism would be faster than concurrency, but the last example shows the opposite; is it due to selecting 16 workers in threading, which could be more than the number of cores? I'm a little confused and need to check it in detail!

  • @maxm4183
    @maxm4183 3 days ago +2

    If concurrency means only one worker working at a time, how can it be faster than parallelism?

    • @Jbombjohnson
      @Jbombjohnson 2 days ago

      Because it’s being used in an I/O bound operation, and not a CPU bound operation. Concurrency would be much, much slower if the example in the video was processing data instead of fetching data from an external source.

  • @Jbombjohnson
    @Jbombjohnson 2 days ago +1

    This is actually a great example of why these computing paradigms exist in the first place, and how they are utilized to solve different problems.
    Concurrency is generally faster and more efficient for “I/O bound” operations where there is an external dependency that needs to be waited on before a computation can be completed. (A server sending over some requested data such as in the video, a user input, etc.) Parallelism on the other hand is faster for “CPU bound” operations, where there is no external dependency, and all data is already locally accessible (summing up an array of a billion integers in RAM, for example). The fundamental difference is in identifying where the bottleneck lies.
    Concurrency is faster in the video because a single shared CPU core starts the fetch() call for the first link in the array, and then immediately context switches over to a new thread to make the second fetch() call, and so on, until all fetch() calls are made with N number of threads. The dispatched fetch() calls can resolve at any point in time during this process of making all N calls, and the shared core is free to return the result of a call once it switches back to a resolved fetch() thread. The timer stops once all fetch() calls have resolved and returned the requested data, which is almost completely dependent on the I/O of external systems and not the local CPU.
    The parallel processing solution was slower because the number of fetch() calls that can be started at the same time is limited by however many CPU cores are passed into the .map() method, which is hard-limited by however many physical hardware cores exist in the local system. This means that we can make at most N fetch() calls at the same time, and we need to wait for one of those calls to resolve before we can make another one, since we assign 1 core to a single fetch() call and only have N cores to use in total.
    Once you understand these distinctions, the results of this video shouldn’t come as a surprise!
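[The thread-switching story described above can be sketched with concurrent.futures. This is illustrative: `fetch` here sleeps instead of hitting the network, and the link names are made up.]

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(link):
    # Stand-in for a network request: this thread blocks on "I/O"
    # while the core moves on to start the next call.
    time.sleep(0.05)
    return link.upper()

links = [f"link-{i}" for i in range(10)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(links)) as pool:
    futures = [pool.submit(fetch, link) for link in links]
    # Results are collected in completion order: whichever call
    # resolves first is returned first, as the comment describes.
    gathered = [f.result() for f in as_completed(futures)]
elapsed = time.perf_counter() - start
```

[The timer stops once the last future resolves, so the total is dominated by the longest single wait rather than the sum of all waits.]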

  • @Tarodev
    @Tarodev 2 days ago +3

    What an excellent video.

  • @worldhello-v2t
    @worldhello-v2t 2 days ago +5

    This video is a copy of Core Dumped's video ua-cam.com/video/5sw9XJokAqw/v-deo.htmlsi=JuoE2ufzXhMCNR4r

  • @balasuar
    @balasuar 2 days ago

    While you need multiple cores/thread execution engines to achieve parallelism, concurrent threads executing is effectively parallel execution.

    • @Jbombjohnson
      @Jbombjohnson 2 days ago

      This is untrue. If it were, there wouldn’t be any use for parallelism.
      Processing a large array of integers concurrently with an arbitrary number of threads is the same as processing it with a single thread, and saves zero CPU time. Whereas, processing the same array but slicing it into N pieces through parallel processing does save CPU time, resulting in a quicker computation and end result.
      This is why there is a distinction.
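[The slice-into-N-pieces idea has a simple map-reduce shape. The sketch below shows the decomposition; a plain map is used so the snippet stays self-contained, but the chunks are exactly what you would hand to `ProcessPoolExecutor.map` for the parallel version.]

```python
def chunk_sum(chunk):
    # The per-worker job: sum one slice of the array.
    return sum(chunk)

def split(data, n):
    # Slice the array into n roughly equal contiguous pieces.
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

data = list(range(1_000))
chunks = split(data, 4)

# In real parallel code each chunk_sum(chunk) call would run in its own
# process (e.g. ProcessPoolExecutor().map(chunk_sum, chunks)); summing
# the partial results reassembles the answer.
total = sum(map(chunk_sum, chunks))
```

[With N cores and N chunks the wall-clock time of the summing step drops roughly N-fold, which is the CPU-time saving the comment describes; a pile of threads on one interpreter cannot do that.]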

  • @keipfar
    @keipfar 15 hours ago

    Aaaaaaah, like the stroboscopic effect!!!

  • @tszyuk3861
    @tszyuk3861 2 days ago +2

    This animation is copied from Core Dumped

  • @fcolecumberri
    @fcolecumberri 3 days ago

    You should have used ProcessPoolExecutor to have the exact same code on both sides.
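[The two executor classes do share the same interface, so the swap really is one line. A sketch, with a sleep standing in for the video's fetch:]

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def work(x):
    time.sleep(0.01)  # stand-in for an I/O wait
    return x * 2

items = list(range(8))

def run(executor_cls, max_workers):
    # ThreadPoolExecutor and ProcessPoolExecutor expose the same
    # map() interface, so only this constructor argument differs.
    with executor_cls(max_workers=max_workers) as pool:
        return list(pool.map(work, items))

threaded = run(ThreadPoolExecutor, 8)
# run(ProcessPoolExecutor, 4) is the drop-in parallel version; note it
# requires work() to be picklable, i.e. defined at module top level.
```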

  • @adhyadeshdahal7
    @adhyadeshdahal7 2 days ago +2

    Copied from Core Dumped

  • @maxpythoneer
    @maxpythoneer 3 days ago +2

    Cool animation