Async Rust Is A Bad Language | Prime Reacts

  • Published Nov 22, 2024

COMMENTS • 265

  • @kyguypi
    @kyguypi Рік тому +324

    I mean, the Rust book does encourage channels and also quotes the Go docs: "don't communicate by sharing memory, share memory by communicating".

    • @KyleSmithNH
      @KyleSmithNH Рік тому +26

      I think the main issue Rust has in this space is that you're not really all that likely to stumble on an actor model as a solution. You're far more likely to try to run two functions at once, find out the borrow checker is upset, and discover it gets less upset if you reach for Arc (the compiler likely even recommends this). The path you're pushed onto is the hard path, and it's partly because of the types of projects that are using Rust in the large.

    • @alexandrep4913
      @alexandrep4913 Рік тому +7

      I'm not entirely sure what that abstraction is trying to say without context. Sounds cool though

    • @kyguypi
      @kyguypi Рік тому +4

      @@KyleSmithNH yeah, my take (and this might be what you're saying) is more that Mutex is more in line with Rust's raison d'etre. It exists to get as close to the bare metal as possible (as Rust often does), so it would kind of betray that to enforce anything that adds overhead, even if it's typically a bad idea to go without. Like, Lamborghini might strongly suggest you obey the speed limit, but they're certainly going to make it possible to speed feloniously. Probably. I don't write php.

    • @Psy45Kai
      @Psy45Kai Рік тому +3

      Thanks, I thought the same! The article cites Hoare and the solution to all the problems: channels! But then it goes on about how async Rust sucks when you use anything other than channels...

    • @Reydriel
      @Reydriel Рік тому

      @@Psy45Kai The point is that Rust doesn't lead you down the path of using them naturally, therefore many make the mistake of not doing so. Whereas other languages make it much easier

  • @minikame2272
    @minikame2272 Рік тому +173

    Parallelism is several chefs preparing different dishes at the same time, concurrency is one chef preparing multiple dishes by flitting between tasks as efficiently as possible, but always one task at a time.

    • @jacekkurlit8403
      @jacekkurlit8403 Рік тому +14

      That's the best explanation of parallel vs concurrent I have ever read. I'm stealing this.

    • @ChrysusTV
      @ChrysusTV Рік тому +6

      Concurrency is simply what people call multi-tasking. Very few people actually do multiple things simultaneously for any consequential amount of time. They just rapidly (for a human) task switch.

    • @wdestroier
      @wdestroier Рік тому +1

      ​@@jacekkurlit8403No, I am stealing this explanation

    • @nodidog
      @nodidog Рік тому

      Great way of simplifying a concept that people often struggle with for a while!

    • @BenAshton24
      @BenAshton24 Рік тому +2

      To the best of my understanding: Parallelism is a subset of concurrency. Concurrency just means running at the same time, which includes parallelism. Parallelism implies concurrency, but concurrency doesn't necessitate parallelism. Concurrency is both one chef or many chefs. Single threaded concurrency is explicitly a single chef.

  • @TheCalcaholic
    @TheCalcaholic Рік тому +88

    Concurrency is how people with ADHD process their todo list

    • @markmcdonnell
      @markmcdonnell Рік тому +2

      I feel seen 😂

    • @cornoc
      @cornoc Рік тому +4

      concurrency is what everyone really does when they pretend to be "multitasking", except their brains have a very small amount of memory to store those tasks in the queue and a very slow single-core task runner whose limited attention is split among the tasks evenly

    • @StinkyCatFarts
      @StinkyCatFarts 6 місяців тому +1

      Without joining

    • @benjaminbras7475
      @benjaminbras7475 5 місяців тому

      facts

    • @advanceringnewholder
      @advanceringnewholder 5 місяців тому

      I'm not diagnosed with ADHD and yet this is how I understand how I do multitasking

  • @azratosh
    @azratosh Рік тому +68

    Honestly, the article makes some really good points, but after I've been learning async Rust for quite some time (and I have to admit, the learning curve is steep if you *really* want to get to the spicy parts), I feel like it's a complete exaggeration that async Rust is bad. The Tokio runtime provides enough things to not have function coloring be a problem, namely tokio::task::spawn_blocking and tokio::task::block_in_place. Go doesn't have function coloring problems because it's all async under the hood anyway. Go also actually does something similar to tokio::task::block_in_place for blocking code. I agree that this is where Go shines, but the projects I work on *require* the control that Rust provides; Go is just not an option at all in this case.
    It's not easy, but if there is one thing I am more tired of than complexity, then it's complexity being hidden for the sake of simplicity. Async Rust doesn't hide anything from me, and that's exactly what I need.
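
    A minimal sketch of the two Tokio escape hatches mentioned here (spawn_blocking and block_in_place), assuming the tokio crate with its default multi-threaded runtime; the sleeps stand in for real blocking work:
    ```rust
    // tokio::task::spawn_blocking vs tokio::task::block_in_place (sketch).
    use std::time::Duration;

    #[tokio::main]
    async fn main() {
        // Move blocking work onto Tokio's dedicated blocking thread pool,
        // keeping the async worker threads free to run other tasks.
        let value = tokio::task::spawn_blocking(|| {
            std::thread::sleep(Duration::from_millis(100)); // pretend CPU/blocking work
            42u64
        })
        .await
        .expect("blocking task panicked");
        println!("spawn_blocking result: {value}");

        // block_in_place tells the multi-threaded runtime "this worker thread is
        // about to block", so it can move other tasks to different workers.
        let answer = tokio::task::block_in_place(|| {
            std::thread::sleep(Duration::from_millis(100));
            7u64
        });
        println!("block_in_place result: {answer}");
    }
    ```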

    • @metaltyphoon
      @metaltyphoon Рік тому +7

      If there wasn’t a function coloring problem, we would be using the std library when doing async work, yet that's not the case.

    • @AndrewBrownK
      @AndrewBrownK Рік тому +5

      last sentence is absolutely on point, and why Rust is truly special

    • @azratosh
      @azratosh Рік тому +4

      @@metaltyphoon I think I didn't manage to get my point across properly (that's on me); of course function coloring still exists, what I meant is that it's not a *problem* with Tokio. For example, Tokio's channels can send and receive messages in a blocking context, too. Let's say you have a worker thread (plain old std::thread::Thread somewhere) that's doing synchronous stuff. You can give that thread the receiver of a tokio::channel, and it will be able to call blocking_recv(), or vice versa, blocking_send() on a sender. You *can* therefore "go back and forth" between sync and async contexts. One just needs to learn how to do so.
      That's why I personally don't consider function coloring a problem.
      Now, my points are heavily relying on Tokio here; if I had to build my own runtime, I would have to build all of those facilities myself, which I admit would be a PITA. But that's why we have Tokio.
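
      A rough sketch of that sync/async bridge, assuming tokio; the worker thread and channel size are made up for illustration:
      ```rust
      // Bridging a plain std thread and an async task with a tokio mpsc channel.
      // blocking_send/blocking_recv are for sync contexts; send/recv for async ones.
      use tokio::sync::mpsc;

      #[tokio::main]
      async fn main() {
          let (tx, mut rx) = mpsc::channel::<u32>(16);

          // Synchronous side: an ordinary OS thread feeding the async side.
          let worker = std::thread::spawn(move || {
              for i in 0..3 {
                  tx.blocking_send(i).expect("receiver dropped");
              }
          });

          // Async side receives as usual; the loop ends when the sender is dropped.
          while let Some(n) = rx.recv().await {
              println!("got {n}");
          }
          worker.join().unwrap();
      }
      ```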

    • @JaconSamsta
      @JaconSamsta Рік тому +4

      "the learning curve is steep if you really want to get to the spicy parts"
      But you honestly don't have to learn *that* much, for async to be very usable. I feel like people sometimes treat it as if they had to write their own runtime from scratch. Most of the time it's writing tokio::spawn instead of std::thread::spawn and using the async version of calls that would usually be blocking.
      And for some things it's even easier. Graceful shutdown (of the whole application, or parts of it) is a breeze with the tools tokio provides. And things like tokio-console are a godsend for taking a peek into a running program.
      "Async Rust doesn't hide anything from me, and that's exactly what I need."
      Eh, I personally think there is a bit too much magic involved. As with anything: If you understand the magic, it ceases to be magic. And to Rust's credit, at least it makes understanding said magic possible. But I do think there was room to be more transparent with the whole thing.
      I'd like to roll my own generators for instance, instead of having them perpetually locked away in nightly.
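
      For the graceful-shutdown point, a small sketch assuming tokio (with the "signal" feature); the worker loop is made up for illustration:
      ```rust
      // Graceful shutdown: broadcast a shutdown signal, let tasks finish cleanly.
      use std::time::Duration;
      use tokio::sync::broadcast;

      #[tokio::main]
      async fn main() {
          let (shutdown_tx, _) = broadcast::channel::<()>(1);

          let mut shutdown_rx = shutdown_tx.subscribe();
          let worker = tokio::spawn(async move {
              loop {
                  tokio::select! {
                      _ = shutdown_rx.recv() => break, // shutdown requested
                      _ = tokio::time::sleep(Duration::from_millis(200)) => {
                          // stand-in for real work
                      }
                  }
              }
              // cleanup (flush buffers, close connections, ...) would go here
          });

          tokio::signal::ctrl_c().await.expect("failed to listen for ctrl-c");
          let _ = shutdown_tx.send(());
          worker.await.unwrap();
      }
      ```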

    • @JaconSamsta
      @JaconSamsta Рік тому +1

      @@jfcfjcjfq
      Arcs are great if you just need to share data (especially big, immutable data) with complicated lifetime implications and just need a quick solution.
      There are almost always alternatives, but an Arc/Rc is pretty much the only "general" way that can act as a catch all solution and can also easily be tacked on.
      Once you've cloned the Arc, accessing the data is basically as free as if you just had a Box on it (dereferencing doesn't touch the atomic refcount; only clone and drop do). The trouble comes when you are sharing lots of small bits of data, especially when you are constantly spawning new threads/tasks and copying the Arc over all the time, or they are going out of scope and calling Drop. Atomic operations can have surprisingly little overhead when there is low contention. Otherwise it can be better to just copy over the underlying data.
      That said, if you are looking for a solution for configurations, may I suggest checking out something like tokio::sync::watch, as an alternative.
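
      A hypothetical sketch of that suggestion - sharing configuration through tokio::sync::watch instead of handing out Arcs everywhere; the Config type and values are made up:
      ```rust
      // Publish config updates through a watch channel; readers only clone an Arc.
      use std::sync::Arc;
      use tokio::sync::watch;

      #[derive(Debug)]
      struct Config {
          max_connections: usize,
      }

      #[tokio::main]
      async fn main() {
          let (tx, mut cfg_rx) = watch::channel(Arc::new(Config { max_connections: 8 }));

          let watcher = tokio::spawn(async move {
              // Wakes whenever a new value is published; ends when the sender is dropped.
              while cfg_rx.changed().await.is_ok() {
                  let cfg = cfg_rx.borrow().clone(); // cheap: clones the Arc, not the data
                  println!("config updated: {cfg:?}");
              }
          });

          tx.send(Arc::new(Config { max_connections: 32 })).unwrap();
          drop(tx); // closing the sender lets the watcher task exit
          watcher.await.unwrap();
      }
      ```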

  • @EricSampson
    @EricSampson Рік тому +36

    As trivia, C# creates hot tasks and F# creates cold tasks (and gives a lot finer control over them, in general)
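
    For comparison with the article's subject: Rust futures are also "cold" - nothing runs until they are awaited or handed to an executor. A tiny sketch, assuming tokio only for the runtime:
    ```rust
    // Rust futures are lazy: building one does nothing until it is .await-ed.
    #[tokio::main]
    async fn main() {
        let fut = async {
            println!("future body running");
            1 + 1
        };
        println!("future created, nothing has run yet");
        let n = fut.await; // only now does the body execute
        println!("result: {n}");
    }
    ```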

    • @ruanpingshan
      @ruanpingshan Рік тому +1

      Oh interesting that it's different.

    • @Kane0123
      @Kane0123 Рік тому +5

      Clearly dotnet wins the language competition

    • @midlFidl
      @midlFidl Рік тому +11

      f# is so underrated

  • @adityabanka_iso
    @adityabanka_iso Рік тому +68

    I had a very profound realization: I have been watching your videos for the past couple of months and have learned more than I have in my last 2 years of being a CS undergrad. Conceptual things that I thought were clear to me are being completely rewritten in my head. Thank you soo much. This has to be the most entertaining way I have ever gone over concurrency, it just clicked.

    • @adityabanka_iso
      @adityabanka_iso Рік тому +6

      @@mythbuster6126 I was talking more as getting a new perspective to the problem, surely I'll check Rob out. This was a very new take to me, thought it was very intriguing. I am still learning the craft, thank you for pointing in the right direction.

    • @paulorubebe7308
      @paulorubebe7308 Рік тому +1

      ​@@mythbuster6126 actually, it makes sense. Pls, correct me if I am wrong. Two threads running concurrently on the same CPU core do not actually run at the same time or in Parallel.
      The OS scheduler decides which one in the ready state runs. Say it begins with thread A, and for some reason A is interrupted (a timer interrupt); the CPU then becomes free and the OS scheduler begins executing thread B. B either runs till it finishes or is itself interrupted (another timer interrupt), and the OS continues executing thread A till A finishes, then continues with B till it finishes.
      That's how concurrency works. Say these two threads were working on the same file (with critical sections); it would be easy to get into a race condition here. Hence the need for locks, condition variables, and/or semaphores.

    • @lostsauce0
      @lostsauce0 Рік тому +2

      I've been coding for 10 years and watching Prime's videos has made me rethink a lot

    • @artembaguinski9946
      @artembaguinski9946 8 місяців тому

      @@paulorubebe7308 ackchuwally, in CPUs with simultaneous multithreading (SMT, or hyperthreading, HT, if you're Intel) several threads running concurrently on the same CPU core do actually run at the same time or in parallel. Executing an instruction takes several "stages" and multiple stages can run simultaneously. They can process the instructions from the same thread, which CPU expects will follow one after another, or instructions from several threads could be interleaved: while finishing an instruction from thread A it could be starting an instruction from thread B and preparing the next instruction from thread A. On the next tick it would be finishing B, starting A and preparing B etc (here I imagined a hypothetical core with three stages - prepare, start and finish, in actual CPUs there can be more stages, so even more instructions from 1 or more threads may be processed simultaneously).

    • @vaolin1703
      @vaolin1703 Місяць тому +1

      This means that either your degree is bad or you haven’t been paying attention much

  • @thingsiplay
    @thingsiplay Рік тому +52

    I tried to convert a Python program with Async into Rust Async. I'm still not finished, but got most things done. It's really hard to do, even though the logic of the program is already solved.

    • @orbatos
      @orbatos Рік тому +2

      You'll have to relate the gains.

    • @youtubeenjoyer1743
      @youtubeenjoyer1743 Рік тому +2

      Python async is absolutely useless even if you use uvloop. Just using a huge threadpool always beats asyncio solutions in throughput AND latency.

    • @Krow-n3o
      @Krow-n3o Рік тому

      @@youtubeenjoyer1743 if i'm not mistaken, python 3.12~13 will improve this.

    • @neniugrava
      @neniugrava 4 місяці тому +1

      I think it's not correct to equate different async runtimes. Solving a problem in a high-level hot-async dynamic language with a GC does not mean the problem is also solved in a low-level systems language with stackless cold async. They are only loosely related.

    • @thingsiplay
      @thingsiplay 4 місяці тому

      ​@@neniugrava The async specific problems are the same regardless of the language. GC doesn't help here at all. Plus I did not use any RC and really did it the manual way in Rust.

  • @dnullify100
    @dnullify100 Рік тому +40

    You very often say "I love rust for CLI tools", can you maybe go over some you've built and what approaches you follow?

    • @Reydriel
      @Reydriel Рік тому +6

      He's mentioned before that he uses Rust at his job in Netflix, so maybe he can't exactly show them to us lol

  • @sebred
    @sebred Рік тому +12

    Jon Gjengset has a great stream about pinning and unpinning

  • @Sneg00vik
    @Sneg00vik Рік тому +4

    When I figured out how asynchrony works in Rust, it helped me understand asynchrony in other languages. For example Python, where I had only understood asynchrony quite vaguely.

    • @inertia_dagger
      @inertia_dagger 11 місяців тому +2

      that happens to me a lot in rust, it's friendly enough that you don't really HAVE to think about some things that the language is doing, but if you do, you start understanding why it's doing it

  • @ceigey-au
    @ceigey-au Рік тому +5

    More trivia: Meteor JS has a forked version of Node 14 using a Fibers extension to do "colourless" concurrency instead of await/async. Meteor 3.0 will be going 100% async to get away from that weird fork situation though, so it's sort of the end of a weird parallel universe of JS concurrency (pun not intended).

  • @rumplstiltztinkerstein
    @rumplstiltztinkerstein Рік тому +8

    One thing I learned about Rust is that we can't try to be big brained with it. Often it is better to use the simplest solution possible instead of just trying to specify exactly how everything should work. Better to avoid fancy code as much as possible to avoid the weirdest bugs.

    • @himboslice_
      @himboslice_ Рік тому +3

      sounds like you'd like go lol

    • @rumplstiltztinkerstein
      @rumplstiltztinkerstein Рік тому +2

      @@himboslice_Definitely.

    • @justin_ooo
      @justin_ooo Рік тому +3

      @@himboslice_ i love Rust and Go for very different reasons but they're both easily amongst the best programming languages in the world

    • @vadzimdambrouski5211
      @vadzimdambrouski5211 10 місяців тому

      That's a recipe for slow code. With that approach it's better to just write Go instead and get faster performance.

    • @rumplstiltztinkerstein
      @rumplstiltztinkerstein 10 місяців тому +1

      @@vadzimdambrouski5211 I didn't explain my point very well. I apologize. What I think it is best to say is "Better to not try reinventing the wheel when coding in Rust every time. There is most likely a crate that does what you need. Whatever it is."

  • @conceptrat
    @conceptrat Рік тому +2

    I used to do this on an old IBM S360 reducing my shifts from 9-10 hours to only 4. Two tape units and printer running flat out. And me smashing keys to run the JCL that I'd split up to run the stock control and invoicing code at the same time. Hehe. No parallel tasks of the same code though.

  • @connormc711
    @connormc711 Рік тому +8

    please review Rob Pike's "Concurrency is not Parallelism", it's a very good talk and a critical one for Go devs

    • @kyjo72682
      @kyjo72682 4 місяці тому +4

      Oh, I see. This is where all this needless confusion started.

  • @Tony-dp1rl
    @Tony-dp1rl Рік тому +38

    It is kinda ironic to me, or surprising, that WebWorkers in JavaScript actually have one of the best models for parallelism going.

    • @PanSzymek
      @PanSzymek Рік тому

      wat

    • @ra2enjoyer708
      @ra2enjoyer708 Рік тому +2

      They don't work in private browsing mode, so you have to write a conditional init logic (async ofc) for any function which can potentially interact with workers so it could work without them (potentially even after initialising with them). And don't forget interaction with workers incurs serialization/deserialization overhead, so you also want to write some sort of streaming logic on top. Which means you are going to write statically unoptimizable spaghetti and typescript makes it very painful to write cross-environment code like this. Good luck debugging all of that.

  • @emptystuff1593
    @emptystuff1593 Рік тому +4

    Explaining concurrency vs parallelism. Imagine building a house. You could have two workers laying bricks simultaneously. That's parallelism. You could have the same worker lay bricks, run out of bricks, go work on the plumbing, then go back to laying bricks when a new shipment of bricks arrives. That's concurrency. By the way, I have no idea how to build a house.

    • @viktorshinkevich3169
      @viktorshinkevich3169 Рік тому +1

      I would rather say a kitchen. When you are making breakfast you can make a coffee while your wife makes a toast.
      Two people for two tasks. That's parallelism.
      The problem is that kitchen is small and you can't have many wives.
      Another problem is that part of making a coffee is boiling the water. While the water is boiling you just sit and wait. The same goes for the toast - while it's being heated your wife waits and does nothing useful.
      Here's where concurrency comes to the rescue. You don't need a wife to make both coffee and toast concurrently - just load the toast, boil the water while it's heating, put butter on the toast when it's ready, then the water is boiled so you switch there and finish the coffee.
      Boom!
      TLDR; no wife no problem

    • @viktorshinkevich3169
      @viktorshinkevich3169 Рік тому

      People on a kitchen are CPU cores (processing unit), making a coffee (1) and making a toast (2) are tasks or jobs, loading toast/finishing coffee is computation, waiting for toast to heat/water to boil is waiting for IO, the toaster/kettle are OS/network.

    • @kyjo72682
      @kyjo72682 4 місяці тому

      What you are describing is called parallelization. Designing, planning and scheduling tasks in such a way that they can be done in parallel (= concurrently = at the same time) by multiple execution units, while also minimizing their idle time.

    • @criptych
      @criptych Місяць тому

      Don't know much about building houses myself, but can confirm it probably involves both bricks and plumbing.

  • @numeritos1799
    @numeritos1799 19 днів тому

    1:55 Parallelism can be seen as a subset of concurrency, whether you "believe that" or not.
    Parallelism: **executing** multiple tasks simultaneously across multiple resources (CPU cores).
    Concurrency: **managing** multiple tasks that could, although not necessarily, be running on a shared resource.

  • @anotherelvis
    @anotherelvis Місяць тому +1

    The rust project should have implemented async as a library in userspace.
    There was no need to bake all the syntax-sugar into the language.

  • @trashcan3958
    @trashcan3958 Рік тому +14

    At the end of the day I personally don't give a shit about articles like these. I have written a very large backend in Rust and it works perfectly and it is much more maintainable than any TS nightmare a different programmer would have done. The places where you need to use stuff like Pin are very uncommon. Also, most frameworks do the heavy lifting for you, so shared global variables become non-issues.

    • @dn5426
      @dn5426 Рік тому +4

      backend as in something that does HTTP + DB calls?

    • @0xCAFEF00D
      @0xCAFEF00D Рік тому

      Why are you comparing to TS?
      That's garbage.

    • @ra2enjoyer708
      @ra2enjoyer708 Рік тому

      @@dn5426 No, he meant a middleware which passes the values to a real backend. I'd also add FS on top, because it can get pretty spicy with all the multi-platform stuff, so I don't believe you can write "maintainable" code in this context, especially if you need some custom storage implementation logic (multi-platform ofc).

  • @bitskit3476
    @bitskit3476 Місяць тому

    Concurrency = Doing two or more things by switching between tasks either periodically or when needed. E.g. when one thing is waiting for something else to happen.
    Parallelism = Doing two or more things at the exact same time.
    Threads are a form of concurrency, where the hardware periodically interrupts a task in order to preemptively switch to doing something else. Two or more threads could potentially be executed in parallel, but only if you have multiple processors or processor cores.

  • @sj9851
    @sj9851 2 місяці тому

    Concurrency is like a manager trying to do multiple tasks by himself, whereas parallelism is the manager deferring those tasks to his team members, which is much more efficient. When I worked in Nodejs, I always considered the event loop the manager.

  • @kc3vv
    @kc3vv Рік тому +4

    I wonder if it is possible to somehow introduce things to rust that "fix" async to be more approachable.

  • @rosehogenson1398
    @rosehogenson1398 Рік тому +3

    I like the article's explanation for parallelism vs concurrency. Concurrency is when you break your problem into independent pieces. Those pieces might run on the same CPU, they might run at the same time, or they might not.
    Parallelism is what happens when you run concurrent tasks at the same time.

  • @Iceman259
    @Iceman259 Рік тому +8

    The fact that there is this perpetual need to re-explain and argue about concurrency vs parallelism makes me feel really secure about my continuing employment.

    • @numeritos1799
      @numeritos1799 19 днів тому

      On the other hand, I feel very uneasy to see how so many people get it confidently incorrect.

  • @tusharsnn
    @tusharsnn Рік тому +6

    Fun fact:
    You can't explain concurrency without explaining what those tasks do. You first have to explain that a task gets "paused" by the kernel during an IO op, but with modern kernel APIs you can continue doing some other task until that previous task is resumed. In the past an application would wait for a task to complete before it could move on to the next one, even when the task was an IO op that's simply waiting for something to happen.
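
    A tiny sketch of that idea in Rust, assuming tokio; the sleeps stand in for IO waits, so the two "calls" overlap and the total time is roughly one second rather than two:
    ```rust
    // While one task waits on (simulated) IO, the runtime runs the other.
    use std::time::{Duration, Instant};

    #[tokio::main]
    async fn main() {
        let start = Instant::now();
        let slow_io = |label: &'static str| async move {
            tokio::time::sleep(Duration::from_secs(1)).await; // stand-in for a network call
            label
        };

        let (a, b) = tokio::join!(slow_io("a"), slow_io("b"));
        println!("{a} and {b} finished in {:?}", start.elapsed());
    }
    ```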

    • @PanSzymek
      @PanSzymek Рік тому

      it's mentioned by all possible names in the article

  • @creativecraving
    @creativecraving 9 місяців тому

    2:20 _Parallelism_ exploits algorithmic efficiencies to solve underutilization of CPU cores. _Concurrency_ exploits I/O bottlenecks to do the same thing _or_ allows you to logically separate the work among several actors with distinct roles.

  • @atiedebee1020
    @atiedebee1020 Рік тому

    NFS is the network file system. We use it at home to have all computers' home directories on a central computer

  • @homelessrobot
    @homelessrobot Рік тому +1

    Concurrency is parallel waiting. In concurrency, you can sit around and do nothing while other people have their turn, and then pick up where you left off when some resource becomes available.

    • @kyjo72682
      @kyjo72682 4 місяці тому

      That's called multitasking or multiplexing or time-sharing, not concurrency. It's only emulating concurrency as there is always only one task that is running at one time.

  • @Omnifarious0
    @Omnifarious0 Рік тому +4

    Concurrency is about having multiple tasks running simultaneously from a conceptual standpoint. Parallelism is about literally being able to execute more than one CPU instruction at a time. Which is made somewhat weird by the fact that pipelining makes even a single core CPU engage in parallelism even though there is no concurrency.

    • @JaconSamsta
      @JaconSamsta Рік тому +2

      That's certainly a correct definition of parallelism, but I would constrain it to "executing more than one stream of instructions at a time".
      A CPU can definitely do fancy out-of-order and superscalar stuff, but that only works as long as it ends up being sequential at the other end again.
      Of course a superscalar architecture, pipelining, etc. is a large part of what enables this (SMT), but I find that executing multiple streams of instructions simultaneously is usually what people mean when they talk about parallelism!

    • @kyjo72682
      @kyjo72682 4 місяці тому

      Nah. Running tasks in parallel is exactly the same thing as running them concurrently. It has literally the exact same meaning.
      What prime is describing as "parallelism" is in fact called "parallelization", ie. designing tasks in such a way that sequences that are independent and don't have to wait for one another may be executed in parallel. But this might just as well be called "concurrentization".
      And what you are describing is in fact called multi-tasking, or multiplexing, or time-sharing. This is a method of splitting a single shared resource such as an execution unit (or a communication channel in networking) into multiple time-chunks so that over a longer period of time it effectively seems like multiple units. This is only "emulated" concurrency, not true concurrency, as there is always only a single task running on the unit at any given moment.

    • @Omnifarious0
      @Omnifarious0 4 місяці тому

      @@kyjo72682 - There is a very good CppCon talk from a couple of years ago that talks about why my way of viewing this is actually more useful. Time sharing and multi-tasking are indeed a form of concurrency. But concurrency is about the logical structure of your program. You can have concurrent tasks executing in parallel. You can have all the parallelism in the world, but if you're using something like Python's GIL, you have no concurrency.

  • @kahnzo
    @kahnzo Рік тому +9

    Please read through Without Boats's response to this article. This is the problem with react videos. You can't just say, "oh, that sounds good" and not dig in a little further. Async without GC is hard. Linear types are interesting. Passing information between Rust and a GC language requires a LOT of forethought, knowledge, and work. But that's why there are libraries and amazing groups of people to work through the nuances.

  • @baxiry.
    @baxiry. Рік тому +6

    By the way, 39 data-racing vulnerabilities were announced in Rust programs, compared to 2 data-racing vulnerabilities in Go

  • @ra2enjoyer708
    @ra2enjoyer708 Рік тому

    For people elated about type annotations in the javascript files, they won't happen because they will straight up add parsing time and size to scripts with no runtime benefit whatsoever (and therefore the implementors will be blamed for "slow sites"). It might somewhat work in Python, because it's parse once and use many more times in prod, but not in browser js.
    The same reason there won't be pipe operators in JS either, because they incur a performance penalty on almost entire runtime, and implementors will be blamed for it.

  • @tenshizer0
    @tenshizer0 Рік тому

    parallelism is a subset of concurrency... all parallel tasks are concurrent tasks, but not all concurrent tasks are parallel... concurrency is about solving problems of sharing resources (CPU, memory, files, etc.) between processes/threads/tasks both in parallel or in pseudo-parallelism

  • @MrHirenP
    @MrHirenP Рік тому +1

    Hi! Can you explain why rust is great for CLI tools? What does it offer that, say, Go doesn’t?

    • @谢智斌-q9l
      @谢智斌-q9l Рік тому

      serde and clap, all about the beautiful #[derive] macros

  • @crimsonbit
    @crimsonbit Рік тому +3

    I'm so confused. If async await is concurrency, then are goroutines parallelism? If so, does this mean that Go doesn't actually have concurrency since it just constantly spawns green threads? In C# you have async await, green threads and "normal" threads if I understand correctly. So are languages like Go and Rust a step backwards? Are tokio spawns stackful or stackless? Am I even alive right now?

    • @viktorshinkevich3169
      @viktorshinkevich3169 Рік тому +10

      Go abstracts the difference - goroutines are both concurrency and parallelism.
      You can have 16 CPU cores, spawn 16 computing goroutines and benefit.
      You can also have 1 CPU core, spawn 16 IO bounded goroutines and benefit.
      Because GO hides away the difference.

    • @crimsonbit
      @crimsonbit Рік тому +1

      @@viktorshinkevich3169 Thanks for the explanation. But how does it know when to do concurrency for io bound stuff and when to do parallelism for cpu bound stuff? And is there a way to set it manually?

    • @viktorshinkevich3169
      @viktorshinkevich3169 Рік тому

      @@crimsonbit
      You are welcome!
      So again, that's the cool part about Go - you don't necessarily need to know.
      Computing 4 different unrelated values that take time, on a 4-core computer, spawning 4 goroutines? Fine, the Go scheduler will try to map those 4 goroutines onto 4 threads on 4 cores, speeding up the computation ~4x.
      Requesting 4 different websites with 4 URLs on a 1-core toaster? Fine, the Go scheduler runs them concurrently on 1 thread, switching from one goroutine to another while waiting for network responses, so your program takes roughly as long as the slowest HTTP call.
      Go can do this because it can tell IO-bound work apart from non-IO-bound work (plus timers).
      If you spawn a goroutine that just calculates something, Go knows it should not be treated as "async".
      If you spawn another goroutine that does a timer (sleep), makes an HTTP call, or reads from a file, the scheduler knows that goroutine is "async": it can easily be put on a "waiting" list, since there is nothing useful it can do in the meantime - it's just awaiting a response. If it just waits for an HTTP call to go back and forth across an ocean, 15,000 km, for a second or two - that's totally legit.
      For the programmer in Go, there is no special syntax marking whether a function is async or regular.
      In Rust, NodeJS, Java, C# there is.
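
      For contrast with the Rust side the article is about, that split is explicit rather than inferred there - a rough sketch, assuming the tokio crate, with a sleep standing in for a real network call:
      ```rust
      // In Rust/tokio you say which kind of work it is: spawn for IO-bound tasks,
      // spawn_blocking for CPU-bound work so it can't starve the async workers.
      use std::time::Duration;

      #[tokio::main]
      async fn main() {
          // "IO-bound": yields at the .await, freeing the worker thread.
          let io_task = tokio::spawn(async {
              tokio::time::sleep(Duration::from_millis(500)).await;
              "response"
          });

          // CPU-bound: runs on the blocking thread pool.
          let cpu_task = tokio::task::spawn_blocking(|| (0..10_000_000u64).sum::<u64>());

          let (body, sum) = (io_task.await.unwrap(), cpu_task.await.unwrap());
          println!("{body}, sum = {sum}");
      }
      ```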

  • @aurinator
    @aurinator 5 місяців тому

    "Fibers" were a thing way back in C++ & COM too I remember. I don't believe they ever really went anywhere though.

    • @headlibrarian1996
      @headlibrarian1996 4 місяці тому

      boost::fiber is a nice C++ library. I’ve written some nice async apps with it.

  • @jeremycoleman3282
    @jeremycoleman3282 Рік тому

    parallel is the same task running in multiple threads, concurrent is different tasks running at the same time maybe in different threads.

  • @simonfarre4907
    @simonfarre4907 Рік тому +10

    I still find the discussion of channels funny though, particularly this article when it says "after decades of mutex madness". How do people think channels work? They're *literally* designed as a critical section (CS) that's behind a - you guessed it - mutex (or some other locking/guarding mechanism).

    • @bonsairobo
      @bonsairobo Рік тому +13

      Yes but the abstraction is significantly less powerful (and less dangerous) than a raw mutex. The programmer never needs to worry about shared memory, all of the messages sent and received are owned by one thread until they are moved/sent.

    • @matheusjahnke8643
      @matheusjahnke8643 Рік тому +3

      Yes, on the same line, control structures (loops, if statements and function calls) are also just glorified gotos and branches (and stacking)... they provide a structured interface which helps prevent foot guns.
      The deadlock condition on a correctly implemented queue is when there's mutual waiting because there are threads which want to consume while there is no one to produce (and the queue is empty).

    • @simonfarre4907
      @simonfarre4907 Рік тому +2

      @@bonsairobo Yes, I'm just saying, writing an ad-hoc channel is fairly trivial, particularly in languages like C++. It comes down to a queue and a mutex that wraps the pushing and popping of values off of it.
      Reading off of a channel is a blocking operation and as such looks and behaves exactly like a raw mutex. And when writing your own solution, you can add a try_read that checks for content before reading, and just like that, you have implemented the mpsc channel in Rust.
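
      A rough std-only sketch of that point - a queue guarded by a mutex plus a condvar to block the reader; real std::sync::mpsc / crossbeam channels are far more refined:
      ```rust
      // Minimal ad-hoc channel: Mutex<VecDeque<T>> for the queue, Condvar to block recv.
      use std::collections::VecDeque;
      use std::sync::{Arc, Condvar, Mutex};

      struct Channel<T> {
          queue: Mutex<VecDeque<T>>,
          ready: Condvar,
      }

      impl<T> Channel<T> {
          fn new() -> Arc<Self> {
              Arc::new(Self { queue: Mutex::new(VecDeque::new()), ready: Condvar::new() })
          }

          fn send(&self, value: T) {
              self.queue.lock().unwrap().push_back(value);
              self.ready.notify_one();
          }

          // Blocking read; a try_recv would simply skip the wait.
          fn recv(&self) -> T {
              let mut q = self.queue.lock().unwrap();
              loop {
                  if let Some(v) = q.pop_front() {
                      return v;
                  }
                  q = self.ready.wait(q).unwrap();
              }
          }
      }

      fn main() {
          let ch = Channel::new();
          let tx = Arc::clone(&ch);
          let producer = std::thread::spawn(move || {
              for i in 0..3 {
                  tx.send(i);
              }
          });
          for _ in 0..3 {
              println!("got {}", ch.recv());
          }
          producer.join().unwrap();
      }
      ```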

    • @simonfarre4907
      @simonfarre4907 Рік тому

      @@matheusjahnke8643 That wasn't my point, or what I was implying though. What I was implying is that, writing your own ad-hoc channel abstraction is pretty trivial in languages like C++. Is it going to be *as general* as a channel from the standard library? Well, that depends on how good you are at writing templated generic code, but even with simple OOP-techniques it is extremely trivial.

    • @someonespotatohmm9513
      @someonespotatohmm9513 Рік тому

      @@simonfarre4907 What is your point? Because I read it as: "this thing is so simple ppl should just do it themselves". Hopefully it is clear why this is a weird statement to make, especially in software.

  • @muayyadalsadi
    @muayyadalsadi 5 місяців тому

    9:34 in many languages queue-like solutions involve not only "copy" but also serialization (pickle in python terms). Which is a huge waste of resources.

  • @unique1o1-g5h
    @unique1o1-g5h Рік тому +2

    I'm with you on Pin and Unpin. I think I get it, then the next day when I think about it again, I don't

  • @muayyadalsadi
    @muayyadalsadi 5 місяців тому

    21:39 NFS: network file system. A shared folder in Unix - a filesystem shared over the network.

  • @BosonCollider
    @BosonCollider Рік тому +1

    Go is not really CSP. The memory passed is not immutable so you can still have data races.

  • @skeleton_craftGaming
    @skeleton_craftGaming 27 днів тому

    Something about an ace of all trades is a master of none, but more often is more useful than a master of one

  • @MrXperx
    @MrXperx Рік тому

    Parallel is when two chefs prepare two meals in the kitchen. Concurrent is when the same waiter serves those two meals to two customers.

    • @tim.martin
      @tim.martin Рік тому

      Other reply said concurrent is 1 chef making 2 meals...
      Problem is both scenarios need additional constraints to be always true. E.g.,
      1 chef can bake 2 simple dishes in parallel, 1 waiter can serve 2 side-by-side customers in parallel (this would be super unusual). After careful thought, I think the waiter analogy wins because serving 2 customers simultaneously is not typical.

  • @jordixboy
    @jordixboy Рік тому +2

    what's so hard about pointers and malloc, i don't get it

    • @spaceowl5957
      @spaceowl5957 Місяць тому

      I don’t get it either, but I haven’t programmed that much c. The only problem I’ve come across is if you have many exit points to a routine it’s easy to forget to free something at some exit point. Also you might have to look at the man page or docs of a function to understand whether you are responsible for freeing the data.
      But I feel like all this could be solved by adding a defer statement to the language and perhaps adding some annotations to signify that a function returns heap-allocated data (which would let the linter warn you if you forget to free something)
      Other than that it’s a very simple concept I think

  • @muayyadalsadi
    @muayyadalsadi 5 місяців тому

    10:09 in php it's called fibers. But coroutines, or whatever they are called in your language, are not about who schedules the tasks, the OS or the app. They're about resumable, suspendable functions. When you do await fetch() from inside myfunc(), myfunc should be suspended and control goes to the main loop; when fetch is fulfilled, myfunc is resumed. How is that done? The stack of the function is saved to the heap, then when resumed it's switched back.

  • @neniugrava
    @neniugrava 4 місяці тому

    It's almost like people are equivocating on the meaning of "async". Assuming async is a transferrable skill between languages is as absurd as assuming any other feature maps one-to-one, but worse, because it is a very involved feature. It is not a bad thing to require knowledge of how it works, it's a *requirement*, and people are assuming that one set of design/implementation decisions (of their familiar language or type of languages) is "correct" async.
    Hot async with an implicit runtime in a dynamic or GC'd language is not the same thing as stackless cold async in an unmanaged systems language. Being upset that design patterns and practices don't transfer from one to the other is like complaining that you can't use your dynamic programming style with objects and closures and lambdas in C. It's a sign that you lack understanding and did not consider the very different design goals.
    It's great that popular languages can make it possible to use async and other abstractions without understanding how they work (in simple cases), but I will not accept that people are writing real non-trivial production async code in any other language without *any* level of understanding (or training) in what it actually does under the hood. They are forgetting the fact that they were not born writing async code.
    I typically am stuck in embedded C land, but when I first started working with async in python and C# I absolutely ran into issues where that "async code is code" abstraction fails. Working with async required a high level understanding of the implementation details to truly utilize it in a non-trivial application. The ease of apparently-working code is as much a weakness as a strength. The magical runtime can lead you into thread safety violations if you don't know that it schedules to a thread pool and that you must use synchronization tools in many cases. I've inherited some code with many un-awaited async tasks, and at first I'm like how is that even working? It isn't all intuitive in any language for non-trivial uses.
    I remember when async was first becoming popular, and there were *so many articles* explaining it, because it is literally just a shorthand for a specific set of design patterns.

  • @complexity5545
    @complexity5545 Рік тому +1

    Boy, that article's author thinks in (congested) bloat. But I think his conclusion is correct.

  • @orbatos
    @orbatos Рік тому

    Oh I love the days of maniacally chasing down semaphores locked by zombies.

  • @lainiwakura3741
    @lainiwakura3741 11 місяців тому

    Why is it so difficult to define concurrency? I feel every time I hear the term, people always give a different definition and can never really agree on what it actually means.. It's either that the order of operation doesn't matter, that it presents an opportunity to parallelize (i.e. concurrent tasks can be parallelized), that the code is running independently but always executes one task at a time (big difference since it means that parallelized tasks are not concurrent; prime literally said this at 1:56) and endless variation of these. The example Prime gives at 2:17 makes me think that it is all about utilizing wait time, which I've seen in several examples but I have never heard anyone explicitly says this out loud. But this doesn't feel like a definition of the word concurrent and instead is how to make concurrent tasks run faster. So it shouldn't be brought up when defining the word, but only when you talk about why concurrency is important.
    I dunno.. It feels like the concept is not complicated at all but different people focus on different details on the definition and as soon as you don't focus on the right detail in the same way as the other person, they will tell you that your idea of concurrency is wrong. I hate words.. so much...

  •  19 днів тому

    Parallelism is number of washing machines, concurrency is independent loads of laundry.

  • @oblivion_2852
    @oblivion_2852 Рік тому

    Number 5. Each thread having a 4kb control block... Is this related to why when piping data between processes the buffer size is 4096 bytes?

  • @alexpyattaev
    @alexpyattaev Рік тому +3

    Rust async should be compared with C++ async; there you do not have Pin.

  • @cheaterman49
    @cheaterman49 Рік тому

    6:45 we've been doing that in Python for 32 years haha

  • @CristianGarcia
    @CristianGarcia Рік тому +2

    Classic parallelism vs concurrency confusion. I don't think Prime's definition is good enough.

  • @jonaskoelker
    @jonaskoelker Рік тому +2

    I can put on my left shoe and my right shoe in any order (concurrently), but I don't have the dexterity to put both of them on at the same time (in parallel).

  • @graphicdesignandwebsolutio365
    @graphicdesignandwebsolutio365 6 місяців тому

    if you want nodejs to use ALL your cores, try scanning the Arbitrum blockchain, block by block, for arbitrage and liquidation opportunities, calculating 100+ token pairs' price data on each block...

  • @blenderpanzi
    @blenderpanzi Рік тому

    4:50 It's 50 years. Pipes exist in Unix for 50 years this year, not just 40.

  • @irfanhossainbhuiyanstudent3757

    I really like the syntax of Rust; it's just the lifetime complexity that makes it hard to use. If Rust had a built-in, semi-automated garbage collector in the std library, I wouldn't use another language anytime soon.

  • @rossbagley9015
    @rossbagley9015 Рік тому

    What I really want is Rust with all the memory safety, true functional language features, and Hoare concurrency (all communication with green threads occurs via channels with immutable messages). Who is working on this?

  • @dc0d
    @dc0d Рік тому

    (alert: reductionism in progress)
    Parallelism is when everyone is drinking from their own bottle. Concurrency is when everyone passes the bottle to the next person after a sip.
    #erlang #elixirlang #golang

  • @poorpotato4467
    @poorpotato4467 Рік тому +23

    baby shark doo doo, doo doo. doo doo

  • @thefiredman0
    @thefiredman0 Рік тому +1

    man has a typescript cold, can't stop talking about damn node lol

  • @mannycalavera121
    @mannycalavera121 Рік тому +1

    Poor Rust, getting hammered

  • @muayyadalsadi
    @muayyadalsadi 5 місяців тому +1

    Hidden fact: there are no real async filesystem operations. All of Node.js's fs.promises is faked behind a thread pool, unlike the network kernel routines, which are really async. There really aren't useful async file routines. You can do something else while the network socket consumes the data you have sent, but you can't do the same with a regular file.
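
    The same is true on the Rust side: tokio's file APIs hand ordinary blocking file IO to the runtime's blocking thread pool. A sketch of roughly what that amounts to; "data.txt" is a placeholder path:
    ```rust
    // tokio::fs is async-looking on the surface, a thread pool underneath.
    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // Convenience wrapper:
        let a = tokio::fs::read_to_string("data.txt").await?;

        // Roughly the same thing written out by hand:
        let b = tokio::task::spawn_blocking(|| std::fs::read_to_string("data.txt"))
            .await
            .expect("task panicked")?;

        assert_eq!(a, b);
        Ok(())
    }
    ```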

  • @kyjo72682
    @kyjo72682 4 місяці тому

    I still don't get the difference between parallelism and concurrency. I see them as synonyms. Two tasks running in parallel means the same thing as two tasks running concurrently. Same thing. Instead what I think people are trying to describe is the concept of PARALLELIZATION of tasks. I.e. writing programs in such a way that sequences that CAN run in parallel are allowed to do so, that they are not unnecessarily blocked from doing so by how the program is written. (This also generalizes to how machines and factory lines are designed and to any planning and scheduling of tasks in general.)
    I also see people confuse concurrency with multi-tasking (multiplexing/timesharing a single execution resource). While the goal here is to achieve "effective" concurrency with respect to some longer period of time, this is technically not true concurrency as there is always only a single task that is running at any single moment.

  • @PanzerschrekCN
    @PanzerschrekCN Рік тому +9

    The main advantage of async Rust over Go is the possibility to keep all async-unrelated stuff in the non-async part of the program. In Rust you then just do not use Arc, Pin and other stuff like that. In Go you still pay the penalty for GC, stack growth overhead, etc.

    • @rw_panic0_0
      @rw_panic0_0 Рік тому +4

      because Rust is designed to provide zero cost abstractions, Go is not, and what you said is as much an advantage as it is a disadvantage

  • @chaorrottai
    @chaorrottai Рік тому +1

    The problem with multithreading is that the programmer is the one implementing it. The CPU hardware should be delegating work to its worker cores. It should operate internally as a multithreaded system and present the user with an apparent single thread; the user should have the option to bypass the thread overseer, but the default behaviour should be to just let the thread overseer handle threading and data race prevention.
    Multi-threading should be a hardware-level issue, not a software-level issue. There is no reason anti-data-race circuits could not be added to memory modules, using a XOR-gate access lock at minimum.
    Hell, it could even be implemented at the compiler level instead of the programmer level: you just have the compiler mutex-lock any variable that can be accessed from multiple threads, internally handle those as structs containing the var and a mutex, and then overload the access operators to acquire and release the mutex locks.
    .....at the developer level, this is a single template class.........
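
    For what it's worth, the "struct containing the var and a mutex, with access operators that take the lock" idea is roughly what Rust's std::sync::Mutex<T> already is - a sketch:
    ```rust
    // The data lives inside the lock; the only way to reach it is through the guard.
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0u64)); // data and lock are one object

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        // lock() returns a guard; Deref gives access, Drop unlocks.
                        *counter.lock().unwrap() += 1;
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("total: {}", *counter.lock().unwrap()); // always 4000, no data race
    }
    ```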

  • @noredine
    @noredine Рік тому +2

    A recursive async function? Would it garble the call stack, or would each recursive call not be async (for my sanity)?

    • @protox4
      @protox4 Рік тому

      It has the same problem as synchronous recursion (actually it's worse). If it's unbounded, you'll either get a stack overflow, or out of memory, depending if the recursive call is synchronous or asynchronous. Even if it's not unbounded, you could also get a stack overflow when the deepest task completes and starts to unwind the async call stack (.Net has special handling for such a situation, but it's an extremely rare case).
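
      A sketch of why the Rust side is awkward here: an async fn's state machine would have to contain itself, so the recursive call has to be boxed, putting each "frame" on the heap (assuming tokio only for the runtime):
      ```rust
      use std::future::Future;
      use std::pin::Pin;

      // Without the Box::pin, rustc rejects this: recursion in an async fn requires boxing.
      fn countdown(n: u32) -> Pin<Box<dyn Future<Output = ()> + Send>> {
          Box::pin(async move {
              if n == 0 {
                  return;
              }
              println!("{n}");
              countdown(n - 1).await; // heap-allocated frame instead of stack growth
          })
      }

      #[tokio::main]
      async fn main() {
          countdown(3).await;
      }
      ```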

  • @Purkinje90
    @Purkinje90 Рік тому +1

    send in the chungus

  • @cariyaputta
    @cariyaputta Рік тому +14

    "Just use Go" - Rust Programming Language Book.

  • @freeideas
    @freeideas Рік тому +3

    Meanwhile, Java's managed threads quietly give you the memory benefits of async/await without all the callback complexity. With async/await, to call A, B, and C, you have to say something like, "tell B to call C when B is done, then tell A to call B when A is done". In Java you can say "do A, do B, then do C". Behind the scenes, Java has a number of real threads proportional to the number of cores in your hardware, and all the "virtual" (aka micro-) threads just take turns using the real threads. I love the way Java hides most of the ugly complexities from you while still giving you compiled-language performance. Not as fast as Rust of course, but more than fast enough for almost any job.

  • @32zim32
    @32zim32 Рік тому

    Soon GraalVM will gain popularity and Java will become popular again. I noticed that while coding in Rust I get tired very quickly. This is not what I expect from a modern language.

  •  5 місяців тому

    I like semaphores, because they remind me of trains. Checkmate!

  • @davidserrano2091
    @davidserrano2091 Рік тому

    Do u think he listens to Tokyo Drift while using tokio

  • @9SMTM6
    @9SMTM6 Рік тому +3

    Yes, I agree on that. I'd argue that this is the cause of part of your souring on Rust. I imagine that at Netflix you've been mostly writing async Rust, and I've long been saying that with async you lose a lot of the advantages of Rust.

  • @nomadshiba
    @nomadshiba Рік тому

    Rust is really good for data-oriented, ECS-based games

  • @pirieianip
    @pirieianip Рік тому

    Parallelism may not use two separate CPUs; it may use two different threads, and you may use the same CPU for those threads. Splitting hairs, but, meh.

  • @doomguy6296
    @doomguy6296 Рік тому

    Never had to use Pin. What the h*ll were you developing? An async linked list?

  • @motbus3
    @motbus3 2 місяці тому

    Parallelism does not require multiple CPUs. Just to be clear

  • @NVM_SMH
    @NVM_SMH Рік тому

    Can't someone write the equivalent of Go in a Rust library?

  • @connormc711
    @connormc711 9 місяців тому

    Prime I know you read parallelism vs concurrency by rob pike but you should watch the video too

  • @yenonn
    @yenonn Рік тому

    Love the humour!

  • @MuskW-e9x
    @MuskW-e9x Рік тому

    "pretty neat"

  • @Cookiekeks
    @Cookiekeks Рік тому +5

    So why the fuck did I learn Rust then? It was advertised as having excellent concurrency, and up until now I just thought I was too stupid to get it, but apparently other languages have it easier. Wow, what a waste of time that was. Shit. Should I learn Go?

  • @freeideas
    @freeideas Місяць тому

    Async obfuscates every language. Solution #1: blocking IO with multiple threads is very straightforward, especially if you use immutable data, so race conditions are rare. Problem with solution #1; with many thousands of threads this will unnecessarily use up all your memory. BUT are you really going to need tens of thousands of threads? Probably not, so we can stop at solution #1. Solution #2, use solution #1 but with virtual threads. This uses microthreads behind the curtains, but they look like normal threads. Java has this, and it is coming soon or is already in every other popular language. Problem solved.
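
    A sketch of "solution #1" in plain Rust: blocking IO, one OS thread per connection, no async anywhere (fine until connection counts get huge); the port is a placeholder:
    ```rust
    // Thread-per-connection echo server using only blocking std IO.
    use std::io::{Read, Write};
    use std::net::TcpListener;
    use std::thread;

    fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:7878")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            thread::spawn(move || {
                let mut buf = [0u8; 1024];
                // read() blocks this thread only; every other connection has its own.
                while let Ok(n) = stream.read(&mut buf) {
                    if n == 0 || stream.write_all(&buf[..n]).is_err() {
                        break;
                    }
                }
            });
        }
        Ok(())
    }
    ```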

  • @maxmustermann3938
    @maxmustermann3938 Рік тому

    I'd say parallelism is when something runs in parallel on a given processing unit (i.e. your GPU or your CPU) while concurrent means that we are doing things in parallel across different processing units/hardware, i.e. doing work on the CPU while some other component is doing data transfers instead of just waiting for that to finish. When you're calling some function on the CPU that tells the GPU to do something, that will run concurrently. The task you gave to the GPU itself will execute in parallel on the GPU, and the GPU itself can also run things concurrently since it has more than just one streaming multiprocessor and can very well be running different tasks at the same time, but only one dedicated task in parallel on a given SMP.

  • @matheusjahnke8643
    @matheusjahnke8643 Рік тому

    My turn to ackchually about parallelism and concurrency.
    Parallelism is when things run at the same time... so processes aren't parallel in an old, single core, computer
    Concurrency is when things share resources... processes are concurrent because they share memory and CPU(and access to disk space... and internet and stuff)

  • @danmerillat
    @danmerillat Місяць тому

    I disagree that other languages do async "better". Async as a language feature adds exponential complexity by trying to pretend everything is single-threaded. Unroll the state machines explicitly and you can reason about what your code is doing in the real world.

  • @timothygibney159
    @timothygibney159 13 днів тому

    Why is it that everyone loves Rust … except the Linux folks. The same ones who actually think X11 is secure and perfect and not 40 years behind GDI in Windows and macOS hardware acceleration, and actually think init is superior to systemd.
    It’s old people afraid of change. Good god, how did Linux become so technophobic

  • @twitzel25
    @twitzel25 Рік тому

    NFS was great (other than the stale mounts of course)!!

  • @transimpedance
    @transimpedance Рік тому

    Nice explanation of parallelism vs concurrency 👍

    • @kyjo72682
      @kyjo72682 4 місяці тому

      I don't think so.

  • @sinasalahshour
    @sinasalahshour Рік тому

    deez parallel units

  • @colemichae
    @colemichae Рік тому

    Interesting comment on Bun - I heard that also from another person this week. Great for some areas, but it needs work in so many others to really be the perfect solution.

  • @ivanjermakov
    @ivanjermakov Рік тому +5

    If async/await/futures/promises is a leaky abstraction, how are channels better? I think it will become the same event hell as RxJS observables.

    • @viktorshinkevich3169
      @viktorshinkevich3169 Рік тому +1

      It's not hell in golang, it's not hell in elixir, why should it be hell here?

    • @AnthonyBullard
      @AnthonyBullard Рік тому +1

      Async and all that color the function, and that coloring spreads to its caller. With channels, you don’t know if a function you call is shelling out to something else, and don’t have to refactor everything to use it

    • @ivanjermakov
      @ivanjermakov Рік тому

      @@AnthonyBullard but if the called function is inherently asynchronous, how can the caller not know this? So if the caller just cares about the value it will act as await, but if the caller needs async interaction it will invoke it differently and subscribe to its channel?

  • @LostInAutism
    @LostInAutism Рік тому

    Pixel 7 eh? The volume rocker is soon to fall out.

  • @viktorshinkevich3169
    @viktorshinkevich3169 Рік тому +4

    Can we all just agree that when it comes to IO, Go's abstraction is so much easier and more convenient than Rust's?

  • @sharperguy
    @sharperguy Рік тому +1

    Parallelism is two people each chopping an onion at the same time. Concurrency is one person chopping an onion while they wait for the oil in the frying pan to heat up.

  • @principleshipcoleoid8095
    @principleshipcoleoid8095 Рік тому +4

    0:35 are you running GrapheneOS on it?

    • @TheSast
      @TheSast Рік тому +3

      Asking the right questions

    • @privy15
      @privy15 Рік тому

      Is GrapheneOS still a good option for Pixels? Heard that the dev (?) has some... mental issues

    • @mks-h
      @mks-h Рік тому +2

      @@privy15 he does, but a) paranoia probably isn't bad for a secure OS, b) he stepped down

    • @privy15
      @privy15 Рік тому +1

      @@mks-h He stepped down? Thats new. Thx

    • @principleshipcoleoid8095
      @principleshipcoleoid8095 Рік тому

      @@privy15 it's better than stock Android full of Google. I don't think the devs even can do targeted attacks. But you can run Google things sandboxed (and even in a separate profile) for bank apps etc. Either way it is a good idea to use some FOSS Android without Google reading all of your messages and activity

  • @channalbert
    @channalbert 5 місяців тому

    * cries in Async Rust *