parallelism is a subset of concurrency... all parallel tasks are concurrent tasks, but not all concurrent tasks are parallel... concurrency is about solving the problems of sharing resources (CPU, memory, files, etc.) between processes/threads/tasks, whether in parallel or in pseudo-parallelism
I'm so confused. If async await is concurrency, then are goroutines parallelism? If so, does this mean that Go doesn't actually have concurrency, since it just constantly spawns green threads? In C# you have async await, green threads and "normal" threads, if I understand correctly. So are languages like Go and Rust a step backwards? Are tokio spawns stackful or stackless? Am I even alive right now?
Go abstracts the difference away - goroutines are both concurrency and parallelism. You can have 16 CPU cores, spawn 16 computing goroutines and benefit. You can also have 1 CPU core, spawn 16 IO-bound goroutines and benefit. Because Go hides away the difference.
@@viktorshinkevich3169 Thanks for the explanation. But how does it know when to do concurrency for IO-bound stuff and when to do parallelism for CPU-bound stuff? And is there a way to set it manually?
@@crimsonbit You are welcome! So again, that's the cool part about Go - you don't necessarily need to know. You are computing 4 different unrelated values that take time, on a 4-core computer, spawning 4 goroutines? Fine, the Go scheduler will try to map those 4 goroutines onto 4 threads on 4 cores, speeding the computation up 4x. You are requesting 4 different websites with 4 URLs on a 1-core toaster? Fine, the Go scheduler runs them concurrently on 1 thread, switching from one goroutine to another while waiting for the network responses, so your program takes about as long as the slowest HTTP call. Go does so because it can tell IO-bound work (plus timers) apart from CPU-bound work. If you spawn a goroutine that just calculates something, Go knows it should not be treated as "async". If you spawn another goroutine that sleeps on a timer, makes an HTTP call, or reads from a file, the scheduler knows that goroutine is "async" and can easily be put on a "waiting" list, since there is nothing useful it can do in the meantime - it just awaits a response. If it just waits for an HTTP call to go back and forth across the ocean, 15,000 km, for a second or two, that's totally legit. For the programmer in Go, there is no special syntax marking whether he writes an async function or a regular one. In Rust, NodeJS, Java, and C# there is.
I still find the discussion of channels funny though, particularly this article when it says "after decades of mutex madness". How do people think channels work? They're *literally* designed as a critical section (CS) that's behind - you guessed it - a mutex (or some other locking/guarding mechanism).
Yes but the abstraction is significantly less powerful (and less dangerous) than a raw mutex. The programmer never needs to worry about shared memory, all of the messages sent and received are owned by one thread until they are moved/sent.
Yes, along the same line, control structures (loops, if statements and function calls) are also just glorified gotos and branches (and stacking)... they provide a structured interface which helps prevent foot guns. The deadlock condition on a correctly implemented queue is when there's mutual waiting because there are threads which want to consume while there is no one to produce (and the queue is empty).
@@bonsairobo Yes, I'm just saying, writing an ad-hoc channel is fairly trivial, particularly in languages like C++: you'll use a queue and a mutex that wraps the pushing and popping of values off of it. Reading off of a channel is a blocking operation and as such looks and behaves exactly like a raw mutex. And when writing your own solution, you can add a try_read that checks for content before reading, and just like that, you have implemented the mpsc channel in Rust.
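(For illustration, a minimal Rust sketch - rather than C++ - of the queue-plus-mutex channel described above: a Condvar-backed blocking queue with a non-blocking try_recv, roughly the shape of std::sync::mpsc. The `Channel` name is made up; this is a sketch, not a production channel.)

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};

struct Channel<T> {
    inner: Arc<(Mutex<VecDeque<T>>, Condvar)>,
}

impl<T> Clone for Channel<T> {
    fn clone(&self) -> Self {
        Self { inner: Arc::clone(&self.inner) }
    }
}

impl<T> Channel<T> {
    fn new() -> Self {
        Self { inner: Arc::new((Mutex::new(VecDeque::new()), Condvar::new())) }
    }

    // Push a value and wake one blocked reader.
    fn send(&self, value: T) {
        let (queue, cvar) = &*self.inner;
        queue.lock().unwrap().push_back(value);
        cvar.notify_one();
    }

    // Blocking read: behaves like taking a lock until data is available.
    fn recv(&self) -> T {
        let (queue, cvar) = &*self.inner;
        let mut guard = queue.lock().unwrap();
        while guard.is_empty() {
            guard = cvar.wait(guard).unwrap();
        }
        guard.pop_front().unwrap()
    }

    // Non-blocking variant: the try_read mentioned above.
    fn try_recv(&self) -> Option<T> {
        self.inner.0.lock().unwrap().pop_front()
    }
}
```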
@@matheusjahnke8643 That wasn't my point, or what I was implying, though. What I was implying is that writing your own ad-hoc channel abstraction is pretty trivial in languages like C++. Is it going to be *as general* as a channel from the standard library? Well, that depends on how good you are at writing templated generic code, but even with simple OOP techniques it is extremely trivial.
@@simonfarre4907 What is your point? Because I read it as: "this thing is so simple ppl should just do it themselves". Hopefully it is clear why this is a weird statement to make, especially in software.
9:34 in many languages queue-like solutions involve not only a "copy" but also serialization (pickle in Python terms), which is a huge waste of resources.
Other reply said concurrent is 1 chef making 2 meals... Problem is both scenarios need additional constraints to be always true. E.g., 1 chef can bake 2 simple dishes in parallel, and 1 waiter can serve 2 side-by-side customers in parallel (this would be super unusual). After careful thought, I think the waiter analogy wins because serving 2 customers simultaneously is not typical.
I don’t get it either, but I haven’t programmed that much C. The only problem I’ve come across is that if you have many exit points to a routine, it’s easy to forget to free something at some exit point. Also you might have to look at the man page or docs of a function to understand whether you are responsible for freeing the data. But I feel like all this could be solved by adding a defer statement to the language, and perhaps adding some annotations to signify that a function returns heap-allocated data (which would let the linter warn you if you forget to free something). Other than that it’s a very simple concept I think
10:09 in PHP it's called fibers. But coroutines, or whatever they are called in your language, are not about who schedules the tasks - the OS or the app. They're about resumable, suspendable functions. When you do await fetch() from inside myfunc(), myfunc should be suspended and control goes to the main loop; when fetch is fulfilled, myfunc is resumed. How is that done? The stack of the function is saved to the heap, then when it's resumed it's switched back.
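(A rough sketch of that suspend/resume in Rust terms, assuming the tokio crate; `my_func` is a made-up name. In Rust the locals live inside the compiler-generated state machine, i.e. on the heap once the future is boxed or spawned, rather than on the OS stack, so they survive the suspension point.)

```rust
use tokio::time::{sleep, Duration};

async fn my_func() -> u32 {
    let partial = 40; // stored in the future's state machine, not the OS stack
    sleep(Duration::from_millis(10)).await; // suspend; control returns to the executor
    partial + 2 // resumed later with `partial` intact
}

#[tokio::main]
async fn main() {
    println!("{}", my_func().await); // prints 42
}
```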
It's almost like people are equivocating on the meaning of "async". Assuming async is a transferrable skill between languages is as absurd as assuming any other feature maps one-to-one, but worse, because it is a very involved feature. It is not a bad thing to require knowledge of how it works, it's a *requirement*, and people are assuming that one set of design/implementation decisions (of their familiar language or type of languages) is "correct" async. Hot async with an implicit runtime in a dynamic or GC'd language is not the same thing as stackless cold async in an unmanaged systems language. Being upset that design patterns and practices don't transfer from one to the other is like complaining that you can't use your dynamic programming style with objects and closures and lambdas in C. It's a sign that you lack understanding and did not consider the very different design goals. It's great that popular languages can make it possible to use async and other abstractions without understanding how they work (in simple cases), but I will not accept that people are writing real non-trivial production async code in any other language without *any* level of understanding (or training) in what it actually does under the hood. They are forgetting the fact that they were not born writing async code. I typically am stuck in embedded C land, but when I first started working with async in python and C# I absolutely ran into issues where that "async code is code" abstraction fails. Working with async required a high level understanding of the implementation details to truly utilize it in a non-trivial application. The ease of apparently-working code is as much a weakness as a strength. The magical runtime can lead you into thread safety violations if you don't know that it schedules to a thread pool and that you must use synchronization tools in many cases. I've inherited some code with many un-awaited async tasks, and at first I'm like how is that even working? It isn't all intuitive in any language for non-trivial uses. I remember when async was first becoming popular, and there were *so many articles* explaining it, because it is literally just a shorthand for a specific set of design patterns.
Why is it so difficult to define concurrency? I feel every time I hear the term, people always give a different definition and can never really agree on what it actually means.. It's either that the order of operation doesn't matter, that it presents an opportunity to parallelize (i.e. concurrent tasks can be parallelized), that the code is running independently but always executes one task at a time (big difference since it means that parallelized tasks are not concurrent; prime literally said this at 1:56) and endless variation of these. The example Prime gives at 2:17 makes me think that it is all about utilizing wait time, which I've seen in several examples but I have never heard anyone explicitly says this out loud. But this doesn't feel like a definition of the word concurrent and instead is how to make concurrent tasks run faster. So it shouldn't be brought up when defining the word, but only when you talk about why concurrency is important. I dunno.. It feels like the concept is not complicated at all but different people focus on different details on the definition and as soon as you don't focus on the right detail in the same way as the other person, they will tell you that your idea of concurrency is wrong. I hate words.. so much...
Parallelism is the number of washing machines, concurrency is the number of independent loads of laundry.
I can put on my left shoe and my right shoe in any order (concurrently), but I don't have the dexterity to put both of them on at the same time (in parallel).
if you want nodejs to use ALL your cores, try scanning the Arbitrum blockchain, block by block, for arbitrage and liquidation opportunities, calculating price data for 100+ token pairs on each block...
I really like the syntax of Rust; it's just that the lifetime complexity makes it hard to use. If Rust had a built-in, semi-automated garbage collector in the std library, I wouldn't use any other language anytime soon.
What I really want is Rust with all the memory safety, true functional language features, and Hoare concurrency (all communication with green threads occurs via channels with immutable messages). Who is working on this?
(alert: reductionism in progress) Parallelism is when everyone is drinking from their own bottle. Concurrency is when everyone passes the bottle to the next person after a sip. #erlang #elixirlang #golang
Hidden fact: there are no real async filesystem operations. All of nodejs fs.promises is faked behind a thread pool, unlike network kernel routines, which are really async. There are no really useful async file routines. You can do something else while the network socket consumes the data you have sent, but you can't do the same with a regular file.
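(A small Rust sketch of the same point, assuming the tokio crate: tokio::fs looks async, but its docs note that it runs the std::fs operation on a blocking thread pool, so these two calls are morally equivalent.)

```rust
use tokio::task;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // "Async" file read; a thread pool does the blocking read behind the scenes.
    let a = tokio::fs::read_to_string("Cargo.toml").await?;

    // The same mechanism, spelled out by hand.
    let b = task::spawn_blocking(|| std::fs::read_to_string("Cargo.toml"))
        .await
        .expect("blocking task panicked")?;

    assert_eq!(a, b);
    Ok(())
}
```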
I still don't get the difference between parallelism and concurrency. I see them as synonyms. Two tasks running in parallel means the same thing as two tasks running concurrently. Same thing. Instead what I think people are trying to describe is the concept of PARALLELIZATION of tasks. I.e. writing programs in such a way that sequences that CAN run in parallel are allowed to do so, that they are not unnecessarily blocked from doing so by how the program is written. (This also generalizes to how machines and factory lines are designed and to any planning and scheduling of tasks in general.) I also see people confuse concurrency with multi-tasking (multiplexing/timesharing a single execution resource). While the goal here is to achieve "effective" concurrency with respect to some longer period of time, this is technically not true concurrency as there is always only a single task that is running at any single moment.
The main advantage of async Rust over Go is the possibility to keep all the async-unrelated stuff in the non-async part of the program. In Rust you just do not use Arc, Pin and other stuff like that there. In Go you still pay the penalty for GC, stack-growth overhead, etc.
The problem with multithreading is that the programmer is the one implementing it. The CPU hardware should be delegating work to its worker cores. It should operate internally as a multithreaded system, present the user with an apparent single thread, and give the user the option to bypass the thread overseer, but the default behaviour should be to just let the thread overseer handle threading and data-race prevention. Multithreading should be a hardware-level issue, not a software-level issue. There is no reason anti-data-race circuits could not be added to memory modules, using a XOR-gate access lock at minimum. Hell, it could even be implemented at the compiler level instead of the programmer level: you just have the compiler mutex-lock any variable that can be accessed from multiple threads - internally handle those as structs containing the var and a mutex, and then overload the access operators to acquire and release the mutex locks. At the developer level, this is a single template class.
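(Funnily enough, the "struct containing the variable plus a mutex, with access only through guarded operators" described above is more or less what Rust's std::sync::Mutex&lt;T&gt; already is. A minimal sketch:)

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The data is only reachable through the lock guard, so unsynchronized
    // access is a compile error rather than a data race.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4_000);
}
```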
It has the same problem as synchronous recursion (actually it's worse). If it's unbounded, you'll either get a stack overflow, or out of memory, depending if the recursive call is synchronous or asynchronous. Even if it's not unbounded, you could also get a stack overflow when the deepest task completes and starts to unwind the async call stack (.Net has special handling for such a situation, but it's an extremely rare case).
Meanwhile, Java's managed threads quietly give you the memory benefits of async/await without all the callback complexity. With async/await to call A, B, and C, you have to say something like, "tell B to call C when B is done, then tell A to call B when A is done". In Java you can say "do A, do B, then do C". Behind the scenes, Java has a number of real threads proportional to the number of cores in your hardware, and all the "virtual" (aka micro-) threads just take turns using the real threads. I love the way Java hides most of the ugly complexities from you while still giving you compiled-language performance. Not as fast as Rust of course, but more than fast enough for almost any job.
Soon GraalVM will gain popularity and Java will become popular again. I noticed that while coding in Rust I get tired very quickly. That's not what I expect from a modern language.
I like semaphores, because they remind me of trains. Checkmate!
Yes, I agree on that. I'd argue that this is the cause of part of your souring on Rust. I imagine that at Netflix you've been mostly writing async Rust, and I've long been saying that with async you lose a lot of the advantages of Rust.
Parallelism may not use two separate CPUs; it may use two different threads, and you may use the same CPU for those threads. Splitting hairs, but, meh.
So why the fuck did I learn Rust then? It was being advertised as having excellent concurrency, and up until now I just thought I was too stupid to get it, but apparently other languages have it easier. Wow, what a waste of time that was. Shit. Should I learn Go?
Async obfuscates every language. Solution #1: blocking IO with multiple threads is very straightforward, especially if you use immutable data, so race conditions are rare. Problem with solution #1: with many thousands of threads this will unnecessarily use up all your memory. BUT are you really going to need tens of thousands of threads? Probably not, so we can stop at solution #1. Solution #2: use solution #1 but with virtual threads. This uses microthreads behind the curtains, but they look like normal threads. Java has this, and it is coming soon or is already in every other popular language. Problem solved.
I'd say parallelism is when something runs in parallel on a given processing unit (i.e. your GPU or your CPU) while concurrent means that we are doing things in parallel across different processing units/hardware, i.e. doing work on the CPU while some other component is doing data transfers instead of just waiting for that to finish. When you're calling some function on the CPU that tells the GPU to do something, that will run concurrently. The task you gave to the GPU itself will execute in parallel on the GPU, and the GPU itself can also run things concurrently since it has more than just one streaming multiprocessor and can very well be running different tasks at the same time, but only one dedicated task in parallel on a given SMP.
My turn to ackchually about parallelism and concurrency. Parallelism is when things run at the same time... so processes aren't parallel on an old, single-core computer. Concurrency is when things share resources... processes are concurrent because they share memory and CPU (and access to disk space... and internet and stuff)
I disagree that other languages do async "better". Async as a language feature adds exponential complexity by trying to pretend everything is single-threaded. Unroll the state machines explicitly and you can reason about what your code is doing in the real world.
Why is it that everyone loves Rust … except the Linux folks? The same ones who actually think X11 is secure and perfect and not 40 years behind GDI in Windows and macOS hardware acceleration, and actually think init is superior to systemd. It’s old people afraid of change. Good god, how did Linux become so technophobic?
Interesting comment on Bun; I heard that from another person this week as well. Great for some areas, but it needs work in so many others to really be the perfect solution.
Async and all that color the function, and that coloring spreads to its caller. With channels, you don’t know if a function you call is shelling out to something else, and you don’t have to refactor everything to use it.
@@AnthonyBullard But if the called function is inherently asynchronous, how can the caller not know this? So if the caller just cares about the value it will act as an await, but if the caller needs async interaction it will invoke it differently and subscribe to its channel?
Parallelism is two people each chopping an onion at the same time. Concurrency is one person chopping an onion while they wait for the oil in the frying pan to heat up.
@@privy15 it's better than the stock Android full of Google. I don't think the devs even can do targeted attacks. But you can run Google things sandboxed (and even in a separate profile) for bank apps etc. Either way it is a good idea to use some FOSS Android without Google reading all of your messages and activity
I mean, the Rust book does encourage channels and even quotes the Go docs: "don't communicate by sharing memory, share memory by communicating".
I think the main issue Rust has in this space is that you're not really all that likely to stumble on an actor model as a solution. You're far more likely to try to run two functions at once and find out the borrow checker is upset but it gets less upset if you Arc (it likely even recommends this). The path you're pushed on is the hard path, and it's partly because of the types of projects that are using Rust in the large.
I'm not entirely sure what that abstraction is trying to say without context. Sounds cool though.
@@KyleSmithNH yeah, my take (and this might be what you're saying) is more that Mutex is more in line with Rust's raison d'etre. It exists to get as close to the bare metal as possible (as Rust often does), so it kind of betrays that to enforce anything that adds overhead, even if it's typically a bad idea to go without. Like, Lamborghini might strongly suggest you obey the speed limit, but they're certainly going to make it possible to speed feloniously. Probably. I don't write php.
Thanks, I thought the same! The article cites Hoare and the solution to all the problems: channels! But then it sh*ts on async Rust for how much it sucks when you use anything other than channels...
@@Psy45Kai The point is that Rust doesn't lead you down the path of using them naturally, therefore many make the mistake of not doing so, whereas other languages make it much easier.
Parallelism is several chefs preparing different dishes at the same time; concurrency is one chef preparing multiple dishes by flitting between tasks as efficiently as possible, but always one task at a time.
That's the best explanation of parallel vs concurrent I have ever read. I'm stealing this.
Concurrency is simply what people call multi-tasking. Very few people actually do multiple things simultaneously for any consequential amount of time. They just rapidly (for a human) task switch.
@@jacekkurlit8403 No, I am stealing this explanation
Great way of simplifying a concept that people often struggle with for a while!
To the best of my understanding: Parallelism is a subset of concurrency. Concurrency just means running at the same time, which includes parallelism. Parallelism implies concurrency, but concurrency doesn't necessitate parallelism. Concurrency is both one chef or many chefs. Single-threaded concurrency is explicitly a single chef.
Concurrency is how people with ADHD process their todo list
I feel seen 😂
concurrency is what everyone really does when they pretend to be "multitasking", except their brains have a very small amount of memory to store those tasks in the queue and a very slow single-core task runner whose limited attention is split among the tasks evenly
Without joining
facts
I'm not diagnosed with ADHD and yet this is how I understand how I do multitasking
Honestly, the article makes some really good points, but after I've been learning async Rust for quite some time (and I have to admit, the learning curve is steep if you *really* want to get to the spicy parts), I feel like it's a complete exaggeration that async Rust is bad. The Tokio runtime provides enough things to not have function coloring be a problem, namely tokio::task::spawn_blocking and tokio::task::block_in_place. Go doesn't have function coloring problems because it's all async under the hood anyway. Go also actually does something similar to tokio::task::block_in_place for blocking code. I agree that this is where Go shines, but the projects I work on *require* the control that Rust provides; Go is just not an option at all in this case.
It's not easy, but if there is one thing I am more tired of than complexity, then it's complexity being hidden for the sake of simplicity. Async Rust doesn't hide anything from me, and that's exactly what I need.
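(A quick sketch of the two escape hatches named above, assuming tokio's multi-threaded runtime - block_in_place panics on the current-thread flavor - with `expensive_sync_work` as a made-up stand-in for real blocking code:)

```rust
use tokio::task;

fn expensive_sync_work() -> u64 {
    (0..1_000_000u64).sum() // stand-in for real blocking work
}

#[tokio::main(flavor = "multi_thread")]
async fn main() {
    // Offload blocking work to tokio's dedicated blocking thread pool.
    let a = task::spawn_blocking(expensive_sync_work).await.unwrap();

    // Or run it in place; the runtime moves other tasks off this worker first.
    let b = task::block_in_place(expensive_sync_work);

    println!("{a} {b}");
}
```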
If there wasn’t a function coloring problem, we would be using the std library when doing async work, yet that's not the case.
last sentence is absolutely on point, and why Rust is truly special
@@metaltyphoon I think I didn't manage to get my point across properly (that's on me); of course function coloring still exists, what I meant is that it's not a *problem* with Tokio. For example, Tokio's channels can send and receive messages in a blocking context, too. Let's say you have a worker thread (a plain old std::thread somewhere) that's doing synchronous stuff. You can give that thread the receiver of a tokio channel, and it will be able to call blocking_recv(), or vice versa, blocking_send() on a sender. You *can* therefore "go back and forth" between sync and async contexts. One just needs to learn how to do so.
That's why I personally don't consider function coloring a problem.
Now, my points are heavily relying on Tokio here; if I had to build my own runtime, I would have to build all of those facilities myself, which I admit would be a PITA. But that's why we have Tokio.
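(For instance, a minimal sketch of that back-and-forth, assuming the tokio crate: a plain OS thread feeding an async task through a tokio mpsc channel via blocking_send.)

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<u32>(16);

    // A plain std::thread doing synchronous work; blocking_send must be
    // called from outside the async runtime, which is exactly the case here.
    let worker = std::thread::spawn(move || {
        for i in 0..3 {
            tx.blocking_send(i).expect("receiver dropped");
        }
    });

    // The async side receives normally with .await; the loop ends once the
    // worker finishes and drops the sender.
    while let Some(v) = rx.recv().await {
        println!("got {v}");
    }
    worker.join().unwrap();
}
```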
"the learning curve is steep if you really want to get to the spicy parts"
But you honestly don't have to learn *that* much, for async to be very usable. I feel like people sometimes treat it as if they had to write their own runtime from scratch. Most of the time it's writing tokio::spawn instead of std::thread::spawn and using the async version of calls that would usually be blocking.
And for some things it's even easier. Graceful shutdown (of the whole application, or parts of it) is a breeze with the tools tokio provides. And things like tokio-console are a godsend for taking a peek into a running program.
"Async Rust doesn't hide anything from me, and that's exactly what I need."
Eh, I personally think there is a bit too much magic involved. As with anything: If you understand the magic, it ceases to be magic. And to Rust's credit, at least it makes understanding said magic possible. But I do think there was room to be more transparent with the whole thing.
I'd like to roll my own generators for instance, instead of having them perpetually locked away in nightly.
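(To make the "tokio::spawn instead of std::thread::spawn" point above concrete, a tiny sketch assuming the tokio crate:)

```rust
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // The thread version would be:
    //   let h = std::thread::spawn(|| { std::thread::sleep(...); 42 });
    //   let answer = h.join().unwrap();
    let handle = tokio::spawn(async {
        sleep(Duration::from_millis(100)).await; // non-blocking sleep
        42
    });
    let answer = handle.await.unwrap();
    println!("{answer}");
}
```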
@@jfcfjcjfq
Arcs are great if you just need to share data (especially big, immutable data) with complicated lifetime implications and just need a quick solution.
There are almost always alternatives, but an Arc/Rc is pretty much the only "general" way that can act as a catch all solution and can also easily be tacked on.
Once you've cloned the Arc, accessing the data is basically as free as if you just had a Box on it (loading is atomic by default on Intel, for instance). The trouble comes when you are sharing lots of small bits of data, especially when you are constantly spawning new threads/tasks and copying the Arc over all the time, or they are going out of scope and calling Drop. Atomic operations can have surprisingly little overhead when there is low contention. Otherwise it can be better to just copy over the underlying data.
That said, if you are looking for a solution for configurations, may I suggest checking out something like tokio::sync::watch, as an alternative.
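(A sketch of that watch suggestion, with `Config` as a made-up stand-in type: one writer publishes a new value, and any number of receivers see the latest one without a shared lock around the data.)

```rust
use tokio::sync::watch;

#[derive(Debug)]
struct Config {
    verbose: bool,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(Config { verbose: false });

    let task = tokio::spawn(async move {
        // Wait until the sender publishes a new value, then read it.
        rx.changed().await.unwrap();
        println!("new config: {:?}", *rx.borrow());
    });

    tx.send(Config { verbose: true }).unwrap();
    task.await.unwrap();
}
```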
As trivia, C# creates hot tasks and F# creates cold tasks (and has a lot finer control of them, in general)
Oh interesting that it's different.
Clearly dotnet wins the language competition
f# is so underrated
I had a very profound realization. I have been watching your videos for the past couple of months and have learned more than I have in my last 2 years of being a CS undergrad. Conceptual things that I thought were clear to me are being completely rewritten in my head. Thank you so much. This has to be the most entertaining way I have ever gone over concurrency, it just clicked.
@@mythbuster6126 I was talking more as getting a new perspective to the problem, surely I'll check Rob out. This was a very new take to me, thought it was very intriguing. I am still learning the craft, thank you for pointing in the right direction.
@@mythbuster6126 actually, it makes sense. Please correct me if I am wrong. Two threads running concurrently on the same CPU core do not actually run at the same time or in parallel.
The OS scheduler decides which one in the ready state runs. Say it begins with thread A and for some reason A is interrupted (timer interrupt); the CPU then becomes free and the OS scheduler begins executing thread B. B either runs till it finishes or is itself interrupted (another timer interrupt), and the OS continues to execute thread A till A finishes, then continues with B till it finishes.
That's how concurrency works. Say these two threads were working on the same file(with critical sections) it would be easy to get into a race condition here. Hence the need for locks, condition variables, and/or semaphores.
I've been coding for 10 years and watching Prime's videos has made me rethink a lot
@@paulorubebe7308 ackchuwally, in CPUs with simultaneous multithreading (SMT, or hyperthreading, HT, if you're Intel) several threads running concurrently on the same CPU core do actually run at the same time, or in parallel. Executing an instruction takes several "stages" and multiple stages can run simultaneously. They can process the instructions from the same thread, which the CPU expects will follow one after another, or instructions from several threads could be interleaved: while finishing an instruction from thread A it could be starting an instruction from thread B and preparing the next instruction from thread A. On the next tick it would be finishing B, starting A and preparing B, etc. (Here I imagined a hypothetical core with three stages - prepare, start and finish; in actual CPUs there can be more stages, so even more instructions from one or more threads may be processed simultaneously.)
This means that either your degree is bad or you haven’t been paying attention much
I tried to convert a Python program with Async into Rust Async. I'm still not finished, but got most things done. It's really hard to do, even though the logic of the program is already solved.
You'll have to relate the gains.
Python async is absolutely useless even if you use uvloop. Just using a huge threadpool always beats asyncio solutions in throughput AND latency.
@@youtubeenjoyer1743 if I'm not mistaken, Python 3.12/3.13 will improve this.
I think it's not correct to equate different async runtimes. Solving a problem in a high-level hot-async dynamic language with a GC does not mean the problem is also solved in a low-level systems language with stackless cold async. They are only loosely related.
@@neniugrava The async specific problems are the same regardless of the language. GC doesn't help here at all. Plus I did not use any RC and really did it the manual way in Rust.
You very often say "I love rust for CLI tools", can you maybe go over some you've built and what approaches you follow?
He's mentioned before that he uses Rust at his job in Netflix, so maybe he can't exactly show them to us lol
Jon Gjengset has a great stream about pinning and unpinning
When I figured out how asynchrony works in Rust, it helped me understand asynchrony in other languages - for example Python, where I had understood asynchrony only quite vaguely.
that happens to me a lot in rust, it's friendly enough that you don't really HAVE to think about some things that the language is doing, but if you do, you start understanding why it's doing it
More trivia: Meteor JS has a forked version of Node 14 using a Fibers extension to do "colourless" concurrency instead of await/async. Meteor 3.0 will be going 100% async to get away from that weird fork situation though, so it's sort of the end of a weird parallel universe of JS concurrency (pun not intended).
I’d love to see fibers for Rust.
One thing I learned about Rust is that we can't try to be big brained with it. Often it is better to use the simplest solution possible instead of just trying to specify exactly how everything should work. Better to avoid fancy code as much as possible to avoid the weirdest bugs.
sounds like you'd like go lol
@@himboslice_Definitely.
@@himboslice_ i love Rust and Go for very different reasons but they're both easily amongst the best programming languages in the world
That's a recipe for slow code. With that approach it's better to just write Go instead and get faster performance.
@@vadzimdambrouski5211 I didn't explain my point very well. I apologize. What I think it is best to say is "Better to not try reinventing the wheel when coding in Rust every time. There is most likely a crate that does what you need. Whatever it is."
I used to do this on an old IBM S360 reducing my shifts from 9-10 hours to only 4. Two tape units and printer running flat out. And me smashing keys to run the JCL that I'd split up to run the stock control and invoicing code at the same time. Hehe. No parallel tasks of the same code though.
please review Rob Pike's "Concurrency is not Parallelism"; it's a very good talk and a critical one for Go devs
Oh, I see. This is where all this needless confusion started.
It is kinda ironic to me, or surprising, that WebWorkers in JavaScript actually have one of the best models for parallelism going.
wat
They don't work in private browsing mode, so you have to write a conditional init logic (async ofc) for any function which can potentially interact with workers so it could work without them (potentially even after initialising with them). And don't forget interaction with workers incurs serialization/deserialization overhead, so you also want to write some sort of streaming logic on top. Which means you are going to write statically unoptimizable spaghetti and typescript makes it very painful to write cross-environment code like this. Good luck debugging all of that.
Explaining concurrency vs parallelism. Imagine building a house. You could have two workers laying bricks simultaneously. That's parallelism. You could have the same worker lay bricks, run out of bricks because there weren't enough, go work on the plumbing, then go back to laying bricks when a new shipment of bricks arrived. That's concurrency. By the way, I have no idea how to build a house.
I would rather say a kitchen. When you are making breakfast you can make coffee while your wife makes toast.
Two people for two tasks. That's parallelism.
The problem is that the kitchen is small and you can't have many wives.
Another problem is that part of making coffee is boiling the water. While the water is boiling you just sit and wait. The same goes for the toast - while it's being heated your wife waits and does nothing useful.
This is where concurrency comes to the rescue. You don't need a wife to make both coffee and toast concurrently - just load the toast, boil the water while it's heating, put butter on the toast when it's ready, and by then the water has boiled so you switch over and finish the coffee.
Boom!
TLDR; no wife no problem
People in a kitchen are CPU cores (processing units), making coffee (1) and making toast (2) are tasks or jobs, loading the toast/finishing the coffee is computation, waiting for the toast to heat/the water to boil is waiting for IO, and the toaster/kettle are the OS/network.
What you are describing is called parallelization. Designing, planning and scheduling tasks in such a way that they can be done in parallel (= concurrently = at the same time) by multiple execution units, while also minimizing their idle time.
Don't know much about building houses myself, but can confirm it probably involves both bricks and plumbing.
1:55 Parallelism can be seen as a subset of concurrency, whether you "believe that" or not.
Parallelism: **executing** multiple tasks simultaneously across multiple resources (CPU cores).
Concurrency: **managing** multiple tasks that could, although not necessarily, be running on a shared resource.
The Rust project should have implemented async as a library in userspace.
There was no need to bake all the syntax sugar into the language.
At the end of the day I personally don't give a shit about articles like these. I have written a very large backend in Rust and it works perfectly, and it is much more maintainable than any TS nightmare a different programmer would have done. The places where you need to use stuff like Pin are very uncommon. Also, most frameworks do the heavy lifting for you, so shared global variables become non-issues.
backend as in something that does HTTP + DB calls?
Why are you comparing to TS?
That's garbage.
@@dn5426 No, he meant a middleware which passes the values to a real backend. I'd also add FS on top, because it can get pretty spicy with all the multi-platform stuff, so I don't believe you can write "maintainable" code in this context, especially if you need some custom storage implementation logic (multi-platform ofc).
Concurrency = Doing two or more things by switching between tasks either periodically or when needed. E.g. when one thing is waiting for something else to happen.
Parallelism = Doing two or more things at the exact same time.
Threads are a form of concurrency, where the hardware periodically interrupts a task in order to preemptively switch to doing something else. Two or more threads could potentially be executed in parallel, but only if you have multiple processors or processor cores.
Concurrency is like a manager trying to do multiple jobs by himself, whereas parallelism is the manager deferring those tasks to his team members, which is much more efficient. When I worked in NodeJS, I always considered the event loop as the manager.
I wonder if it is possible to somehow introduce things to rust that "fix" async to be more approachable.
I like the article's explanation for parallelism vs concurrency. Concurrency is when you break your problem into independent pieces. Those pieces might run on the same CPU, they might run at the same time, or they might not.
Parallelism is what happens when you run concurrent tasks at the same time.
The fact that there is this perpetual need to re-explain and argue about concurrency vs parallelism makes me feel really secure about my continuing employment.
On the other hand, I feel very uneasy to see how so many people get it confidently incorrect.
Fun fact:
You can't explain concurrency without explaining what those tasks do. You must first make the person you're explaining it to understand that a task gets "paused" by the kernel during an IO op, but with modern kernel APIs you can continue doing some other task by the time this previous task is resumed again. In the past an application would wait for a task to complete before it "can" move on to the next one, regardless of whether the task is an IO op where it's simply waiting for a thing to happen.
it's mentioned by all possible names in the article
2:20 _Parallelism_ exploits algorithmic efficiencies to solve underutilization of CPU cores. _Concurrency_ exploits I/O bottlenecks to do the same thing _or_ allows you to logically separate the work among several actors with distinct roles.
NFS is the Network File System. We use it at home to have all computers' home directories on a central computer.
Concurrency is parallel waiting. In concurrency, you can sit around and do nothing while other people have their turn, and then pick up where you left off when some resource becomes available.
That's called multitasking or multiplexing or time-sharing, not concurrency. It's only emulating concurrency as there is always only one task that is running at one time.
Concurrency is about having multiple tasks running simultaneously from a conceptual standpoint. Parallelism is about literally being able to execute more than one CPU instruction at a time. Which is made somewhat weird by the fact that pipelining makes even a single core CPU engage in parallelism even though there is no concurrency.
That's certainly a correct definition of parallelism, but I would constrain it to "executing more than one stream of instructions at a time".
A CPU can definitely do fancy out-of-order and superscalar stuff, but that only works as long as it ends up being sequential at the other end again.
Of course a superscalar architecture, pipelining, etc. is a large part of what enables this (SMT), but I find that executing multiple streams of instructions simultaneously is usually what people mean when they talk about parallelism!
Nah. Running tasks in parallel is exactly the same thing as running them concurrently. It has literally the exact same meaning.
What prime is describing as "parallelism" is in fact called "parallelization", i.e. designing tasks in such a way that sequences which are independent and don't have to wait for one another may be executed in parallel. But this might just as well be called "concurrentization".
And what you are describing is in fact called multi-tasking, or multiplexing, or time-sharing. This is a method of splitting a single shared resource, such as an execution unit (or a communication channel in networking), into multiple time-chunks so that over a longer period of time it effectively appears to be multiple units. This is only "emulated" concurrency, not true concurrency, as there is always only a single task running on the unit at any given moment.
@@kyjo72682 - There is a very good CppCon talk from a couple of years ago that explains why my way of viewing this is actually more useful. Time sharing and multi-tasking are indeed a form of concurrency. But concurrency is about the logical structure of your program. You can have concurrent tasks executing in parallel. You can have all the parallelism in the world, but if you're using something like Python's GIL, you have no concurrency.
Please read through Without Boats's response to this article. This is the problem with react videos: you can't just say, "oh, that sounds good," and not dig in a little further. Async without GC is hard. Linear types are interesting. Passing information between Rust and a GC language requires a LOT of forethought, knowledge, and work. But that's why there are libraries and amazing groups of people to work through the nuances.
Where can I find it?
By the way, 39 data-racing vulnerabilities were announced in Rust programs, compared to 2 data-racing vulnerabilities in Go
For people elated about type annotations in JavaScript files: they won't happen, because they would straight up add parsing time and size to scripts with no runtime benefit whatsoever (and therefore the implementors would be blamed for "slow sites"). It might somewhat work in Python, because there it's parse once and run many more times in prod, but not in browser JS.
For the same reason there won't be pipe operators in JS either: they incur a performance penalty on almost the entire runtime, and implementors will be blamed for it.
parallelism is a subset of concurrency... all parallel tasks are concurrent tasks, but not all concurrent tasks are parallel... concurrency is about solving problems of sharing resources (CPU, memory, files, etc.) between processes/threads/tasks both in parallel or in pseudo-parallelism
Hi! Can you explain why rust is great for CLI tools? What does it offer that, say, Go doesn’t?
serde and clap, all about the beautiful #[derive] macros
I'm so confused. If async/await is concurrency, then are goroutines parallelism? If so, does this mean that Go doesn't actually have concurrency, since it just constantly spawns green threads? In C# you have async/await, green threads, and "normal" threads, if I understand correctly. So are languages like Go and Rust a step backwards? Are tokio spawns stackful or stackless? Am I even alive right now?
Go abstracts the difference - goroutines are both concurrency and parallelism.
You can have 16 CPU cores, spawn 16 computing goroutines and benefit.
You can also have 1 CPU core, spawn 16 IO bounded goroutines and benefit.
Because Go hides away the difference.
@@viktorshinkevich3169 Thanks for the explanation. But how does it know when to do concurrency for io bound stuff and when to do parallelism for cpu bound stuff? And is there a way to set it manually?
@@crimsonbit
You are welcome!
So again, that's the cool part about Go - you don't necessarily need to know.
You are computing 4 different unrelated values that take time, on a 4-core computer, spawning 4 goroutines? Fine, the Go scheduler will try to map those 4 goroutines onto 4 threads on 4 cores, speeding up the computation 4x.
You are requesting 4 different websites with 4 URLs on a 1-core toaster? Fine, the Go scheduler runs them concurrently on 1 thread, switching from one goroutine to another while waiting for the network responses, speeding your program up to the time of the slowest HTTP call.
Go does so because it can tell IO-bound work apart from non-IO-bound work (plus timers).
If you spawn a goroutine that just calculates something, Go will know it should not be "async".
If you spawn another goroutine that does a timer (sleep), makes an HTTP call, or reads from a file, Go knows that goroutine is "async", since the scheduler can easily put it on a "waiting" list - there is nothing useful it can do in the meantime, it just awaits the response. If it just waits for an HTTP call to go back and forth across the ocean, through 15,000 km, for a second or two, that's totally legit.
For the programmer in Go, there is no special syntax marking whether he's writing an async function or a regular one.
In Rust, NodeJS, Java, C# there is.
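For contrast, here's roughly what that split looks like on the Rust side with Tokio, where the programmer has to make the IO-vs-CPU call explicitly. This is only a sketch, assuming the tokio crate with its full feature set:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // IO-bound: an async task the scheduler can park while it waits.
    let io = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(100)).await;
    });

    // CPU-bound: explicitly moved to the blocking thread pool, because
    // unlike Go's scheduler, Tokio won't detect this for you.
    let cpu = tokio::task::spawn_blocking(|| (0..1_000_000u64).sum::<u64>());

    io.await.unwrap();
    println!("sum = {}", cpu.await.unwrap());
}
```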
"Fibers" were a thing way back in C++ & COM too I remember. I don't believe they ever really went anywhere though.
boost::fiber is a nice C++ library. I’ve written some nice async apps with it.
parallel is the same task running in multiple threads, concurrent is different tasks running at the same time maybe in different threads.
I still find the discussion of channels funny though, particularly this article when it says "after decades of mutex madness". How do people think channels work? They're *literally* designed as a critical section (CS) behind, you guessed it, a mutex (or some other locking/guarding mechanism).
Yes but the abstraction is significantly less powerful (and less dangerous) than a raw mutex. The programmer never needs to worry about shared memory, all of the messages sent and received are owned by one thread until they are moved/sent.
Yes, along the same line: control structures (loops, if statements, and function calls) are also just glorified gotos and branches (and stacking)... they provide a structured interface which helps prevent footguns.
The deadlock condition on a correctly implemented queue is mutual waiting: there are threads which want to consume while there is no one left to produce (and the queue is empty).
@@bonsairobo Yes, I'm just saying that writing an ad-hoc channel is fairly trivial, particularly in languages like C++: you use a queue and a mutex that wraps the pushing and popping of values off of it.
Reading off of a channel is a blocking operation, and as such it looks and behaves exactly like a raw mutex. And when writing your own solution, you can add a try_read that checks for content before reading, and just like that, you have implemented the mpsc channel in Rust.
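To make that concrete, here's a minimal sketch of that queue-plus-mutex shape, written in Rust rather than C++ for consistency with the rest of the thread. The Channel type and its method names are mine, not std's:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};

// A guarded queue: the whole "channel" is a mutex around a VecDeque
// plus a condvar so receivers can block until something arrives.
struct Channel<T> {
    queue: Mutex<VecDeque<T>>,
    ready: Condvar,
}

impl<T> Channel<T> {
    fn new() -> Self {
        Channel { queue: Mutex::new(VecDeque::new()), ready: Condvar::new() }
    }

    fn send(&self, value: T) {
        self.queue.lock().unwrap().push_back(value);
        self.ready.notify_one();
    }

    // Blocks until a value is available, like a raw mutex wait.
    fn recv(&self) -> T {
        let mut q = self.queue.lock().unwrap();
        loop {
            if let Some(v) = q.pop_front() {
                return v;
            }
            q = self.ready.wait(q).unwrap();
        }
    }

    // The try_read described above: check for content before reading.
    fn try_recv(&self) -> Option<T> {
        self.queue.lock().unwrap().pop_front()
    }
}

fn main() {
    let ch = Arc::new(Channel::new());
    let tx = Arc::clone(&ch);
    std::thread::spawn(move || tx.send(7));
    println!("got {}", ch.recv());
}
```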
@@matheusjahnke8643 That wasn't my point, or what I was implying, though. What I was implying is that writing your own ad-hoc channel abstraction is pretty trivial in languages like C++. Is it going to be *as general* as a channel from the standard library? Well, that depends on how good you are at writing templated generic code, but even with simple OOP techniques it is trivial.
@@simonfarre4907 What is your point? Because I read it as: "this thing is so simple people should just do it themselves." Hopefully it is clear why this is a weird statement to make, especially in software.
9:34 In many languages, queue-like solutions involve not only a "copy" but also serialization (pickle, in Python terms), which is a huge waste of resources.
I'm with you on Pin and Unpin. I think I get it, then the next day when I think about it again, I don't.
Just like monads
21:39 NFS = Network File System. A shared folder in Unix; a filesystem shared over the network.
Go is not really CSP. The memory passed is not immutable so you can still have data races.
Something about a jack of all trades being a master of none, but oftentimes more useful than a master of one.
Parallel is when two chefs prepare two meals in the kitchen. Concurrent is when the same waiter serves those two meals to two customers.
Other reply said concurrent is 1 chef making 2 meals...
Problem is, both scenarios need additional constraints to always be true. E.g.,
1 chef can bake 2 simple dishes in parallel, and 1 waiter can serve 2 side-by-side customers in parallel (this would be super unusual). After careful thought, I think the waiter analogy wins, because serving 2 customers simultaneously is not typical.
What's so hard about pointers and malloc? I don't get it.
I don’t get it either, but I haven’t programmed that much c. The only problem I’ve come across is if you have many exit points to a routine it’s easy to forget to free something at some exit point. Also you might have to look at the man page or docs of a function to understand whether you are responsible for freeing the data.
But I feel like all this could be solved by adding a defer statement to the language, and perhaps adding some annotations to signify that a function returns heap-allocated data (which would let the linter warn you if you forget to free something).
Other than that it’s a very simple concept I think
10:09 In PHP they're called fibers. But coroutines, or whatever they're called in your language, are not about who schedules the tasks, the OS or the app. They're about resumable, suspendable functions. When you do await fetch() from inside myfunc(), myfunc should be suspended and control goes to the main loop; when fetch is fulfilled, myfunc is resumed. How is that done? The function's stack is saved to the heap, then switched back when it's resumed.
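The same suspend/resume shape sketched in Rust, with one caveat: Rust's async is stackless, so instead of saving a whole stack to the heap like a PHP fiber, the compiler turns the function into a state machine that records where to resume. The fetch function here is a hypothetical stand-in for a real HTTP call, and the executor comes from the futures crate:

```rust
// Hypothetical stand-in for an HTTP client call; a real one would
// yield to the executor while waiting and resume when data arrives.
async fn fetch(_url: &str) -> String {
    String::from("response")
}

async fn my_func() {
    // Suspension point: my_func's state is saved inside its Future and
    // control returns to the event loop until fetch completes.
    let body = fetch("https://example.com").await;
    println!("{body}");
}

fn main() {
    // Any executor works; this one is from the `futures` crate.
    futures::executor::block_on(my_func());
}
```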
It's almost like people are equivocating on the meaning of "async". Assuming async is a transferable skill between languages is as absurd as assuming any other feature maps one-to-one, but worse, because it is a very involved feature. It is not a bad thing to require knowledge of how it works, it's a *requirement*, and people are assuming that one set of design/implementation decisions (of their familiar language or type of languages) is "correct" async.
Hot async with an implicit runtime in a dynamic or GC'd language is not the same thing as stackless cold async in an unmanaged systems language. Being upset that design patterns and practices don't transfer from one to the other is like complaining that you can't use your dynamic programming style with objects and closures and lambdas in C. It's a sign that you lack understanding and did not consider the very different design goals.
It's great that popular languages can make it possible to use async and other abstractions without understanding how they work (in simple cases), but I will not accept that people are writing real non-trivial production async code in any other language without *any* level of understanding (or training) in what it actually does under the hood. They are forgetting the fact that they were not born writing async code.
I typically am stuck in embedded C land, but when I first started working with async in python and C# I absolutely ran into issues where that "async code is code" abstraction fails. Working with async required a high level understanding of the implementation details to truly utilize it in a non-trivial application. The ease of apparently-working code is as much a weakness as a strength. The magical runtime can lead you into thread safety violations if you don't know that it schedules to a thread pool and that you must use synchronization tools in many cases. I've inherited some code with many un-awaited async tasks, and at first I'm like how is that even working? It isn't all intuitive in any language for non-trivial uses.
I remember when async was first becoming popular, and there were *so many articles* explaining it, because it is literally just a shorthand for a specific set of design patterns.
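One Rust-specific wrinkle worth knowing in this context: futures are lazy, so the "un-awaited task" failure mode described above shows up differently than in eagerly-scheduled runtimes like C#'s. A small sketch, assuming Tokio:

```rust
async fn save() {
    println!("saved");
}

#[tokio::main]
async fn main() {
    save();       // warning: unused implementer of `Future` - the body never runs
    save().await; // only this call actually prints "saved"
}
```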
Boy, that article's author thinks in (congested) bloat. But I think his conclusion is correct.
Oh i love the days of maniacally chasing down semaphores locked by zombies.
Why is it so difficult to define concurrency? I feel every time I hear the term, people give a different definition and can never really agree on what it actually means. It's either that the order of operations doesn't matter, that it presents an opportunity to parallelize (i.e. concurrent tasks can be parallelized), that the code is running independently but always executes one task at a time (a big difference, since it means that parallelized tasks are not concurrent; prime literally said this at 1:56), and endless variations of these. The example Prime gives at 2:17 makes me think it is all about utilizing wait time, which I've seen in several examples, but I have never heard anyone explicitly say out loud. But this doesn't feel like a definition of the word concurrent; it's how to make concurrent tasks run faster. So it shouldn't be brought up when defining the word, only when you talk about why concurrency is important.
I dunno.. It feels like the concept is not complicated at all, but different people focus on different details of the definition, and as soon as you don't focus on the right detail in the same way as the other person, they will tell you that your idea of concurrency is wrong. I hate words.. so much...
Parallelism is number of washing machines, concurrency is independent loads of laundry.
Number 5. Each thread having a 4kb control block... Is this related to why when piping data between processes the buffer size is 4096 bytes?
Rust async should be compared with C++ async; there you do not have Pin.
6:45 we've been doing that in Python for 32 years haha
Classic parallelism vs concurrency confusion. I don't think Prime's definition is good enough.
I can put on my left shoe and my right shoe in any order (concurrently), but I don't have the dexterity to put both of them on at the same time (in parallel).
If you want Node.js to use ALL your cores, try scanning the Arbitrum blockchain, block by block, for arbitrage and liquidation opportunities, calculating price data for 100+ token pairs on each block...
4:50 It's 50 years. Pipes exist in Unix for 50 years this year, not just 40.
I really like the syntax of Rust; it's just the lifetime complexity that makes it hard to use. If Rust had a built-in, semi-automated garbage collector in the std library, I wouldn't use another language anytime soon.
What I really want is Rust with all the memory safety, true functional language features, and Hoare concurrency (all communication with green threads occurs via channels with immutable messages). Who is working on this?
(alert: reductionism in progress)
Parallelism is when everyone is drinking from their own bottle. Concurrency is when everyone passes the bottle to the next person after a sip.
#erlang #elixirlang #golang
baby shark doo doo, doo doo. doo doo
Mommy Shark, doo-doo, doo-doo, doo-doo, doo-doo
Nice
man has a typescript cold, can't stop talking about damn node lol
Poor Rust, getting hammered.
It will survive, LOL.
Hidden fact: there are no real async filesystem operations (at least setting aside newer kernel APIs like io_uring). Everything in Node.js's fs.promises is faked behind a thread pool, unlike network kernel routines, which are really async. There have never been really useful async file routines: you can do something else while the network socket consumes the data you have sent, but you can't do the same with a regular file.
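Async Rust illustrates the same point: to my knowledge, tokio::fs operations are offloaded to a blocking thread pool (via spawn_blocking) rather than being truly async at the kernel level, even though they look async at the call site. A sketch assuming the tokio crate:

```rust
#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Looks async, but under the hood the read runs on a blocking
    // worker thread, much like Node's fs.promises.
    let contents = tokio::fs::read_to_string("Cargo.toml").await?;
    println!("read {} bytes", contents.len());
    Ok(())
}
```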
I still don't get the difference between parallelism and concurrency. I see them as synonyms. Two tasks running in parallel means the same thing as two tasks running concurrently. Same thing. Instead what I think people are trying to describe is the concept of PARALLELIZATION of tasks. I.e. writing programs in such a way that sequences that CAN run in parallel are allowed to do so, that they are not unnecessarily blocked from doing so by how the program is written. (This also generalizes to how machines and factory lines are designed and to any planning and scheduling of tasks in general.)
I also see people confuse concurrency with multi-tasking (multiplexing/timesharing a single execution resource). While the goal here is to achieve "effective" concurrency with respect to some longer period of time, this is technically not true concurrency as there is always only a single task that is running at any single moment.
The main advantage of async Rust over Go is the possibility to throw all the async-unrelated stuff into the non-async part of the program. In Rust you just don't use Arc, Pin, and other stuff like that there. In Go you still pay the penalty for GC, stack-growth overhead, etc.
Because Rust is designed to provide zero-cost abstractions and Go is not; and what you said is as much an advantage as it is a disadvantage.
The problem with multithreading is that the programmer is the one implementing it. The CPU hardware should be delegating work to its worker cores. It should operate internally as a multithreaded system and present the user with an apparent single thread; the user should have the option to bypass the thread overseer, but the default behaviour should be to just let the thread overseer handle threading and data-race prevention.
Multi-threading should be a hardware-level issue, not a software-level issue. There is no reason anti-data-race circuits could not be added to memory modules, using an XOR-gate access lock at minimum.
Hell, it could even be implemented at the compiler level instead of the programmer level: you just have the compiler mutex-lock any variable that can be accessed from multiple threads, internally handle those as structs containing the var and a mutex, and then overload the access operators to acquire and release the mutex locks.
.....at the developer level, this is a single template class.........
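For what it's worth, that "single template class" is roughly what Rust already ships as std::sync::Mutex<T>: the data lives inside the lock, so unsynchronized access doesn't even compile. A minimal sketch:

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // Mutex<T> owns the data; the only way to reach it is through the
    // lock, so a data race on `counter` is a compile error, not a bug.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            std::thread::spawn(move || {
                *counter.lock().unwrap() += 1; // lock acquired and released here
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap());
}
```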
A recursive async function? Would it garble the call stack, or would each recursive call not be async (for my sanity)?
It has the same problem as synchronous recursion (actually it's worse). If it's unbounded, you'll either get a stack overflow or run out of memory, depending on whether the recursive call is synchronous or asynchronous. Even if it's bounded, you could still get a stack overflow when the deepest task completes and starts to unwind the async call stack (.NET has special handling for such a situation, but it's an extremely rare case).
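In Rust specifically, the compiler makes you confront this up front: a recursive async fn won't compile until you box the future, because the state machine would otherwise have infinite size. A minimal sketch, assuming Tokio for the executor:

```rust
use std::future::Future;
use std::pin::Pin;

// A recursive async fn must box its future: the compiler can't size an
// infinitely nested state machine, so each level goes on the heap.
fn countdown(n: u64) -> Pin<Box<dyn Future<Output = ()> + Send>> {
    Box::pin(async move {
        if n > 0 {
            countdown(n - 1).await;
        }
    })
}

#[tokio::main]
async fn main() {
    countdown(1_000).await; // deep, but each "frame" is heap-allocated
}
```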
send in the chungus
"Just use Go" - Rust Programming Language Book.
really? 😮
Meanwhile, Java's managed threads quietly give you the memory benefits of async/await without all the callback complexity. With async/await, to call A, B, and C you have to say something like "tell B to call C when B is done, then tell A to call B when A is done". In Java you can say "do A, do B, then do C". Behind the scenes, Java has a number of real threads proportional to the number of cores in your hardware, and all the "virtual" (aka micro-) threads just take turns using the real threads. I love the way Java hides most of the ugly complexities from you while still giving you compiled-language performance. Not as fast as Rust, of course, but more than fast enough for almost any job.
Soon GraalVM will gain popularity and Java will become popular again. I noticed that while coding in Rust I get tired very quickly. This is not what I expect from a modern language.
I like semaphores, because they remind me of trains. Checkmate!
Do you think he listens to Tokyo Drift while using Tokio?
Yes, I agree on that. I'd argue that this is the cause of part of your souring on Rust. I imagine that at Netflix you've been mostly writing async Rust, and I've long been saying that with async you lose a lot of the advantages of Rust.
Rust is really good for data-oriented, ECS-based games.
Parallelism may not use two separate CPUs; it may use two different threads, and you may use the same CPU for those threads. Splitting hairs, but, meh.
Never had to use Pin. What the h*ll were you developing? An async linked list?
Parallelism does not require multiple CPUs. Just to be clear
Can't someone write the equivalent of Go in a Rust library?
Prime, I know you read parallelism vs concurrency by Rob Pike, but you should watch the video too.
Love the humour!
"pretty neat"
So why the fuck did I learn Rust then? It was being advertised as having excellent concurrency, and up until now I just thought I was too stupid to get it, but apparently other languages have it easier. Wow, what a waste of time that was. Shit. Should I learn Go?
Async obfuscates every language. Solution #1: blocking IO with multiple threads is very straightforward, especially if you use immutable data, so race conditions are rare. Problem with solution #1: with many thousands of threads, this will unnecessarily use up all your memory. BUT are you really going to need tens of thousands of threads? Probably not, so we can stop at solution #1. Solution #2: use solution #1 but with virtual threads. This uses microthreads behind the curtains, but they look like normal threads. Java has this, and it is coming soon (or is already there) in every other popular language. Problem solved.
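Solution #1 sketched in Rust with scoped threads. The "URLs" are hypothetical stand-ins; a real blocking fetch would simply go inside the closure:

```rust
use std::thread;

fn main() {
    let urls = ["example.com/a", "example.com/b", "example.com/c"];

    // One plain OS thread per request; the only shared data (the urls
    // array) is immutable, so there is nothing to race on.
    thread::scope(|s| {
        for url in &urls {
            s.spawn(move || {
                // A blocking fetch would go here; the thread just waits.
                println!("fetching {url}");
            });
        }
    }); // the scope joins every thread before returning
}
```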
I'd say parallelism is when something runs in parallel on a given processing unit (i.e. your GPU or your CPU) while concurrent means that we are doing things in parallel across different processing units/hardware, i.e. doing work on the CPU while some other component is doing data transfers instead of just waiting for that to finish. When you're calling some function on the CPU that tells the GPU to do something, that will run concurrently. The task you gave to the GPU itself will execute in parallel on the GPU, and the GPU itself can also run things concurrently since it has more than just one streaming multiprocessor and can very well be running different tasks at the same time, but only one dedicated task in parallel on a given SMP.
My turn to ackchually about parallelism and concurrency.
Parallelism is when things run at the same time... so processes aren't parallel in an old, single core, computer
Concurrency is when things share resources... processes are concurrent because they share memory and CPU(and access to disk space... and internet and stuff)
I disagree that other languages do async "better". Async as a language feature adds exponential complexity by trying to pretend everything is single-threaded. Unroll the state machines explicitly and you can reason about what your code is doing in the real world.
Why is it that everyone loves Rust… except the Linux folks? The same ones who actually think X11 is secure and perfect and not 40 years behind GDI on Windows and hardware acceleration on macOS, and actually think init is superior to systemd.
It's old people afraid of change. Good god, how did Linux become so technophobic?
NFS was great (other than the stale mounts of course)!!
Nice explanation of parallelism vs concurrency 👍
I don't think so.
deez parallel units
Interesting comment on Bun; I heard that from another person this week too. Great for some areas, but it needs work in so many others to really be the perfect solution.
If async/await/futures/promises is a leaky abstraction, how are channels better? I think it will become event hell, the same as RxJS observables.
It's not hell in Golang, it's not hell in Elixir, why should it be hell here?
Async and all that color the function, and that coloring spreads to its caller. With channels, you don't know if a function you call is shelling out to something else, and you don't have to refactor everything to use it.
@@AnthonyBullard But if the called function is inherently asynchronous, how can the caller not know this? So if the caller just cares about the value, it will act like await, but if the caller needs async interaction, it will invoke it differently and subscribe to its channel?
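A small sketch of that decoupling in Rust terms: the consumer just reads values off a channel and never sees whether the producer was sync, async, or shelling out to something else (the names here are mine):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // The producer could be blocking, driving an async runtime, or
    // calling another process; the consumer's code cannot tell.
    thread::spawn(move || {
        tx.send(42).unwrap();
    });

    // An ordinary blocking call; no async "color" leaks to the caller.
    let value = rx.recv().unwrap();
    println!("got {value}");
}
```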
Pixel 7 eh? The volume rocker is soon to fall out.
Can we all just agree that when it comes to IO, Go's abstraction is so much easier and more convenient than Rust's?
Parallelism is two people each chopping an onion at the same time. Concurrency is one person chopping an onion while they wait for the oil in the frying pan to heat up.
0:35 are you running GrapheneOS on it?
Asking the right questions
Is GrapheneOS still a good option for Pixels? Heard that the dev (?) has some... mental issues
@@privy15 he does, but a) paranoia probably isn't bad for a secure OS, b) he stepped down
@@mks-h He stepped down? That's new. Thx
@@privy15 It's better than stock Android full of Google. I don't think the devs can even do targeted attacks. But you can run Google things sandboxed (and even in a separate profile) for bank apps etc. Either way, it is a good idea to use some FOSS Android without Google reading all of your messages and activity.
* cries in Async Rust *