I've been in games for decades now, and I agree with Prime. The bulk of bugs, by a large margin, are logic bugs, not memory corruption and data races. But run-of-the-mill logic bugs aren't nearly as much fun to tell as overwriting arrays or data races. At game companies, you often implicitly do the same thing as Rust or C# and have libraries with unsafe code, and then write a ton of code in a more safe style. As I mentioned, it's a much better story to talk about a bug where a static variable was getting set twice (which, uhh, should be impossible), and yes, that was caused by someone writing off the end of an array, vs fixing 30 bugs about not doing something with one enemy, missing a case for something, not switching states in a state machine at the right time, etc. Oh, and at the company I mentioned in another response (an original Xbox game company), our only "memory leak" was because I had an array that I never removed items from. Which is a memory leak literally all languages can have. Would I like to flip implicit for explicit and add a safe keyword to C++? Absolutely. It'd be awesome. It'd be an interesting exercise to see how close one could get to making a new C+++ where you can incrementally update parts of projects from unsafe-by-default to safe-by-default.
The difference between Zig’s comptime and C++ templates and constexpr is that Zig’s stuff is unified. In C++, you can call constexpr functions at compile time to produce compile-time constants, e.g. for array sizes or template value arguments. Template stuff is declarative, constexpr function execution is imperative. That makes sense from where C++ is coming from, but modern language designers realized that the clear separation of types and values isn’t useful at compile-time. In Zig, types are values at compile-time. A function can have parameters that contain types as values and return types as values at compile-time. In C++ lingo, that would mean that not only can you write a consteval function that takes in something and returns a number (for an array bound), but also, you can write a function that returns a type to be used as a type for a run-time value. Essentially, type_info, at compile-time, isn’t meaningfully different than a type, so why have them be different things? In C++, there are function templates, class templates, value templates, and alias templates. In D, there are just templates that happen to produce functions, classes, values or aliases (which is a lot more streamlined), and in Zig, there are no templates, but functions that shove types around as if they were values. Sorting types (at compile-time)? No harder than sorting values at compile-time, which is no harder than sorting values at run-time. Why copy values and alias types? If a type is a value (which it is at compile-time), an alias of a type is exactly a (const) copy of a type. So in Zig, things that are separated for no good reason (in hindsight) aren’t separate. The only difference between types and other values is that types don’t exist at run-time, which means if you’d end up with having to run a function that manipulates types at run-time (e.g. because of a run-time value argument), the compiler will complain.
I really don't get the point of hating on "if it compiles it works"... It eliminates a whole type of bug completely, and logic bugs are there in any language anyway?
It's like claiming that type systems are pointless because they can't catch logic bugs either. Ideally the computer should take care of as much menial work as possible, and both a type system and the borrow checker do that.
@@skulver but that's genuinely not true in Rust; there are whole classes of bugs which Rust cannot statically check, out-of-bounds errors for example, so it can still crash.
Dude, it's not true that ownership is not a biggie. It is a biggie. Especially in a kernel environment where you have to worry about concurrency and reentrancy (you could call user code and then be called again from that user code). The hardest-to-find bugs are by far those to do with memory safety. Because ultimately you're not writing a closed system where you have control over everything. You're writing an open system where most of the running code is user code or kernel module code. It's incredibly hard to reproduce issues. It's much, much easier to fix invalid logic bugs than memory management bugs in such an environment.
If you build any heavily concurrent system, I think ownership actually does cause a lot of bugs. I worked on an engineering simulation software that did a ton of concurrent calculations, some of which were interdependent, and the two biggest classes of bugs I dealt with were serialization issues (more related to networking) and data races. Even currently, working in the cloud, I deal with more race conditions across services than logic bugs. I feel like the majority of logic bugs were found early on in testing because they are typically deterministic and thus easily reproducible. But with race conditions, there have been a handful where they were seen, but then couldn't be reproduced, and the QA engineer would end up getting gaslit into thinking they must've just not executed the test correctly.
Re: "If it compiles, it works". As someone who comes from C/C++ like Pekka, this definitely *feels* partially true in Rust. Yes, logic bugs are still a problem, but in my experience most of the bugs I have to deal with in C++ are memory errors (working with low-level systems programming). Almost all of these are compiler errors in Rust.
If most errors you see in C++ are memory errors, you're using the language incorrectly. If you're tracking memory allocation correctly (i.e. using dedicated memory management types such as std::vector and std::unique_ptr instead of having each of your classes take on that responsibility in addition to their business role, tracking pointers with types such as std::span and std::string_view which also track size information, and just being minimally thoughtful about ownership, which you also have to do in Rust), you should hardly ever see any problems.
@@CGMossa It's not really onerous to say "your class should have a vector of things instead of a pointer to an array of things". What I described is literally the easiest way to set things up. People go out of their way to make C++ hard.
@@isodoubIet Rust is still safer than C++ even with std::vector and std::unique_ptr, because references into either can be invalidated if the memory is released or the std::vector is resized. Almost all data structures in the C++ standard library have reference or iterator invalidation caveats that can typically only be detected at runtime with address sanitizers, and detection can still be flaky. This can't happen in Rust due to lifetimes enforced at compile time.
Yeah, logic problems are much easier to exclude too. I do C++ at work, and still find that refactoring Haskell I wrote last year is easier than C++ I wrote yesterday. The ability to reduce the problem space with enum arguments, and then have the compiler enforce completeness makes it much easier to write code I can reason about, and easier to keep track of all possible paths through my code. This seems to be the case for Rust as well.
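A quick Rust sketch of that "compiler-enforced completeness" point (the Shape type is made up for illustration):

enum Shape { Circle(f64), Rect(f64, f64) }

// Exhaustive match: adding a `Triangle` variant later turns every
// non-updated `match` like this one into a compile error.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect(2.0, 3.0)), 6.0);
    assert!(area(&Shape::Circle(1.0)) > 3.14);
}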
To distill the author's point: they point out that Zig doesn't have a dedicated (meta) language for types because the same Zig is used for both metaprogramming and programming. The argument for making the type language the same as the main language is that you don't have to learn two languages and the interplay is good. The argument against is that a meta language might have different requirements and benefit from being a different language (usually declarative, with different primitives like, say, Type).
Async Rust doesn't use green threads though. It's agnostic. Sure, Tokio's runtime for async uses green-ish threads, but that's not the only runtime! You can use async on embedded devices without allocators and without OS threads too. Async Rust really is just polling objects and having wakers that can notify the executor to poll again. Anything more than that is runtime specific and is different for each runtime you may use
Async Rust is even more powerful. With a bit of hacky workaround, you can implement generators from it. I think the generators proposal was sort of based on async. Async iterators: boom, easy and included. I once tried to handle a state machine, and I ended up using async + other stuff. It's surprisingly usable and easy for the end user.
Correction! Go doesn't usually "spawn threads" when calling a goroutine. The runtime spawns a bunch of threads when the program is started, and then Go distributes work chunks onto those existing threads (hence they are called lightweight threads), because you save the overhead of creating the thread context every time you spawn a goroutine. This is why they are that fast.
I object, your honor, to calling debugging tricky memory issues in C/C++ code, a "skill issue". The best C/C++ programmers in the world have this problem. So, unless "not having god-like omniscience" is considered a "skill issue", then this is not a skill issue. It is a language issue. I'm not saying it is a language issue that isn't worth the extra work. Not saying we should all stop with C/C++, just saying that isn't exactly a skill issue.
Actually, I DO have a skill issue with those languages. I'm just saying that people who are wizards with those languages still struggle with memory issues.
@@freeideas This is why it's a problem that you conflated C and C++. In C, yes, everyone will struggle with memory issues. C++? Not so much. The language provides tools for managing memory that make it so you hardly ever have to think about it.
The language told you that you are responsible for the memory, so how is it a language issue if YOU mismanage YOUR memory? You Rust cultists have a serious problem with taking responsibility for your failings.
"if it compiles it works" is less about "bug-free" binaries and more about completely getting rid of undefined behavior. as a former TypeScript dev, Rust is a godsend because it's so much easier to debug than TS.
How Rust async executors work is actually not /that/ complicated. Basically you have a queue of top-level futures that you create using things like `tokio::spawn()`, which get polled when they are woken. The fact that this executor is executing this queue multi-threaded is mostly just an implementation detail. And futures are nested state machines that somewhere down the line depend on a "Resource" that `.wake`s the associated Waker when it is ready, which signals the executor to re-poll the associated top-level future. (One example of a resource is AsyncFd, which uses something like epoll to signal when it is ready.)
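To make the poll/wake contract concrete, here's a minimal single-future executor in plain std Rust; a toy sketch, not how Tokio actually schedules things:

use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A parked-thread waker: `wake` flips a flag and notifies the blocked thread.
struct ThreadWaker {
    woken: Mutex<bool>,
    cv: Condvar,
}

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

// Poll one future to completion, sleeping between polls until woken.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let tw = Arc::new(ThreadWaker { woken: Mutex::new(false), cv: Condvar::new() });
    let waker = Waker::from(tw.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // Park until some "Resource" calls `waker.wake()`.
        let mut woken = tw.woken.lock().unwrap();
        while !*woken {
            woken = tw.cv.wait(woken).unwrap();
        }
        *woken = false;
    }
}

fn main() {
    assert_eq!(block_on(async { 21 * 2 }), 42);
}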
Rust async does NOT use green threads. The original pre-1.0 Rust did use/expose them. Both tokio and async-std use OS thread pools. It is true that these pools are of the M:N variety, but that is a separate concept from green threads, which typically facilitate cooperative multitasking.
Never used Zig, but I agree that this is_duck example seemed pretty hacky. You accomplish the same thing in Rust in your type declarations without having to write type checkers in the body of your function. I understand why this would be used in a pythonic language though (Mojo): you have things like "isinstance" in the middle of your code and it follows the same pattern.
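For comparison, the Rust version of "the check lives in the type declaration" (the Quack trait and Duck type here are made up):

trait Quack { fn quack(&self) -> String; }

struct Duck;
impl Quack for Duck {
    fn quack(&self) -> String { "quack".into() }
}

// The "is this a duck?" question is the `T: Quack` bound itself: there is
// no type-inspection code in the body, and a bad call site fails to
// compile with an error pointing at the bound.
fn speak<T: Quack>(t: &T) -> String { t.quack() }

fn main() { println!("{}", speak(&Duck)); }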
The mentality of doing things "the Rust way" is real. Even when I go back to writing Java for school work, I now prefer using interfaces and enums over inheritance. Additionally, I make sure that all nullable objects are wrapped in an Optional.
Procedural people have been telling you for decades that full-on OOP is a bad idea so why does Rust get all the credit, just because they have a toxic PR policy?
No, I pretty much agree with him. He did not say "I prefer the pre-processor over Zig comptime", he said "I prefer the pre-processor over Zig comptime IF we can't have generics", which is another thing. Comptime in Zig feels more like a workaround to the fact that type systems are a specific part of a language than anything concrete in itself.
@@diadetediotedio6918 right. It's really cool writing "argument: anytype" and knowing a very lean code will be generated at comptime and the compiler will scream at me if it's invalid, but when I'm reading a function trying to understand how to use it and what it expects of me and I read "argument: anytype" all I can think is "well, shit"
Green threads are cooperative in nature, and tasks have to explicitly yield to let other tasks take turns. The Go compiler inserts yields into the code at strategic points (typically when functions are called or returned from) behind the scenes. OS threads use timer interrupts to switch tasks and do not require explicit yields; interrupts can happen at any time/point in the code.
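In async Rust, such an explicit yield point looks like this; a minimal hand-rolled version of what tokio::task::yield_now does (a std-only sketch):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A cooperative yield: returns Pending once (after scheduling a wake-up),
// so the executor gets a chance to run other tasks before resuming us.
struct YieldNow { yielded: bool }

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

fn yield_now() -> impl Future<Output = ()> {
    YieldNow { yielded: false }
}

fn main() {
    // Inside any executor, `yield_now().await` is the explicit yield.
    let _fut = yield_now();
}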
I tried giving zig so many chances, but it never clicks. I keep drifting back to C, maybe it's just wired into my brain. Rust feels like a breath of fresh air, but its colourful, toxic community keeps me away.
I'd recommend ignoring the Rust community. When I started reading top crates' code and doing syscalls (not really, but direct libc calls) manually, is when I started understanding Rust, deeply.
Red-black: think of it like this: the coloring is there to show you how long the branch is compared to log n... when it's too long, the coloring (and the rotation) is basically just there to balance it... so updating the coloring is a chore that makes rotations possible, to keep the tree balanced within limits. Just like what you do with a B-tree: while in a B-tree you overpopulate that theoretical perfectly-log-n tree horizontally a bit, in a red-black tree you overpopulate vertically; the color limit of the RB tree is like the size limit of the B-tree. At least this is how I used to teach it at the univ... Hope this helps someone randomly finding it. :D
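For anyone who wants the invariants spelled out: a red node may not have a red child, and every path down to a leaf must cross the same number of black nodes; together these cap the height at roughly 2 log n. A small Rust sketch of the check (the node layout is illustrative):

#[derive(PartialEq)]
enum Color { Red, Black }

struct Node {
    color: Color,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

// Returns the subtree's black height if both invariants hold, else None.
fn black_height(n: &Option<Box<Node>>) -> Option<usize> {
    let Some(node) = n else { return Some(1) }; // nil leaves count as black
    let l = black_height(&node.left)?;
    let r = black_height(&node.right)?;
    if l != r {
        return None; // paths disagree on black count
    }
    if node.color == Color::Red {
        // a red node may not have a red child
        for child in [&node.left, &node.right] {
            if matches!(child, Some(c) if c.color == Color::Red) {
                return None;
            }
        }
        Some(l)
    } else {
        Some(l + 1)
    }
}

fn main() {
    let leaf = |color| Some(Box::new(Node { color, left: None, right: None }));
    let root = Some(Box::new(Node { color: Color::Black, left: leaf(Color::Red), right: None }));
    assert!(black_height(&root).is_some());
}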
Most of my career was also C in the kernel (Windows kernel in this case). You get really good at C after years and rarely code bugs of the security-vulnerability nature. There are linters and safe string libraries and stuff like that to make things safer. The book "Writing Solid Code" set me on the right path early on.
As you said, you get good with C after years. I am doing C for numerical stuff and I still mess up indexes and crash my programs. Also, C programmers are elite programmers, better than most. So if you can write C right, you're special, not average.
C developers overuse linked lists because they are simple to implement and get right. Unfortunately linked lists are really slow on modern hardware because walking a list involves constantly dereferencing pointers and that is very cache unfriendly.
depends on how you implement them, and what your program does with each node. if they're intrusive and allocated with a free list into a block of contiguous memory, they can be fast. if they are partitioned into large chunks and those chunks are linked, they can be fast. it's really only the trivial individually heap allocated linked list that is slow, when your loop is tight enough that the cache misses are dominating your performance. if a lot is done with that individual node, and local caches get a lot of use while handling an individual node, it can still cause negligible slow down. but yeah, if you're doing mostly CPU work and hardly pulling in any RAM besides the nodes contents, then an individually linked list can be extremely slow in comparison to a contiguous array. people often cause similar problems in other higher level languages when using associative arrays. if the hash table implementation is distributing access too randomly/broadly across its buckets, then you can get similar cache thrashing. sometimes just doing a linear search on a contiguous array can be much faster, even though its complexity is O(N) instead of O(1).
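A quick way to feel this difference in Rust (a rough sketch, not a rigorous benchmark; exact numbers vary by allocator and CPU, and a freshly built list is actually a best case since its nodes land near each other):

use std::collections::LinkedList;
use std::time::Instant;

fn main() {
    let n: u64 = 1_000_000;
    let vec: Vec<u64> = (0..n).collect();
    let list: LinkedList<u64> = (0..n).collect();

    let t = Instant::now();
    let s: u64 = vec.iter().sum();
    println!("contiguous Vec:       sum={s}, {:?}", t.elapsed());

    let t = Instant::now();
    let s: u64 = list.iter().sum();
    println!("pointer-chasing list: sum={s}, {:?}", t.elapsed());
}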
My learning strat right now is: study C, use the C API to work with SQL, and learn both at the same time. Git gud with C. Continue grinding on C till Zig finally reaches 1.0.
I'd normally suggest Rust. But if C is your way to go, I'd suggest contributing to Open Source for the sake of learning. Always worth everyone's time, yours included 😁
@@mysterry2000 Any more details on getting started with Rust? Any good ways of finding Open Source projects with easy tasks to get familiar with a Rust project, for instance? As for books, I've recently heard a podcast episode that recommended Hands-On Rust.
I pretty much agree with the comptime take (which extends to the notion of types as values).
* It makes it harder or almost impossible for the compiler to infer types.
* It is only resolved when you compile, so you cannot expect reasonable hinting from the tooling.
* It is based on the strange conception that "type systems should not use a different syntax or be in a different world", while simultaneously making this distinction implicitly and explicitly many times because that is just how things work. It feels like a workaround to a problematic worldview more than a properly designed feature.
@@TheFreshMakerHD But template-style metaprogramming is functional, while comptime is inherently imperative. With the functional style we can use things like type theory and immutability, which are way easier to reason about.
Zig is actually more low level than C because you can design data structures with dedicated padding and specific sizes of enums in a way you could not do in portable C prior to C23.
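For comparison, Rust exposes a similar level of layout control (a sketch; the types are made up):

#[allow(dead_code)]
#[repr(u8)] // the enum is stored in exactly one byte
enum Kind { A = 0, B = 1 }

#[allow(dead_code)]
#[repr(C, packed)] // C field order with no padding inserted
struct Wire {
    kind: Kind,
    value: u32,
}

fn main() {
    // 1 byte + 4 bytes; `packed` removes the 3 padding bytes a plain
    // `repr(C)` layout would add after `kind`.
    assert_eq!(std::mem::size_of::<Wire>(), 5);
}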
The main complaint I have about Zig is that in industry it is kind of difficult to push over just using a linter-enforced subset of C++ or Swift. It is an incremental improvement over C, but it doesn't provide a fundamentally new capability that you can't get from any of the millions of other C replacements that have popped up over the years (Ada, D or Nim without GC, V, Odin, Pascal, Delphi, Fortran, C2 and C3, HolyC, and the list just goes on and on; also HolyC is cooler than Zig). Rust on the other hand genuinely does provide a new capability (a non-GCed language with memory safety). Yes, it can be painful to write it, and yes I wish it were smaller and closer to what Graydon originally wanted, and yes you should probably just use threads instead of async/await, but at least I can immediately explain what the point of it is, and parallel (not concurrent) code with Rayon is better than pretty much any alternative I've seen. If you want a better C that is simple like Go, I like C3. It's a sane "fixed C" that does not try to do everything like C++ and just removes the obviously broken C footguns by making things defined, adds defer, interfaces, slices, Odin-like error handling, and Go-like build tooling, while staying fully ABI compatible. If you want a C that is a bit more like Go, that's what I would suggest.
The point the article author made about C being amazing until you need a data structure that you don’t want to roll out on your own is so true. It almost makes you consider C++ until you realize how shit it is.
Saying "I don't like C++ so I'll stick with C" is wild Like a completely bonkers take. The worst parts of C++ are those it has in common with C, my dude.
@@EnDeRBeaT Decent? With types so long that "auto" had to be invented? And however many constructors and assignment overloads there are? Oh, and how many ways of allocating objects? Yeah, C++ beyond C++98 is a huge mess. That's kinda why I don't touch any C++11 features at all. Give me plain old malloc() please.
@@hanifarroisimukhlis5989 I would take long types any time of the day over void*. The constructor complaints are valid, however most of the time you just need a constructor and destructor. Many ways of doing something? Choose one and stick to it. If you're given many tools, you don't need to use every single one.
I like C too, and I know how to write C in a way that never crashes or has bugs. It just needs a very strict development process and verification. It is actually good for making reliable software, because the tools are verified to work correctly, there are tools for verification, and there isn't much complexity. It won't work if it is written the same way junior web developers write JavaScript. It requires a very different mindset.
13:40 This is why I fell in love with the D language ~10 years ago. It allows you to approach problems the way they come up in your head. There is no special D way.
Given that almost all languages use libraries written in C to do their work, I think that C has more libraries than any other language. When people say "You have to write a lot of code if you use C", that just means that the lack of a good package manager plus the general laziness of people causes people to roll their own libraries more often in C. However, just like in other languages, you can do anything in a single line of code: a function call. Also, due to inline functions, you pay less of a performance cost for the abstraction.
C isn't just missing a good package manager, it's also missing a good api layer. It doesn't have generics, it doesn't have that big of a standard library, and because of that, the C ecosystem doesn't have a whole lot of things that you can rely on.
@@hanifarroisimukhlis5989 Exactly. I've reimplemented many build systems in CMake, but it's proven to be more trouble than it's worth. Zig's build system and Cargo (Rust's package manager and build system) are far better, and easier to work with, as well.
@@Luxalpa A huge standard library is a problem. Templates were originally made by using C preprocessor macros, so it does have them, I've used them, but they're ugly, and hard to maintain.
zig's comptime is like C++'s templates, type traits and SFINAE, but on steroids. And then you add a bit of reflection on top of it. It's pretty fucking cool
It also has many of the same problems: duck typing, weird type errors (stuff fails when it's used, so the type error can be far away from what really caused it), and severely limiting the ability of LSP.
I think so too for the most part, but Pekka really is onto something. Preprocessor macros are complex and more than a little terrifying at times, but with preprocessor macros, C folks have settled on a set of standard predefined macros and idioms for applying them. The simplicity is there at different levels. C preprocessor macros are horrible to understand how to build safely or implement, but easy to logically compose because they aren't complicated; it's all textual source code generation. Zig comptime is incredibly powerful, easy to understand and dig into how it works, but much harder to logically compose because it empowers you to do so much at compile time. A comptime function, unlike a preprocessor statement:
- can be entirely bespoke, meaning that you're REQUIRED to dig into the implementation to understand what it really does or how to use it sometimes
- is easy to write, so you're likely to encounter tons of them (the only restraint engineers have is their own laziness!)
- introduces a ton of complexity if you are just trying to do interfaces
I think that over time, Zig will standardize on some idioms for comptime functions, including common techniques and packages for implementing patterns like interfaces, and then these problems won't matter much. However, for the time being, when there isn't a book/definitive resource on when Zig programmers should zig or zag, whenever I see a new comptime function, I have to read it to know what on earth is going on, and that just makes me sad.
Fun fact, I did write my own B+tree in Rust. Why, you might ask? I needed a no_std implementation which worked directly on embedded-io-async, and there was nothing for that out there ;)
This is not having watched the video, but I've been using Rust as my primary language for about 18 months now, and loved it, primarily the strictness of the language, like the borrow checker and all that. And I wanna learn Zig, but I'm hesitant, so I keep telling myself: when it reaches 1.0 I'll pick it up then and see how I go. It's probably also worth admitting that, having only used Python for over a decade prior to giving Rust a go and enjoying it, this is just a lack-of-exposure thing. Though I have been using Haskell for the last 3 weeks as something new to learn, so I won't be ready to pick a new language for at least a year yet anyway.
For clone stack vs clone heap, you're right. That's a bad design decision made by Rust's library, to have the `Clone` trait used for both. They're currently working on introducing a new trait (maybe named `Claim`) to differentiate actual clones (like cloning a `Vec`'s heap data) from "cheap"/"invisible" clones such as increasing the ref-count of `Rc` or `Arc`. This will also fix the problem of having to call `.clone()` on `Arc`s when sharing ownership with closures (lambdas). Today you have to sprinkle `.clone()`, but `.claim()` can be implicit (with an option to opt out and force it to be explicit for code sections or crates where atomic increments/decrements can harm performance). It's insane that this change is possible, thanks to editions in Rust! Pretty good foresight to design the language with editions so it can evolve like that.
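Today both meanings share one method; a small demo of the distinction `Claim` would make explicit:

use std::sync::Arc;

fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    let b = Arc::clone(&a); // "cheap" clone: bumps the ref-count, copies no heap data
    assert_eq!(Arc::strong_count(&a), 2);
    drop(b);
    assert_eq!(Arc::strong_count(&a), 1);

    let heavy = a.to_vec(); // an actual clone of the Vec's heap contents
    assert_eq!(*a, heavy);
}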
A key point when comparing OS threads and green threads is that the former is a preemptive multitasking environment, and the latter is a cooperative multitasking environment (besides the cost of running one or the other, of course).
I've changed my mind on colored functions... the thing is, it is useful to 'annotate' functions that do have 'continuations', for many reasons, and obviously that has to be recursively applied to all callers.
Minor correction: async/await in Rust doesn't use green threads. Green threads use time slicing, while the concurrency model for async/await is based on cooperative multitasking. You can configure Tokio to be single-threaded, and if you block forever, nothing else will execute (unlike green threads).
Rust does use "green threads". Green threads are basically scheduled tasks which are run by scheduler using OS threads, and scheduler reuses fixed number of threads. Both Rust and Go in essence do that and both use work-stealing to redistribute work.
@@maniacZesci yeah, I was using a narrower definition of green threads. The one you provided, though not ubiquitous, is essentially the same as the one used by Tokio. Given this definition, I mostly agree with you, but it’s worth noting that Rust does not implement a scheduler. Rust also does not implement work-stealing either. Schedulers can implement whichever scheduling algorithms they like. Tokio is the most common and does use work stealing by default. Also worth noting that given this definition of green threading, async/await in JS is also green-threaded, which is interesting as most people seem to agree JS doesn’t have threads.
@@benheidemann3836 I'm not familiar with JS internals, but I believe JS uses event loop which runs in a single thread, so I wouldn't claim that JS has green threads. Yes Rust leaves scheduler and runtime implementation to library authors and work-stealing is not only algorithm. What I meant is that in Rust you can have green threads too if you want (using libraries like Tokio) not that Rust comes with batteries included for that like Go. Rust used to have that prior version 1.0 but they removed it because it comes with additional overhead and for Rust that aims to be a systems programming language too that was unacceptable.
@@maniacZesci thanks for the clarification. I think we’re on the same page now 🙂 The observation I was making with JS was that Tokio (for example) can be configured to execute on a single thread, which means it behaves functionally (nearly) the same as the JS event loop (I know there are some subtle differences). If we were to say that Tokio is green threads even if it’s configured to run on a single OS thread, then JS has green threads too in a sense. Having had this OS threads, green threads, async/await discussion a couple times, people seem to fall into two camps: those who require parallel execution (> 2 OS threads managed by the executor) and those who feel that concurrency is enough as long as there’s an executor. Interestingly, the definition used by Tokio on their docs seems to imply the latter, and therefore that JS has green threads.
@@benheidemann3836 I haven't looked at the Tokio docs for some time, but I think Tokio uses a thread per core by default, so if the CPU has 8 cores, Tokio will spawn 8 threads, for example. Configuration for a single thread is reserved for those rare cases should someone need it for some reason; I really can't think of any reason for it tbh, but what do I know. It can be configured to spawn any number of threads, ignoring thread-per-core, too.
@ 3:00 Well, you don't need a balanced tree if you have a radix tree.
Want to find something? Is the bit zero? Go left; otherwise go right.
Want to do a range query between left and right, (l, r)? You can use the most significant digit for indexing. Is the bit on the left equal to the bit on the right? Then go to its subtree with the same range. Are the bits different? If left is higher than right, then you're out of bounds; otherwise you can merge the ranges (l, inf) and (0, r) on both subtrees.
Now, of course the logarithm of the number of items (complexity of a balanced tree) is always lower than the logarithm of the number of possible items (complexity of a trie), but you can't beat this simplicity with a balanced tree.
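A toy Rust version of that bit-navigation idea, for concreteness (a sketch over u8 keys; a real radix tree compresses paths):

// At depth d we test bit (7 - d): zero goes to slot 0 (left), one to slot 1 (right).
#[derive(Default)]
struct Trie {
    children: [Option<Box<Trie>>; 2],
    present: bool,
}

impl Trie {
    fn insert(&mut self, key: u8, depth: u32) {
        if depth == 8 {
            self.present = true;
            return;
        }
        let bit = ((key >> (7 - depth)) & 1) as usize;
        self.children[bit]
            .get_or_insert_with(Box::default)
            .insert(key, depth + 1);
    }

    fn contains(&self, key: u8, depth: u32) -> bool {
        if depth == 8 {
            return self.present;
        }
        let bit = ((key >> (7 - depth)) & 1) as usize;
        match &self.children[bit] {
            Some(child) => child.contains(key, depth + 1),
            None => false,
        }
    }
}

fn main() {
    let mut t = Trie::default();
    t.insert(42, 0);
    assert!(t.contains(42, 0));
    assert!(!t.contains(7, 0));
    // No rebalancing anywhere: the depth is the key width, not a function of n.
}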
Comptime has one major problem - lack of bounds on inputs. So you can not make a channel that only accepts types that are safe to send across threads. Might sound like a small issue, but it is at the root of the "rust experience".
You can make that system using comptime. Of course, it will not be as "elegant" as in Rust. You will need to write the boundaries in the function body. Part of those will be determined through some kind of handmade "comptime interface", I guess. The problem is that all of those boundaries will not be seen in the function's signature, which is a downside compared to Rust's approach.
@@Presenter2 let us hope that one day we will see languages converge to something that is flexible like in zig but remains capable of strictly enforcing constraints like in rust.
Prime, you should try writing an executor, it's actually not that hard to make a crappy executor! The harder part is reactors, which are, conceptually, a thing that holds info on async I/O tasks that the executor polls to check what I/O is ready. If your Future impl can check for itself if it's done, you can just busy wait instead. I have my own Future types that want to batch requests to something, so they just enqueue requests (or send if the buffer fills up) and then my executor sends the batch on idle.
Hey, I just want to add some precision. At 6:14, when you are trying to briefly explain the basics of Rust, what you are talking about is RAII (Resource Acquisition Is Initialization), which is a concept from C++ that Rust has taken. To describe what makes Rust unique, I would have talked about the borrow checker, which ensures you can have mutation XOR aliasing at compile time. Now others have taken this concept, like Mojo, but Rust was the one that introduced it. At 9:29, when you describe the way Arc works, you wrongly place the counter on the stack; it should be on the heap, next to the value inside the Arc. Indeed, if it were on the stack, each clone would have its own copy of the counter, which would defeat the whole point of reference counting. Regarding the green threading thing, I agree that the terminology is confusing; for anyone who wants to look it up on the internet, Go has what's called "stackful coroutines" and Rust has "stackless coroutines". I don't want to blame or anything, I know it's hard to popularize tech content; please keep up the good work.
5:05 Using Rust's extensive type system you can model most logic as types and variants. This way the idea works if you write "perfect Rust" without any boolean logic. Of course this is not feasible in reality, but it would work in theory. Moving as much logic to compile time as possible is the strategy I think will be the future of programming. Compilers understand the code we write better than we ourselves do nowadays. So with more interactivity between the compiler and the programmer, the compiler will be able to even help with logic bugs. Static asserts (C++) and strong types (Rust) are the first step on that journey of compile-time-guaranteed code.
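A sketch of what "modeling logic as types" looks like in practice (the connection types here are made up):

// Typestate: `send` only exists in the Connected state, so the usual
// "is it connected?" boolean check becomes a compile-time guarantee.
struct Disconnected;
struct Connected { session: u32 }

impl Disconnected {
    fn connect(self, session: u32) -> Connected {
        Connected { session }
    }
}

impl Connected {
    fn send(&self, msg: &str) {
        println!("[session {}] {}", self.session, msg);
    }
}

fn main() {
    let conn = Disconnected.connect(7);
    conn.send("hello");
    // Disconnected.send("hello"); // does not compile: no such method
}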
10:32 As respectfully as possible (I’m not as experienced with Rust as you are, I don’t think) I would like to disagree. I believe Copy is the trait that clones the stack: indeed it is the trait that says “I have no information beyond what is stored in the stack, and I can leave scope without doing clean-up.” Clone, on the other hand, says “to copy me I need to do some nontrivial work that involves things other than the stack, like copying heap buffers.” I do grant that Arc doesn’t actually affect the heap when cloned and so you weren’t *wrong* by saying that it “clones the stack,” but my point is that when I’m reading Rust and I run into a call to Clone::clone, I read that as saying “I am doing some expensive data manipulation that affects more than just the stack.” Idk if that’s a good heuristic but I don’t think it has failed me yet.
Copy is just a hint to the user. Anything that impls Clone can impl Copy, but only if the owner of the type decides to do so. The Copy trait is just telling the compiler "It's ok to implicitly clone this. If need be, clone it." This saves you from having to constantly do some_fn(my_var.clone()) to solve ownership problems. That's why all integer and floating-point types implement Copy: it's cheap to copy. And sidetracking a bit, it makes more sense to copy because it would take more bytes to pass a u8 by reference (an 8-byte pointer) than to simply copy 1 byte.
No, his entire understanding of this is wrong. Almost every single sentence. Copy/Clone has nothing to do with stack/heap:

// this is a `Copy` from heap to heap
let mut a = Box::new(1);
let b = Box::new(2);
*a = *b;

Also his description of Arc cannot possibly be true, because if the refcount were on the stack, how could the other owners of Arcs access that refcount? You said "arc doesn't actually affect the heap when cloned"; that is wrong. Arc stores the T as well as the refcount on the heap, so that the other Arc instances can modify that shared refcount, and any Arc object can be deallocated without deallocating the refcounter, regardless of whether it lives on the stack or the heap. It is painful to see so much misunderstanding presented so confidently in Prime's videos, where most people don't have the expertise to question it. Your intuition about Clone is good imo. I hope you don't let yourself get too sidetracked by Prime content.
@@raymarch3576 Oh of course! As soon as you said that the count is stored on the heap it clicked for me! Apologies for also being confidently wrong about the Arc stack/heap, and thank you for your correction.
By the way, note that he is right about the problem with `Arc` and `Rc` implementing `Clone`. I think many experienced Rustaceans are currently in the process of thinking up a fix in a future Rust edition. Basically, we want something like a `Claim` trait (in addition to `Copy` and `Clone`) to differentiate "heavy" data clones (such as `Vec` clones) from logical ref-counting increases. `Arc::claim()` would be used instead of `Arc::clone()`. This will allow tracing all actual heavy clones by grepping for `.clone()`, and will also allow capturing `Arc`s in closures to be much simpler and implicit (a very common thing to do). Simply do:

let foo = Arc::new(...);
// `foo.claim()` implicitly called because `foo` is still needed later in the function (still "lively").
let bar = || { do_something(foo); };
do_something_else(foo);

For most projects, `claim()` can be implicit, as atomic increments/decrements are not a performance concern. For code sections and/or crates where such operations should not be implicit, there will be an opt-out so that `claim()`s must be explicit. But you're right that Prime went on to confidently explain things very wrong :p
@@hagaiak I remember I heard of `Claim`. I didn't look deep into it, but to me it feels like they should call it `Assign` and make it the missing trait that corresponds to the "=" operator, unless there's some detail I missed. As someone who has wasted a lot of their life debugging C++, I tend to object to every proposal that adds implicit function calls, even though I totally understand where they're coming from. In the end, even if they add a `Claim`/`Assign` trait, the copy situation will still not be remotely as messy as that of C++, thankfully.
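To tie the thread together, a minimal demo of the actual distinction:

#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // Copy: implicit bitwise duplicate; `p` remains usable
    println!("{} {}", p.x, q.y);

    let v = vec![1, 2, 3];
    let w = v.clone(); // Clone: explicit call that may do real work (heap copy here)
    let m = v; // a plain move: `v` is unusable afterwards
    println!("{:?} {:?}", w, m);
}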
Indeed, the worst bugs always are in business logic. Often stacked logic where there are exceptions to the standard logic. Ugghhh... I've had few bugs preventing code from compiling or randomly crashing. But being off by one, weird logic trees that seem to work but fail once every 100 runs.
Confirming red-black trees are complicated. When I tried to make my implementation in C#, I could not understand the algo well enough to purely code it myself, so I looked into 2 implementations (don't remember in which languages) to see how they do it, and just transferred the logic.
Started learning C as part of OS dev over the last two months, and I'm loving C, I thought it would be scary. But I know I'm not good enough for it, so I need to use Rust.
Rust doesn't use green threads; using green threads requires a GC, as in how Go handles them. Zig or Rust will never use green threads because of C compatibility issues (green threads have their own heap and stack).
I believe anything done with async/await can be done with less complexity with plain old blocking threads. The price is more memory; except with Java virtual threads, which have the simplicity of old-fashioned threads and the memory cost of microthreads. Of course, if you are using an async/await-infested library, there is not much you can do to simplify.
The fundamental problem with compile-time evaluation (Turing-complete evaluation of expressions at compile time, limited only by time) is the problem which arises from ANY Turing-complete system: it's entirely possible (but not necessary) for Turing-complete systems to fail at some place at some time in a way which CANNOT be predicted, understood or even analyzed just from its components (not just not in a limited amount of time, but AT ALL). This very much follows from Gödel's incompleteness theorem and the halting problem.

Of course almost every practical program is written in a language which is for all intents and purposes Turing-complete, meaning these issues will arise in any language anyway in some way, at least at runtime. So why would it be so much worse when a language like Zig also adopts the same concept at compile time? Well, it means that software is not just potentially impossible to get working well enough even when its parts are correct when you run it; it means that the compilation, debugging, build process and hence software development itself is now also potentially impossible to get right EVEN when all of its parts are correct. That's something that wouldn't be the case if Zig didn't have a Turing-complete compile time, because then if you compiled a program of which you knew all of its parts compiled correctly, the whole would always also compile correctly. That's a hidden danger, a very real but not well known side effect of Turing-complete systems.

The reason why Turing-complete systems are used anyway is because one has to, e.g. because there is no other option to do some things: some programs either strictly require a system this complete or would be impractically hard to write down or execute otherwise. The problem I fear is that the problem Zig is trying to solve by putting a Turing-complete system in its compile time DOESN'T REQUIRE such a complete system to solve, meaning you get all the problems that a Turing-complete system brings, with questionable benefits and hidden dangers, when it could ultimately be solved differently without these additional dangers, even if admittedly sometimes in a more difficult way.
Technically any generics system which is strong enough for Peano arithmetic is automatically Turing-complete. But there's an inherent ease with functional-style templating, which discourages many non-halting conditions. Imperative/comptime-style metaprogramming makes it very easy to get into non-halting.
I think you should also think about C and C++ in terms of stack and heap. Even in the kernel world kernels have heaps as well or at least those I'm most familiar with do. It's pretty much unavoidable.
First, cloning in Rust has *nothing* to do with whether a value is on the stack or on the heap. A clone is always about getting a second owned copy of a value, whether on the stack or not.

Second, I am not surprised that he doesn't understand how async/await works in Rust if he doesn't understand what "green threads" (stackful coroutines) means and how what Rust does (stackless coroutines / state machines) is different.

Third, if you like comptime so much compared to generics in Rust, you can do very similar things with "macros by example" in Rust (generating copies of functions for different types, etc.). You *can* do that, but it's just objectively worse than using proper generics for that use case 🤷🏻♂️

This kind of misinformation about Rust (whether intentional or not) is making me start to distrust any of his takes. Like, if you don't understand it, just don't say anything about it?
C macros are text replacement. And a big implication of this is that they also have no real access to any information about the C program. For example, there would be no way to sensibly ask if there was a function `quack` that could be applied to an argument to the function.
X will feel better than Y, unless it’s JavaScript. Because I’ve had to use JavaScript throughout my work week for 2 years now and I’m ready to rewrite everything I’ve touched in PHP just to save my sanity.
I played enough with mutexes to know that it is not simple or easy. But if tuned right, they are blazing fast. By tuning, I mean that the write of a thread happens while the other threads are processing, eliminating most of the wait-to-write time.
comptime is terrible for api docs. It doesn’t tell you what the actual requirements are on the type you are supposed to pass in. Traits / interfaces are much more clear here.
True, but some of the problems are shared. When you see a function that accepts something that implements trait A, you need to know what implements the trait to use the function. In that case, reading type bounds from a function's docs (this is what you would typically do in Zig to notify the user of a parameter's required type properties) is not worse. Fair point though 👍
I did a whole red black tree implementation and took quite a while to debug 2 of the 8 rebalances. It was fun when it finally worked although it did get frustrating after getting the first 6 pretty easily.
My Odin project has multiple catastrophic memory leaks. And that explanation about how Rust works at around 7 minutes has me very interested. I wonder if it's possible to program in a minimal subset of Rust. Maybe it's time to finally give it a try.
Comptime is huge for me. I code DSP stuff. I can build arrays that have the outputs of all sorts of complex functions stored on the heap, and have them perform like they’re on the stack. What I do (audio) gets near being real time computing. It isn’t that, but it is really very close. Enforced garbage collection is hot dogshit in this situation. Comptime is huge.
Is comptime cool? Yeah... but I have a hard time coming up with a genuinely good reason to use it that isn't going to just add more complexity than some alternative. If you find yourself leaning on comptime, I think that'd be a bit of a code smell: re-evaluate what the hell you are doing.
That ARC thing sounds exactly like my C++ object wrapper class I made back in the early 2000's to count references to an object and auto delete the object
As a minor nitpick, Rust has both Rc and Arc. The difference is that in the case of Arc, incrementing and decrementing the reference count is *atomic* so it's also guaranteed to work with multiple threads (but might be a bit slower than Rc). I would guess that you didn't make your reference count atomic. Or maybe you did?
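And the compiler enforces that difference. A small demo (the commented-out line is the one that gets rejected):

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    let shared = Arc::new(42);
    let worker = {
        let shared = Arc::clone(&shared); // atomic ref-count bump: thread-safe
        thread::spawn(move || *shared + 1)
    };
    assert_eq!(worker.join().unwrap(), 43);

    let local = Rc::new(42);
    // thread::spawn(move || *local + 1); // does not compile: `Rc` is !Send,
    // because its non-atomic count could race if bumped from two threads.
    assert_eq!(*local, 42);
}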
I think zig comptime is awesome, I just think the tooling for comptime functions isn’t good yet since they’re working on the compiler speeds and other things right now. It will get better as time goes on.
"If it compiles it works" has a lot of reality to back it up. I think of languages like Agda, Idris, or ATS which show that type systems can essentially act as a proof for your program. You can still have bugs, but in a lot of languages you complete omit entire classes of bugs that are very much real and common. Haskell a runtime error almost doesn't even exist apart from a select few edge cases. Rust definitely was inspired by some of these research languages which is likely what inspired it's type system that prevents memory issues, but it also prevents data races. I don't really think Rust ends there, it really kind of forces you to consider things up front. You can srill screw up the business logic, but there's entire categories of issues you don't need to worry about. My experience with maintaining a large OSS project in rust is that if it compiles it works, and most of the bugs you're left with are usually small and easy to address. I don't doubt there are exceptions to that.
The problem with "if it compiles it works" is that anyone who's ever written a Haskell program more complicated than a Hello world knows that's BS. In particular, it means that everyone saying that is lying to your face. Not just wrong, not just a disagreement, but a straight up, unambiguous lie.
I've been using Zig. I just watched your p99 video and now this. I'm also very confused about this whole generics/macros-over-comptime thing. I haven't used comptime extensively, so maybe there are cases where it's problematic that I haven't seen, but the idea of it has always sounded good to me. I was looking at the Mesa project, and you can barely understand anything with layers of macros to macros to macros. The clang LSP can't find the macros, so I have to keep doing search-and-jump and trying to preprocess files. In Zig, looking at something like the standard library, even with the state of the LSP you can jump through every single comptime right to the `builtin` file for your specific os/platform.
I've never used Zig, but what you're describing is the same as what one can get with Rust macros. At least what I can do from RustRover with the expand-macros toggle enabled. If you want to see what a macro translates into, you can run cargo expand (the cargo-expand subcommand) and get colored Rust output. C macros are just pretty horrendous to me.
@@LtdJorge yeah, Rust macros are better as well; I was only talking about C macros in comparison. My only issue with Rust macros is I never even try to decode what's happening in some complex ones. I've never met a comptime definition that I didn't understand by just reading down it line by line.
To say that you need a different list implementation for int and a list of char * is a matter of skill issue. It is harder in C, but using void pointers and structs you can reach a very high level of generic behavior without a lot of effort. Interesting vid nonetheless.
Oh, and now me on the M-F vs weekend thing. In 2009 I wrote ColdFusion M-F and Ruby on the WE, and Ruby felt much, much better. Same way with a recent job working with Swift, Python, and JS in the day job, Flutter on the WE. Flutter felt much, much better.
Comptime only solves the problem of abstracting code over types, but does not let us establish a contract between caller and callee. That's why C++ added concepts: pure templates have the same problem.
"A mutex is just a semaphore of length one."
"A monad is just a monoid in the category of endofunctors."
Both clauses are true ❤
I'd just like to interject for a moment. What you're referring to as a binary semaphore, is in fact a binary semaphore with ownership, or as I've recently taken to calling it, a Mutex. A binary semaphore does not have unlocking restrictions by itself, but is rather another free attribute of a fully functioning mutex type made useful by its ownership semantics.
Many programming languages implement a binary semaphore today, without enforcing the ownership. Through a peculiar turn of events, the most popular implementations today which are widely used are often called "Mutexes", and many of its users are not aware that it's basically unsafe. There really is a binary semaphore, and these people are using it, but it is just a part of the type they use.
Welcome to basic 12th grade algebra structures! Wow amazing! PS. joking prob 1st or 2nd year faculty algebra. PS don't remember when lambda calculus was.
Semaphore is just a mutex that has length
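For the curious, the ownership difference the interjection above is about, as a std-only Rust sketch (the BinarySemaphore type is hand-rolled for illustration):

use std::sync::{Condvar, Mutex};

// A binary semaphore: any thread may call `release`, even one that never
// called `acquire`; there is no notion of an owner.
struct BinarySemaphore {
    available: Mutex<bool>,
    cv: Condvar,
}

impl BinarySemaphore {
    fn new() -> Self {
        Self { available: Mutex::new(true), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut free = self.available.lock().unwrap();
        while !*free {
            free = self.cv.wait(free).unwrap();
        }
        *free = false;
    }
    fn release(&self) {
        *self.available.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = BinarySemaphore::new();
    sem.acquire();
    sem.release(); // legal from any thread: the missing "unlocking restriction"

    // A Mutex, by contrast, only unlocks when the owning guard is dropped:
    let m = Mutex::new(0);
    {
        let mut guard = m.lock().unwrap();
        *guard += 1;
    } // unlock happens here, tied to the guard that acquired it
}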
Pretty sure C was considered a high-level language a few decades ago, at least.
What's next... JavaScript a low-level language in 2050?
5:00 It appears that Prime mistakenly interprets the phrase 'If it compiles, it works' to mean 'If it compiles, it works correctly.' However, no one using this phrase believes that programs written in Rust or Haskell are free of logic bugs. Instead, it means that a compiled program won't crash unexpectedly due to issues like mismatched types, use-after-free errors.
Agree. When I'm writing in Rust and it compiles successfully, the next thing I do is write more test cases to see if the application works as expected.
It's also a matter of speed; it's just faster to get these kinds of errors from the LSP than at runtime.
@@samuraijosh1595 Right, I was wrong; out-of-bounds checks happen at runtime.
Most things in Rust keep you from crashing; they very much beg you to handle panics @samuraijosh1595
"y.' However, no one using this phrase believes that programs written in Rust or Haskell are free of logic bugs."
Yes they are. Defend your bailey.
Came to this channel for programming, stayed for the moustache
Came for the moustache... twice
Same. Also came on the mustache.
Technically it's a pornstache though, more specifically a 70's pornstache. Very nice
does this guy even program?
Mustache ain’t real pal, it’s all a simulation to condition you for the AI overlord who will also have a mustache 😅😂🙂
If it compiles, then all that is remaining is business logic bugs. I like that
And bad pointer dereferencing
@@TheSulross in safe rust?
@@christopher8641 well, people should not pretend that real Rust programs are not going to be interfacing with highly useful (and very often necessary) libraries that aren’t written in Rust. And those integrations mean Rust unsafe blocks.
unwrap, oops!
I think "If it compiles, it works" should be reworded to something like "if it compiles, it does what you described". If what you described is wrong, then the output will be wrong, but that's not really a bug, it's just wrong logic. Rust eliminates, by default, the ambiguity that unsafe code can carry. You could have described the correct algorithm, but have data races which do introduce bugs, for example.
I think it mostly gives people a false sense of security in rust, especially for devs that are new to systems programming. Also, it was butchered heavily. I’m pretty sure the original saying was “if it compiles, it runs” which has way different connotations.
@@bennyboiii1196problem with that is it doesn't sound as revolutionary. Also... I know this is going to be a massive shock but a lot of coders are pedantic AF contrarians and would purposely misunderstand "it runs" to be all "well anything that compiles technically runs..."
@@bennyboiii1196 Yeah, true. It runs is better than it works.
Real Rust programs are written using foreign code and libraries, which use lots of unsafe blocks under their wrappers.
@@TheSulross so? If those uphold the safety contracts, are correctly implemented, the things they depend on (OS, drivers, libraries, etc) are not bugged and Rust itself doesn't have bugs, then unsafe code is irrelevant.
Safety in Rust means the code you write does what you told the compiler it should do, if you wrote it incorrectly the compiler cannot do anything, and if any of your dependencies doesn't work as it says, that's not a problem with your code.
So if the dependencies you rely on are well tested, fuzzed, etc then there should be no problem. The thing with Rust is the low level FFI code should be put in unsafe libraries that are extremely well tested and verified, and also written by knowledgeable developers, then wrapped in safe interfaces and consumed by developers oblivious to the unsafety under it. It's the same as when people say that you can do fully safe C code with static + dynamic analysis tools (you actually can't 100%). Then do the same with the low level unsafe Rust lib making it "safe", letting you use fully safe Rust on top of it, instead of also having to use analysis tools in your higher level C code.
Prime, if your worst bugs are logic bugs, it simply means you write code in dynamic languages. The worst bugs in C are silent memory corruption and data races. Only C systems engineers know what heisenbugs are, not from reading memes but from spending days trying to debug them.
Amen. I do telco hardware, and web developers saying memory issues are not real is laughable. I also write Python for shims, and having logic bugs in Python is a skill issue, which I clearly have. But like you said, some of the worst bugs in C are memory errors and segfaults which occur once in a blue moon, or when lightning strikes while the town drunk is within 10 km of it. I've had to literally travel 1000 km to fix bugs which would never have happened in Rust.
Wow, I haven't heard "heisenbug" in a while. You test (in sections), then the bug goes away. You go live, the bug shows up. Good one. I'm going to have to teach this one to some of my contractors (that I occasionally hire for audio stuff). We're having problems with GTK and some C++ stuff coming down to [audio frame] memory problems and crashes; I bet he can't find the problem because of his testing/debug/flags environment. "Heisenbug" is a good word for the documents.
Thanks.
@jonkoenig2478 Honest question, is the person I'm replying to a bot? I wonder if there are bots trying to create random hate on the internet lately...? If you're human write the answer to "42 - 7".
That's only true because you don't write business logic in C... Having worked in both embedded and now financial domains, I'd say both have really weird problems, where the latter is mostly a headache because of really complicated business processes and legislative requirements... Both take a toll on your sanity; the stupid comparison between the two, trying to prove that your problems are the real ones and the others are children playing in the dark, is a sanity issue in itself.
I've been in games for decades now, and I agree with Prime. The bulk of bugs, by a large margin, are logic bugs, not memory corruption and data races. But run-of-the-mill logic bugs aren't nearly as much fun to tell as overwriting arrays or data races.
At game companies, you often implicitly do the same thing as Rust or C# and have libraries with unsafe code, and then write a ton of code in a more safe style.
As I mentioned, it's much better for a story to talk about a bug where a static variable was getting set twice (which, uh, should be impossible), and yes, that was caused by someone writing off the end of an array, vs. fixing 30 bugs about not doing something with one enemy, missing a case for something, not switching states in a state machine at the right time, etc.
Oh, and at the company I mentioned in another response (the original-Xbox game company), our only "memory leak" was because I had an array that I never removed items from. Which is a memory leak literally all languages can have.
Would I like to flip implicit for explicit and add a safe keyword to C++? Absolutely. It'd be awesome. It'd be an interesting exercise to see how close one could get to making a new C+++ where you can incrementally update parts of projects from unsafe-by-default to safe-by-default.
I read the title as "Why I Choose Rush over Zerg", and I thought that didn't make a lot of sense.
Well you're right, the zerg will inherit the galaxy dude.
To be fair, Terran rushes are doing pretty good these days. Reapers and hellions, baby!
Well, Rush is a Terran player, so that tracks
The difference between Zig's comptime and C++ templates and constexpr is that Zig's stuff is unified. In C++, you can call constexpr functions at compile time to produce compile-time constants, e.g. for array sizes or template value arguments. Template stuff is declarative, constexpr function execution is imperative. That makes sense from where C++ is coming from, but modern language designers realized that the clear separation of types and values isn't useful at compile time.
In Zig, types are values at compile time. A function can have parameters that contain types as values and return types as values at compile time. In C++ lingo, that would mean that not only can you write a consteval function that takes in something and returns a number (for an array bound), but also, you can write a function that returns a type to be used as a type for a run-time value. Essentially, type_info at compile time isn't meaningfully different from a type, so why have them be different things?
In C++, there are function templates, class templates, value templates, and alias templates. In D, there are just templates that happen to produce functions, classes, values or aliases (which is a lot more streamlined), and in Zig, there are no templates, but functions that shove types around as if they were values. Sorting types (at compile time)? No harder than sorting values at compile time, which is no harder than sorting values at run time. Why copy values and alias types? If a type is a value (which it is at compile time), an alias of a type is exactly a (const) copy of a type.
So in Zig, things that are separated for no good reason (in hindsight) aren't separate. The only difference between types and other values is that types don't exist at run time, which means if you'd end up having to run a function that manipulates types at run time (e.g. because of a run-time value argument), the compiler will complain.
@3:00 "when you have a black uncle you have to do something"...
as a black person... FACTS!
Lmfao
it's yet another case of PrimeTime saying stuff that sounds out of context even in context
Came for the CS geeking - stayed for the pithy sociological observations
I really don't get the point of hating on "if it compiles it works"... It eliminates a type of bug completely, the logic bugs are there in any language?
yeah i had the same thought, complaining about language independent logic bugs will happen in any language
It's like claiming that type system are pointless because they can't catch logic bugs either. Ideally the computer should take care of as much menial work as possible, and both a type system and the borrow checker do that.
They never claim if it compiles it's good, just that it works, which for this purpose means it won't randomly crash because of a preventable error.
@@skulver but that's genuinely not true in Rust; there are whole classes of bugs which Rust cannot statically check, out-of-bounds errors for example, so it can still crash.
@@UnidimensionalPropheticCatgirl it still happens less often, whereas in C you must cover various things beforehand to avoid them.
Dude, that's not true that ownership is not a biggie. It is a biggie. Especially in a kernel environment where you have to worry about concurrency, reentrancy (you could call user code and then be called again from that user code). The hardest to find bugs, are by far those to do with memory safety. Because ultimately you're not writing a closed system where you have control over everything. You're writing an open system where most of the running code is user code, or kernel module code. It's incredibly hard to reproduce issues. It's much much easier to fix invalid logic bugs than memory management bugs in such an environment.
If you build any heavily concurrent system, I think ownership actually does cause a lot of bugs. I worked on an engineering simulation software that did a ton of concurrent calculations, some of which were interdependent, and the two biggest classes of bugs I dealt with were serialization issues (more related to networking) and data races.
Even currently, working in the cloud, I deal with more race conditions across services than logic bugs. I feel like the majority of logic bugs were found early on in testing because they are typically deterministic and thus easily reproducible. But with race conditions, there have been a handful where they were seen, but then couldn't be reproduced, and the QA engineer would end up getting gaslit into thinking they must've just not executed the test correctly.
Re: "If it compiles, it works":
As someone who comes from C/C++ like Pekka, this definitely *feels* partially true in Rust. Yes logic bugs are still a problem, but in my experience most of the bugs I have to deal with in C++ are memory errors (working with low level systems programming). Almost all of these are compiler errors in Rust.
If most errors you see in C++ are memory errors, you're using the language incorrectly. If you're tracking memory allocation correctly (i.e. using dedicated memory-management types such as std::vector and std::unique_ptr instead of having each of your classes take on that responsibility in addition to their business role, and tracking pointers with types such as std::span and std::string_view which also track size information) and just being minimally thoughtful about ownership (which you also have to do in Rust), you should hardly ever see any problems.
That's a lot of boxes as a necessity... At least in Rust, you start without boxes.
@@CGMossa It's not really onerous to say "your class should have a vector of things instead of a pointer to an array of things". What I described is literally the easiest way to set things up. People go out of their way to make C++ hard.
@@isodoubIet Rust is still safer than C++ even with std::vector and std::unique_ptr, because references into either can be invalidated if the memory is released or the std::vector is resized. Almost all data structures in the C++ standard library have reference or iterator invalidation caveats that can typically only be detected at runtime with address sanitizers, and detection can still be flaky. This can't happen in Rust due to lifetimes enforced at compile time.
Yeah, logic problems are much easier to exclude too. I do C++ at work, and still find that refactoring Haskell I wrote last year is easier than C++ I wrote yesterday. The ability to reduce the problem space with enum arguments, and then have the compiler enforce completeness makes it much easier to write code I can reason about, and easier to keep track of all possible paths through my code. This seems to be the case for Rust as well.
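To make the "reduce the problem space" point concrete, a minimal Rust sketch (names invented for illustration): an enum argument plus an exhaustive match means the compiler flags any unhandled case.
enum Mode {
    Read,
    Write,
    Append,
}
fn open_flags(mode: Mode) -> &'static str {
    // Exhaustive match: adding a new Mode variant later turns this
    // into a compile error until the new case is handled.
    match mode {
        Mode::Read => "r",
        Mode::Write => "w",
        Mode::Append => "a",
    }
}
fn main() {
    println!("{}", open_flags(Mode::Append));
}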
To distill the author's point, they point out that Zig doesn't have a dedicated (meta) language for types because Zig is used for both metaprogramming and programming. The argument for making the type language the same as the main language is that you don't have to learn two languages and the interplay is good. The argument against is that a meta language might have different requirements and benefit from being a different language (usually declarative, with different primitives like, say, Type).
Async Rust doesn't use green threads though. It's agnostic.
Sure, Tokio's runtime for async uses green-ish threads, but that's not the only runtime!
You can use async on embedded devices without allocators and without OS threads too.
Async Rust really is just polling objects and having wakers that can notify the executor to poll again. Anything more than that is runtime specific and is different for each runtime you may use
Async Rust is even more powerful. With a bit of hacky workaround, you can implement generators from it. I think the generators proposal was sort of based on async. Async iterators: boom, easy and included.
I once tried to handle a state machine, and I ended up using async + other stuff. It's surprisingly usable and easy for the end user.
Yeah, I use async Rust on embedded devices, and it's awesome. It all depends on the executor/runtime.
@jonkoenig2478 hmmm no it's not. Almost all production async Rust code I've written is not running on Tokio
@jonkoenig2478 dude said ===, js fanboy detected 😭😭
Tokio uses regular OS threads, no?
By default a pool of `cpu_core_count` OS threads.
There is an excellent paper on implementing red black trees by Sedgewick, called Left Balanced Red Black trees. Worth a check.
Yes, and it's also in his book "Algorithms", which I bought before the internet was a thing. Good book.
Correction! Go doesn't usually "spawn threads" when calling a goroutine. The runtime spawns a bunch of threads when the program starts, and then Go distributes work chunks onto those existing threads (hence they are called lightweight threads), because you save the overhead of creating the thread context every time you spawn a goroutine. This is why they are that fast.
I object, your honor, to calling debugging tricky memory issues in C/C++ code, a "skill issue". The best C/C++ programmers in the world have this problem. So, unless "not having god-like omniscience" is considered a "skill issue", then this is not a skill issue. It is a language issue. I'm not saying it is a language issue that isn't worth the extra work. Not saying we should all stop with C/C++, just saying that isn't exactly a skill issue.
Sounds like you have a skill issue.
"I don't have a skill issue"
- Person who described the language as "C/C++"
Actually, I DO have a skill issue with those languages. I'm just saying that people who are wizards with those languages still struggle with memory issues.
@@freeideas This is why it's a problem that you conflated C and C++.
In C, yes, everyone will struggle with memory issues. C++? Not so much. The language provides tools for managing memory that make it so you hardly ever have to think about it.
The language told you you are responsible for the memory then how is it a language issue if YOU mismanage YOUR memory? You Rust cultists have a serious problem with taking responsibility for your failings.
"if it compiles it works" is less about "bug-free" binaries and more about completely getting rid of undefined behavior. as a former TypeScript dev, Rust is a godsend because it's so much easier to debug than TS.
What undefined behavior are you encountering in TS?
@@justsomeguy8385 Do you know whether JSON.parse throws? How did you discover that information?
@@justsomeguy8385 JSON.parse() comes to mind
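For contrast, a small Rust sketch (assuming the serde_json crate): the possibility of failure is part of the return type, so the caller can't overlook it the way an undocumented JSON.parse throw can be.
fn main() {
    // Failure shows up as a Result, not a hidden exception.
    let ok: Result<serde_json::Value, _> = serde_json::from_str(r#"{"a": 1}"#);
    let bad: Result<serde_json::Value, _> = serde_json::from_str("not json");
    println!("{} {}", ok.is_ok(), bad.is_err()); // true true
}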
How Rust async executors work is actually not /that/ complicated.
Basically you have a queue of top-level futures, created using things like `tokio::spawn()`, that get polled when they are woken.
The fact that the executor runs this queue multi-threaded is mostly just an implementation detail.
And futures are nested state machines that somewhere down the line depend on a "Resource" that `.wake`s the associated Waker when it is ready, which signals the executor to repoll the associated top-level future.
(One example for a resource is AsyncFd, which uses something like epoll to signal when it is ready.)
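A minimal sketch of that shape in plain Rust (the timer "resource" and all names are invented for illustration; futures::executor::block_on from the futures crate stands in for a real executor):
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;
use std::time::Duration;
// State shared between the future and the "resource" (a timer thread here).
struct Shared {
    done: bool,
    waker: Option<Waker>,
}
struct Timer(Arc<Mutex<Shared>>);
impl Future for Timer {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.0.lock().unwrap();
        if s.done {
            Poll::Ready(())
        } else {
            // Not ready: store the waker so the resource can signal a repoll.
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}
fn timer(dur: Duration) -> Timer {
    let shared = Arc::new(Mutex::new(Shared { done: false, waker: None }));
    let s2 = Arc::clone(&shared);
    thread::spawn(move || {
        thread::sleep(dur);
        let mut s = s2.lock().unwrap();
        s.done = true;
        if let Some(w) = s.waker.take() {
            w.wake(); // tells the executor to poll this future again
        }
    });
    Timer(shared)
}
fn main() {
    futures::executor::block_on(timer(Duration::from_millis(50)));
    println!("timer fired");
}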
Zig vs rust > Vim vs emacs
Go is the vscode of programming languages
@@Hardware-pm6uf which language is Helix?
@@Hardware-pm6uf it just works?
@@notuxnobux yup
@@ivymuncher I think OCaml, I don't know, would be a good fit for Helix
Rust async does NOT use green threads. The original pre-1.0 Rust did use/expose them. Both Tokio and async-std use OS thread pools. It is true that these pools are of the M:N variety, but that is a separate concept from green threads, which typically facilitate cooperative multitasking.
Yeah. Work-stealing event loops that are multi-threaded by default.
They forced us to make red-black trees by hand in our data structures course.
Never used Zig, but I agree that this is_duck example seemed pretty hacky. You accomplish the same thing in Rust in your type declarations without having to write type checks in the body of your function. I understand why this would be used in a pythonic language though (Mojo): you have things like "isinstance" in the middle of your code, and it follows the same pattern.
The mentality of doing things "the Rust way" is real. Even when I go back to writing Java for school work, I now prefer using interfaces and enums over inheritance. Additionally, I make sure that all nullable objects are wrapped in an Optional.
Procedural people have been telling you for decades that full-on OOP is a bad idea so why does Rust get all the credit, just because they have a toxic PR policy?
Better to show than tell and rust did that well @@rusi6219
@@rusi6219why are you so mad?
@@rusi6219 One more word and I'll replace the i in your name with a t
NGL "I prefer the pre-proccesor over Zig comptime" is such a insane take I genuinely feel like it invalidates the entire article.
right, it just reeks of skill issues
@@purewantfun yes, skill issue of you guys. The C preprocessor literally powers almost the entirety of the internet and telecom.
No, I pretty much agree with him. He did not say "I prefer the pre-processor over Zig comptime"; he said "I prefer the pre-processor over Zig comptime IF we can't have generics", which is another thing. Comptime in Zig feels more like a workaround to the fact that type systems are a specific part of a language than anything concrete in itself.
@@purewantfun I don't think the skill issue applies to someone who contributes to the Linux kernel. Maybe you have the skill issue.
@@diadetediotedio6918 right. It's really cool writing "argument: anytype" and knowing very lean code will be generated at comptime and the compiler will scream at me if it's invalid, but when I'm reading a function trying to understand how to use it and what it expects of me and I read "argument: anytype", all I can think is "well, shit".
Green threads are cooperative in nature, and tasks have to explicitly yield to let other tasks take turns. The Go compiler inserts yields into the code at strategic points (typically when functions are called or returned from) behind the scenes. OS threads are switched preemptively via interrupts and do not require explicit yields; an interrupt can happen at any time/point in the code.
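A tiny sketch of what cooperative means in async Rust (assuming the Tokio crate): tasks only yield control at .await points, so a long loop with no awaits can starve its neighbors on the same thread.
async fn cooperative(name: &str) {
    for i in 0..3 {
        println!("{name}: step {i}");
        tokio::task::yield_now().await; // explicit yield back to the executor
    }
}
#[tokio::main(flavor = "current_thread")] // one OS thread, purely cooperative
async fn main() {
    let a = tokio::spawn(cooperative("a"));
    let b = tokio::spawn(cooperative("b"));
    let _ = tokio::join!(a, b);
}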
I tried giving zig so many chances, but it never clicks. I keep drifting back to C, maybe it's just wired into my brain. Rust feels like a breath of fresh air, but its colourful, toxic community keeps me away.
I'd recommend ignoring the Rust community. When I started reading top crates' code and doing syscalls manually (not really, but direct libc calls), that's when I started understanding Rust deeply.
@@samuraijosh1595 mojo ... cough cough
@@samuraijosh1595 you're describing Mojo!
The Rust community sucks, but your use of "colourful" just sounds like you are a bigot instead of having any substantive criticism.
@@gideonunger7284 your response made his point seem sensible. 😮
Red-black: think of it like this.
The coloring is there to show you how long the branch is compared to log n... when it's too long, the coloring (and the rotation) is basically just there to balance it, so updating the coloring is a chore that makes the rotations possible to keep it balanced within limits.
Just like what you do with a B-tree: while there you overpopulate that theoretical perfectly-log n tree horizontally a bit, on an RB tree you overpopulate vertically; the color limit of the RB is like the size limit of the B.
At least this is how I used to teach it at the university...
Hope this helps someone randomly finding it. :D
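If it helps, here's a small Rust sketch of the invariant the colors encode (keys omitted, everything invented for illustration): equal black count on every root-to-nil path, and no red node with a red child; together these cap the longest path at roughly twice the shortest.
#[derive(Clone, Copy, PartialEq)]
enum Color { Red, Black }
struct Node {
    color: Color,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}
// Returns the black-height if the subtree satisfies both invariants.
fn check(node: &Option<Box<Node>>, parent_red: bool) -> Option<usize> {
    match node {
        None => Some(1), // nil leaves count as black
        Some(n) => {
            let red = n.color == Color::Red;
            if red && parent_red {
                return None; // red node with a red parent: invalid
            }
            let l = check(&n.left, red)?;
            let r = check(&n.right, red)?;
            if l != r {
                return None; // unequal black heights: invalid
            }
            Some(l + if red { 0 } else { 1 })
        }
    }
}
fn main() {
    // Black root with one red child: valid, black-height 2.
    let t = Some(Box::new(Node {
        color: Color::Black,
        left: Some(Box::new(Node { color: Color::Red, left: None, right: None })),
        right: None,
    }));
    assert_eq!(check(&t, false), Some(2));
}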
Most of my career was also C in the kernel (Windows kernel in this case). You get really good at C after years and rarely code bugs of the security-vulnerability nature. There are linters and safe string libraries and stuff like that to make things safer. The book "Writing Solid Code" set me on the right path early on.
Thanks for the book recommendation. It looks great
As you said, you get good with C after years. I do C for numerical stuff and I still mess up indexes and crash my programs. Also, C programmers are elite programmers, better than most. So if you can write C right, you're special, not the average.
C developers overuse linked lists because they are simple to implement and get right. Unfortunately linked lists are really slow on modern hardware because walking a list involves constantly dereferencing pointers and that is very cache unfriendly.
depends on how you implement them, and what your program does with each node.
if they're intrusive and allocated with a free list into a block of contiguous memory, they can be fast.
if they are partitioned into large chunks and those chunks are linked, they can be fast.
it's really only the trivial, individually heap-allocated linked list that is slow, when your loop is tight enough that the cache misses dominate your performance. if a lot is done with each individual node, and local caches get a lot of use while handling it, the list may cause only negligible slowdown.
but yeah, if you're doing mostly CPU work and hardly pulling in any RAM besides the nodes contents, then an individually linked list can be extremely slow in comparison to a contiguous array.
people often cause similar problems in other higher level languages when using associative arrays. if the hash table implementation is distributing access too randomly/broadly across its buckets, then you can get similar cache thrashing. sometimes just doing a linear search on a contiguous array can be much faster, even though its complexity is O(N) instead of O(1).
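A rough, unscientific Rust sketch of the effect (numbers vary by machine, and a freshly built LinkedList may still land fairly contiguous in memory, which understates the gap against a long-lived, fragmented list):
use std::collections::LinkedList;
use std::time::Instant;
fn main() {
    let n = 10_000_000u64;
    let vec: Vec<u64> = (0..n).collect();
    let list: LinkedList<u64> = (0..n).collect();
    // Contiguous walk: streams whole cache lines, prefetches well.
    let t = Instant::now();
    let a: u64 = vec.iter().sum();
    println!("vec:  sum={a} in {:?}", t.elapsed());
    // Pointer chase: every node is a potential cache miss.
    let t = Instant::now();
    let b: u64 = list.iter().sum();
    println!("list: sum={b} in {:?}", t.elapsed());
}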
Ackshually C devs always use arrays unless a linked list is an objective net gain
My learning strat right now is: study C, use the C API to work with SQL, and learn both at the same time. Git gud with C. Continue grinding on C till Zig finally reaches 1.0.
I'd normally suggest Rust.
But if C is your way to go, I'd suggest contributing to Open Source for the sake of learning.
Always worth everyone's time, yours included 😁
C is great, but your jobs are gonna be embedded; might as well learn C++.
You can do both C and Zig in tandem. That's what I'm doing currently.
Yeah Zig is not 1.0 yet but it's perfectly usable.
I hope they improve on zig syntax until then.
@@mysterry2000 Any more details on getting started with Rust?
Any good ways of finding Open Source projects with easy tasks to get familiar with a Rust project, for instance?
As for books, I've recently heard a podcast episode that recommended Hands-On Rust.
I just love Rust, that’s not going to change
@@toby9999 who asked
>> How could he not understand comptime??
>> Async is too magical! (explains async incorrectly)
sorry if you find this toxic
I pretty much agree with the comptime take (which extends to the notion of types as values).
* It makes it harder or almost impossible for the compiler to infer types
* It is only resolved during compilation, so you cannot expect reasonable hinting from the tooling
* It is based on the strange conception that "type systems should not use a different syntax or be in a different world", while simultaneously making this distinction implicitly and explicitly many times, because this is just how things work. It feels like a workaround to a problematic worldview more than a properly designed feature.
Not even. Comptime is useful for generics in zig or for dynamic programming problems
@@TheFreshMakerHD But template-style metaprogramming is functional, while comptime is inherently imperative. With functional we can use things like type theory and immutability, which are way easier to reason about.
Zig is actually more low level than C because you can design data structures with dedicated padding and specific sizes of enums in a way you could not do in portable C prior to C23.
31:25 There is a great blog article the tokio team made about their executor called "making the tokio scheduler 10x faster".
The main complaint I have about zig is that in industry it is kind of difficult to push over just using a linter-enforced subset of C++ or Swift. It is an incremental improvement over C but it doesn't provide a fundamentally new capability that you can't get from any of the million of other C replacements that have popped up over the years (Ada, D or Nim without GC, V, Odin, Pascal, Delphi, fortran, C2 and C3, Holy C, and the list just goes on and on, also holy C is cooler than zig).
Rust on the other hand genuinely does provide a new capability (a non-GCed language with memory safety). Yes, it can be painful to write, and yes I wish it were smaller and closer to what Graydon originally wanted, and yes you should probably just use threads instead of async/await, but at least I can immediately explain what the point of it is, and parallel (not concurrent) code with Rayon is better than pretty much any alternative I've seen.
If you want a better C that is simple like Go, I like C3. It's a sane "fixed C" that does not try to do everything like C++ and just removes the obviously broken C footguns by making things defined, adds defer, interfaces, slices, odin-like error handling, and Go-like build tooling, while staying fully ABI compatible. If you want a C that is a bit more like Go, that's what I would suggest.
The point the article author made about C being amazing until you need a data structure that you don’t want to roll out on your own is so true. It almost makes you consider C++ until you realize how shit it is.
what's with the C++ hate on this channel, the language is decent despite being older than most of the viewers
Saying "I don't like C++ so I'll stick with C" is wild
Like a completely bonkers take. The worst parts of C++ are those it has in common with C, my dude.
@@EnDeRBeaT Decent? With types so long that "auto" was invented for them? And however many constructors and assignment overloads there are? Oh, and how many ways of allocating objects? Yeah, C++ beyond C++98 is a huge mess.
That's kinda why I don't touch any C++11 features at all. Give me plain old malloc() please.
@@hanifarroisimukhlis5989 "The language is bad because it has type inference"
Wild, wild, wild.
@@hanifarroisimukhlis5989 I would take long types any time of the day over void*. The constructor complaints are valid; however, most of the time you just need a constructor and a destructor. Many ways of doing something? Choose one and stick to it. If you're given many tools, you don't need to use every single one.
I like C too, and I know how to write C in a way that never crashes or has bugs.
It just needs a very strict development process and verification. It is actually good then for making reliable software, because the tools are verified to work correctly, there are tools for verification, and there isn't much complexity.
It won't work if it is written the same way junior web developers write JavaScript. It requires a very different mindset.
13:40 This is why I fell in love with the D language ~10 years ago. It allows you to approach problems the way they come up in your head. There is no special D way.
Given that almost all languages use libraries written in C to do their work, I think that C has more libraries than any other language. When people say "You have to write a lot of code if you use C", that just means that the lack of a good package manager, plus the general laziness of people, causes people to roll their own libraries more often in C. However, just like in other languages, you can do anything in a single line of code: a function call. Also, due to inline functions, you pay less of a performance cost for the abstraction.
And how do you compile those libraries, each with their own build tooling, because C doesn't have any? It's getting slightly better, but I digress.
C isn't just missing a good package manager, it's also missing a good api layer. It doesn't have generics, it doesn't have that big of a standard library, and because of that, the C ecosystem doesn't have a whole lot of things that you can rely on.
@@hanifarroisimukhlis5989 Exactly. I've reimplemented many build systems in CMake, but it's proven to be more trouble than it's worth. Zig's build system and Cargo (Rust's package manager and build system) are far better, and easier to work with, as well.
@@Luxalpa A huge standard library is a problem. Templates were originally made by using C preprocessor macros, so it does have them, I've used them, but they're ugly, and hard to maintain.
@@Luxalpa C standard library is pretty bad but the fact that it's tiny is one aspect of it that's actually good
zig's comptime is like C++'s templates, type traits and SFINAE, but on steroids. And then you add a bit of reflection on top of it. It's pretty fucking cool
It also has many of the same problems: duck typing, weird type errors (stuff fails when it's used, so the type error can be far away from what really caused it), and severely limiting the ability of LSP.
@@maleldil1 true! Very true! And that's why Zig is still in 0.x status; it's not stable.
That comptime take was WILD! Preferring preprocessor macros over comptime seems insane to me.
I think so too for the most part, but Pekka really is onto something. Preprocessor macros are complex and more than a little terrifying at times, but with preprocessor macros, C folks have settled on a set of standard predefined macros and idioms for applying them.
The simplicity is there at different levels.
C preprocessor macros are horrible to understand how to build safely or implement, but easy to logically compose because they aren't complicated -- it's all textual source code generation.
Zig comptime is incredibly powerful and easy to understand and dig into how it works, but much harder to logically compose, because it empowers you to do so much.
A comptime function, unlike a preprocessor statement:
- can be entirely bespoke meaning that you're REQUIRED to dig into the implementation to understand what it really does or how to use it sometimes
- is easy to write, so you're likely to encounter tons of them (the only restraint engineers have is their own laziness!)
- introduces a ton of complexity if you are just trying to do interfaces
I think that over time, Zig will standardize on some idioms for comptime functions, including common techniques and packages for implementing patterns like interfaces and then these problems won't matter much.
However, for the time being, when there isn't a book/definitive resource on when Zig programmers should zig or zag, whenever i see a new comptime function, i have to read it to know what on earth is going on, and that just makes me sad.
Fun fact, I wrote my own B+ tree in Rust. Why, you might ask? I needed a no_std implementation which worked directly on embedded-io-async, and there was nothing like that out there ;)
This is not having watched the video, but I've been using Rust as my primary language for about 18 months now, and I've loved it, primarily the strictness of the language, like the borrow checker and all that. I wanna learn Zig, but I'm hesitant, so I keep telling myself I'll pick it up when it reaches 1.0 and see how I go. It's probably also worth admitting I'd only used Python for over a decade prior to giving Rust a go and enjoying it, so it's just a lack of exposure thing. Though I have been using Haskell for the last 3 weeks, as something new to learn, so I won't be ready to pick a new language for at least a year yet anyway.
For clone stack vs clone heap, you're right. It's a bad design decision by Rust's standard library to have the `Clone` trait used for both.
They're currently working on introducing a new trait (maybe named `Claim`) to differentiate actual clones (like cloning a `Vec`'s heap data) from "cheap"/"invisible" clones such as increasing the ref-count of `Rc` or `Arc`.
This will also fix the problem of having to call `.clone()` on `Arc`s when sharing ownership with closures (lambdas). Today you have to sprinkle `.clone()` but `.claim()` can be implicit (with an option to opt-out and force it to be explicit for code sections or crates where atomic increments/decrements can harm performance).
It's insane that this change is possible, thanks to editions in Rust! Pretty good foresight to design the language with editions to be able to evolve like that.
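A loose sketch of the shape being discussed; to be clear, `Claim` is only a proposal, nothing below exists in Rust today, and the real design may differ:
use std::sync::Arc;
// Hypothetical marker: duplication that is cheap and semantically invisible.
trait Claim: Clone {
    fn claim(&self) -> Self {
        self.clone()
    }
}
// A ref-count bump would qualify; a deep Vec clone would not.
impl<T> Claim for Arc<T> {}
fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    let b = a.claim(); // reads as "cheap duplicate"; grep .clone() for heavy copies
    assert_eq!(Arc::strong_count(&a), 2);
    drop(b);
}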
A key point when talking about OS threads vs green threads is that the former is a preemptive multitasking environment and the latter is a cooperative multitasking environment (besides the cost of running one or the other, of course).
I've changed my mind on colored functions... the thing is, it is useful to annotate functions that have continuations, for many reasons, and obviously that has to be recursively applied to all callers.
Green threads are basically coroutines, with an executor coordinating. Cooperative multitasking.
Thank you for the brief overview about threads in this video.
Rust drop is essentially no different than a C++ destructor. Not exactly difficult to understand.
Minor correction: async/await in Rust doesn't use green threads. Green threads use time slicing, while the concurrency model for async/await is based on cooperative multitasking. You can configure Tokio to be single-threaded, and if you block forever, nothing else will execute (unlike with green threads).
Rust does use "green threads". Green threads are basically scheduled tasks which are run by scheduler using OS threads, and scheduler reuses fixed number of threads. Both Rust and Go in essence do that and both use work-stealing to redistribute work.
@@maniacZesci yeah, I was using a narrower definition of green threads. The one you provided, though not ubiquitous, is essentially the same as the one used by Tokio. Given this definition, I mostly agree with you, but it’s worth noting that Rust does not implement a scheduler. Rust also does not implement work-stealing either. Schedulers can implement whichever scheduling algorithms they like. Tokio is the most common and does use work stealing by default. Also worth noting that given this definition of green threading, async/await in JS is also green-threaded, which is interesting as most people seem to agree JS doesn’t have threads.
@@benheidemann3836 I'm not familiar with JS internals, but I believe JS uses event loop which runs in a single thread, so I wouldn't claim that JS has green threads.
Yes, Rust leaves the scheduler and runtime implementation to library authors, and work-stealing is not the only algorithm.
What I meant is that in Rust you can have green threads too if you want (using libraries like Tokio), not that Rust comes with batteries included for that like Go.
Rust used to have that prior to version 1.0, but they removed it because it comes with additional overhead, and for Rust, which aims to be a systems programming language too, that was unacceptable.
@@maniacZesci thanks for the clarification. I think we’re on the same page now 🙂
The observation I was making with JS was that Tokio (for example) can be configured to execute on a single thread, which means it behaves functionally (nearly) the same as the JS event loop (I know there are some subtle differences). If we were to say that Tokio is green threads even if it’s configured to run on a single OS thread, then JS has green threads too in a sense.
Having had this OS threads, green threads, async/await discussion a couple times, people seem to fall into two camps: those who require parallel execution (> 2 OS threads managed by the executor) and those who feel that concurrency is enough as long as there’s an executor. Interestingly, the definition used by Tokio on their docs seems to imply the latter, and therefore that JS has green threads.
@@benheidemann3836 I haven't looked at the Tokio docs for some time, but I think Tokio uses a thread per core by default, so if the CPU has 8 cores, Tokio will spawn 8 threads, for example.
Configuration for a single thread is reserved for those rare cases should someone need it for some reason. I really can't think of any reason for it tbh, but what do I know.
It can be configured to spawn any number of threads, ignoring thread-per-core, too.
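For reference, picking the threading model is explicit in Tokio's builder API (a sketch, assuming Tokio 1.x):
fn main() {
    // Single-threaded: one OS thread drives all tasks cooperatively.
    let single = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    single.block_on(async { println!("on one thread") });
    // Multi-threaded: a fixed pool, defaulting to one worker per core.
    let multi = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4) // override the per-core default
        .enable_all()
        .build()
        .unwrap();
    multi.block_on(async { println!("on a pool") });
}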
@ 3:00
Well, you don't need a balanced tree if you have a radix tree
Want to find something? Is bit zero? Go left, otherwise go right
Want to do a range query between left and right, (l,r)? You can use most significant digit for indexing.
Is the bit on the left equal to the bit on the right? Then go to its subtree with the same range.
Are the bits different? If left is higher than right, then you're out of bounds, otherwise you can merge the ranges (l, inf) and (0, r) on both subtrees.
Now, of course the logarithm of the amount of items (complexity of a balanced tree) is always lower than the logarithm of the amount of possible items (complexity of a trie), but you can't beat this simplicity with a balanced tree.
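A minimal Rust sketch of the bitwise-trie idea (a toy u32 set, names invented): branch on each bit of the key, no rebalancing anywhere, and depth is the key width rather than log of the item count.
// A toy set of u32 keys as a binary (radix-2) trie.
#[derive(Default)]
struct Node {
    children: [Option<Box<Node>>; 2],
    present: bool,
}
impl Node {
    fn insert(&mut self, key: u32) {
        let mut node = self;
        for i in (0..32).rev() {
            let bit = ((key >> i) & 1) as usize; // bit zero? go left; else right
            node = node.children[bit].get_or_insert_with(Box::default).as_mut();
        }
        node.present = true;
    }
    fn contains(&self, key: u32) -> bool {
        let mut node = self;
        for i in (0..32).rev() {
            let bit = ((key >> i) & 1) as usize;
            match node.children[bit].as_deref() {
                Some(next) => node = next,
                None => return false,
            }
        }
        node.present
    }
}
fn main() {
    let mut t = Node::default();
    t.insert(5);
    t.insert(9);
    assert!(t.contains(5) && t.contains(9) && !t.contains(7));
    println!("ok");
}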
radix_me_harder_daddy
I have no idea how to program anything, yet I love watching this channel
Comptime has one major problem: lack of bounds on inputs. So you cannot make a channel that only accepts types that are safe to send across threads. Might sound like a small issue, but it is at the root of the "Rust experience".
You can build that with comptime. Of course, it will not be as "elegant" as in Rust. You will need to write the bounds checks in the function body; part of them will be determined through some kind of handmade "comptime interface", I guess. The problem is that none of those bounds will be visible in the function's signature, which is a downside compared to Rust's approach.
@@Presenter2 let us hope that one day we will see languages converge to something that is flexible like in zig but remains capable of strictly enforcing constraints like in rust.
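For concreteness, the Rust side of that constraint: "safe to send across threads" is the `Send` bound, checked right in the signature (std only; a sketch).
use std::rc::Rc;
use std::sync::mpsc;
use std::thread;
fn main() {
    // thread::spawn requires its closure (and everything it captures) to be Send.
    let (tx, rx) = mpsc::channel::<i32>();
    thread::spawn(move || tx.send(42).unwrap());
    println!("{}", rx.recv().unwrap());
    let rc = Rc::new(5); // Rc is !Send: non-atomic refcount
    // thread::spawn(move || println!("{rc}")); // compile error:
    // `Rc<i32>` cannot be sent between threads safely
    let _ = rc;
}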
Prime, you should try writing an executor, it's actually not that hard to make a crappy executor! The harder part is reactors, which are, conceptually, a thing that holds info on async I/O tasks that the executor polls to check what I/O is ready. If your Future impl can check for itself if it's done, you can just busy wait instead. I have my own Future types that want to batch requests to something, so they just enqueue requests (or send if the buffer fills up) and then my executor sends the batch on idle.
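In that spirit, a crappy-but-working executor really is short. Here's a sketch of a `block_on` in plain std Rust that parks the thread until a waker fires:
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};
// Waking unparks the thread that is blocked in `block_on`.
struct ThreadWaker(Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until some waker fires
        }
    }
}
fn main() {
    // An async block with no await points completes on the first poll.
    println!("{}", block_on(async { 40 + 2 }));
}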
Hey, I just want to add some precision.
At 6:14, when you briefly explain the basics of Rust, what you're talking about is RAII (Resource Acquisition Is Initialization), which is a concept from C++ that Rust has taken. To describe what makes Rust unique, I would have talked about the borrow checker, which ensures you can have mutation XOR aliasing at compile time. Others have since taken this concept, like Mojo, but Rust was the one that introduced it.
At 9:29, when you describe the way Arc works, you wrongly place the counter on the stack; it should be on the heap, next to the value inside the Arc. Indeed, if it were on the stack, each clone would have its own copy of the counter, which would defeat the whole point of reference counting.
Regarding the green threading thing, I agree that the terminology is confusing, for the one who wants to lookup on the internet, go has what's called "stackful coroutine" and Rust has "stackless coroutine".
I don't want to blame or anything. I know it's hard to popularize tech content, please keep up the good work.
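To make the Arc layout point concrete, a tiny std-only sketch: the value and the counters share one heap allocation, and each `Arc` handle is just a pointer to it.
use std::sync::Arc;
fn main() {
    let a = Arc::new(String::from("shared"));
    let b = Arc::clone(&a); // bumps the shared, heap-resident atomic counter
    assert_eq!(Arc::strong_count(&a), 2);
    drop(b); // decrements it; the heap block dies with the last handle
    assert_eq!(Arc::strong_count(&a), 1);
}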
5:05 Using Rust's extensive type system, you can model most logic as types and variants. This way the idea works if you write "perfect Rust" without any boolean logic. Of course this is not feasible in reality, but it would work in theory.
Moving as much logic to compile time as possible is the strategy I think will be the future of programming. Compilers understand the code we write better than we do ourselves nowadays. So with more interactivity between the compiler and the programmer, the compiler will be able to help even with logic bugs. Static asserts (C++) and strong types (Rust) are the first step on that journey of compile-time-guaranteed code.
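Rust's version of that static assert, for the record (stable std; the constant name is made up):
const BUFFER_SIZE: usize = 4096;
// Evaluated at compile time: the build fails if the condition is false,
// the moral equivalent of C++'s static_assert.
const _: () = assert!(BUFFER_SIZE.is_power_of_two(), "buffer size must be a power of two");
fn main() {
    println!("{BUFFER_SIZE}");
}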
10:32 As respectfully as possible (I’m not as experienced with Rust as you are, I don’t think) I would like to disagree. I believe Copy is the trait that clones the stack: indeed it is the trait that says “I have no information beyond what is stored in the stack, and I can leave scope without doing clean-up.” Clone, on the other hand, says “to copy me I need to do some nontrivial work that involves things other than the stack, like copying heap buffers.” I do grant that Arc doesn’t actually affect the heap when cloned and so you weren’t *wrong* by saying that it “clones the stack,” but my point is that when I’m reading Rust and I run into a call to Clone::clone, I read that as saying “I am doing some expensive data manipulation that affects more than just the stack.” Idk if that’s a good heuristic but I don’t think it has failed me yet.
Copy is just a hint to the user.
Anything that impls Clone can impl Copy, but only if the owner of the type decides to do so.
The Copy trait is just telling the compiler "it's OK to implicitly clone this; if need be, clone it".
This saves you from having to constantly do some_fn(my_var.clone()) to solve ownership problems. That's why all integer and floating-point types implement Copy: it's cheap to clone, and, sidetracking a bit, it makes more sense to copy because it would take more bytes to pass a u8 by reference (8 bytes for the pointer) than to simply copy 1 byte.
no, his entire understanding of this is wrong. Almost every single sentence.
Copy/Clone has nothing to do with stack/heap:
//this is a `Copy` from heap to heap
let mut a = Box::new(1);
let b = Box::new(2);
*a = *b;
Also his description of Arc cannot possibly be true, because if the refcount was on the stack, then how could the other owners of Arcs access that refcount?
You said "arc doesn't actually affect the heap when cloned"
that is wrong.
Arc stores the T as well as the refcount on the heap, that way the other Arc instances can modify that shared refcount, while any Arc object can be deallocated without deallocating the refcounter, regardless of whether it lives on the stack and heap.
It is painful to see so much misunderstanding presented so confidently in primes videos, where most people don't have the expertise to question it.
Your intuition about Clone is good imo. I hope you don't let yourself get too sidetracked by prime content.
@@raymarch3576 Oh of course! As soon as you said that the count is stored on the heap it clicked for me! Apologies for also being confidently wrong about the Arc stack/heap, and thank you for your correction.
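A minimal sketch of the distinction the thread lands on (illustrative types only): Copy is implicit bitwise duplication wherever the value lives; Clone is an explicit call that may do arbitrary work.
#[derive(Clone, Copy)]
struct Point { x: i32, y: i32 } // plain bits: Copy is fine
#[derive(Clone)]
struct Buffer { data: Vec<u8> } // owns an allocation: Copy is not allowed
fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // implicit copy; `p` stays usable
    println!("{}", p.x + q.x);
    let b = Buffer { data: vec![1, 2, 3] };
    let c = b.clone(); // explicit: duplicates the heap buffer
    // let d = b;      // would *move* b, not copy it
    println!("{}", c.data.len());
}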
By the way, note that he is right about the problem with `Arc` and `Rc` implementing `Clone`.
I think many experienced Rustaceans are currently in the process of thinking up a fix in a future Rust edition. Basically, we want something like a `Claim` trait (in addition to `Copy` and `Clone`) to differentiate "heavy" data clones (such as `Vec` clones) from logical ref-counting increases. `Arc::claim()` would be used instead of `Arc::clone()`.
This will allow to trace all actual heavy clones by grepping for `.clone()`, and will also allow capturing `Arc`s in closures to be much simpler and implicit (very common thing to do). Simply do:
let foo = Arc::new(...);
// `foo.claim()` implicitly called because `foo` is still needed later in the function (still "live").
let bar = || { do_something(foo); };
do_something_else(foo);
For most projects, `claim()` can be implicit as atomic increments/decrements are not a performance concern. For code sections and/or crates where such operations should not be implicit, there will be an opt-out so that `claim()`s must be explicit.
But you're right that Prime went on to confidently explain things very wrong :p
@@hagaiak I remember I heard of `Claim`. I didn't look deep into it, but to me it feels like they should call it `Assign` and make it the missing trait that corresponds to the "=" operator, unless there's some detail I missed.
As someone who has wasted a lot of their life debugging C++ i tend to object to every proposal that adds implicit function calls, even though I totally understand where they're coming from.
In the end, even if they add a `Claim`/`Assign` trait, the copy situation will still not be remotely as messy as that of C++ thankfully
Indeed the worst bugs always are in business logic. Often stacked logic where there are exceptions on the standard logic. Ugghhh… I’ve had few bugs preventing code from compiling or randomly crashing. But being off by one, weird logic trees that seem to work but fail once every 100 runs.
The Rust borrow checker also guarantees no data races (though not all race conditions), so...
Confirming red-black trees are complicated. When I tried to make my own implementation in C#, I could not understand the algo well enough to code it purely myself, so I looked into 2 implementations (don't remember in which languages) to see how they do it, and just transferred the logic.
Jon Gjengset demystifies how async runtimes work in his video "Decrusting the tokio crate" and it's not that hard conceptually
Started learning C as part of OS dev over the last two months, and I'm loving C. I thought it would be scary. But I know I'm not good enough for it, so I need to use Rust.
Rust doesn't use green threads; using green threads requires having a GC, which is how Go handles them.
Zig and Rust will never use green threads because of C compatibility issues (green threads have their own stacks).
Rust in the title is an immediate neuron activation for me
this comment is good
@@agnesakne4409 thx buddy, I gave it my all and tried to be honest
A potential salt mine / spice warehouse 🧠⚡ so yeah, exactly that
Rust traits and enums are less painful than being forced to use inheritance.
Nobody's forcing anyone to use inheritance
I believe anything done with async/await can be done with less complexity with plain old blocking threads. The price is more memory -- except with java virtual threads, which have the simplicity of old-fashioned threads and the memory cost of microthreads. Of course, if you are using an async/await infested library, there is not much you can do to simplify.
I write java all day, but i feel so productive with Rust, because I don't feel fear :D
The fundamental problem with compile-time evaluation (Turing-complete evaluation of expressions at compile time, limited only by time) is the problem which arises from ANY Turing-complete system: it's entirely possible (but not necessary) for Turing-complete systems to fail at some place, at some time, in a way which CANNOT be predicted, understood, or even analyzed just from its components — not just not in a limited amount of time, but AT ALL. This follows from the undecidability of the halting problem (closely related to Gödel's incompleteness theorem).
Of course, almost every practical program is written in a language which is for all intents and purposes Turing-complete, meaning these issues will arise in any language anyway in some form, at least at runtime. So why would it be so much worse for a language like Zig to also adopt the same concept at compile time? Well, it means that software is not just potentially impossible to get working well enough when you run it, even when its parts are correct; it means that compilation, debugging, the build process, and hence software development itself is now also potentially impossible to get right EVEN when all of its parts are correct. That wouldn't be the case if Zig didn't have a Turing-complete compile time: then, if you compiled a program whose parts you knew all compiled correctly, the whole would always also compile correctly. That's a hidden, very real, but not well-known side effect of Turing-complete systems.
The reason Turing-complete systems are used anyway is because one has to, e.g. because there is no other option: some programs strictly require a system this complete, or would be impractically hard to write down or execute otherwise. The problem I fear is that the problem Zig is trying to solve by putting a Turing-complete system in its compile time DOESN'T REQUIRE such a complete system to solve, meaning you get all the problems a Turing-complete system brings, with questionable benefits and hidden dangers, when it could ultimately be solved differently without these additional dangers — even if admittedly sometimes in a more difficult way.
Technically any generics system strong enough for Peano arithmetic is automatically Turing-complete. But there's an inherent ease with functional-style templating which discourages many non-halting conditions. Imperative/comptime-style metaprogramming makes it very easy to get into non-halting.
Building on Rust the last few months and I love it
I think you should also think about C and C++ in terms of stack and heap. Even in the kernel world, kernels have heaps as well, or at least those I'm most familiar with do. It's pretty much unavoidable.
First, cloning in Rust has *nothing* to do with whether a value is on the stack or on the heap. A clone is always about getting a second owned copy of a value, whether on the stack or not.
Second, I am not surprised that he doesn't understand how async/await works in Rust if he doesn't understand what "green threads" (stackful coroutines) means and how what Rust does (stackless coroutines / state machines) is different.
Third, if you like comptime so much compared to generics in Rust, you can do very similar things with "macros by example" in Rust (generating copies of functions for different types, etc.). You *can* do that, but it's just objectively worse than using proper generics for that use case 🤷🏻‍♂️
this kind of misinformation about Rust (whether intentional or not) is making me start to distrust any of his takes. like, if you don't understand it, just don't say anything about it?
Are you becoming Frustacean, mate?
C macros are text replacement. And a big implication of this is that they also have no real access to any information about the C program. For example, there would be no way to sensibly ask if there was a function `quack` that could be applied to an argument to the function.
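For contrast, a Rust sketch of asking exactly that question through the type system (trait and names invented): whether a `quack` exists for a type is a bound the compiler checks, not a textual substitution.
trait Quack {
    fn quack(&self) -> String;
}
struct Duck;
impl Quack for Duck {
    fn quack(&self) -> String {
        "quack!".into()
    }
}
// Compiles only for types proven to have a quack.
fn make_noise<T: Quack>(t: &T) {
    println!("{}", t.quack());
}
fn main() {
    make_noise(&Duck);
    // make_noise(&42); // compile error: i32 doesn't implement Quack
}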
X will feel better than Y, unless it’s JavaScript. Because I’ve had to use JavaScript throughout my work week for 2 years now and I’m ready to rewrite everything I’ve touched in PHP just to save my sanity.
I haven't learned comptime yet, but what I have learned is that error messages in Zig are awful. I have no idea what's going on.
I played enough with mutexes to know that it is not simple or easy. But if tuned right, they are blazing fast.
By tuning, I mean that a thread's write happens while the other threads are processing, eliminating most of the wait-to-write time.
comptime is terrible for api docs. It doesn’t tell you what the actual requirements are on the type you are supposed to pass in. Traits / interfaces are much more clear here.
True, but some of the problems are shared. When you see a function that accepts something that implements trait A, you need to know what implements the trait to use the function. In that case, reading type bounds from a function's docs (which is what you would typically do in Zig to tell the user about a parameter's required type properties) is not worse. Fair point though 👍
I did a whole red black tree implementation and took quite a while to debug 2 of the 8 rebalances.
It was fun when it finally worked although it did get frustrating after getting the first 6 pretty easily.
My Odin project has multiple catastrophic memory leaks. And that explanation about how Rust works at around 7 minutes has me very interested. I wonder if it's possible to program in a minimal subset of Rust. Maybe it's time to finally give it a try.
Comptime is huge for me. I code DSP stuff. I can build arrays that have the outputs of all sorts of complex functions stored on the heap, and have them perform like they’re on the stack. What I do (audio) gets near being real time computing. It isn’t that, but it is really very close. Enforced garbage collection is hot dogshit in this situation.
Comptime is huge.
Is comptime cool? yeah... but I have a hard time coming up with a genuinely good reason to use them that isn't going to just add more complexity than some alternative. If you find yourself leaning on comptime I think that'd be a bit of a code smell to re-evaluate what the hell you are doing
I am kinda blown away by how much Prime is blown away by Zig's tooling.
Looks like valgrind... like am I missing something?
Guard clause technique goes a long way minimizing some mistakes in C and C++
CTFE is so satisfying, especially with good access to compile-time reflection/metaprogramming.
It made me waste a lot of time on D, though.
Don't know much about programming, but I love your way of communicating your thoughts and how you explain things. 👌
That Arc thing sounds exactly like the C++ object wrapper class I made back in the early 2000s to count references to an object and auto-delete it.
As a minor nitpick, Rust has both Rc and Arc. The difference is that in the case of Arc, incrementing and decrementing the reference count is *atomic* so it's also guaranteed to work with multiple threads (but might be a bit slower than Rc). I would guess that you didn't make your reference count atomic. Or maybe you did?
I think zig comptime is awesome, I just think the tooling for comptime functions isn’t good yet since they’re working on the compiler speeds and other things right now. It will get better as time goes on.
@13:10 If X (i.e. your monday-to-friday language) is JS/TS, then Y feels better 🙂
I think the counter for arc is on the heap with the data.
"If it compiles it works" has a lot of reality to back it up. I think of languages like Agda, Idris, or ATS which show that type systems can essentially act as a proof for your program. You can still have bugs, but in a lot of languages you complete omit entire classes of bugs that are very much real and common. Haskell a runtime error almost doesn't even exist apart from a select few edge cases. Rust definitely was inspired by some of these research languages which is likely what inspired it's type system that prevents memory issues, but it also prevents data races. I don't really think Rust ends there, it really kind of forces you to consider things up front. You can srill screw up the business logic, but there's entire categories of issues you don't need to worry about. My experience with maintaining a large OSS project in rust is that if it compiles it works, and most of the bugs you're left with are usually small and easy to address. I don't doubt there are exceptions to that.
The problem with "if it compiles it works" is that anyone who's ever written a Haskell program more complicated than a Hello world knows that's BS. In particular, it means that everyone saying that is lying to your face. Not just wrong, not just a disagreement, but a straight up, unambiguous lie.
I've been using Zig.
I just watched your P99 video and now this.
I'm also very confused about this whole generics/macros-over-comptime thing.
I haven't used comptime extensively, so maybe there are cases where it's problematic that I haven't seen,
but the idea of it has always sounded good to me.
I was looking at the Mesa project, and you can barely understand anything with layers of macros on macros on macros.
clangd can't find the macros, so I have to keep doing search-and-jump and trying to preprocess files.
In Zig, looking at something like the standard library, even with the current state of the LSP you can jump through every single comptime right to the `builtin` file for your specific OS/platform.
I've never used Zig, but what you're describing is the same as what one can get with Rust macros — at least what I can do from RustRover with the expand-macros toggle enabled. If you want to see what a macro translates into, you can use cargo expand and get colored Rust output.
C macros are just pretty horrendous to me.
@@LtdJorge yeah, Rust macros are better as well; I was only talking about C macros in comparison.
My only issue with Rust macros is I never even try to decode what's happening in some complex ones. I've never met a comptime definition that I didn't understand by just reading down it line by line.
To say that you need a different list implementation for int and for char * is a matter of skill issue. It is harder in C, but using void pointers and structs you can reach a very high level of generic behavior without a lot of effort. Interesting vid nonetheless.
What are the increasing numbers above and below the active line in Prime's editor? Are they used to jump between lines quickly?
That one lmao: "coolkids call it comptime, dummies call it structural macros, c++ degens call it template magic"
Maybe the man is not a hype-train traveler, unlike some YouTubers.
Oh, and not me on the M-F vs weekend thing. In 2009 I wrote ColdFusion Monday to Friday and Ruby on the weekend, and Ruby felt much, much better. Same with a recent job working with Swift, Python, and JS in the day job and Flutter on the weekend. Flutter felt much, much better.
The duck example is just CRTP in C++, very common in C++. As I thought, everyone is just copying C++!
Comptime only solves the problem of abstracting code over types but does not let us conclude a contract between caller and callee. That's why C++ added concepts: pure templates have the same problem.
Comptime allows it; it's just much more verbose compared to C++ concepts.