@@ScibbieGames exactly. The good thing about C is you can do anything with it; the problem with C is you can do anything with it. C programmers usually want to be "creative", and that's where they turn on the fan and grab the bag of sh*t. "Oh I'm so great in C, look what I can do" yeah genius, you're throwing sh*t at the fan now.
That, combined with the fact that Sauron apparently wants it, makes me think this isn't about garbage collection, but instead about some other effect of those preferred languages. This is about control of something; it always is. I am familiar with what's going on at the hardware level and I am under no illusion that this is any better through the layers up to the programmer/user... any thoughts? This is an issue from the beginning. It starts at the chip fab / systems assembly.
I read a joke a long time ago about how if you ask a C programmer not to run with scissors they will reply that it should be 'don't fall with scissors'; I don't fall. I think we found the guy the joke is about. I also find it funny that nothing in the entire article actually argues for the point he was trying to make.
After spending some time thinking of how I'd try to explain the problems with the argument, I actually think this sums it up much more concisely and convincingly than any kind of logical analysis.
I personally feel like Sisyphus in the Dunning-Kruger Effect "graph". The more I learn and "git gud" the more I realize how fucked everything is and that my every effort may be an exercise in futility. Climbing up the Slope of Enlightenment just leads me back to the Valley of Despair.
It's because you're programming defensively. The language should have a type called number, and if it overflows it goes into bignums. Gawk does this. Erlang also lets programs crash. We are stuck on shit. But it's because of speed.
@GreyReBl describes the mental discomfort when the horizon of "unknown unknowns" grows rapidly as technical learning exposes more knowledge surface area.
Yup, I've been programming for a living for 30 years, trying to learn every day how to be a better programmer... and year by year it's become ever more painfully obvious how little we actually know and care about making good software. It's tribal warfare about which mistakes we should repeat. I hope we get more science in the "computer science" and the industry actually starts listening to it.
@@DanielJoyce I would call that a matter of opinion. It is less about the programming language used and more about best practices in design, implementation, and testing. Although, I do think there is a fair case for not using a language designed for systems programming to write ALL of your software.
I mean, it's obvious that the author takes this all incredibly personally. It's a fact that the inherent "rails off" world of C DOES lead to more bugs, regardless of how good you personally are at C. The author obviously sees himself as a lone wolf giga chad C sniper elite, but the fact of the matter is: if 9/10 people you work with are likely to fuck up the C code, maybe don't work in C unless it's a solo project.
Also this guy is being dishonest with his statistics in his quest to prove that he IS good enough to use C. "88% are below average contributors" while having obvious mega outliers. The median number of commits is three. The "average" contributor makes about 3 commits, and the 1 or 2 guys maintaining a project do a lot more work, sure. That doesn't make the rest "below average". What an insufferable author.
@@markogalevski6088 By definition, about half or more of the people in any given profession are below average in measurable skills, and given Pareto, the levels of contribution spread over a group are abysmal data to review.
NASA’s interplanetary missions use C/C++ on the spacecraft, but not like how most people use it. It’s not only a skill issue, but a process issue and a rules issue. Code for spacecraft uses a smaller subset of language features that is more predictable, deterministic and testable.
@@research417 Also, every method needs to take a time budget (often simplified to a counter) as one of its inputs, and abort early and clean up if it can't finish within the time constraint. Cooperative multitasking where each task is proven to finish in time is much more predictable than preemption.
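Roughly what that counter-style budget could look like in C (a minimal sketch with hypothetical names, not code from any flight system):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical sketch: process at most 'budget' items per call.
 * The function aborts early when the budget runs out and returns
 * the resume index, so the scheduler can call it again later. */
static size_t process_items(int *items, size_t count, size_t start,
                            unsigned budget, bool *done)
{
    size_t i = start;
    while (i < count && budget > 0) {
        items[i] *= 2;        /* stand-in for the real work */
        i++;
        budget--;
    }
    *done = (i == count);
    return i;
}

int main(void)
{
    int data[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    bool done = false;
    size_t pos = 0;
    while (!done)                          /* two "time slices" of 5 items each */
        pos = process_items(data, 10, pos, 5, &done);
    printf("processed %zu items\n", pos);
    return 0;
}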
@@InZiDes, no, static memory is memory whose size is fixed at compile time and that exists for the program's whole lifetime, while dynamic memory is allocated at runtime: uint8_t arr[1024] = {0} vs uint8_t *array = malloc(1024), for example.
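A slightly fuller sketch of the same distinction (assuming the fixed array lives at file scope or is declared static, so it has static storage duration):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Static storage: size fixed at compile time, zeroed at program load,
 * never freed explicitly. */
static uint8_t fixed_buf[1024];

int main(void)
{
    /* Dynamic storage: size can be decided at runtime, must be freed. */
    size_t n = 1024;
    uint8_t *heap_buf = malloc(n);
    if (heap_buf == NULL)
        return 1;
    memset(heap_buf, 0, n);

    fixed_buf[0] = heap_buf[0];

    free(heap_buf);
    return 0;
}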
A couple of things: - The article exposes the author as not an "ace C programmer". - The best cyclist in the world will still be faster in a car. The weak version of the argument that the author makes is obviously true: somewhere, and for some reason, writing C will make more sense than Rust. But the strong version is just plain wrong. C safety issues run deep, "you need a proof assistant to verify the correctness of your code, because humans are physically incapable of doing the analysis manually" kinds of deep. Not reaching 100 km/h on a bicycle just cannot be called a skill issue with a straight face. If the author had used their examples to mention anything about topics like relaxed memory ordering, CPU barriers, undefined behavior, etc., then maybe there would be something to compare and consider. Without those, I can only say that I do not know of a single "ace C programmer" who has ever claimed that language-level safety features are an unwanted or unneeded luxury for them. PS: Yes, I'm a Linux kernel contributor.
I was definitely suspicious of his "ace" claim after reading some of the code. In general he jumps through a lot of illegible hoops to do seemingly unsafe stuff that other languages abstract away.
Isn't the article entirely contradictory? I mean, it appears to be arguing that using C is dangerous, only a minority of top-skilled devs can write truly safe code, most developers are not good enough at it and, what's more, aren't aware of how bad they actually are, making finding the safe minority effectively impossible... and that's why the White House is wrong to recommend everyone use languages designed to be inherently safer... What?
I think the author fears that he will be forced to use/learn something else, when he feels like he is competent enough in C not to fall into the pitfalls.
Yes, this was my takeaway too. The author basically argued against their own statement. If anything, I would argue that if you rely on an unsafe language like C to write fast code, then you are the one with the real skill issue.
I mean, his point was that the White House argues ALL programs should move away from C, which he demonstrates isn't the case, because there is a higher ceiling in C than in Rust, simply in how much it lets you do. It's not an outright denial of the White House's claim; he agrees most people should be using Rust, it is literally one of his first statements. His critique is very specifically that it shouldn't be said that all code bases should now abandon C. The article makes it pretty clear what part of the White House's statement he is arguing against, so how do you misconstrue that as arguing against ALL of the statement? Especially when he outlines what part he does agree with in the introduction.
When 75% of 0day bugs used in the wild in 2023 alone are related to memory management issues despite decades of education, mitigations, formal methods, and tools like valgrind existing, suggesting that systems programming be done in a language like Rust is justified and pragmatic.
@@UnidimensionalPropheticCatgirl No, you can't. Things like cve-rs are neat tricks you can play with the type system, but they aren't things that are just going to randomly pop up in your code "without that much resistance". While very interesting, they are very deliberate exploits that demonstrate hard to fix edge cases with the compiler.
The memory issues that Rust supposedly solves "because tooling" are also solved with C & C++ -- just use the tooling (generally a flag or two). When people refuse to do that, we can take bets on how many "Rust" devs will soon be disabling "those annoying safety things..."
@@UnidimensionalPropheticCatgirl You're talking about temporary compiler bugs that don't even apply to normal code vs fundamentally unsafe design. Do you think corporations like Microsoft, Google etc would go through the effort of switching to this complex language if there was no clearly measurable advantage?
@@infinitelink And another "skill issue" person. Just use the tooling, just jump through all the hoops, just do everything perfectly, because that's something people can famously be relied on to do.
So: the premise of the article is C is not safe -> it's a skill issue -> proceed to show "elegant & smart C" vs "naive C" to illustrate it. It somehow feels like a completely off-topic answer. There are two kinds of C developers: those who have shot themselves in the foot and those who have but don't know it yet. Even the OpenBSD folks are not immune.
Find one major application written in C that doesn't have a CVE about an RCE vulnerability haha, the issue has nothing to do with skill like the author claims. I would even argue the author demonstrates their own lack of skill by demonstrating that they feel as if they require memory unsafety to write good code. There are a lot of issues with many of the author's points imo, and they use a lot of logical fallacies.
@@Hexcede also, that 10x programmer has to spend so much time fixing the other stuff written by the newbs that he won't get anything done at all. Rarely will you have a team of all 'aces'.
This. Every time I think about making a linked list, I realize how in every case I can think of it's less efficient than just using an array or vector. C++, but same concept.
This is so completely stuck in peoples brains. Performance isn’t everything. Sometimes linked list fits the problem domain better than array/vec. Choose the right tool for the job and stop talking about performance. Computers are fast.
When C was still a general purpose language, people adhered to all kinds of style guides and best practices to reduce the chances of terrible mistakes, but since we have a whole host of other languages nowadays that fill these roles, the rules of what makes C code good have changed drastically. Nowadays performance has a much higher priority, and unconventional solutions that are somewhat harder to understand but have runtime benefits are judged much more favorably. The reason for this should be kind of self-evident: people use C mainly in performance-critical and/or space-constrained scenarios where these factors are of course critical.
Exactly right. Good structure, indentation, careful attention to matching brackets. Avoid huge functions as much as possible; don't do everything in main(). Once you get a good function, you can re-use that function in many places. Check the boundary conditions. Never ever assume that a string ends properly. I do some tricky things with unions where a particular memory address is simultaneously a byte, an int, or a string (or other things). This will sometimes be useful when receiving data over Ethernet when you don't know for sure what exactly it is. Same with a chunk of data from a database. There's no telling what exactly is sitting there in the buffer.
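A minimal sketch of the kind of union described above (hypothetical names). In C, reading a member other than the one last written is allowed, but the bytes are reinterpreted as-is, so layout and endianness matter:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One buffer, viewable as raw bytes, as integers, or as text. */
union packet_view {
    uint8_t  bytes[16];
    uint32_t words[4];
    char     text[16];
};

int main(void)
{
    union packet_view v;
    memset(&v, 0, sizeof v);
    memcpy(v.text, "hello", 6);                            /* write as a string... */

    printf("first byte: 0x%02x\n", v.bytes[0]);            /* ...read as a byte */
    printf("first word: 0x%08x\n", (unsigned)v.words[0]);  /* ...or as an int (endian-dependent) */
    return 0;
}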
@@comradepeter87 Maybe not true, but it paints a rough joke on what Rust means for the dev world. Just look at the cpp vs rust reddits: one has twice the other's active users while retaining the same sub count, meaning perhaps that people enjoy talking about Rust more than coding in Rust. Proof of this is that I can find 8 job posts in the Barcelona area relating to Rust (6 of them are the same remote position copy-pasted all across Spain); C++ on the other hand...
@jazzycoder A certain part of me finds you people endearing. Please continue to spam your pro-C comments to angry Rusties. They deserve it because they act like a cult.
Average Marksman player in Squad. Worst part is, you would need an automatic rifleman or maybe a LAT, but there's always some clown that hogs the marksman role for zero reason, and does nothing with it.
Arguing that per-element mallocs are an ace move is very bold. Not only are mallocs expensive and unnecessary, but no matter your intellect, simplicity is always better. It's much faster and simpler to allocate large chunks at startup and write simple allocators for your requirements, while enforcing zero-as-initialization to minimize required error handling.
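A minimal sketch of that "allocate a big chunk at startup" idea, i.e. a bump/arena allocator over zeroed memory (hypothetical names, not the author's code):

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    unsigned char *base;
    size_t cap;
    size_t used;
} arena;

/* One big, zeroed allocation up front. */
static int arena_init(arena *a, size_t cap)
{
    a->base = calloc(1, cap);
    a->cap = cap;
    a->used = 0;
    return a->base != NULL ? 0 : -1;
}

/* Bump allocation: no per-object free, and the only error handling is a
 * single out-of-space check. Memory is already zeroed by calloc. */
static void *arena_alloc(arena *a, size_t size)
{
    size = (size + 15u) & ~(size_t)15u;   /* keep 16-byte alignment */
    if (a->cap - a->used < size)
        return NULL;
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

static void arena_release(arena *a)
{
    free(a->base);
    a->base = NULL;
    a->cap = a->used = 0;
}

int main(void)
{
    arena a;
    if (arena_init(&a, 1 << 20) != 0)                /* one 1 MiB chunk at startup */
        return 1;
    int *xs = arena_alloc(&a, 100 * sizeof *xs);     /* already zero-initialized */
    if (xs != NULL)
        xs[0] = 42;
    arena_release(&a);                               /* one free at shutdown */
    return 0;
}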
this man's issue is that, regardless of whether "aces" exist, the process of determining who is an ace and who is not will inevitably produce a bucket ton of garbage code
The reason the function takes a void pointer to "user_data" is that a function pointer plus a context pointer is the C idiom for what would be a lambda expression or closure in another language, a construct that doesn't otherwise exist in C (sketched below). So think of the variables that you inject into a lambda from an outer scope. That is exactly what this is.
yes, void* in C is the most generic type possible; the only downside is that the compiler won't help me if I screw up (and a possible indirection for small types that could normally fit in a register)
@@TsvetanDimitrov1976 I'm an embedded systems programmer, using void* on the daily together with about 20 to 30 people working in the same repo. I can't remember a single time this has caused a bug.
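A minimal sketch of the idiom discussed in this thread: a callback plus a void* context pointer standing in for a closure's captured variables (hypothetical names):

#include <stdio.h>

/* The "closure": a function pointer plus an opaque context pointer. */
typedef void (*visit_fn)(int value, void *user_data);

static void for_each(const int *items, int count, visit_fn fn, void *user_data)
{
    for (int i = 0; i < count; i++)
        fn(items[i], user_data);
}

/* The "captured variable" lives in a struct the caller owns. */
struct sum_ctx { long total; };

static void add_to_sum(int value, void *user_data)
{
    struct sum_ctx *ctx = user_data;   /* recover the context from void* */
    ctx->total += value;
}

int main(void)
{
    int nums[] = { 1, 2, 3, 4 };
    struct sum_ctx ctx = { 0 };
    for_each(nums, 4, add_to_sum, &ctx);
    printf("%ld\n", ctx.total);        /* prints 10 */
    return 0;
}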
Hard disagree. I wrote secure C for over a decade, and triple-checking everything was just a constant pain, and my functions to prevent array overflow were a little ugly, not to mention non-portable. It's so much better not to have to think much about security with Ada. I thought Ada would be harder to work with due to its reputation, but it is so much nicer once you get to grips with it. I couldn't be happier with Ada.
The author is effectively advocating to continue funding the building of tightropes instead of bridges because some people can cross the pit of corpses.
The sniper argument is a bit of an apples-to-oranges comparison; a sniper isn't just a regular grunt but better. Sniper is a role. It requires much, much more than just being an accurate shot. Similarly, a C programmer under modern security constraints is much, much more than someone who just knows how to write C. To write good and secure C you basically need to know all the same nuances, footguns, quirks and patterns as you'd need with C++ or Rust. You also need to verify you're not accidentally skill-issuing yourself. Therefore you need to set up static analyzers, add appropriate compiler and linker flags and so on. You can set up your build to be strict even in C. But it's not a given that all C projects do this. In my mind the main selling point of Rust is that it defaults to skill issues being a compiler error.
@@blarghblargh That is exactly the point. I'm definitely a "non-sniper". Most of all software ever written, and ever going to be written, is by "non-snipers". Unlike, for example, civil engineering, software engineering has no way of preventing "non-snipers" from doing "sniper" work. The goal therefore is to somehow make sure the code that gets written meets at least a certain level of quality. I do agree Rust does not solve all issues. However, if a language by default turns memory errors into compiler errors, that is still a massive win for "non-snipers". Also, how many C coders or C projects do you know of which actually use -Werror -Wall -Wextra -pedantic? In my experience, pretty much none. Most C and C++ coders in my experience will just fight you if you enable them, either because of dependencies, because it's too much work to fix, or because "the compiler is wrong".
You could consider driving a car without a safety belt a skill issue too, in the sense that a "skilled driver" wouldn't need one, but we usually wear one for obvious reasons. Why is it that the kind of people who pontificate "skill issue" are the ones that have to apologize later? They are also the ones that nearly cry and become vindictive when a mistake in their code is found.
A better analogy is a $5-8k car that's a bare metal box but cheap and light, versus a $50-80k+ car that has every feature, etc. Sometimes you just want something simple that gets the job done.
The author didn't even mention that typeof is a GNU C extension, or that the pointer cast violates strict aliasing without -fno-strict-aliasing (sketched below after this thread). These are not minor details if you're actually writing portable C, and the latter is the primary culprit behind amateurs' "the optimizer broke my code" accusations.
He doesn't need to, it's from the Linux kernel, ergo gcc extensions and -fno-strict-aliasing. Also, what compiler would you even remotely want to port to, MSVC?
@@Comeyd clang supports all, or nearly all, gcc extensions, and is actually in front in some respects (clang had a guaranteed tail-call optimisation function attribute first, for example). This has nowt to do with libc. The only valid point in 2024, now that even icc is dead, is proprietary compilers for embedded systems like Keil, but you're going to have much bigger fish to fry when porting to that. Rust is not an alternative to C, it's an alternative to C++ for general purpose development. C's use case in the modern era is essentially as a portable assembler that gives you semi-competent insn scheduling for free most of the time. It is perfectly reasonable to use it as such. Being able to compile your code with exotic legacy implementations like MSVC is not a requirement, and if it is one, that's a strong indication you're using the wrong tool for the job.
Astounding how the article used so many graphs so poorly, especially considering it was all in service of talking about a "skill issue." Also, isn't the whole dynamic he is describing part of the issue that the White House paper was describing? From what I understood, they want to move towards safety, not fully eliminate everything that may be unsafe. Specifically, they are suggesting moving towards the model his analogy with riflemen and snipers depicts. Wouldn't an extension of the analogy be to model the current situation as having something like 4:6 riflemen to snipers? Is the author just yelling into the wind about how people who don't know things say oversimplified and nonsensical garbage but smart people are smart, while shaking low-quality graphs at a cloud? The whole paper feels insanely lacking in self-awareness and high in hypocrisy while being very pretentious from beginning to end.
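On the strict-aliasing point raised above, a minimal sketch (hypothetical names) of the kind of pointer cast that can break under strict aliasing, and the portable memcpy alternative:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reading a float through a uint32_t* breaks the effective-type rules:
 * with strict aliasing enabled the optimizer may assume the two pointers
 * never alias, reorder or elide accesses, and "break" the code. */
static uint32_t float_bits_bad(float f)
{
    return *(uint32_t *)&f;   /* undefined behavior under strict aliasing */
}

/* memcpy is the portable way to type-pun (a union also works in C). */
static uint32_t float_bits_ok(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("%08x %08x\n", (unsigned)float_bits_bad(1.0f),
                          (unsigned)float_bits_ok(1.0f));
    return 0;
}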
That "inverted" linked-list pattern in the second example is what would generally be called an "intrusive container", which is very common in C and also seen in C++, for performance reasons (reduce allocations and improve cache locality). The basic idea is that the library provides you with "things" to add to your classes/structs (in that example, the llist_node member) so that it can organize them into a container / data-structure directly. It's common for linked lists, trees, hash tables, etc. In C++, it's less common due to templates not having the overhead that the "void*" solutions have in C. In C++, you'd see that more often for multi-indexing, i.e., the same "nodes" are indexed multiple ways (e.g., a tree with nodes that can be "moved" to a free-list). Judging by the experienced C programmers I know, intrusive containers seem to be the default way to write any collection of items beyond a simple array.
@@msclrhd Not quite. With make_shared, the object and the ref count can be allocated together in the same block of memory, saving one indirection. But it's not technically an intrusive pattern. An example of the intrusive pattern is boost::intrusive_ptr, where you put the reference count in your class, making it usable by that smart pointer. But boost::intrusive_ptr is pretty much useless now. Like I said, with C++ templates, it's pretty rare that you really need an intrusive pattern, because you can most often just store the "T" object by value within whatever node you have and get the same performance, which is what std::make_shared does.
In C++ it is less common for stupid reasons. Code managing links in link-based data structures like lists or red-black trees is type-agnostic. You shouldn't have a different type-annotated instantiation of it for each "template instantiation" when the outputted assembly is the same. But the subsequent casting (container_of in the Linux kernel) to the right container type should be done inline and in a contained fashion.
He proved that C is very flexible: you can write really bad code and you can write usable, relatively safe code. What he didn't show is what you achieve with this approach. Is the code faster than the Rust implementation? Does it use less memory? Could be - I don't know Rust... What I don't like about the "ace code" is the fact that someone must maintain it, and (as we see in the article) aces don't write comments...
You almost said Donning-Cougar effect at 10:38, which I found quite funny. I wonder what that would be - you think of yourself to be much hotter and younger-looking than you actually are?
That's not what a cougar is, so this isn't really landing. Cougars are mature women who play the field of young men, generally 40-50. The pejorative for an unattractive cougar is "cradle robber" but objective beauty is not real etc. and the term doesn't generally apply to functional narcissism, but rather the subtext described above. Almost clever, but perhaps too surface
I used to study philosophy and I came to believe the following: general skepticism about truth, morality and anything else that matters is very defensible, but if somebody needs to bring it up to defend their point, they can't defend their point. So yeah, I don't know what I don't know and it's possible that there's good reasons to use C. But anything is possible and until somebody can defend C in some specific way, I'm not impressed by these completely unspecific pontifications.
But if you studied statistics you would know that "Graph! Sniper rifle! Another graph! Most people who say they know DK don't know DK! You thought it was DK! Gottem! It was Mount Stupid, stupid, haha, ha, look at me, I'm so smart and we haven't even started talking about C" is something to be impressed by! Probably.
It's not hard to "give good reasons for C". (1) Where there is a need for portability, supposed replacements with complex compilers or toolchains (e.g. Rust) don't work; C can be learned in about 20 pages (not a joke), it's that simple. That also means implementing a compiler on a new machine is always reasonable and quick. (2) Where portability is needed, a language *must* have "undefined" behaviors where defining them would make the language machine-specific. (3) Where there is a need to solve difficult problems in unknown spaces: you aren't going to solve an ambiguous world-class problem in Rust if you're fighting to get Rust to work or to let you compile; by definition, to explore a space you need LOW friction from the tool. Most software projects fail, and friction geometrically (at least) increases the likelihood of that probable failure. (4) Where a need for raw control exists, from portability to the need to implement your own structures as close to the metal as possible. That's before getting into the fact that it's an API for machines now, not really a language the way you think, and into psychological considerations where at times there is a need for COMPACT expression and value in the capability to override anything. That's a tremendous set of considerations where you begin to think through wanting to solve something with software and only C will really do, not even Zig (so bounds checking etc. is great until you need that flexibility!). From writing another language to having assurances that the language and its tools don't modify names behind the scenes... I recently wanted to check out Nim for a project, then discovered in horror that besides choices like case insensitivity (aka failing to deal with the world as it is), it then seeks to work around such bad choices by modifying names under the covers!!!!! 🫠 Not acceptable when you need assurances and control. A BIIIIIG reason for C almost never stated is this: simple compilers and simple languages enable you to write sophisticated tooling to aid your efforts. C devs that did the prior reading tend to be able to write their own **** analysis tools in ways nobody else can. Many of the supposed shortcomings of C are there for valid reasons the authors had when designing C & Unix: I know because they spell many of them out in the early books they wrote about both. Many supposed faults in C are things people today think were missing "because it's old" that were already present in older languages (PL/I, Pascal, Algol, take your pick), and the C authors EXPLAINED WHY THESE HAD TO BE LEFT OUT OF A LANG LIKE C!!!! Your problem isn't that you've "never heard" this; it's that you apparently never read into the history behind these things.
I did know about the portability of C; I've also heard its simplicity lauded, and its consistency over time. I don't know why the phrase "never heard" is in quotation marks. I didn't say that. I was responding to the article specifically and said the arguments were weak. Which they are, even if their conclusion happens to be correct. I don't care to take sides between Rust and C. I don't know enough about either language.
Tell you one thing, after 20 years of thinking I was learning C and C++ in unison, when I finally felt capable as a C++ dev I realised I didn't know C for sh*t. I love how we compare Rust to C when it is far more comparable to C++. There are so many abstractions not found in C that are found in C++. It's just cool to hate C++, even for C devs. While it's cool to say you like C, even though a majority who say that cannot write it at all. Similar to Rust, in that way. Cool to say you use it, whether you actually can or not.
I cannot understand how C, which historically is the reference language that nearly everything else is defined against or must be written to interact with... a language that can be learned in about 20 (or maybe 30) pages from the K&R book (the language, not stdlib etc.), is something so many "professionals" supposedly cannot write even basic programs in. It's literally about 20-30 pages, with examples, reasoning, and explanations of the examples, to be able to write some seriously useful C programs. But supposedly nobody can use it?
@@infinitelink it's because the language really doesn't give you enough abstractions, so you resort to all sorts of ceremony and boilerplate. One example is the 'container_of' macro mentioned in the article; all it does is, you give it a pointer to some struct member and it gives you back a pointer to the whole struct. Reading and understanding the macro implementation is quite opaque for a beginner. Zig, in contrast, just gives you a built-in function to do the same thing called @fieldParentPtr. The way C is written/read today isn't really comparable to what is in that book, because a lot is going to be convention and some of it is quite obscure.
I like the version of C++ that's super bare bones like C, and is super simple yk. I hate memorizing stuff. I like to write my C++ code how I write my Roblox code, since I learnt to code on Roblox lmfao. Abstractions are really dumb sometimes cus they just make certain stuff more complicated than it ever should be. I shouldn't need to put time into learning how to do the same thing but in a slightly different way. Everything in Roblox is so simple and just works. I literally model all my irl code to work like how it would on Roblox lmfao.
Those "top 10%" C programmers got that way by writing a lot of code along the way. Are we supposed to keep letting developers write bad and unsafe code so 10% of them can become amazing at an unsafe language? And even if you write really amazing code making use of all sorts of obscure efficiency hacks... who is supposed to maintain it? Will there always be someone around with your skill level? This guy seems to be missing the big picture.
"Are we supposed to keep letting developers write bad and unsafe code so 10% of them can become amazing at an unsafe language?" There is no WE. Unless you are the boss and not a developer yourself and looking to UA-cam commenters to answer this question. "Will there always be someone around with your skill level?" Sure; a few thousand miles away perhaps. I come from an era of mainframe assembly language. How "unsafe" is THAT? But when it works, you KNOW it works, no hidden library bugs.
What portion of those top 10% are going to accept modest compensation for participating in a government contract awarded to the lowest bidder with the smallest project budget?
Somehow this whole "ace C developers write code like this" thing convinced me that C should not be used for certain types of applications (medical etc.). It just looks error-prone even when you somewhat know what you are doing. Or think that you know what you are doing to that extent.
@@computernerd8157 C is not objectively fast, you can write Rust code with identical or better performance to perfectly optimized C code, so this argument, although valid in some specific contexts, barely applies. Taken to its logical conclusion, this argument on its own pretty much just leads to hand written assembly being better than C.
12:50 I disagree strongly on this one Prime. The thing is they don't think they are bad, they think they are good, better than people worse than them, but they don't think they are as excellent as they are. That's like a 10/10 saying "I am a 9/10". They still think they are better than 90% of people.
I don't consider myself a C expert, but I learnt nothing new today. I also prefer the explicit over the implicit. if (bla == NULL) is much more understandable than if (!bla). Though I do tend to use ! for booleans. Yes I know they're effectively the same as an int, sometimes a byte.
Yeah, flipping between languages a lot, I find that this is a much safer approach, as which values count as falsy is too hard to remember across many languages.
The general principle should be: "use a safe language unless you need to use an unsafe one". In rust we have this idea implemented in the form of unsafe {}.
More importantly, in Rust only small, specified parts are unsafe, as opposed to the whole thing. Which is easier to audit: twenty lines, or ten thousand?
@@greg77389 Because in general, it is easier to audit a small block of code and prove its soundness than to have to audit multiple files. For example, every time you make an unsafe block in Rust, you will, conventionally, have to provide a reason why the unsafe block won't cause UB (like writing down that the pointer passed to this function is confirmed to always point to a valid address).
18:20 - 26:30 Funny how the author is nit-picking (ultimately) meaningless syntax, calling the person who wrote it a noob, but never even touches on the real issues here (memory unsafety) that would actually cause problems in the wild. (strcpy vs strncpy)
The goal of the code snippet was to show how fundamentally different an ace programmer's code would be from a base programmer's, but in an article defending C against its unsafety, that is really funny.
Also funny is the proposed "solution" of using a static variable. If do_stuff uses john as a scratch buffer and sets `age=-25` in the process, it's not an issue in the "incorrect" example. But with static... In Rust, john would be const to begin with and ownership would be tracked.
@@leonardschungel2689 The thing that comes to mind is "know your audience". An "ace" (cringe) programmer may be able to write and rapidly understand the code, but if he's the lead of a team of 4 or 5 who are mid or worse, it actually shows that he's not, in fact, an ace. Fully agree with Prime on the point of making what you are writing deliberate and not being too cute... even if future you is smart enough to understand your tricks, are future other members of the team? And if they touch that area of code, are they likely to cause issues? If yes, then the developer is just enjoying their own farts.
The actual problem is that choosing to use a "safe" C codebase requires trust that the one(s) who wrote it were these so-called "aces". It's not a question of whether it *can* be done safely. Rather it is a question of whether we can have *confidence* that it was done safely, and since there's no objective way to identify these so-called "aces", it is literally best to assume none exist. The problem with the sniper analogy is that we can actually define operational parameters to be used for training and evaluation purposes that can designate someone as proficient enough with the sniper rifle to qualify for a mission requiring its use. Even then, said sniper doesn't just de facto go on all missions with his non-sniper buddies toting his trusty sniper rifle... either he will be outfitted with another weapon suited to the task(s) his squad has been assigned (i.e. using basic bitch "safe" languages right alongside all the other plebes), or he will otherwise be deemed too valuable to waste on missions where his expertise could go to waste (i.e. I hope you've budgeted for the financial feast/famine cycles inherent in highly specialized consultancy... oh, and that you can manage to sell yourself as a so-called "ace" without also selling yourself as a total dickwad that nobody wants to hire). In short, the only C that need be written is that which is absolutely necessary before handing it off to a "safe" language... which is hopefully 0.
The aces are likely going to demand more compensation for their skills than can be budgeted for in a competitive government contract for a software project. Best you can expect is a significant portion of middle skilled professionals involved in the projects. Special unicorn developers might be involved but you can’t really count on it or expect it.
I'll never understand this "this code is bad!!! it has macros!!!", like, this is the textbook definition of good C macro usage, it's literally text replacement, it's not even conditional. It's syntactic sugar, but for some reason people don't like it? There are mistakes (gtk source code), then pasters (why?!!?) and finally totally ok macros like this.
The issue with C is not that it's a sniper rifle, but the fact that it's a rifle constantly aiming for your foot. And the "mastery" of the "aces" mostly consists of dodging those damn bullets while jumping and arching in a way that makes them fly in a general direction of the enemy...
And, to elaborate, the extra context is there because function pointers do not work as closures, so they would have to rely on global objects for any additional context if it wasn't passed in. And you obviously need that context to do anything other than simple in-place mutation.
I work in manufacturing, not tech, but we have a concept from Japan called "Poka Yoke", which is to make it as difficult as possible to make mistakes. Yes, we shouldn't need to prevent mistakes, but while we're in dreamland I want a pony. In real life, people make mistakes. And taking measures to prevent mistakes from being possible is worthwhile. That's what these other languages have, Poka Yoke for memory safety. I absolutely believe a grandmaster at C can write insanely secure code, but most people are not grandmasters.
99% of " elite C real men programers" are actually average skilled coders with a completely altered sense of reality, the huge number of memory related bugs are a proof of this. There is always better to rely on safety by design rather than safety by skill ¿Why would I want my system (specially a complex system) to rely solely on, let's say Harold, that super genius guy that can manage flawlessly pointers like a real Cha? ¿What happens when he is no more, when he quits or changes the job and I could not find a new guy with the same skill level thus I can't enhance or maintain Harold's code safety anymore? Fuck elite programmers that are mostly worried about proving they have the biggest coding cock.
14:45 I’m a professional C++ dev and I wouldn’t touch C. I know the C++ code I write could not be easily translated to C. While C is almost a subset of C++, actually using this subset is considered bad style, “we have the extra stuff for a reason”. The best thing about C, IMO, is the C calling convention, allowing for language interop (e.g. Java and C++ via JNI) as both languages talk C.
Imagine the effort it took to write this article that just proves the point on C ironically. Wouldn't everyone love to be an elite ninja sniper 1%'er programmer? Like many things in life, you aren't one unless other 1%ers call you out as one of them. It isn't something you get to label yourself.
He's a giga chad alpha wolf mega sniper who thinks "only bad engineers make bugs" and they cope with "C's just as safe as Rust, you just need to be good at it. It's easy, look, let me just run valgrind, asan, ubsan, clang-tidy, a formatter, and my tests written in a third-party test suite." And then they call you back in 30 min because their make script was broken.
hey, if he's so good at C, then just use unsafe Rust. But he won't, because doing that would make obvious how many unnecessary risks you are taking to achieve so little.
The author is fundamentally right; the only problem is the "average doesn't exist" point. They do exist, because there aren't only extremes. There are people that think they are good in C and are bad. There are people that think they're good and they're mediocre. And there are people that really are good. Commits != good commits or good code.
My brother, you couldn't find a better C programmer than Linus back in the day, who still has the most code contributed to the kernel. Now please go and check how many CVEs have come from code written back in ye olden days. It's pretty obvious that as the surface area increases, it's pretty much inevitable that vulns will come. This isn't a skill issue, lest you go ahead and tell me Linus is incompetent.
No, no, no, but THEORETICALLY a good enough programmer can avoid any and all C bugs! Reminds me of how in some video games, players will go as far as to call professional players incompetent because they can't make something that's theoretically unbalanced work in practice. Looking at you, League community.
It is very much a skill issue. Back in 2003/2004 I spent 3 months on contract to write a Linux driver for a company, and delivered a fully functional driver with 0 defects, and wrote a report on why their program performed improperly when compiled with specific versions of gcc, because there was a C++ template processing bug which elided some nested templates. Of course I read the assembly output to identify the problem and do the comparison. There is a huge difference between someone with 20-40 years of experience with a language like C, working on a wide range of projects and processor architectures, and your average programmer. Just like there's a huge difference between your top surgeon and a boy scout with a first aid badge.
0 *known* defects. A dedicated professional examining the code or the executable with modern tooling, concolic analysis, fuzzing, etc could very likely find an exploitable vulnerability in most reasonably sized software, particularly from that era because of a lack of mitigations and particularly something like a driver.
It doesn't matter how good your code was; it's not because there are professional racers that we should get rid of speed limits. The fact of the matter is that in the actual software world, you will rarely if ever write something alone. That was the point of OOP, to get mediocre interns to not screw up your well-made code. It doesn't matter if you're in the top 1% or .1% or whatever, you wrote a ton of code to get there, and that code likely had bugs. And everyone needs to go through this to get good. And the industry needs devs now, not in 50 years. And devs need jobs now, not after 30 years of studying, experimenting and tinkering.
C has been called the ultimate computer virus (cf Worse is Better). Simple enough for anyone to pick up the basics, you can quickly write something that mostly works, and it can run just about anywhere (and runs just about everywhere). The lack of safety features is itself a feature: you can go full cowboy mode and ship your code fast, use the initial release as a giant beta test, then patch whatever problems arise. The C vs Rust debate is retreading a lot of ground from 40 years ago when Ada was the new language that was pushed for critical software where safety was paramount. Still in use, but failed to gain wider traction.
It's really interesting how only the Linux kernel is referenced. The code in the example is absolutely forbidden in MISRA and anything close to functional safety.
When doing something for hours at a time, are you fully attentive constantly? No, you do things subconsciously, you inevitably become complacent, you make stupid mistakes. This is a well known fact in most industries, why are so many people here ignorant to it?
Yeah, I don't know anyone who doesn't write silly code from time to time. Even the best engineers I know make bugs. And you know what? They usually laugh about it; it happens. And the best thing is, they're also humble enough to know they'll screw up at some point. They put systems in place: unit tests, running the program themselves, canary releases, ... -- dealing with (temporary) skill issues of humans is just natural. In (commercial) aviation, there's two pilots for a reason. Every pilot in the cockpit of a 737 or A320 can take off and land the plane on their own. They can make all the fuel calculations, and handle most emergencies. So, why have the 2nd person? To catch skill issues. They have a pilot flying, and a pilot monitoring. They have regular checks, and checklists. They have lots of maneuvers they'd like to do, that would be more efficient, but that they don't do -- because skill issues. But when it comes to writing code professionally, especially critical kernel-level code, suddenly all the humans are perfect. And there's a Bob with the most perfect skills, that never makes a mistake. Sure, Bob, sure. You never make a mistake.
21:00 The example you give for !ptr for null checks in C is both dumb and in the wrong language. No one does !arr.length, but using !ptr is cleaner and less verbose than ptr == NULL. We're checking the validity of the object, not a property of it, so it's not analogous to !arr.length. L take.
Whenever I see articles like this, I just imagine how the article would be written for another language. "JavaScript skill issue; how Typescript is wrong".
What the author is correctly arguing against is the practical application of "equity", where instead of allowing people to succeed at what they are good at, broad strokes are used to try to force everyone to do the same thing. Rather than acknowledging the fact that there are differences in skill level, the tool is blamed. His use of rifles as an analogy is apt, as it is common for the same types of people who would refuse to acknowledge differences in skill to also blame a tool for what someone does with it. Edit: I had a factory job once, where we operated machines that would output a large roll of printed vinyl, and because someone had difficulty picking it up, rather than the job making it a special case that this person could receive help (in order not to make this person feel bad), they made it a rule that everyone else would now also need to pull a second person away from what they were doing and get assistance lifting the roll, whether or not they actually needed the help. This slowed down a lot of people who were faster and more productive otherwise.
I don't agree. The author didn't really make many solid points imo, and the White House didn't say "use Rust or else"; they simply recommended that people prioritize the use of memory safe languages, and I agree with this perspective for a number of reasons. First of all, this doesn't mean unsafe code is somehow disallowed; of course C code is going to be written still, and of course it is going to be useful, just like every other unsafe language. Languages don't just go away because someone said something, because programming languages all have different characteristics, and different pros and cons. It simply doesn't mean what the author suggests that it means, and much of their argument revolves around the idea that the US government is saying that memory safe languages replace memory unsafety. No, rather they are asking for memory safe code to be the priority, and this prioritization can be met with the use of C code too. The author's implication that C is objectively better than a memory safe language like Rust is also simply incorrect. Memory safe languages (e.g. Rust) can do the exact same things that C can with almost identical or better performance. You can close the gap by writing unsafe Rust code, but it is up to you, the programmer, to ensure that the unsafe code you write is safe, and by using a language like Rust you are able to write both memory safe and memory unsafe code simultaneously, but you compartmentalize your unsafe code and keep it as limited as you can. Compartmentalizing that unsafe code and primarily relying on safe code except in performance sensitive cases will allow you to be a more effective programmer, because you are getting an immense number of guarantees and static analysis benefits that you do not get in memory unsafe languages. Not just this, but Rust can make many, many optimizations around the memory safety guarantees it makes while maintaining convenience to you, the programmer. In other words, I think the author's own arguments entirely work against them and suggest the exact opposite of what they were trying to argue. If you require an unsafe language like C to write fast code, you have a skill issue.
There are meaningful differences in skill level within a single team; those teams are likely going to use a particular language for systems programming rather than whatever every individual thinks they're the best at; memory safety issues are responsible for a large majority of exploited bugs found in the wild; so defaulting to the use of (more) memory-safe languages is sensible, particularly for government vendors. The article is self-defeating and seems to only exist to stroke the ego of the author and other pedants who couldn't even be bothered to read the original ONCD report.
@@Hexcede I agree with your general sentiment that most people are better off playing it safe. There are a lot of people out there who would probably cause accidents if they didn't have lane-assist and side and rear-view cameras and were made to drive manual transmission vehicles. That doesn't mean that a driver who knows how to operate and maintain an older stick-shift vehicle needs to drive one in order to be a better driver. They generally just are a better driver and don't require the same "recommended" safety features in order to accomplish the task of getting from point A to point B. There is no shame in admitting you don't know how to drive a manual. That's ok.
@@aazendude Yes, but that's what the first part of my message was talking about. The point was never "Rust = better programmer" or something. The analogy also isn't perfect, because many of the benefits of Rust result in practical speed-ups in the writing of code, and reductions in bugs and debugging. The difference is that something like lane-assist doesn't cut out any real burden of driving. Rust, just as an example, makes mathematical guarantees that allow you to eliminate entire classes and subsets of bugs almost for free, and this isn't just a crutch, it's a tool.
I think for most Rust devs it isn't about the safety mainly, not even mainly about being blazing fast, fearless concurrency, etc. It's about moving things into the editor, trading debugging for writing and testing for compiling. Of course it's a tradeoff, but it's one many welcome.
Exactly! It might take slightly longer to write Rust code… but that is 100% worth it once it is compiled because it… *just works*. Rust gives you such a level of confidence in the reliability of your software you can sleep well at night, because you aren’t going to get 3AM calls to debug something *right now*
I agree. Rust brings more than just rustc to the table. The tooling and the ecosystem that grew out of that is definitely a huge part of why it got picked up in a relatively short amount of time. Though it still allows you to go outside of that when needed; it doesn't _force_ you a specific way, making it very general purpose.
In this post and many others, the author tries to put himself on the same level as Linus Torvalds, but based on examples like linked lists and not on any measurable success in programming. Even when he's technically right, he still makes sure to be a dick about it. He was banned from Git and made a fork git-fc (named after himself) that is already abandoned. He's a goldmine of intelligent but unhinged writing.
Honestly, he doesn't seem nearly as smart as he seems to think he is. Yeah, there are skill issues, but there are also a lot of easy mistakes made even by very good developers that are less probable with even C++, let alone languages with saner defaults. C has some restrictions that bring down the developer experience, and some allowances that improve it but also introduce safety issues.
I'm with you old man. Nothing wrong with assembler. In fact I think all programmers should have at least some understanding and proficiency in assembler. "Unsafe"? No problem, we are aces right!. In fact I would claim that it is easier to write safe, UB free code in assembler than C++.
In my opinion, the move towards statically enforced memory safety by default is the right one. But Rust is messy and hasn't gotten a lot of things right yet, if it ever will. We may still see it "beaten" by another language which could well be written in C or another unsafe language. You can well have well-designed safe systems on top of well contained unsafe components. Electricity's plenty unsafe, and yet all these "safe" languages ultimately use it. Similar to that the mere use of an unsafe language shouldn't by itself be a cause for concern, provided it is well contained in small, proved components.
That article rubbed me the wrong way; I feel like the entire premise is wrong. These recommendations aren't based on feelings, but on 30 years of research showing even the smartest people suck at writing safe C. And maybe it's not just a god-damn skill issue. Except for Felipec of course, *they* are clearly smarter than everyone else.
My thoughts exactly. One doesn't need to go through the mental gymnastics of evaluating who's the real C programmer. Results speak for themselves and they're damning.
I think my other complaint about this 1 in 10 'Ace' C programmer argument is that it doesn't consider the fact that these 'Ace' programmers weren't born out of the womb 'Ace' C programmers. How many thousands or millions of lines of code did they have to write before they got there? How many memory vulnerabilities did they write on their path to becoming an 'Ace' programmer, and how much damage did that do to the software ecosystem at large? At the end of the day, learning to write memory safe C code is optional, whereas learning to write memory safe Rust code is MANDATORY (provided you don't take the easy way out and wrap your entire code base in unsafe blocks, which is really easy to catch in code reviews). Also, the compiler starts teaching you memory safe habits from day 1, so if/when you do start dipping your toes into unsafe code, you've already built up some good habits beforehand, and the word 'unsafe' itself signals to the programmer that they should probably do some additional reading to protect themselves before trying their hand at it.
Most user-facing C programs use a bunch of global variables defined in each translation unit. This and the lack of namespaces make building large executables difficult. Rather, it encourages the creation of many small utilities that are integrated by a scripting language (Shell, Python, etc.). In C++, these global variables and small programs can be made into their own thing by wrapping the lines in a struct or class definition.
app_window_new(). Globals should be used sparingly. If you need them, hide them behind a function call (Singleton). I find the simplicity of C very appealing. I can follow calls down the stack and know what is happening. That does not happen with stuff like C++ or Rust. And don't get me started on language-specific "package managers"... That should be illegal. It's abhorrent.
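A minimal sketch of hiding a global behind an accessor in C (hypothetical names, borrowing the app_window naming above; not code from any real project):

#include <stddef.h>

struct app_window { int width, height; };

/* Lazily-initialized singleton hidden behind a function call;
 * the global itself stays private to this translation unit. */
struct app_window *app_window_instance(void)
{
    static struct app_window instance;
    static int initialized;
    if (!initialized) {
        instance.width = 800;
        instance.height = 600;
        initialized = 1;
    }
    return &instance;
}

int main(void)
{
    struct app_window *w = app_window_instance();
    return w->width == 800 ? 0 : 1;
}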
@@attilatorok5767 because if I'm writing a function in modulename I can type just funcname instead of the whole thing. Lots of problems with how it was implemented in C++, but it's a good idea overall.
The White House didn't call for that; the White House Office of the National Cyber Director (ONCD) recommends it. This has been recognized by cyber security researchers for years.
That article was rough. A whole lot of text for not much substance. Has there been a serious argument that it is impossible to write safe C? The argument as I've understood it has always been that writing safe C is hard and error prone and the author even admits as much. The defense of C really needs to come from a place of developer efficiency or patterns that aren't possible in rust rather than there being an archaic set of invocations known only to the old ones that achieves parity. Running with scissors is possible, but that still doesn't make it a good idea.
When I worked at Lockheed I coded in C and Ada. No way that codebase will ever be migrated to Rust. The code and the system using it is way too mission critical to change.
@@antifa_communist Here's the thing: it works and works great. I've seen my software work in multiple wars. If you ever worked in government or worked on any large government contract you wouldn't even ask this question. I am talking about a multi-billion dollar military platform that would take 100s of millions of dollars to re-code in Rust, plus test, deploy, and then go through all the other government and security requirements. Instead of spending an obscene amount of money to recode in Rust, it would be money better spent on building out new tactical capabilities and features. Plus this would never get approved by the Generals, just from a cost and budgeting perspective.
Lockheed is willing to spend ridiculous amounts of resources to make something slightly better and the government is willing to pay for that as a customer.
So what are the odds that the people not good enough to be trusted with C are not also the people who would immediately just wrap all their code in unsafe if you hand them rust?
Not that much: the borrow checker is not disabled in unsafe code, so they'd have to go out of their way to cast all references to pointers (the borrow checker doesn't care about those) and back (and if they make 2 mut references during their pointer play, it is UB).
@@AM-yk5yd There are ways of abusing the type system to create immutable 'static borrows from safe code using the unsoundness of the function pointer type. But if you can do that, you are good enough to know what the f you're doing, and not to do it.
3:38 what is bro talking about? Local variables are not a feature of the CPU, they're a feature of the language. RBP (base pointer register) may point to where local variables begin but when writing assembly or in some other language that doesn't have the concept of local variables it could be used for something else. The same goes for the first few arguments being passed in registers. It's merely a convention.
The article explains skill issues. Prime has skill issues understanding the article. I have skill issues understanding Prime's explanation of the article. I'm lost...
Defer the free to /when/? If it's just referring to the end of the program, then the free is actually unnecessary, since all memory is reclaimed by the OS when a process terminates.
@@Sindrijo True, but that should be obvious. Linked lists are almost never the correct answer, but every once in a while they are what you need. I think the best approach is to always go with the simplest thing (arrays) which is also usually good for performance.
I think the argument is, "It is possible to write safe code in C for a minority of developers, therefore C should not be abandoned for specific use cases in which it is advantageous." It makes me wonder, though: In order to determine which developers for whom it is safe to write C, you'd have to have somebody for whom C is not a skill issue making hiring decisions, and you may have no way of making that initial hire reliably. Simultaneously, this talent pool would initially be very narrow and therefore expensive, and I'm not sure that most organizations would find it beneficial to maintain these use cases as a result of both this as well as the hiring problem, especially since they also take on the risk of having those use cases, hiring incorrectly, and then suffering the fallout of unsafe C code in whatever their product is.
I support rust adoption because it makes all the bad-at-coding mentally unwell people who aren't even sure whether they're man or woman migrate away from C.
@@antifa_communist How long have *you* been programming, building up a massive knowledge and code base, along with methodologies and practices that prevent unstable code, eliminating the need to run to new languages to solve problems brought on due to lack of mindfulness and discipline, huh?
I think the L-shaped graph at 8:40 is also caused by the fact that adding more becomes easier after each commit. You are familiar with the project; you know what you did last time and what you didn't do that you could have done. So it is easier for you to keep committing than it is for someone who specifically targeted some bug in the code and fixed it.
Do it the Pythonic way: snake_case for variables and function names, CapitalizedWords for class names: my_variable, my_function, MyClass. For me this is peak beauty, but why? Well, because it is what I am used to seeing.
13:40-13:55 hit me like a ton of bricks. Imposter syndrome is a result of thinking about yourself too much rather than focusing on the problems you are solving.
34:17 Goto for memory cleanup is cool. I'm still a noob but that's basically like Odin's defer, isn't it? It helps to keep things in scope and clean up once it's done or if/when the program closes. (I also dabble in Odin to learn more C and vice versa)
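For anyone who hasn't seen the pattern being described, here is a minimal sketch of the goto-cleanup idiom (not the article's code; the file and buffer names are made up). Every early exit funnels through labels that release whatever was successfully acquired, in reverse order, which is roughly the hand-rolled equivalent of a defer.

```c
#include <stdio.h>
#include <stdlib.h>

int process_file(const char *path)
{
    int rc = -1;
    FILE *f = NULL;
    char *buf = NULL;
    size_t n = 0;

    f = fopen(path, "rb");
    if (!f)
        goto out;            /* nothing acquired yet, just return the error */

    buf = malloc(4096);
    if (!buf)
        goto out_close;      /* the file is open, so it must still be closed */

    n = fread(buf, 1, 4096, f);
    printf("read %zu bytes\n", n);
    rc = 0;                  /* success: fall through the cleanups below */

    free(buf);
out_close:
    fclose(f);
out:
    return rc;
}
```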
Please learn some Zig when you get the chance. It's super interesting because it does as much as possible at compile time, leaving as little as possible to run at runtime. It also lets you write super simple parsers because it enables inline for and switch (i.e., iterating over the fields of structs), among so many other useful features.
One thing people forget to mention when they look at these things is how Rust starts out hard and slow, but becomes powerful and fast fairly quickly as your skills improve. A proficient Rust programmer can do incredible things, with fewer mistakes, than an equivalently skilled C programmer, because Rust includes some very powerful patterns in its standard library.
Rewriting all the legacy C (or C++) codebases in Rust/Zig/Go/etc. is simply not practical, regardless of performance differences. Nobody would throw a ton of $ at marginal "safety" improvements. On top of that, looking at the Rust in the Linux kernel it's hardly even "safer": most of the Rust code is tagged with unsafe all over the place. I hardly see that as an improvement over C.
I did not at all interpret it as "rewrite everything in rust/zig/go/etc." Secondly taking something as low level as the Linux kernel and taking Rust code from that and treating it like average Rust code is super silly. "The Linux kernel does memory unsafe stuff and operates directly with hardware? Woaah, that's crazy"
@@TsvetanDimitrov1976 Yes and no, but this is a super specific example where unsafe code will obviously be used heavily. You are taking "this code has a lot of unsafe blocks" to mean "C is therefore just as good because most of this code is unsafe anyway," but that is a flawed conclusion, which was part of my point, and the example itself is just a bad one. Nobody said "no more C, C bad, never use C"; the focus is simply on prioritizing memory safe languages. The additional benefits of Rust's memory safety are also being ignored, because they still apply around unsafe blocks. Having a lot of unsafe blocks is still safer than being fully unsafe: inside and around those blocks you are relying on a lot of safe Rust code built on memory safe principles, so you still get a lot of safety guarantees. You can make guarantees that your unsafe code is safer than it might otherwise be, and when it is compartmentalized you can much more effectively debug, identify issues, and find where bugs may be, in addition to then using that unsafe code under its promise that it is a safe implementation. Safe Rust doesn't enclose all safe, valid programs; that is what unsafe blocks are for. But it encloses a huge majority of safe programs. And much of your code can rely on code containing unsafe blocks and maintain safety, as long as you have upheld the guarantee that the unsafe block is implemented safely. You have greatly lessened the burden of identifying problems and guaranteeing safety by explicitly marking where issues may be, and in doing so you can make stronger assertions about the safety of your program even with lots of unsafe blocks.
The government will spend it. Even a slight improvement is worth vast amounts of money when adversaries are well funded governments actively seeking and exploiting vulnerabilities every day and have successfully penetrated systems on numerous occasions. It doesn’t solve the problem but can contribute to the solution as one measure among many. The cost effectiveness is completely unrelated to business side cost benefit analysis because the consequences of failure are potentially catastrophic and horrendously expensive on the scale of Trillions of dollars in damages.
now i want the Vatican to endorse Holy C
glow in the dark
@@werren894 N
That actually made me spill tea through my nose. Well played sir.
Hardest question: "Is this n*****licious or is it divine intellect?"
and canonize Terry Davis as a saint
Problem is, 90% of C programmers probably think they're in the top 10%.
I can confirm this applies to me. I know I'm not that good, but I "feel" like I am.
If your software is boring, you're in the top 10%
@@ScibbieGames Mine is boring, not because I'm in the top 10%
@@ScibbieGames exactly. The good thing about C is you can do anything with it, the problem with C is you can do anything with it. C programmers usually want to be "creative" and there's where they turn on the fan and grab the bag of sh*t.
"Oh I'm so great in C look what I can do" yeah genius you're throwing sh*t at the fan now.
They're C programmers, hubris comes with the job description. :)
There are two types of programmers. One type knows they write memory bugs. The other thinks they don't write memory bugs.
Both of them write logic bugs
There are two types of programmers:
- One type knows they write memory bugs. The other thinks they don't write memory bugs.
- n+1.
There are two types of programmers, those with the right tooling and ..
that, combined with the fact that sauron apparently wants it - makes me thing this isnt about garbage collection., but instead some other effects of those preffered languages. this is about control of something. it always is. i am familiar with whats going on at the hardware level and i am under no illusion that this is any better through the layers up to the programmer/user... any thoughts?
this is an issue from the beginning. it starts at the chip-fab/systems-assembly.
a good programmer is so unfathomable to shit programmers
I read a joke a long time ago about how if you ask a C programmer not to run with scissors they will reply that it should be 'don't fall with scissors'; I don't fall. I think we found the guy the joke is about. I also find it funny that nothing in the entire article actually argues for the point he was trying to make.
After spending some time thinking of how I'd try to explain the problems with the argument, I actually think this sums it up much more concisely and convincingly than any kind of logical analysis.
@@devinneal438 How is your comment from an hour ago but the one written by @Zguy1337 only 32 minutes old? I am confused.
@@catfan5618 Your youtube probably refreshed the comments at some point, adding any new comments without updating the time frame of old comments
Actually, it should have been "do not fall with scissors in a way you didn't intend to" ;-)
@@czakotmiszermawi *bumps into coworker mortally wounding them* "woops"
I personally feel like Sisyphus in the Dunning-Kruger Effect "graph". The more I learn and "git gud" the more I realize how fucked everything is and that my every effort may be an exercise in futility. Climbing up the Slope of Enlightment just leads me back to the Valley of Despair.
It's because you're programming defensively. The language should have a type called number and if it overflows it goes into bignums. Gawk does this. Erlang also let's programs crash. We are stuck on shit. But it's because of speed.
@@Khwerz I'd prefer to still split it into integers and decimals, but yes. Speed is king, which is why the languages let us make those decisions.
@GreyReBl describes the mental discomfort when the horizon of "unknown unknowns" grows rapidly as technical learning exposes more knowledge surface area.
Yup, I've been programming for living for 30 years, trying to learn every day how to be a better programmer.. and year by year it's become ever more painfully obvious how little we actually know and care about making good software. It's tribal warfare about which mistakes we should repeat. I hope we get more science in the "computer science" and the industry actually starts listening to it..
pure poetry
Man, I just want to write stuff in C because its fun. I don't care if I suck.
facts and logic
Chad
No one has a problem with that. It just doesn't belong in critical infrastructure or controls.
@@DanielJoyceI would call that a matter of opinion.
It is less about the programming language used and more about best practices in design, implementation, and testing.
Although, I do think there is a fair case about not using a language designed for systems programming to write ALL of your software.
o7!!!👍
I mean, it's obvious that the author takes this all incredibly personally. It's a fact that the inherent "rails off" world C DOES lead to more bugs, regardless of how good you personally are at C. The author obviously sees himself as a lone wolf giga chad C sniper elite, but fact of the matter is:
If 9/10 people you work with are likely to fuck up the C code, maybe don't work in C unless it's a solo project.
Also this guy is being dishonest with his statisics in his quest to prove that he IS good enough to use C. "88% are below average contributors" while having obvious mega outliers. The median number of commits is three. The "average" contributor makes about 3 commits, and the 1 or 2 guys maintaining a project do a lot more work, sure. That doesn't make the rest "below average".
What an insufferable author.
@@markogalevski6088 Absolutely agreed. Comparing skill with commit count is stupid.
looking forward to retirement, when I can do 100% solo projects. hell is other developers
@@markogalevski6088By definition, about half or more of people im any given profession are below average in measurable skills, and given Pareto, the levels of contribution spread over a group are abysmal data to review.
@@blarghblarghyou're the problem
NASA’s interplanetary missions use C/C++ on the spacecraft, but not like how most people use it. It’s not only a skill issue, but a process issue and a rules issue. Code for spacecraft uses a smaller subset of language features that is more predictable, deterministic and testable.
Well NASA tries to avoid using anything but static memory altogether, which eliminates a huge chunk of the safety issues with C
@@research417 Also every method needs to take a time budget (often simplified to a counter) as one of its inputs, and abort early & cleanup if it can't finish within the time constraint. Cooperative multitasking where each task is just proven to finish in time is much more predictable than preemption
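A rough sketch of that time-budget pattern, with made-up names and an iteration counter standing in for real time; the actual flight-software conventions are certainly more involved.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned remaining;   /* simplified budget: iterations instead of wall-clock time */
} budget_t;

static bool budget_spend(budget_t *b)
{
    if (b->remaining == 0)
        return false;     /* out of budget: caller must clean up and yield */
    b->remaining--;
    return true;
}

/* Processes as many samples as the budget allows and returns how many were
   done, so the caller can resume the work on a later pass. */
static size_t filter_samples(const int *in, int *out, size_t n, budget_t *b)
{
    size_t i;
    for (i = 0; i < n; i++) {
        if (!budget_spend(b))
            break;        /* abort early; output written so far stays valid */
        out[i] = in[i] / 2;
    }
    return i;
}

int main(void)
{
    int in[10] = { 2, 4, 6, 8, 10, 12, 14, 16, 18, 20 }, out[10];
    budget_t b = { .remaining = 4 };          /* only 4 "ticks" available this pass */
    size_t done = filter_samples(in, out, 10, &b);
    return done == 4 ? 0 : 1;                 /* stopped early, state still consistent */
}
```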
Automakers also have to use a C subset for their systems. Programming C with all those restrictions and rules sounds like a MISRAble experience.
@@research417 Isn't all memory static memory?
@@InZiDes No, static memory is memory that is set aside when the program is compiled/loaded, while dynamic memory is created at runtime.
uint8_t arr[1024] = {0} vs uint8_t *array = malloc(1024) for example
A couple of things:
- The article exposes the author as not an "ace C programmer".
- The best cyclist in the world will still be faster in a car.
The weak version of the argument that the author makes is obviously true: Somewhere and for some reason, writing C will make more sense than Rust.
But the strong version is just plain wrong. C safety issues run deep, "you need a proof assistant to verify the correctness of your code, because humans are physically incapable of doing the analysis manually" kinds of deep. Not reaching 100 km/h on a bicycle just cannot be called a skill issue with a straight face. If the author had used their examples to mention anything about topics like relaxed memory ordering, CPU barriers, undefined behavior, etc., then maybe there would be something to compare and consider. Without those, I can only say that I do not know of a single "ace C programmer" who has ever claimed that language-level safety features are an unwanted or unneeded luxury for them.
PS: Yes, I'm a Linux kernel contributor.
This. One of the best comments explaining this that I have seen so far.
exactomundo
💦
I was definitely suspicious of his "ace" claim after reading some of the code. In general he jumps through a lot of illegible hoops to do seemingly unsafe stuff that other languages abstract away.
Bro really just said "I *MAKE* Arch, btw" 😂
Isn't the article entirely contradictory? I mean, it appears to be arguing that using C is dangerous, that only a minority of top-skilled devs can write truly safe code, that most developers are not good enough at it and, what's more, aren't aware of how bad they actually are, making finding the safe minority effectively impossible... and that's why the White House is wrong to recommend everyone use languages designed to be inherently safer... What?
I think the author fears that he will be forced to use/ learn something else, when he feels like he is competent enough in C to not fall into the pitfalls.
Yes, this was my takeaway too. The author basically argued against their own statement. If anything, I would argue that if you rely on an unsafe language like C to write fast code, then you are the one with the real skill issue.
revenge of the blub programmer. don't claw us back into your filthy pit, pot of crabs. we want to break free!
I mean, his point was that the White House argues ALL programs should move away from C, and he demonstrates that this isn't warranted, because there is a higher ceiling in C than in Rust, simply because it lets you do more.
It's not an outright denial of the White House's claim; he agrees most people should be using Rust, it is literally one of his first statements. His critique is very specifically that it shouldn't be said that all code bases should now abandon C.
The article makes it pretty clear which part of the White House's statement he is arguing against, so how do you misconstrue that as arguing against ALL of it? Especially when he outlines which parts he does agree with in the introduction.
The author has made so many logical errors in his article I can’t handle it
When 75% of 0day bugs used in the wild in 2023 alone are related to memory management issues despite decades of education, mitigations, formal methods, and tools like valgrind existing, suggesting that systems programming be done in a language like Rust is justified and pragmatic.
I mean you can buffer overflow or use after free in safe rust without that much resistance, so the 0 days are probably staying.
@@UnidimensionalPropheticCatgirl
No, you can't.
Things like cve-rs are neat tricks you can play with the type system, but they aren't things that are just going to randomly pop up in your code "without that much resistance".
While very interesting, they are very deliberate exploits that demonstrate hard to fix edge cases with the compiler.
The memory issues that Rust supposedly solves "because tooling" are also solved with C & C++: just use the tooling (generally a flag or two).
When people refuse to do that, we can take bets on how many "Rust" devs will soon be disabling "those annoying safety things..."
@@UnidimensionalPropheticCatgirl You're talking about temporary compiler bugs that don't even apply to normal code vs fundamentally unsafe design. Do you think corporations like Microsoft, Google etc. would go through the effort of switching to this complex language if there was no clearly measurable advantage?
@@infinitelink And another "skill issue" person. Just use the tooling, just jump through all the hoops, just do everything perfectly, because that's something people can famously be relied on to do.
So: the premise of the article is that C is not safe -> it's a skill issue -> proceed to show "elegant & smart C" vs "naive C" to illustrate it. It somehow feels like a completely off-topic answer.
There are two kinds of C developers: those who know they have shot themselves in the foot and those who have but don't know it yet.
Even the OpenBSD folks are not immune.
Find one major application written in C that doesn't have a CVE about an RCE vulnerability haha, the issue has nothing to do with skill like the author claims. I would even argue the author demonstrates their own lack of skill by showing that they feel they require memory unsafety to write good code. There are a lot of issues with many of the author's points imo, and they rely on a lot of logical fallacies.
@@Hexcede Also, that 10x programmer will have to spend so much time fixing the stuff written by the newbs that he won't get anything done at all. Rarely will you have a team of all 'aces'.
@@Hexcede seL4, probably, lol.
But then again, I don't expect people who don't want to learn Rust to learn the even more complicated Isabelle.
Zig is a better C, and even its projects have segfault issues.
Meanwhile the number of rust crashes involving safe code is basically 0.
Yeah this author didn't give a reason to not switch off of C, they just wanted to show off that they were knowledgeable about C best practices.
The real C programmer knows how cache impacts performance and so uses plain arrays instead of linked lists.
Bjarne Stroustrup (inventor of C++) argued exactly this point in order to convince people to use arrays/std::vector instead of linked lists.
It works that way at any level, not only CPU caches
Locality is the best way to make use of paging mechanisms.
This. every time i think about making a linked list i realize how in every case i can think of it's less efficient than just using an array or vector. C++ but same concept.
This is so completely stuck in peoples brains. Performance isn’t everything. Sometimes linked list fits the problem domain better than array/vec. Choose the right tool for the job and stop talking about performance. Computers are fast.
@@prestonmlangford Maybe in your domain. In mine computers could double in performance and all that would change is the resolution of my simulations.
When C was still a general purpose language, people adhered to all kinds of style guides and best practices to reduce the chances of terrible mistakes, but since we now have a whole host of other languages that fill those roles, the rules of what makes C code good have changed drastically. Nowadays performance has a much higher priority, and unconventional solutions that are somewhat harder to understand but have runtime benefits are judged much more favorably.
The reason for this should be kind of self-evident. People use C mainly in performance critical and/or space constrained scenarios where these factors are of course critical.
I personally use it for joy and its simple syntax.
Exactly right. Good structure, indentation, careful attention to matching brackets. Avoid huge functions as much as possible; don't do everything in main(). Once you get a good function, you can re-use that function in many places. Check the boundary conditions. Never ever assume that a string ends properly.
I do some tricky things with unions where a particular memory address is simultaneously a byte, an int, or a string (or other things). This will sometimes be useful in the case of receiving data over the ethernet when you don't know for sure what exactly it is. Same with a chunk of data from a database. There's no telling what exactly it is sitting there in the buffer.
@@thomasmaughan4798 but it's worth remembering that that union trick is one of those technically-UB-practically-works things
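A small illustration of the kind of union described above (the type and field names are invented). As I understand it, C compilers broadly tolerate reading a different member than the one last written, with the bytes simply reinterpreted, while the same punning is formally undefined behavior in C++, which is roughly the caveat being raised here.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef union {
    uint8_t  bytes[8];
    uint32_t words[2];
    char     text[8];
} wire_buf;   /* one chunk of storage, viewed as bytes, ints, or a string */

int main(void)
{
    wire_buf b;
    const uint8_t raw[8] = { 0x01, 0x00, 0x00, 0x00, 'H', 'I', 0, 0 };
    memcpy(b.bytes, raw, sizeof raw);          /* pretend this arrived over the wire */

    printf("first byte: %u\n", b.bytes[0]);            /* 1 */
    printf("first word: %u\n", (unsigned)b.words[0]);  /* 1 on little-endian machines */
    printf("as text:    %s\n", &b.text[4]);            /* "HI" */
    return 0;
}
```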
there's so many religions in programming world these days
Agreed
Sounds like the kind of guy who would argue endlessly in voice chat, whenever the group comp required him to play something else than a sniper.
@jazzycoder hey jazz, I love C too, but you don't need to spam a copy paste reply
@jazzycoderWhy are you spamming this everywhere? Also this statistic is WILDLY untrue.
@@comradepeter87 Maybe not true, but it paints a rough picture of what Rust means for the dev world. Just look at the cpp vs rust subreddits: one has twice the other's active users while having the same subscriber count, meaning perhaps that people enjoy talking about Rust more than coding in Rust. Proof of this is that I can find 8 job posts in the Barcelona area relating to Rust (6 of them are the same remote position copy-pasted all across Spain); C++ on the other hand...
@jazzycoder
A certain part of me finds you people endearing. Please continue to spam your pro-C comments to angry Rusties. They deserve it because they act like a cult.
Average Marksman player in Squad.
Worst part is, you would need an Automatic riflemen or maybe a LAT, but there's always some clown that hogs the marksman role for zero reason, and does nothing with it.
Was convinced of rust until the white house recommended it 😂
Ah, fellow contrarian. Or does that even count.
White house is also recommending Go. Look like we have to write stuff in assembly now
Oh man, you're like soooo edgy!
"Well, now I'm not doing it" vibes!
@@OnFireBytei will use V so i’ll be unb
segfault (core dumped)
The author is definitely the mid-guy in the bell curve meme.
nowadays, the mid guy is a js dev
@@Daniel_Zhu_a6f ahahaahaa its so funny but so sad, well played sir
@@natemaia9237 maybe if capitalists knew a thing or two about technologies they invest in, we wouldn't be in this stupid and sad js situation.
Arguing that malloc'ing individual elements is "ace" code is very bold. Not only are mallocs expensive and unnecessary, but no matter your intellect, simplicity is always better. It's much faster and simpler to allocate large chunks at startup and write simple allocators for your requirements, while enforcing zero-as-initialization to minimize required error handling.
an arena allocator?
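For readers who haven't met one, here is a minimal sketch of such an arena (bump) allocator along the lines the comment describes; all names are illustrative, and a production version would also handle growth and per-type alignment.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t *base;
    size_t   cap;
    size_t   used;
} arena;

static void arena_init(arena *a, void *backing, size_t cap)
{
    a->base = backing;
    a->cap  = cap;
    a->used = 0;
}

/* Returns zero-initialized memory, or NULL when the arena is exhausted. */
static void *arena_alloc(arena *a, size_t size)
{
    size_t aligned = (size + 15u) & ~(size_t)15u;   /* keep everything 16-byte aligned */
    if (aligned < size || a->cap - a->used < aligned)
        return NULL;
    void *p = a->base + a->used;
    a->used += aligned;
    memset(p, 0, size);                             /* zero-as-initialization */
    return p;
}

/* Freeing is O(1): throw everything away at once at a scope boundary. */
static void arena_reset(arena *a) { a->used = 0; }

int main(void)
{
    static _Alignas(16) uint8_t backing[1 << 16];   /* one up-front block, no malloc */
    arena a;
    arena_init(&a, backing, sizeof backing);

    int *xs = arena_alloc(&a, 100 * sizeof *xs);
    /* ... use xs ... */
    arena_reset(&a);                                /* everything "freed" at once */
    return xs ? 0 : 1;
}
```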
17:20 "No true C programmer" fallacy
This man's issue is that, regardless of whether "aces" exist, the process of determining who is an ace and who is not will inevitably produce a bucket-ton of garbage code.
The reason why the function takes a void pointer to "user_data" is because the function pointer is the C idiom for what would be a lambda expression or closure in another language because this is a construct that doesn't otherwise exist in C. So think variables that you inject into a lambda from an outer scope. That is exactly what this is.
Yes, void* in C is the most generic type possible; the only downside is that the compiler won't help me if I screw up (and a possible indirection for small types that could otherwise fit in a register).
@@TsvetanDimitrov1976 I'm an embedded systems programmer, using void* daily together with about 20 to 30 people working in the same repo. I can't remember a single time this has caused a bug.
@@mananasi_ananas The only and only gamer found.
Hard disagree. I wrote secure C for over a decade, and triple-checking everything was just a constant pain, and my functions to prevent array overflow were a little ugly, not to mention non-portable. It's so much better not to have to think much about security with Ada. I thought Ada would be harder to work with due to its reputation, but it is so much nicer once you get to grips with it. I couldn't be happier with Ada.
The author is effectively advocating to continue funding the building of tightropes instead of bridges because some people can cross the pit of corpses.
git gud*
Your mental gymnastics on that analogy are more insane than a tightrope over corpses
@@greg77389Sorry, I couldn't hear you over all of the screams about undefined behaviors and buffer overruns.
@@MrKlarthums
Skill issue
@@greg77389Yes, I'm aware of your skill issue in evaluating languages that weren't good 20 years ago and still aren't good now.
The sniper argument is a bit apples-to-oranges comparison, sniper isn't just a regular grunt but better. Sniper is a role. It requires much, much more than just being accurate shot. Similarly, C-programmer given modern security constraints is much, much more than just knowing how to write C. To write good and secure C you basically need to know all the same nuances, footguns, quirks and patterns as you'd need with C++ or Rust. You also need to verify you're not accidentally skill-issuing yourself. Therefore you need to setup static analyzers, add appropriate compiler and linker flags and so on. You can setup your build to be strict even in C. But it's not a given all C-projects do this. In my mind the main selling point in Rust is that it defaults to skill-issues being a compiler error.
if you think rust is going to save you from all that, then you're definitely not a sniper
@@blarghblargh Rust definitely saves you from most of it, as was already demonstrated by Google. But I guess they are not snipers either.
@blarghblargh A sniper knowing how not to shoot himself with a footgun does not make the gun any less of a footgun.
@@blarghblargh That is exactly the point. I'm definitely a "non-sniper". Most of all software ever written and ever going to be written are by "non-snipers". Unlike for example civil engineering, software engineering has no way of preventing "non-snipers" from doing "sniper" work. The goal therefore is somehow make sure the code that gets written meets at least certain level of quality. I do agree Rust does not solve all issues. However, if a language makes by default memory errors into compiler errors, that is still a massive win for "non-snipers".
Also, how many C coders or C projects you know of which actually use -Werror -Wall -Wextra -pedantic? In my experience, pretty much none. Most C and C++ coders in my experience will just fight you if you enable them. Either because dependencies, too much work to fix or "compiler is wrong".
@@AM-yk5yd Google made stadia and wave. Checkmate, atheists.
You could consider driving a car without a seatbelt a skill issue too, in the sense that a "skilled driver" wouldn't need one, but we usually wear one for obvious reasons. Why is it that the kind of people who pontificate "skill issue" are the ones who have to apologize later? They are also the ones who nearly cry and become vindictive when a mistake in their code is found.
This right here. It doesn't matter if you're the top car driver in the world, you put on a seat belt.
don't tell me what to do u fluffy react developer, I am superior
You put a seat belt on because OTHER people have skill issues. I don't have the same problem on my C programming team.
A better analogy is a $5-8k car that's just a metal box but cheap and light,
or a $50-80k+ car that has every feature, etc.
Sometimes you just want something simple that gets the job done.
@@7th_CAV_Trooper This is not even remotely true
The author didn't even mention that typeof is a GNU C extension, or that the pointer cast violates strict aliasing without -fno-strict-aliasing. These are not minor details if you're actually writing portable C, and the latter is the primary culprit behind amateurs' "the optimizer broke my code" accusations.
Because the author severely overestimates their skill with C…
The condescension is incredible…
To be pedantic, In C23 standard "typeof" is an operator.
He doesn't need to, it's from the Linux kernel, ergo gcc extensions and -fno-strict-aliasing. Also, what compiler would you even remotely want to port to, MSVC?
@@paulie-g clang?
musl and not glibc?
@@Comeyd clang supports all, or nearly all, GCC extensions, and is actually ahead in some respects (clang had a guaranteed tail-call optimisation function attribute first, for example). This has nowt to do with libc.
The only valid point in 2024, now that even ICC is dead, is proprietary compilers for embedded systems like Keil, but you're going to have much bigger fish to fry when porting to those.
Rust is not an alternative to C; it's an alternative to C++ for general purpose development. C's use case in the modern era is essentially as a portable assembler that gives you semi-competent instruction scheduling for free most of the time. It is perfectly reasonable to use it as such. Being able to compile your code with exotic legacy implementations like MSVC is not a requirement, and if it is one, that's a strong indication you're using the wrong tool for the job.
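For anyone unsure what the strict-aliasing point a few comments up means in practice, here is a generic illustration (not the kernel macro itself): accessing an object through a pointer of an incompatible type is undefined behavior unless something like -fno-strict-aliasing is in effect, and memcpy is the portable way to reinterpret the bytes.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Undefined behavior under strict aliasing: f's storage is read through an
   incompatible lvalue type. The optimizer is allowed to assume this never
   happens, which is where "the optimizer broke my code" stories come from. */
static uint32_t bits_of_float_ub(float f)
{
    return *(uint32_t *)&f;
}

/* Well-defined: copy the representation instead of type-punning. Compilers
   typically optimize this memcpy down to a single register move. */
static uint32_t bits_of_float_ok(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("%08x\n", bits_of_float_ok(1.0f));  /* 3f800000 on IEEE-754 machines */
    (void)bits_of_float_ub;                    /* shown only for contrast */
    return 0;
}
```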
Astounding how the article used so many graphs so poorly, especially considering it was all in service of talking about a "skill issue." Also, isn't the whole dynamic he is describing part of the issue the White House paper was describing? From what I understood, they want to move towards safety, not fully eliminate everything that may be unsafe. Specifically, they are suggesting a move towards the model his analogy with riflemen and snipers depicts. Wouldn't an extension of the analogy be to model the current situation as having something like a 4:6 ratio of riflemen to snipers? Is the author just yelling into the wind about how people who don't know things say oversimplified and nonsensical garbage while smart people are smart, all while shaking low-quality graphs at a cloud? The whole piece feels insanely lacking in self-awareness, high in hypocrisy, and very pretentious from beginning to end.
That "inverted" linked-list pattern in the second example is what would generally be called an "intrusive container", which is very common in C and also seen in C++, for performance reasons (reduce allocations and improve cache locality). The basic idea is that the library provides you with "things" to add to your classes/structs (in that example, the llist_node member) so that it can organize them into a container / data-structure directly. It's common for linked lists, trees, hash tables, etc. In C++, it's less common due to templates not having the overhead that the "void*" solutions have in C. In C++, you'd see that more often for multi-indexing, i.e., the same "nodes" are indexed multiple ways (e.g., a tree with nodes that can be "moved" to a free-list). Judging by the experienced C programmers I know, intrusive containers seem to be the default way to write any collection of items beyond a simple array.
IIUC, the pattern is used by std::make_shared, etc. to allocate the ref count (or other data) with the object type, avoiding an additional allocation.
@@msclrhd Not quite. With make_shared, the object and the ref count can be allocated together in the same block of memory, saving one indirection. But it's not technically an intrusive pattern. An example of the intrusive pattern is boost::intrusive_ptr, where you put the reference count in your class, making it usable by that smart pointer. But boost:: intrusive_ptr is pretty much useless now. Like I said, with C++ templates, it's pretty rare that you really need an intrusive pattern because you can most often just store the "T" object by value within whatever node you have and get the same performance, which is what std::make_shared does.
@@mike200017Ah, thanks! That makes sense.
In C++ it is less common for stupid reasons. Code managing links in link-based data structures like lists or red-black trees is type-agnostic. You shouldn't have a different type-annotated instantiation of it for each "template instantiation" when the output assembly is the same. But the subsequent casting (container_of in the Linux kernel) to the right container type should be done inline and in a contained fashion.
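A compact sketch of the intrusive-list pattern and a container_of in the style described above; this is an illustrative reimplementation, not the kernel's header, which additionally uses typeof for type checking.

```c
#include <stddef.h>
#include <stdio.h>

struct list_node {
    struct list_node *next;
};

/* Recover a pointer to the enclosing struct from a pointer to its member. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct task {
    int              id;
    struct list_node link;    /* intrusive: the node is embedded in the struct */
};

int main(void)
{
    struct task a = { .id = 1 }, b = { .id = 2 };
    struct list_node *head = &a.link;   /* no separate node allocations */
    a.link.next = &b.link;
    b.link.next = NULL;

    for (struct list_node *n = head; n != NULL; n = n->next) {
        struct task *t = container_of(n, struct task, link);
        printf("task %d\n", t->id);
    }
    return 0;
}
```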
He proved that C is very flexible: you can write really bad code and you can write usable, relatively safe code. What he didn't show is what you achieve with this approach. Is the code faster than a Rust implementation? Does it use less memory? Could be; I don't know Rust...
What I don't like about the "ace code" is the fact that someone must maintain it, and (as we see in the article) aces don't write comments...
The author proved the White House’s point without even realizing it
absolutely. 100%
You almost said Donning-Cougar effect at 10:38, which I found quite funny. I wonder what that would be - you think of yourself to be much hotter and younger-looking than you actually are?
That's not what a cougar is, so this isn't really landing. Cougars are mature women who play the field of young men, generally 40-50. The pejorative for an unattractive cougar is "cradle robber" but objective beauty is not real etc. and the term doesn't generally apply to functional narcissism, but rather the subtext described above. Almost clever, but perhaps too surface
@@BeamMonsterZeus no, you just missed it. The joke landed fine.
@@7th_CAV_Trooperthis is peak Internet conversation
Wouldn't it be dawning-cougar effect? When you wake up the next morning next to the cougar I guess.
I used to study philosophy and I came to believe the following: general skepticism about truth, morality and anything else that matters is very defensible, but if somebody needs to bring it up to defend their point, they can't defend their point. So yeah, I don't know what I don't know and it's possible that there's good reasons to use C. But anything is possible and until somebody can defend C in some specific way, I'm not impressed by these completely unspecific pontifications.
But if you studied statistics you would know that "Graph! Sniper rifle! Another graph! Most people who say they know DK don't know DK! You thought it was DK! Gottem! It was Mount Stupid, stupid, haha, ha, look at me, I'm so smart, and we haven't even started talking about C" is something to be impressed by! Probably.
I love it. Thanks! Do you know what this topic is called? I would like to read about it.
It's not hard to "give good reasons for C". (1) Where there is a need for portability, supposed replacements with complex compilers or toolchains (e.g. Rust) don't work; C can be learned in about 20 pages (not a joke), it's that simple. That also means implementing a compiler on a new machine is always reasonable and quick. (2) Where portability is needed, a language *must* have "undefined" behaviors where defining them would make the language machine-specific. (3) Where there is a need to solve difficult problems in unknown spaces: you aren't going to solve an ambiguous world-class problem in Rust if you're fighting to get Rust to work or to let you compile; by definition, to explore a space you need LOW friction from the tool. Most software projects fail... friction (at least) geometrically increases the likelihood of that probable failure. (4) Where a need for raw control exists, from portability to the need to implement your own structures as close to the metal as possible...
That's before getting into the fact that it's an API for machines now, not really a language in the way you think, or into psychological considerations where at times there is a need for COMPACT expression, or the value in being able to override anything...
That's a tremendous set of considerations where you begin to think through wanting to solve something with software and only C will really do; not even Zig will do (so bounds checking etc. is great until you need that flexibility!). From writing another language, to having assurances that the language and its tools don't modify names behind the scenes... I recently wanted to check out Nim for a project, then discovered in horror that besides choices like case insensitivity (i.e. failing to deal with the world as it is), it then works around such bad choices by modifying names under the covers!!!!! 🫠 Not acceptable when you need assurances and control.
A BIIIIIG reason for C almost never stated is this: simple compilers and simple languages enable you to write sophisticated tooling to aid your efforts. C devs that did the prior reading tend to be able to write their own **** analysis tools in ways nobody else can...
Many of the supposed shortcomings of C are there for reasons like those above, and *for valid reasons the authors had in mind when designing C & Unix*: I know because they spell many of them out in the early books they wrote about both. Many supposed faults in C are things people today assume were missing "because it's old" that were already present in older langs (PL/1, Pascal, Algol, take your pick...), and the C authors EXPLAINED WHY THESE HAD TO BE LEFT OUT OF A LANG LIKE C!!!!
Your problem isn't that you've "never heard" something that would convince you, but that you apparently never read into the history behind these things.
I did know about the portability of C; I've also heard its simplicity lauded, and its consistency over time. I don't know why the phrase "never heard" is in quotation marks; I didn't say that. I was responding to the article specifically and said the arguments were weak. Which they are, even if their conclusion happens to be correct. I don't care to take sides between Rust and C. I don't know enough about either language.
Tell you one thing, after 20 years of thinking of I was learning C and C++ in unison, when I finally felt capable at C++ dev I realised I didn't know C for sh*t.
I love how we compare Rust to C when it is far more comparable to C++. There are so many abstractions not found in C, that are found in C++.
It's just cool to hate C++, even for C devs. While it's cool to say you like C, even though a majority who say that cannot write it at all.
Similar to Rust, in that way. Cool to say you use it, whether you actually can or not.
I cannot understand how C, which historically is the reference language that nearly everything else is defined against or must be written to interact with...
a language that can be learned in about 20 (or maybe 30) pages from the K&R book (the language, not stdlib etc),
is something in which so many "professionals" cannot write even basic programs.
It's literally about 20-30 pages, with examples, reasoning, and explanations of the examples, to be able to write some seriously useful C programs. But supposedly nobody can use it?
@@infinitelink that's because really it's kind of shit
@@isodoubIet Maybe youre just shit at programming.
@@infinitelink it's because the language really doesn't give you enough abstractions so you resort to all sorts of ceremony and boilerplate. One example is the 'container_of' macro mentioned in the article, all it does is, you give it a pointer to some struct member and it gives you back a pointer of the whole struct. Reading and understanding the macro implementation is quite opaque for a beginner. Zig in contrast just gives you a built-in function to do the same thing called @fieldParentPtr.
The way C is written/read today isn't really comparable to what is in that book, because a lot is going to be convention and some of it is quite obscure.
I like the version of C++ that's super bare bones like C, and is super simple yk.
I hate memorizing stuff. I like to write my C++ code how I write my Roblox code, since I learnt to code on Roblox lmfao.
Abstractions are really dumb sometimes cus they just make certain stuff more complicated than it ever should be.
I shouldn't need to put time into learning how to do the same thing but in a slightly different way.
Everything in Roblox is so simple and just works. I literally model all my irl code to work like how it would on Roblox lmfao.
Those "top 10%" C programmers got that way by writing a lot of code along the way. Are we supposed to keep letting developers write bad and unsafe code so 10% of them can become amazing at an unsafe language? And even if you write really amazing code making use of all sorts of obscure efficiency hacks... who is supposed to maintain it? Will there always be someone around with your skill level? This guy seems to be missing the big picture.
"Are we supposed to keep letting developers write bad and unsafe code so 10% of them can become amazing at an unsafe language?"
There is no WE. Unless you are the boss and not a developer yourself, and looking to YouTube commenters to answer this question.
"Will there always be someone around with your skill level?"
Sure; a few thousand miles away perhaps. I come from an era of mainframe assembly language. How "unsafe" is THAT? But when it works, you KNOW it works, no hidden library bugs.
Sounds like you suffer from skill issue. Recommended medication is a high dose of getting good
What portion of those top 10% are going to accept modest compensation for participating in a government contract awarded to the lowest bidder with the smallest project budget?
Somehow this whole "ace C developers write code like this" thing convinced me that C should not be used for certain types of applications (medical etc.). It just looks error-prone even when you somewhat know what you are doing, or think you know what you are doing to that extent.
As long as you debug your code, I think C is fine. Some things demand speed; without it, things can go wrong.
@@computernerd8157 C is not objectively fast, you can write Rust code with identical or better performance to perfectly optimized C code, so this argument, although valid in some specific contexts, barely applies. Taken to its logical conclusion, this argument on its own pretty much just leads to hand written assembly being better than C.
C should absolutely be used in medical applications. Imagine getting 50x the dose of X-Rays because the garbage collector kicked in.
@@josephmellor7641C is not the only language without a GC.
@@josephmellor7641 you don't need a garbage collector to be safer than C.
The White House has a chronic case of skill issue.
12:50 I disagree strongly on this one Prime. The thing is they don't think they are bad, they think they are good, better than people worse than them, but they don't think they are as excellent as they are. That's like a 10/10 saying "I am a 9/10". They still think they are better than 90% of people.
The moment the author wrote 'unnecessarily verbose' was when this whole rant turned into a shitpost.
Fun fact: the "Mount Stupid" graph comes from a PARODY website which thought it would be funny to make that graph, and it has been circulating ever since.
I don't consider myself a C expert, but I learnt nothing new today. I also prefer the explicit over the implicit. if (bla == NULL) is much more understandable than if (!bla). Though I do tend to use ! for booleans. Yes I know they're effectively the same as an int, sometimes a byte.
Yeah, flipping between languages a lot, I find that this is a much safer approach, as the rules for what counts as falsy are too hard to remember across many languages.
The general principle should be: "use a safe language unless you need to use an unsafe one". In rust we have this idea implemented in the form of unsafe {}.
Heck, even C# a language with a GC has the 'unsafe' feature and has for years too.
More importantly, in Rust only small, specified parts are unsafe, as opposed to the whole thing. Which is easier to audit: twenty lines, or ten thousand?
If you're writing "unsafe" then why use Rust at all?
@@greg77389 Because the vast majority of the codebase isn't unsafe, only the small portions that need it.
@@greg77389 Because in general, it is easier to audit a few lines and make a proof of soundness than to have to audit multiple files.
For example, every time you write an unsafe block in Rust, you will conventionally have to write down a reason why the unsafe block cannot cause UB (like noting that the pointer passed to this function is confirmed to always point to a valid address).
18:20 - 26:30 Funny how the author is nit-picking (ultimately) meaningless syntax, calling the person who wrote it a noob, but never even touches on the real issues here (memory unsafety), that would actually cause problems in the wild. (strcpy vs strncpy)
The goal of the code snippet was to show how fundamentally differently an ace programmer would code compared to a basic programmer, but in an article defending C against charges of unsafety, that is really funny.
Also funny is the proposed "solution" of using a static variable. If do_stuff uses john as a scratch buffer and sets `age = -25` in the process, it's not an issue in the "incorrect" example. But with static...
In Rust, John would be const to begin with and ownership would be tracked.
@@leonardschungel2689 The thing that comes to mind is "know your audience": an "ace" programmer may be able to write and rapidly understand the code, but if he's the lead of a team of 4 or 5 who are mid or worse, that actually shows he's not in fact an ace. Fully agree with Prime on the point of making what you are writing deliberate and not being too cute... even if future you is smart enough to understand your tricks, are future members of the team? And if they touch that area of code, are they likely to cause issues? If yes, then the developer is just enjoying their own farts.
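On the strcpy vs strncpy remark a few comments up, a minimal illustration with made-up buffer names: unbounded strcpy is the overflow hazard, strncpy bounds the write but does not always NUL-terminate, and snprintf both bounds and terminates.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 8

int main(void)
{
    const char *input = "far too long for the buffer";
    char name[NAME_LEN];

    /* strcpy(name, input);  <-- classic overflow: writes past name[NAME_LEN - 1] */

    /* strncpy bounds the write, but does NOT NUL-terminate when the source is
       too long, so the terminator has to be added by hand: */
    strncpy(name, input, NAME_LEN - 1);
    name[NAME_LEN - 1] = '\0';

    /* snprintf always terminates and reports truncation via its return value. */
    int needed = snprintf(name, sizeof name, "%s", input);
    if ((size_t)needed >= sizeof name)
        fprintf(stderr, "input truncated\n");

    printf("%s\n", name);
    return 0;
}
```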
The actual problem is that choosing to use a "safe" C codebase requires trust that the one(s) who wrote it were these so-called "aces". It's not a question of whether it *can* be done safely. Rather it is a question of whether we can have *confidence* that it was done safely, and since there's no objective way to be able to identify these so-called "aces", it is literally best to assume none exist.
The problem with the sniper analogy is that we can actually define operational parameters, used for training and evaluation, that designate someone as proficient enough with the sniper rifle to qualify for a mission requiring its use. Even then, said sniper doesn't just de facto go on all missions with his non-sniper buddies toting his trusty sniper rifle... either he will be outfitted with another weapon suited to the task(s) his squad has been assigned (i.e. using basic "safe" languages right alongside all the other plebes), or he will be deemed too valuable to waste on missions where his expertise could go to waste (i.e. I hope you've budgeted for the financial feast/famine cycles inherent in highly specialized consultancy... oh, and that you can manage to sell yourself as a so-called "ace" without also selling yourself as a total dickwad that nobody wants to hire).
In short, the only C that need be written is that which is absolutely necessary before handing it off to a "safe" language.... which is hopefully 0.
Still not using Rust.
The aces are likely going to demand more compensation for their skills than can be budgeted for in a competitive government contract for a software project. Best you can expect is a significant portion of middle skilled professionals involved in the projects. Special unicorn developers might be involved but you can’t really count on it or expect it.
I'll never understand this "this code is bad!!! it has macros!!!", like, this is the textbook definition of good C macro usage, it's literally text replacement, it's not even conditional. It's syntactic sugar, but for some reason people don't like it? There are mistakes (gtk source code), then pasters (why?!!?) and finally totally ok macros like this.
The issue with C is not that it's a sniper rifle, but the fact that it's a rifle constantly aiming for your foot. And the "mastery" of the "aces" mostly consists of dodging those damn bullets while jumping and arching in a way that makes them fly in a general direction of the enemy...
29:24 user_data is like a closure for the function pointer so that you can pass extra context into each function call
And, to elaborate, the extra context is there because function pointers do not work as closures, so would have to rely on global objects for any additional context if it wasn't passed in. And you obviously need that context to do anything else than simple in-place mutation.
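A minimal sketch of that user_data idiom (all names invented): the callback cannot capture variables the way a closure would, so the caller threads the context through a void* and the callback casts it back.

```c
#include <stdio.h>

typedef void (*visit_fn)(int value, void *user_data);

/* A generic iteration helper: it knows nothing about the context, it just
   passes the opaque pointer through to every callback invocation. */
static void for_each(const int *items, int n, visit_fn fn, void *user_data)
{
    for (int i = 0; i < n; i++)
        fn(items[i], user_data);
}

struct sum_ctx { long total; };        /* the hand-rolled "closure environment" */

static void add_to_sum(int value, void *user_data)
{
    struct sum_ctx *ctx = user_data;   /* cast back to the real context type */
    ctx->total += value;
}

int main(void)
{
    int xs[] = { 1, 2, 3, 4 };
    struct sum_ctx ctx = { 0 };
    for_each(xs, 4, add_to_sum, &ctx);
    printf("total = %ld\n", ctx.total);  /* 10 */
    return 0;
}
```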
I work in manufacturing, not tech, but we have a concept from Japan called "Poka Yoke", which is to make it as difficult as possible to make mistakes. Yes, we shouldn't need to prevent mistakes, but while we're in dreamland I want a pony. In real life, people make mistakes. And taking measures to prevent mistakes from being possible is worthwhile. That's what these other languages have, Poka Yoke for memory safety. I absolutely believe a grandmaster at C can write insanely secure code, but most people are not grandmasters.
I don't even think a C "grandmaster" could write a perfect C program.
99% of "elite C real-men programmers" are actually average-skilled coders with a completely altered sense of reality; the huge number of memory-related bugs is proof of this. It is always better to rely on safety by design rather than safety by skill. Why would I want my system (especially a complex system) to rely solely on, let's say, Harold, that super genius guy who can manage pointers flawlessly like a real Chad? What happens when he is no more, when he quits or changes jobs, and I cannot find a new guy with the same skill level, so I can't enhance or maintain the safety of Harold's code anymore? Fuck elite programmers who are mostly worried about proving they have the biggest coding cock.
Dunning-Kruger effect being Dunning-Krugered by popular media is funniest shit imaginable
He's right though. I wish I had a dollar for every time I saw a redditor saying "Dunning-Kruger effect" in response to losing an argument.
14:45 I’m a professional C++ dev and I wouldn’t touch C. I know the C++ code I write could not be easily translated to C. While C is almost a subset of C++, actually using this subset is considered bad style, “we have the extra stuff for a reason”. The best thing about C, IMO, is the C calling convention, allowing for language interop (e.g. Java and C++ via JNI) as both languages talk C.
Imagine the effort it took to write this article that just proves the point on C ironically. Wouldn't everyone love to be an elite ninja sniper 1%'er programmer? Like many things in life, you aren't one unless other 1%ers call you out as one of them. It isn't something you get to label yourself.
Holy shit this author is taking a pretentious tone to saying "exceptions exist".
He's a giga chad alpha wolf mega sniper who thinks "only bad engineers make bugs" and copes with "C's just as safe as Rust, you just need to be good at it. It's easy, look, let me just run valgrind, asan, ubsan, clang-tidy, a formatter, and my tests written in a third-party test suite." And then they call you back in 30 min because their make script was broken.
No they don't exist, at least not in C xD
Hey, if he's so good at C, then he should just use unsafe Rust, but he won't, because doing that would make obvious how many unnecessary risks you are taking for very little gain.
The author is fundamentally right; the only problem is the "average doesn't exist" point. Averages do exist, because there aren't only extremes. There are people who think they are good at C and are bad. There are people who think they're good and are mediocre. And there are people who really are good. Commits != good commits or good code.
@@markogalevski6088 so the point you're making is even bad engineers know bad engineers make bugs? Hmm
My brother, you couldn't find a better C programmer than Linus back in the day, who still has the most code contributed to the kernel. Now please go and check how many CVEs have come from code written back in ye olden days. It's pretty obvious that as the surface area increases, it's pretty much inevitable that vulns will come. This isn't a skill issue, unless you want to go ahead and tell me Linus is incompetent.
No, no, no, but THEORETICALLY a good enough programmer can avoid any and all C bugs!
Reminds me of how in some video games, players will go as far as to call professional players incompetent because they can't make something that's theoretically unbalanced work in practice. Looking at you, League community.
it is so much a skill issue, back in 2003/2004 I spent 3 months on contract to write a Linux driver for a company, and delivered a fully functional driver with 0 defects, and wrote a report on why their program performed improperly when compiled with specific versions of gcc because there was a C++ template processing bug which elided some nested templates. Of course I read the assembly output to identify the problem and do the comparison. There is a huge difference between someone with 20-40 years of experience with a language like C working on a wide range of projects, and processor architectures, and your average programmer. Just like there's a huge difference between your top surgeon and a boy scout with a first aid badge.
0 *known* defects. A dedicated professional examining the code or the executable with modern tooling, concolic analysis, fuzzing, etc could very likely find an exploitable vulnerability in most reasonably sized software, particularly from that era because of a lack of mitigations and particularly something like a driver.
@@leodlerThat's nonsensical. You don't even know what the driver was for, how would you even know there was an attack vector to begin with?
experience and evidence. You show me a program with no bugs (what the OP is claiming) and I'll show you a unicorn. Good luck.
@@rameynoodles152#include int main() {std::cout
It doesn't matter how good your code was; it's not because there are professional racers that we should get rid of speed limits. The fact of the matter is that in the actual software world, you will rarely if ever write something alone. That was the point of OOP, to get mediocre interns to not screw up your well-made code. It doesn't matter if you're in the top 1% or 0.1% or whatever, you wrote a ton of code to get there, and that code likely had bugs. And everyone needs to go through this to get good. And the industry needs devs now, not in 50 years. And devs need jobs now, not after 30 years of studying, experimenting and t*nkering.
C has been called the ultimate computer virus (cf Worse is Better). Simple enough for anyone to pick up the basics, you can quickly write something that mostly works, and it can run just about anywhere (and runs just about everywhere). The lack of safety features is itself a feature: you can go full cowboy mode and ship your code fast, use the initial release as a giant beta test, then patch whatever problems arise.
The C vs Rust debate is retreading a lot of ground from 40 years ago when Ada was the new language that was pushed for critical software where safety was paramount. Still in use, but failed to gain wider traction.
It's really interesting how only the Linux kernel is referenced. The code in the example is absolutely forbidden in MISRA and anything close to functional safety.
I don't write Rust because I think I am smart; I write Rust because I know that I am stupid.
When doing something for hours at a time, are you fully attentive constantly? No, you do things subconsciously, you inevitably become complacent, you make stupid mistakes. This is a well known fact in most industries, why are so many people here ignorant to it?
Yeah, I don't know anyone who doesn't write silly code from time to time. Even the best engineers I know make bugs. And you know what? They usually laugh about it, it happens.
And the best thing is, they're also humble enough to know they'll screw up at some point. They put systems in place, unit tests, running the program themselves, canary releases, ... -- dealing with (temporary) skill issues of humans is just natural.
In (commercial) aviation, there's two pilots for a reason. Every pilot in a cockpit of a 737 or A320 can start and land the plane on their own. They can make all the fuel calculations, and handle most emergencies. So, why have the 2nd person? To catch skill issues. They have a pilot flying, and pilot monitoring. They have regular checks, and checklists. They have lots of manouvers they'd like to do, that would be more efficient, but that they don't do -- because skill issues.
But when it comes to writing code professionally, especially critical kernel-level code, suddenly all the humans are perfect. And there's a Bob with the most perfect skills, who never makes a mistake. Sure, Bob, sure. You never make a mistake.
12:00 Finally someone who's citing dunning and kruger right! It's happening!
21:00 The example you give for !ptr for null checks in C is both dumb and in the wrong language. No one does !arr.length, but using !ptr is cleaner and less verbose than ptr == NULL. We're checking the validity of the object, not a property of it, so it's not analogous to !arr.length. L take
Whenever I see articles like this, I just imagine how the article would be written for another language. "JavaScript skill issue; how Typescript is wrong".
This analogy would work if Rust was a superset of C
Except that's not even slightly the same thing? Lmao
@@alfiegordon9013 Both TS and Rust were created to eliminate certain runtime errors. The similarities are there, you're just being obtuse.
it's all a superset of gates. don't be the um actually guy, comment repliers
@JiggyJones0 That is a stretch. Stop trying to be acute.
What the author is correctly arguing against is the practical application of "equity," where instead of allowing people to succeed at what they are good at, broad strokes are used to force everyone to do the same thing. Rather than acknowledging the fact that there are differences in skill level, the tool is blamed. His use of rifles as an analogy is apt, as it is common for the same types of people who refuse to acknowledge differences in skill to also blame a tool for what someone does with it. Edit: I had a factory job once where we operated machines that would output a large roll of printed vinyl. Because one person had difficulty picking it up, rather than making it a special case that this person could receive help, and in order not to make them feel bad, they made it a rule that everyone else now had to pull a second person away from their own work to get assistance lifting the roll, whether or not they actually needed the help. This slowed down a lot of people who were faster and more productive otherwise.
I don't agree. The author didn't really make many solid points imo, and the White House didn't say "use Rust or else"; they simply recommended that people prioritize the use of memory-safe languages, and I agree with this perspective for a number of reasons. First of all, this doesn't mean unsafe code is somehow disallowed. Of course C code is still going to be written, and of course it is going to be useful, just like every other unsafe language. Languages don't just go away because someone said something, because programming languages all have different characteristics and different pros and cons. It simply doesn't mean what the author suggests it means, and much of their argument revolves around the idea that the US government is saying memory-safe languages must replace memory-unsafe ones. No, rather they are asking for memory-safe code to be the priority, and this prioritization can be met with the use of C code too.
The author's implication that C is objectively better than a memory-safe language like Rust is also simply incorrect. Memory-safe languages (e.g. Rust) can do the exact same things that C can with almost identical or better performance. You can close the gap by writing unsafe Rust code, but it is up to you the programmer to ensure that the unsafe code you write is safe, and by using a language like Rust you are able to write both memory-safe and memory-unsafe code simultaneously while compartmentalizing your unsafe code and keeping it as limited as you can. Compartmentalizing that unsafe code and primarily relying on safe code except in performance-sensitive cases will allow you to be a more effective programmer, because you are getting an immense number of guarantees and static-analysis benefits that you do not get in memory-unsafe languages. Not just this, but Rust can make many, many optimizations around the memory-safety guarantees it makes while maintaining convenience for you, the programmer.
In other words, I think the author's own arguments entirely work against them and suggest the exact opposite of what they were trying to argue. If you require an unsafe language like C to write fast code, you have a skill issue.
There are meaningful differences in skill level within a single team; those teams are likely going to use a particular language for systems programming rather than whatever each individual thinks they're best at; memory-safety issues are responsible for a large majority of exploited bugs found in the wild; so defaulting to (more) memory-safe languages is sensible, particularly for government vendors. The article is self-defeating and seems to exist only to stroke the ego of the author and other pedants who couldn't even be bothered to read the original ONCD report.
@@Hexcede I agree with your general sentiment that most people are better off playing it safe. There are a lot of people out there who would probably cause accidents if they didn't have lane-assist and side and rear-view cameras and were made to drive manual transmission vehicles. That doesn't mean that a driver who knows how to operate and maintain an older stick-shift vehicle needs to drive one in order to be a better driver. They generally just are a better driver and don't require the same "recommended" safety features in order to accomplish the task of getting from point A to point B. There is no shame in admitting you don't know how to drive a manual. That's ok.
@@aazendude Yes, but that's what the first part of my message was talking about. The point was never "Rust = better programmer" or something. The analogy also isn't perfect, because many of the benefits of Rust result in practical speed-ups in the writing of code and reductions in bugs and debugging. The difference is that something like lane-assist doesn't cut out any real burden of driving. Rust, just as an example, makes mathematical guarantees that allow you to eliminate entire classes and subsets of bugs almost for free, and this isn't just a crutch, it's a tool.
I think for most Rust devs it isn't about the safety mainly, not even mainly about being blazing fast, fearless concurrency, etc.
It's about moving things into the editor, trading debugging for writing and testing for compiling. Of course it's a tradeoff, but it's one many welcome.
Exactly! It might take slightly longer to write Rust code… but that is 100% worth it once it is compiled because it… *just works*.
Rust gives you such a level of confidence in the reliability of your software you can sleep well at night, because you aren’t going to get 3AM calls to debug something *right now*
I agree. Rust brings more than just rustc to the table. The tooling and the ecosystem that grew out of that is definitely a huge part of why it got picked up in a relatively short amount of time. Though it still allows you to go outside of that when needed; it doesn't _force_ you a specific way, making it very general purpose.
Started watching your videos recently, I love it. Now I can follow what's happening in the dev world in an entertaining format. Keep it up man!
Legacy code in C was written when programmers had brains, so...
Legacy code has resulted in numerous security vulnerabilities that were effectively exploited by adversary governments.
In this post and many others, the author tries to put himself on the same level as Linus Torvalds, but based on examples like linked lists and not on any measurable success in programming. Even when he's technically right, he still makes sure to be a dick about it. He was banned from Git and made a fork git-fc (named after himself) that is already abandoned. He's a goldmine of intelligent but unhinged writing.
Honestly, he doesn't seem nearly as smart as he (seems to) think he is. Yeah, there are skill issues, but there are also a lot of easy mistakes made even by very good developers that are less probable with even C++, let alone languages with saner defaults. C has some restrictions that bring down the developer experience, and some allowances that improve it but also introduce safety issues.
See, I'd like to use if(!boo), but most of my team struggle with if(boo == False) and couldn't even handle the first option, so all it gives me is more work.
I will never stop programming in Assembler. I am a dinosaur and its ok. I am unsafe LOL.
I still use intrinsics, even when doing 90% of my stuff in Rust. Sometimes you just need a good extra bit of speed.
I'm with you, old man. Nothing wrong with assembler. In fact I think all programmers should have at least some understanding and proficiency in assembler. "Unsafe"? No problem, we are aces, right? In fact I would claim that it is easier to write safe, UB-free code in assembler than in C++.
Don't tell Google Gemini.
hell yeah @@Heater-v1.0.0
lol @@DarrenAnderton-p4m
I know for sure there's a god-level programmer out there: it's whoever wrote that bit-twiddling inverse-square-root function in Quake III.
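The function being referenced is almost certainly Quake III's fast inverse square root (Q_rsqrt). Here is a sketch of the trick, using memcpy for the type punning instead of the original's pointer cast so the example stays within defined behavior:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret the float's bits as an integer, apply the magic-constant
 * shift, then refine with one Newton-Raphson step. */
static float fast_rsqrt(float number)
{
    const float threehalfs = 1.5f;
    float x2 = number * 0.5f;
    float y = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);              /* bit-level view of the float      */
    i = 0x5f3759df - (i >> 1);             /* the famous magic constant        */
    memcpy(&y, &i, sizeof y);
    y = y * (threehalfs - (x2 * y * y));   /* one Newton-Raphson refinement    */
    return y;
}

int main(void)
{
    printf("approx 1/sqrt(2) = %f\n", fast_rsqrt(2.0f)); /* ~0.707 */
    return 0;
}
```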
JavaScript as a kernel sounds like the stuff of nightmares. Also there is a 99.9% chance there is someone out there trying to do exactly that.
everything that can be, will be
He just Dunning-Krugered the Dunning-Kruger effect
If it cannot be done in "C", it cannot be done. I didn't claim it would be EASY in C.
In my opinion, the move towards statically enforced memory safety by default is the right one. But Rust is messy and hasn't gotten a lot of things right yet, if it ever will. We may still see it "beaten" by another language which could well be written in C or another unsafe language. You can well have well-designed safe systems on top of well contained unsafe components. Electricity's plenty unsafe, and yet all these "safe" languages ultimately use it. Similar to that the mere use of an unsafe language shouldn't by itself be a cause for concern, provided it is well contained in small, proved components.
That article rubbed me the wrong way, I feel like the entire premise is wrong. These recommendations aren't based on feelings, but on 30 years of research showing even the smartest people suck at writing safe C. And maybe it's not just a god-damn skill issue.
Except for Felipec of course *they* are clearly smarter than everyone else.
My thoughts exactly. One doesn't need to go through the mental gymnastics of evaluating who's the real C programmer. Results speak for themselves and they're damning.
The "no true scotsman" fallacy force is strong in that person.
I think my other complaint about this 1 in 10 'Ace' C programmer argument is that it doesn't consider the fact that these 'Ace' programmers weren't born out of the womb 'Ace' C programmers. How many thousands or millions of lines of code did they have to write before they got there? How many memory vulnerabilities did they write on their path to becoming an 'Ace' programmer, and how much damage did that do to the software ecosystem at large? At the end of the day, learning to write memory safe C code is optional, whereas learning to write memory safe Rust code is MANDATORY (provided you don't take the easy way out and wrap your entire code base in unsafe blocks, which is really easy to catch in code reviews). Also, the compiler starts teaching you memory safe habits from day 1, so if/when you do start dipping your toes into unsafe code, you've already built up some good habits beforehand, and the word 'unsafe' itself signals to the programmer that they should probably do some additional reading to protect themselves before trying their hand at it.
Most user-facing C programs use a bunch of global variables defined in each translation unit. This, plus the lack of namespaces, makes building large executables difficult. Rather, it encourages the creation of many small utilities that are integrated by a scripting language (shell, Python, etc.). In C++, those global variables and small programs can be made into their own thing by wrapping the lines in a struct or class definition.
app_window_new().
Globals should be used sparingly. If you need them, hide them behind a function call. (Singleton)
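A minimal sketch of what "hide it behind a function call" looks like in C; the names (app_config, etc.) are invented for illustration:

```c
#include <stdio.h>

struct app_config {
    int verbosity;
    const char *log_path;
};

/* The only way to reach the state: a translation-unit-private instance
 * exposed through an accessor function. */
static struct app_config *app_config_get(void)
{
    static struct app_config instance = {
        .verbosity = 1,
        .log_path = "/tmp/app.log",
    };
    return &instance;
}

int main(void)
{
    app_config_get()->verbosity = 2;              /* only path to the global */
    printf("verbosity = %d\n", app_config_get()->verbosity);
    return 0;
}
```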
I find the simplicity of C very appealing. I can follow calls down the stack and know what is happening. That does not happen with stuff like C++ or Rust.
And don't get me started on language specific "package managers"... That should be illegal. It's abhorrent.
Or a namespace
The hierarchical filesystem *is* the namespace... that's what it's for!
@@PixelThorn I never understood this. Why is modulename_funcname worse than modulename::funcname?
@@attilatorok5767 because if I'm writing a function in modulename I can type just funcname instead of the whole thing.
Lots of problems with how it was implemented in C++, but it's a good idea overall.
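For readers who haven't seen it spelled out, here is a sketch of the prefix-as-namespace convention the thread is describing; all names are invented, and in C++ the same grouping would be spelled mylist::create / mylist::push instead of mylist_create / mylist_push:

```c
#include <stdlib.h>

/* Every public symbol of the module carries the module name, so there are
 * no collisions across translation units and plain grep still works. */
struct mylist { int *items; size_t len, cap; };

struct mylist *mylist_create(void)
{
    return calloc(1, sizeof(struct mylist));
}

int mylist_push(struct mylist *l, int value)
{
    if (l->len == l->cap) {
        size_t cap = l->cap ? l->cap * 2 : 8;
        int *items = realloc(l->items, cap * sizeof *items);
        if (!items)
            return -1;
        l->items = items;
        l->cap = cap;
    }
    l->items[l->len++] = value;
    return 0;
}

void mylist_destroy(struct mylist *l)
{
    if (l) {
        free(l->items);
        free(l);
    }
}
```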
The White House didn't call for that; the White House Office of the National Cyber Director (ONCD) recommends it. This has been recognized by cybersecurity researchers for years.
That article was rough. A whole lot of text for not much substance. Has there been a serious argument that it is impossible to write safe C? The argument as I've understood it has always been that writing safe C is hard and error prone and the author even admits as much.
The defense of C really needs to come from a place of developer efficiency or patterns that aren't possible in rust rather than there being an archaic set of invocations known only to the old ones that achieves parity. Running with scissors is possible, but that still doesn't make it a good idea.
The user data is there to provide an escape hatch because C doesn't have closures: the API caller passes a context struct as a substitute for a closure's captured state.
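A minimal sketch of that void *user_data pattern, with invented names (this is not code from the article); the caller's context struct plays the role of a closure's captures:

```c
#include <stddef.h>
#include <stdio.h>

struct count_ctx {
    int threshold;
    int matches;
};

/* "Library" side: knows nothing about the caller's state, just hands the
 * opaque pointer back on every callback. */
static void for_each(const int *values, size_t n,
                     void (*fn)(int value, void *user_data),
                     void *user_data)
{
    for (size_t i = 0; i < n; i++)
        fn(values[i], user_data);
}

/* "Caller" side: unpacks the context it bundled up earlier. */
static void count_above(int value, void *user_data)
{
    struct count_ctx *ctx = user_data;
    if (value > ctx->threshold)
        ctx->matches++;
}

int main(void)
{
    int values[] = { 1, 5, 9, 12 };
    struct count_ctx ctx = { .threshold = 4, .matches = 0 };
    for_each(values, 4, count_above, &ctx);
    printf("%d values above %d\n", ctx.matches, ctx.threshold);
    return 0;
}
```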
When I worked at Lockheed I coded in C and Ada. No way that codebase will ever be migrated to Rust. The code and the system using it is way too mission critical to change.
And why? What about being mission critical makes it not possible to change? If it's really that mission critical, wouldn't you want it to work?
@@antifa_communist Here's the thing: it works, and works great. I've seen my software work in multiple wars. If you had ever worked in government or on any large government contract you wouldn't even ask this question. I am talking about a multi-billion-dollar military platform that would take hundreds of millions of dollars to re-code in Rust, plus test, deploy, and then go through all the other government and security requirements. Instead of spending an obscene amount of money to recode in Rust, that money would be better spent on building out new tactical capabilities and features. Plus this would never get approved by the Generals, just from a cost and budgeting perspective.
@@antifa_communist The combo of Ada and C is a very good one. It is hard to beat that.
@@antifa_communist Don't break what already works
Lockheed is willing to spend ridiculous amounts of resources to make something slightly better and the government is willing to pay for that as a customer.
If C is a sniper rifle and Rust is an assault rifle, doesn't that imply JS is a butter knife?
So what are the odds that the people not good enough to be trusted with C are not also the people who would immediately just wrap all their code in unsafe if you hand them rust?
Not that high: the borrow checker is not disabled in unsafe code, so they'd have to jump through hoops, casting all references to raw pointers (which the borrow checker doesn't track) and back (and if they create two mut references during that pointer play, it's UB).
@@AM-yk5yd There are ways of abusing the type system to create immutable 'static borrows from safe code using the unsoundness of the function pointer type.
But if you can do that you are good enough to know what the f you're doing to not do it.
3:38 what is bro talking about? Local variables are not a feature of the CPU, they're a feature of the language. RBP (base pointer register) may point to where local variables begin but when writing assembly or in some other language that doesn't have the concept of local variables it could be used for something else. The same goes for the first few arguments being passed in registers. It's merely a convention.
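To make that concrete (assuming a System V AMD64 target, which is my assumption, not something stated in the video):

```c
/* "Local variable" and "arguments in registers" are language/ABI notions,
 * not CPU features. On a System V AMD64 target, a and b arrive in edi/esi
 * and, with optimization on, this whole function is typically a lea plus a
 * ret: no rbp, no stack slot for `sum` at all. */
static int add(int a, int b)
{
    int sum = a + b;   /* a "local variable" that likely lives only in a register */
    return sum;
}

int main(void)
{
    return add(2, 3) == 5 ? 0 : 1;
}
```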
The article explains skill issues.
Prime has skill issues understanding the article.
I have skill issues understanding Prime's explanation of the article.
I'm lost...
you pretty much just described how LLMs work
Defer the free to /when/? If it's just referring to the end of the program, then the free is actually unnecessary, since all memory is reclaimed by the OS when a process terminates.
That list implementation, while slightly confusing, is actually pretty clever.
It's still a linked list though, and if it's all about performance, usually you'll actually just want an array or a tree or a hash-map.
@@Sindrijo True, but that should be obvious. Linked lists are almost never the correct answer, but every once in a while they are what you need.
I think the best approach is to always go with the simplest thing (arrays) which is also usually good for performance.
I think the argument is, "It is possible for a minority of developers to write safe code in C, therefore C should not be abandoned for specific use cases in which it is advantageous." It makes me wonder, though: in order to determine which developers can safely write C, you'd need somebody for whom C is not a skill issue making hiring decisions, and you may have no way of making that initial hire reliably. Simultaneously, this talent pool would initially be very narrow and therefore expensive, and I'm not sure that most organizations would find it beneficial to maintain these use cases as a result of this plus the hiring problem, especially since they also take on the risk of having those use cases, hiring incorrectly, and then suffering the fallout of unsafe C code in whatever their product is.
I support rust adoption because it makes all the bad-at-coding mentally unwell people who aren't even sure whether they're man or woman migrate away from C.
Does this article's author work at Apple? Why doesn't he label his graph axes?
does this user has bot account? why doesn't they have non-generated username
45:24 I would love to see a video going into more detail on how you accidentally leaked memory in rust.
21:20 getting PHP flashbacks remembering that the string "0" is falsy in PHP (obviously not of length 0).
actually the worst part about this is that empty("0") will return true as well...
40:15 why use node* but convert to llist_node when saving? why not use node struct in the first place
Yeah, no. I've been programming in C for over 30 years. Ain't stopping now.
That's not a good reason.
@@antifa_communist How long have *you* been programming, building up a massive knowledge and code base, along with methodologies and practices that prevent unstable code, eliminating the need to run to new languages to solve problems brought on due to lack of mindfulness and discipline, huh?
@@yapdog Still no good reason provided.
I think the L-shaped graph at 8:40 is also caused by the fact that adding more commits becomes easier each time. You are familiar with the project. You know what you did last time and what you didn't do that you could have done. So it is easier for you to keep committing than it is for someone who specifically targeted some bug in the code and fixed it.
Do it the Pythonic way: snake_case for variable and function names, CapWords for class names
my_variable
my_function
MyClass
For me this is peak beauty, but why? Well, because it is what I am used to seeing.
No classes
13:40-13:55 hit me like a ton of bricks. Imposter syndrome is a result of thinking about yourself too much rather than focusing on the problems you are solving.
I lost it laughing about the `container_of()` macro as I just got done explaining that to one of my engineers not long ago.
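For anyone who hasn't met it, the portable core of container_of() looks like this (the kernel's real macro adds typeof-based type checking on top); the surrounding structs are invented for illustration:

```c
#include <stddef.h>
#include <stdio.h>

/* Given a pointer to a member embedded inside a struct, recover a pointer
 * to the enclosing struct by subtracting the member's offset. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct list_head {
    struct list_head *next, *prev;
};

struct task {
    int id;
    struct list_head node;   /* intrusive list hook embedded in the struct */
};

int main(void)
{
    struct task t = { .id = 7, .node = { NULL, NULL } };
    struct list_head *hook = &t.node;            /* all a list API ever sees */
    struct task *owner = container_of(hook, struct task, node);
    printf("recovered id = %d\n", owner->id);    /* prints 7 */
    return 0;
}
```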
34:17 Goto for memory cleanup is cool. I'm still a noob, but that's basically like Odin's defer, isn't it?
It helps to keep things in scope and clean up once it's done or if/when the program closes. (I also dabble in Odin to learn more C and vice versa)
defer is more like a comefrom.
defer will still call the code if you return from the function early. goto will not.
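A sketch of the goto-cleanup idiom being compared here, with an invented function; unlike defer, nothing runs automatically, so an early return that skips the labels would leak, which is exactly the point above:

```c
#include <stdio.h>
#include <stdlib.h>

static int process_file(const char *path)
{
    int ret = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (!f)
        goto out;                 /* nothing to clean up yet */

    buf = malloc(4096);
    if (!buf)
        goto out_close;           /* only the FILE needs releasing */

    if (fread(buf, 1, 4096, f) == 0)
        goto out_free;            /* both resources need releasing */

    ret = 0;                      /* success falls through the cleanup too */

out_free:
    free(buf);
out_close:
    fclose(f);
out:
    return ret;
}

int main(void)
{
    /* Path is arbitrary; any readable file demonstrates the flow. */
    return process_file("/etc/hostname") == 0 ? 0 : 1;
}
```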
Please learn some Zig when you get the chance. It's super interesting because it does as much as possible at compile time, leaving as little as possible to run at runtime. It also lets you write super simple parsers because it enables inline for and switch (i.e., iterating over the fields of structs), among so many other useful features.
he already did learn Zig (somewhat)
@@comradepeter87 there's always more to learn with zig
One thing people forget to mention when they look at these things is how Rust starts out hard and slow, but becomes powerful and fast fairly quickly as your skills improve. A proficient Rust programmer can do incredible things, with fewer mistakes, than an equivalently skilled C programmer, because Rust includes some very powerful patterns in its standard library.
Rewriting all the legacy C (or C++) codebases in Rust/Zig/Go/etc. is simply not practical, regardless of performance differences. Nobody would throw a ton of $ at marginal "safety" improvements. On top of that, looking at the Rust in the Linux kernel, it's hardly even "safer" - most of the Rust code is tagged with unsafe all over the place. I hardly see that as an improvement compared to C.
I did not at all interpret it as "rewrite everything in rust/zig/go/etc." Secondly taking something as low level as the Linux kernel and taking Rust code from that and treating it like average Rust code is super silly. "The Linux kernel does memory unsafe stuff and operates directly with hardware? Woaah, that's crazy"
@@Hexcede I think that was precisely OP's point - there's a lot of low-level stuff for which C is simply better (or at least less cumbersome)
@@TsvetanDimitrov1976 Yes and no, but this is a super specific example where unsafe code will obviously be used heavily. You are taking "this code has a lot of unsafe blocks" as "C is unsafe and is therefore better because most of this code is unsafe," but this is a flawed conclusion/assumption, which was part of my point, kinda, but also this example is just bad. Nobody said "no more C, C bad, never use C"; the focus is simply on the prioritization of memory-safe languages.
The additional benefits of Rust's memory safety are kind of being ignored too, because they still apply to unsafe blocks. Having a lot of unsafe blocks is still safer than being fully unsafe. You are relying on a lot of safe Rust code inside the unsafe blocks that uses a lot of memory-safe principles, and you still get a lot of safety guarantees. You are able to make some guarantees that your unsafe code is safer than it might otherwise be, and when it is compartmentalized you are able to much more effectively debug and identify issues, find where bugs may be, etc., in addition to being able to then utilize that unsafe code under its promise that it is a safe implementation.
Safe Rust doesn't enclose all safe, valid programs, that is what unsafe blocks are for. But it encloses a huge majority of safe programs. And much of your code can rely on code containing unsafe blocks and maintain safety as long as you have made the guarantee that your unsafe block is implemented safely.
You have greatly lessened the burden of identifying problems and guaranteeing safety by explicitly demonstrating where issues may be, and in doing so you are able to make greater, stronger assertions about the safety of your program, even with the use of lots of unsafe blocks.
The government will spend it. Even a slight improvement is worth vast amounts of money when adversaries are well funded governments actively seeking and exploiting vulnerabilities every day and have successfully penetrated systems on numerous occasions. It doesn’t solve the problem but can contribute to the solution as one measure among many. The cost effectiveness is completely unrelated to business side cost benefit analysis because the consequences of failure are potentially catastrophic and horrendously expensive on the scale of Trillions of dollars in damages.
@@stupidburp imho government spending *is the problem*. I've never seen any government sponsored project not plagued with corruption and inefficiency.