29:35 I actually have a better way to write that code: make member 0 a default function that does nothing (or at least assumes invalid input), then MULTIPLY a against its boolean check, so in this example it would be a *= (a >= 0 && a < 4); func[a](); Notice how there's no if statement that would result in a jump instruction, which in turn slows down the code. If the functions are all in the same memory chunk, then even if the CPU assumes a is not 0, it only has to read backwards a bit to get the correct function, and from my understanding reading backwards in memory is faster than reading forwards.
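A minimal sketch of the branchless dispatch this comment describes (function names and the four-entry table are illustrative, following the video's example):

    #include <stdio.h>

    static void handle_invalid(void) { /* slot 0: default, does nothing */ }
    static void handle_a(void) { puts("a"); }
    static void handle_b(void) { puts("b"); }
    static void handle_c(void) { puts("c"); }

    static void (*const func[4])(void) = {
        handle_invalid, handle_a, handle_b, handle_c
    };

    void dispatch(int a)
    {
        /* (a >= 0 && a < 4) evaluates to 0 or 1, so any out-of-range
           index collapses to the do-nothing slot 0 without a branch
           around the call itself */
        a *= (a >= 0 && a < 4);
        func[a]();
    }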
"Misconception: C is a high level assembly language" Interesting that some years ago the company that i worked for (big big player in embedded IoT) invited a renowned trainer to give us a presentation about advanced C stuff, and he exactly said that we should handle the C language as a "portable assembler". The point is, don't set your mind, to one side :)
Eskil Steenberg was a really kind, hard-working fellow who put his very soul into his work, a fun guy to work with without a dull moment! Eskil Steenberg, you will be missed!
33:55 Um, int is NOT always 32-bit though, sometimes it's 16-bit like short; the compiler could easily optimise out the call altogether in that situation. Better to have used a long, at least that is guaranteed to be bigger than a short. Also (and I'm assuming you're leading up to this) he should've put a in x first and then multiplied x by b; a * b by itself might, and probably will, remain an unsigned short operation and will just lose the upper bits before it even gets to x.
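A small sketch of the widen-first idiom suggested here (names are illustrative): widening one operand before the multiply makes the whole operation happen in a 32-bit unsigned type, instead of being subject to integer promotion.

    #include <stdint.h>

    uint32_t mul_wide(unsigned short a, unsigned short b)
    {
        /* widen first: the multiply is done as uint32_t, so the upper
           bits survive and no promotion-to-signed-int overflow occurs */
        return (uint32_t)a * b;
    }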
38:40 I think this is wrong - the compiler isn't allowed to propagate undefined behaviour backwards past an I/O operation like printf, which might cause an external effect such as the OS stopping the program anyway. (depending on what the output is piped into)
There is nothing in the standard that forbids this, but you are not alone in thinking this does not make sense (many people in the ISO C standard group agree with you). People do file compiler bugs for this behaviour, and some compilers try to minimize it, even though the standard does not forbid it. I think ISO will come out with some guidance on this soon-ish.
The compiler "knows" that *x can be accessed, so x cannot be NULL. If what the compiler "knows" turns out to be false, then that is undefined behavior and anything is allowed to happen, both before and after. The C standard allows the compiler to annihilate the universe if a program exhibits UB.
sorry, I am sort of a beginner, but regarding the example at 9:40, there are many examples of code in the linux kernel that do this kind of thing without volatile keywords. What's up with that?
The bit with if (x == NULL) printf("Error "); not happening makes perfect sense. We are not avoiding the access to the memory at the NULL address, thus the compiler assumes that x is not NULL, otherwise it would create a segfault. If we called goto, or put the access to x inside an else block, we would avoid this issue.
But that is not what is happening. His claim about the optimiser being allowed to assume malloc always returns memory is strictly wrong. You can easily check that by looking at things like Compiler Explorer. The problem is with Linux, as the kernel will return a memory address even if it does not have any more free memory.
45:35 Is there a reason compilers will avoid overwriting padding in the initialization example, but can overwrite padding in the case where writing a larger value is faster? Or are both examples the same, in that compilers *can* overwrite padding but sometimes choose not to?
That's a really good question! I've been trying to figure this out myself. I think they are scared of overwriting padding because it may break some rare program, but they don't even do it with freshly allocated memory. They do it with memory that has been memset, but not memory that has been calloc'ed. I think it might just be an oversight.
Dude, the wrapping... I have a high-performance monotonic clock that determines frame rate based off the time that has passed since the beginning of the program. Eventually I was like, "wait a minute... what if this hits max??". Man, it was like 3-4 days until I was able to fix it. I didn't really think anyone would run my program that long, but it was just the thought of it happening. I switched everything to uint64_t, which is really all I needed to do, but I still went ahead and made it roll over anyway.
It's ok, I'm not high, I'm just in a daze. I'm not used to specifying the sizes of everything I work with, like working with scanf input. I get it: since I allocate the memory to begin with, I need to know the length of everything if I want to do anything at all with the data. On the plus side, I almost stopped using classes in OO unless absolutely necessary.
I have always put both C and C++ code through the same C++ compiler, deliberately, so that one is forced to write C code that is going to be C++ compatible from the outset. It may be time for the languages to be harmonised so that C is genuinely a C++ subset and programmers can incrementally learn C++ by extending what they do in C without impediments.
C and C++ do try to harmonize, but C++ doesn't mind breaking backwards compatibility as much, and C really cares about that. This means that right now it feels like the languages are slightly diverging. If C++ ever wanted to be a superset of C, they would have to make a commitment to that and break backwards compatibility. Unless C++ started to refer directly to the C standard, it would be very difficult, from a purely editorial point of view, to describe the exact same behaviours in two entirely different specifications written by two different teams. So even if we wanted to, it would be hard to do.
@@eskilsteenberg The backwards compatibility of ISO C hardly matters when it's so divorced from C as it is used in practice. Despite the hundreds of implementations of ISO C, it's actually quite exceptional for a C code base to work across more than a handful of compilers; indeed, the clang compiler was only competitive with gcc on Linux because it implemented gcc's behavior. By comparison, the ISO C++ standard is a practical target for portable, cross-platform development. In a sense, ISO C gets to pretend it maintains backwards compatibility because they seemingly don't care about divergence among implementations. Honestly, the truth seems to be that C++ has effectively smothered C language evolution, i.e. most people interested in improving ISO C eventually gave up and/or found that C++ was far more serious about addressing the needs of users. I mean, after five decades of the C language one would think string processing would be solved, but instead it looks like C will never even have a competent alternative to std::string.
One thing I see/hear often regarding C++ is that the compiler defines the behavior of your program in terms of the "Abstract Machine". UB and the "as-if" rule are consequences of this machine's behavior, even if it would be ok on real hardware. Does C have a similar concept? For example, what you say at 55:46: In the C++ Abstract Machine, every allocation is effectively its own address space. This has important consequences: no allocation can be reached by a pointer to another allocation, comparison of pointers to different allocations is not well defined, etc.
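C does have the same notion: the abstract machine is defined in C11 5.1.2.3 ("Program execution"), and the per-allocation address-space consequence shows up directly in pointer comparisons. A small illustrative sketch:

    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(sizeof *a);
        int *b = malloc(sizeof *b);
        if (a && b) {
            int same = (a == b);  /* equality across allocations: defined */
            /* (a < b) would be undefined: relational comparison requires
               both pointers to point into the same object */
            (void)same;
        }
        free(a);
        free(b);
        return 0;
    }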
I am in third year computer science and somehow my program never taught me C. I learned Java, Go, assembly, Scheme, Prolog and more, but not C. I can read it and I understood this video, but I lack the fundamentals. I'll look into the resources you mentioned and I'll try to hack on some of the software you wrote. There's a game called "Stephen's Sausage Roll" that has a minimal tutorial, and its first levels are not trivial; even at the start they require thought. I need that, but for C.
You should write a small game with code reloading. Like Handmade Hero. That'll teach you everything you need to know. You don't need to make the whole game, by the time you draw some textured quads and maybe some text, you will have learned.
Skip C and learn C++. Not only does C++ allow for all of the "low-level" bit-fiddling of C, but it also makes it possible to automate most of the uninteresting busy work required in C. Moreover, C++ is the language of choice for GPU/SIMD programming, as well as far better parallelism and concurrency.
24:42 Is floating point precision also UB? Because I think that would be much more likely to break (with a general multiplier/divider, not with the special case of 2).
No it is not. C follows the IEEE floating point standards and most things are defined; the things that are not defined are platform-defined, not UB. Platform-defined means that the platform should define what the behavior should be for that platform. That means that it is defined and consistent on that platform, but that it may work differently on other platforms. UB means that there is no defined behavior at all and anything can happen.
@@eskilsteenberg Ah good. Yes, most languages follow that one, and that bit of platform-dependent behavior is also the reason Rust doesn't do floating point math in const fn (aka constexpr in C++). Just to expand my knowledge, if you know: this is asymmetric with whole numbers, and in many instances you see floating point numbers get special treatment to follow that standard. Do whole numbers indeed not have a similar standard? It's certainly much less important for the behavior of code, since the general type already gives all the info (minus, for C and C++, platform-specific stuff like the mapping of int etc. to a size), so I can see why that would be the case.
1:03:22 To be honest I don't think such optimizations should be done by the compiler at all. Instead the compiler should warn the user not to use malloc but the stack here. At least there should be an option to get warnings, where the compiler does fancy optimizations and I would always turn them on.
The thing I hate the most is strict aliasing. What do you mean, pointers of different types cannot overlap? The whole point of union was to allow these operations. What do these compiler vendors think they can achieve by optimizing a union? Why doesn't MSVC have an option to disable strict aliasing? There is a "restrict" keyword, goddammit. If I am optimizing critical code, I am smart enough to use the restrict keyword to allow these optimizations.
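For what it's worth, C does bless type punning through a union (C11 6.5.2.3 and its footnote: the bytes of the stored member are reinterpreted as the type of the member being read); it's punning through casted pointers that violates strict aliasing. A minimal sketch:

    #include <stdint.h>

    union pun { float f; uint32_t u; };

    uint32_t float_bits(float f)
    {
        union pun p;
        p.f = f;
        return p.u;  /* defined via the union, whereas
                        *(uint32_t *)&f would break strict aliasing */
    }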
43:04 I don't think it is the way you explained it. Another process cannot obtain the same memory, due to virtual memory page protection. It could execute APIs like ReadProcessMemory and WriteProcessMemory to change it, but that is purposeful manipulation of memory.
My favourite is @1:04:04. The compiler assumes malloc can't return null when it literally can!? Am I understanding that correctly!? I wonder, were there ever compiler wars? Like the browser wars that gave us so much crap. There's that saying in coding: "You should throw the first one away"; I'm beginning to think it applies to the whole industry. We just need to learn from our mistakes and design a new one.
It just isn't true: the compiler cannot and DOES NOT assume that malloc always returns a non-null value. But malloc performs syscalls to ask the OS for dynamic memory, and things like the Linux memory allocation scheme are opportunistic: it will always give you a valid address, and only when trying to access that memory will you know if you can really use it. But that is not a problem of C, it's a problem of Linux.
I don't understand!? Who highlighted your reply to my post? If it was Eskil Steenberg then he seems to be disagreeing with his own statement at 1:04:04. What's going on? BTW, I don't claim to know the answer, I was commenting on the statement in the video, and assuming it was something Eskil Steenberg had experienced. @@ABaumstumpf
I was very surprised by this advice, given that the talk seemed to be targeting fairly experienced programmers. Possibly the main reason I stick with C is its powerful preprocessor. If you know exactly how it works, you can create incredibly powerful abstractions and generic code - all completely safe. I assume he meant people who don't know how to do that, otherwise this would be very poor advice.
@@tylovset I wouldn't use C because of the powerful preprocessor. If I want a language with a low level core and powerful abstractions, I'd rather use Scopes.
Wrote my first C compiler in 1982 for a CDC 6400 machine. 60-bit words, so 60-bit chars, pointers, and ints. Just enough memory to do simple constant folding of expressions.
Nice to see a new C lang focussed video.
Your "How I program C" video was great.
It's probably the best video on how to structure C programs, in history. I have downloaded it and keep it in every media archive I've got. Hopefully Eskil realizes how life-changing that video is. Even though I don't program in C for anything, it's really the best general programming ethics guide.
I like that you chose a dark color scheme for the slides, but the random white flashes in between really hurt my eyes because of the stark contrast to the rest of the video.
It is like a very effective Flashbang
It's my 2nd favorite part of the video. The whole time I was like, "lmao, someone is probably really pissed about this...".
Dark color schemes hurt my eyes. I can't even look at them for a minute without getting a horrible headache.
@@macicoinc9363 your brain terrifies me
I once blinked while he was trying to flashbang me, that was funny
As someone learning C/CPP this is a true goldmine. I feel like I have managed to at least experience time travel and issues caused by volatile values not being declared as such, when writing code for arduino.
I just wish GCC was more helpful. This might be a RTFM issue on my part, but it would be nice to get a hint like ”maybe you meant to write a function that has defined behaviour?” or something.
I guess checking for undefined behaviour is slow, but I wish there was an option to warn when it exists (at least for the known ones).
Not sure if that's what you mean, but you can use the flag "-fsanitize=undefined" with gcc. And also don't forget to add the same thing to the linker flags, if you're doing separate compilation-linking.
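To illustrate, a tiny program with signed overflow (the file name and values are arbitrary). Built with gcc -fsanitize=undefined -g ub.c -o ub (the flag goes on both the compile and link steps, as noted above), running it makes UBSan report the signed overflow at runtime instead of letting it be silently exploited:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;
        x = x + 1;          /* signed overflow: undefined behaviour */
        printf("%d\n", x);
        return 0;
    }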
Bash can tell me I probably meant a different command when I typo but gcc can’t tell me that I forgot to close a bracket, fantastic.
@@xugro There's a bunch of UB that cannot be found during compilation but still qualifies as UB. For example, you can declare an "extern float x" in one file and "int x" in another (which is prime-time UB) and the compiler is unable to find it (since type information per symbol is not preserved after compilation). Also, there is a bunch of UB that can happen when passing arguments to functions. Let's say you have a function that takes two pointers and compares them - there is no way for the compiler to determine whether you passed the correct values to the call, since the function can be defined in another file (the "correct values" part relates to the provenance part of this video, meaning you can only compare addresses within the same object's address space). These kinds of things generally make it impossible to get rid of UB, and are also the reason why C requires programmers to know what they are doing.
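A minimal sketch of the mismatched-declaration case (two hypothetical translation units): the linker resolves the symbol by name alone, and the float read simply reinterprets the int's bytes.

    /* a.c */
    int x = 42;

    /* b.c */
    extern float x;   /* incompatible with the definition in a.c:
                         undefined behaviour (C11 6.2.7p2) */

    float get_x(void) { return x; }  /* reads the int's bytes as a float */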
Fantastic - the only video I know of that reaches the level of "How I program C".
And no music and other disturbing video stuff - just pure and clean.
Why am I getting flashbanged on a video about c
Ty for making a video about this that doesn't feel like it relies on people's short attention span. This is exactly what I'm looking for when I look for a coding video on youtube.
Thank you for making this. As someone who gets asked why 'the compiler does weird easily biodegradable matter', being able to point people to this is gold. Restrict is something I miss in C++; it is so useful for SIMD intrinsics.
Most C++ compilers allow for a restrict extension, like __restrict for g++ and clang...
Restrict, or an equivalent, is available in all major C++ compilers. That said, restrict itself is a woefully inadequate tool for working with aliasing semantics. It hasn't been standardized in C++ because it's fundamentally a dead end.
Also, C++ is leaps and bounds better for SIMD programming relative to C. Libraries like E.V.E. or Eigen are literally impossible to write in C.
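A minimal sketch of what restrict buys (names are illustrative): the qualifier promises the three buffers never alias, so the compiler may vectorize the loop without re-loading through the pointers on every iteration.

    #include <stddef.h>

    void add(float *restrict out, const float *restrict a,
             const float *restrict b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] + b[i];  /* no aliasing: safe to vectorize */
    }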
49:29 bamboozled me a lot. Binging UA-cam in bed on my iPad, apparently with it about half an arm's length away, BOTH my blind spots converge on the closing curly brace when I look at the second _i_ in the for loop.
Was kinda freaky seeing an instance of UB in my own retinas after you talked about instances of it in C so much.
Sub earned.
This was interesting! I did not know that the compiler did (or could do) such weird and scary optimizations. Now I appreciate that I know assembly even more because at least there you know what you write is gonna stay there no matter what. Or at least I can debug C code by viewing the assembly.
Native Android dev here. Such good explanation, you captured my attention. Thanks for this!
I am a bit baffled that when a C compiler encounters user code that does the impossible (such as a range check that always passes/fails at compile time, or guaranteed undefined behaviour detectable at compile time), its first instinct is "how can I exploit this to make the code run faster" rather than "tell the user their code probably has a bug".
I agree that compilers should be a lot better at explaining what they are doing - for instance, syntax-highlighting code deletion. However, a compiler should also do the optimizations that the standard affords it.
@@eskilsteenberg yeah, it would be great if the compiler gave some notice that it's just ignoring code because it thinks it's pointless, like 'hey, maybe use volatile' or 'this expression is always true' etc.
it's been a while, so maybe those are warnings now, but it doesn't sound like it lol
The story here isn't actually too hard to explain! If you remember back when GCC and clang/LLVM were at each other's throats for being the "better compiler", the number one issue was speed: the faster compiler, the one that won all the benchmarks, was expected to win the compiler holy war. Therefore, compiler developers put massive numbers of hours into making their compiler generate the fastest code possible. Until shockingly recently, they didn't really worry about the effects this would have on developers, so they didn't put nearly as many hours into warnings and heuristics that warn when the code exhibits unexpected behavior. As a result, the warnings that exist are mostly for simple rule breaks, and there's just not enough reporting infrastructure for the optimizer to report that some function is being optimized out of existence in a way that's probably not what the programmer intended. The fix is to put pressure on the devs: either make the patches on your own and contribute them to the projects (the best option!), or repeatedly ask for improved UB detection and ask others to advocate with you.
@@cosmic3689 Right, more warnings about those strange optimizations are wanted. But there is a catch: macros sometimes result in such code, especially when used with literal arguments. So at the same time, there must be some method to avoid overwhelming the developer with such warnings.
@@Hauketal The compiler invokes the pre-processor; it can report a few instances of an error, then summarise repetitions.
I wish they had a C con the way they have Cpp cons. C is like a fine wine and I wish there were conferences.
The C compiler is really that guy that says "oh yeah buddy you didnt mean to do that did you, lemme get that for ya *deletes code block*"
40:53 Assembly jumpscare. I'm damn terrified.
I spend my time working with people who ponder sources of truth and believe that there is one true dogma that will save our souls (keep it simple).
I learned C some 30 years ago and when I feel nostalgia watching this video it's not because I miss C. What I miss are people who actually know what they're talking about and why, people like Eskil.
Jesus is that truth.
How is omitting the malloc() == NULL check not a compiler bug?
The standard clearly defines this to be a possible error case which has to be checked against.
Edit: the real issue seems to be that the compiler optimizes the malloc itself away, because it knows the memory is never used.
Therefore it can assume it always succeeds, because it never called it in the first place.
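A minimal sketch of that chain of reasoning (the size and code are illustrative, not the video's exact example): once the allocation is proven unused, the optimizer may delete the malloc/free pair, and the NULL check folds away with it.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(1000000000);  /* unused: may be elided entirely */
        if (p == NULL)
            printf("Error\n");         /* ...and this check with it */
        free(p);
        return 0;
    }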
I didn't realize that was the issue! Thanks for pointing it out, was also confused why it was misbehaving.
33:00 another way to understand this issue is:
The multiplication of a and b first multiplies as shorts, wrapping if needed, and is then cast to an unsigned int. This means that the highest 16 bits will always be 0, and it will eliminate the if.
This can explain the branch decision, but the result won't be 4 billion this way (?)
So much genuinely valuable information that contextualizes and explains many C intuitions that I've built over time.
Seriously one of the best quality videos I've seen on this platform in recent memory.
this is maximum anxiety for everything I've ever written. At first it was like "alright, perhaps I should reorganise some things for better performance" and then it was "oh god, I hope I didn't implicitly assume that the padding in my structs would be persistent."
Clean and constructive talk about a great language.
c is awesome! please make more about EVERYTHING you would like to share! 🥺
The whole talk I had a feeling that it was John Carmack talking (fun fact: he also mentioned he prefers the Visual Studio debugger)
I've found I prefer Rust these days, but I have fond memories of the C and C++ standards from 20 years ago, thanks for the fun video
Rust is my language of choice these last three years or so. However, I still love C and would be happy to use it where needed. I love it for its hard-core simplicity. I love it because it has hardly changed in decades, and I hope that remains the case. However, I have also used C++ a lot and absolutely refuse to ever go back to that deranged monster.
@@Heater-v1.0.0 Say what you will about C++, but you'll have to square it with the fact that even the major C implementations (Clang/GCC/MSVC/etc.) choose the "deranged monster" of C++ over the "hard core simplicity" of C. Simply put, C++ is more popular than ever because it's actually *more* insane to use C lmfao
@@69696969696969666 This is true. Most of the world's C compilers were written in C. C++ evolved from C, and the compiler implementations followed. All seems quite reasonable. I agree that C++ offers a lot of conveniences that can make life much easier than C, although I'm still happy to use C, or the C subset of C++, where appropriate. It is possible to write nice C++ code if one stays away from much of the ugliness of the language. Unfortunately it's hard to do that on a large project with many people working on it, as they tend to start introducing all kinds of C++ weirdness.
Anyway, all that long and tortuous history does not mean we have ended up in a good place with C++. Many agree with me, like Herb Sutter with his CppFront work. And Herb is on the C++ committee!
Such an amazing video! I loved all these fascinating tidbits about C (and compiler design in general) and you held my attention the entire time. I think I'll watch it a few more times to really grok the material. Bravo!
C: I'm not going to crash therefore I don't need a seatbelt
"You never drive so I removed your car"
I thought I knew C on an above-average-level at least but after watching this video, the only thing I know is that I’m scared of C now…
Even though VLA objects with automatic storage (stack-allocated) are not very useful in practice, the VLA **types** are really useful for handling multidimensional arrays.
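A minimal sketch of a VLA type over heap memory (the function and sizes are illustrative): no automatic-storage VLA is ever created, but indexing gets the natural m[r][c] form with no manual stride arithmetic.

    #include <stdlib.h>

    void fill(size_t rows, size_t cols, double (*m)[cols])
    {
        for (size_t r = 0; r < rows; r++)
            for (size_t c = 0; c < cols; c++)
                m[r][c] = (double)(r * cols + c);
    }

    int main(void)
    {
        size_t rows = 3, cols = 4;
        double (*m)[cols] = malloc(rows * sizeof *m);  /* one heap block */
        if (m) {
            fill(rows, cols, m);
            free(m);
        }
        return 0;
    }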
Thank you for this new C lesson. Great as always.
It seems to start boring and thick and slow but it gets interesting fast. Excellent.
Understanding the underlying hardware, and coding while taking that into account, is a dying breed. People are coding large programs in languages that far remove them from the fact that it's running on HARDWARE that has limitations and idiosyncrasies, that is not immediate when you tell it to do something... that has multiple processes running on it...
And that code is very, VERY inefficient. We're running into a wall with the once constantly rising performance per dollar, and that's starting to cause real issues, and people who understand how to write code for a certain architecture, taking that into account, are valuable again.
Hopefully enough people watch this and realize that it MATTERS what you write, and that you understand the hardware as well.
Two things: C23 now requires VLAs again, rather ridiculously. And, GDB has a TUI mode that is a little buggy, but quite good, and gives you a visual debugger featureset.
Why are VLA requirements ridiculous? What's feasible for implementations can change with time.
The first cc(1) I used had =+ & =- and didn't even support K&R C or the C widely published in books.
BTW, the VLA inclusion unbroke the single error I made in an exam at uni (which cost me a 100% result) long before its inclusion in the C standard, so you need a really good rationale.
@@RobBCactive It's ridiculous because only GCC properly supports it, and the feature was added and then deprecated and then re-added to the standard. This is an absurd thing to do, especially for a committee that is so overwhelmingly committed to keeping the language as much the same as possible over the decades.
@@greyfade compilers didn't support ANSI C until they did, obviously function prototypes are absurd by your reasoning.
@@RobBCactive That's a disingenuous argument. The situation is not comparable. C didn't add function prototypes to the standard and then remove them in the next version and then add them back in the next version. They didn't do that with any feature except VLAs. And they haven't done that with any other compiler-specific feature, either. They didn't add an MSVC-specific extension or a Clang-specific extension or a Sun extension that no one else implemented. They only did that with GCC's VLAs.
@@greyfade nope, VLA is implementable and understandable by competent people.
You've made no case why VLA is not useful or impractical.
Thank you so much, this video is awesome! I appreciate this a lot
The optimization flags in gcc bit me a long time ago. My code had no bugs without optimization flags on, but then would develop a bug with -O2. I don't recall what the exact issue was, but from then on I would run my unit tests with and without optimization flags, to minimize the potential for aggressive optimizations or for missing a keyword that forces the compiler to be more careful with a function.
Your code was buggy before you turned on the optimization flags; the optimization flags just revealed the bugs. Your strategy of testing in multiple different optimization modes is the right one!
48:30 That's why the Rust rules for mutable references are so nice.
@34:50 Not really sure if promotion happens only if the hardware has no 16-bit operations. Promotions HAVE TO happen, but you can use the 16-bit version of the instructions only if the behavior is the same as if the promotions had actually happened (according to ISO 9899:1999 5.1.2.3p10). This is just a way to standardize the semantics of the program, so it can be agnostic to whether the machine supports lower-bit instructions or not.
Yeah, he did a great job explaining the rationale behind integer promotions (and why they scare me), but neglected to explain that his hypothetical implementation--one which can't do 16-bit operations--would define (signed and unsigned) `int` to be 32 bits *_because_* it would allow the implementation to promote any integer type with a shorter width.
Finally a good explanation of what the volatile keyword actually means in C/C++.
Just finished watching. VERY GOOD stuff here. It's a shame that there's no mention of how these things relate to C++. Is it the same or different in C++? I wish I had the same quality of video about C++.
Yeah, I was surprised that he started explaining volatile accurately. So often, even from very brilliant people, you hear rants about volatile and how it does not mean what we think it means - and then it turns out they themselves are giving false explanations.
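For reference, a minimal sketch of the accurate meaning (the register address is hypothetical): volatile tells the compiler every access is observable, so it must not cache the value or delete "redundant" reads.

    #define STATUS_REG ((volatile unsigned int *)0x40001000) /* hypothetical MMIO register */

    void wait_ready(void)
    {
        while (*STATUS_REG == 0)  /* volatile forces a real read each pass;
                                     without it the loop could collapse to
                                     one cached read and spin forever */
            ;
    }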
Lots of learnings here! Thanks a ton!
Regarding uninit values example - the compiler optimization kicks in because there’s no *buf = … statement before the reads in c1 and c2 right?
Definitely request more of these detailed C videos from Eskil. It's a space where there just is not a lot of content.
I'll try, but I'm not a youtuber, so I have limited time.
@@eskilsteenberg understood, just if you ever feel a little inspired, like with this one and How I Program C, a lot of fans will appreciate it I think 👍
Agreed, so cool that i accidentally stumbled on this video!
Is the explanation at 7:57 actually correct? I would have assumed the problem is that *a* can change elsewhere, meaning x and y are not necessarily the same, not that x can change elsewhere, causing y to be equal to x but not a.
I'm looking forward to compilers optimising away array index checks, assuming programmers are too clever to make mistakes is obviously the way forward.
The compiler can't optimize away my bounds checks because I don't check in the first place. Hopefully in the long term the undefined behavior in my out-of-bounds array accesses will result in even greater performance. Ideally compilers will become sophisticated enough to replace my entire code base with "return 0".
If you check array bounds and then access the array ANYWAY, then the compiler is indeed free to remove the bounds check.
24:42 The subtitles aren't helping me here, because they also hear both signed and unsigned as having possible optimizations. I *think* the second one is "can't, but let's just say it's not clearly defined" ;)
Thank you, this is terrifying. Compilers are amazing. So many times I think I've found a faster way to do something, then the compiler just shakes its head at me and produces the same binary.
@23:47 Sometimes I depend on overflow. Splitting the operation into multiple statements, i.e. x *= 2; x /= 2; has always produced the behaviour I want. It is interesting that x *= 2; x /= 2; is not always the same as x = (x*2)/2.
@34:32 I'm sceptical that this can happen. I can't reproduce it on GCC 8.3.0, even if I add the casts!
@51:08 there's something wrong with your newline here ;-)
If you write nonsense code that gets into language or compiler details unnecessarily, you are not doing anyone any favors. Clearing the high bit can be done by masking e.g. x &= (1
@@gregorymorse8423 I don't. It was a bad example. I would never intentionally overflow a multiply. The only times I depend on overflow are for addition and subtraction. In 8 bits, 2 - 251 = 7. This is necessary if you want to calculate the elapsed time of a free-running 8-bit timer. People tend to think of number ranges as lines, which is why overflow causes some confusion. For addition and subtraction it can help to think of number ranges as circular, or as dials. Then the boundaries become irrelevant.
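A minimal sketch of that idiom (names are illustrative): unsigned wrap-around makes the subtraction come out right even across the 255 -> 0 rollover.

    #include <stdint.h>

    /* ticks elapsed on a free-running 8-bit timer */
    uint8_t elapsed(uint8_t then, uint8_t now)
    {
        return (uint8_t)(now - then);  /* e.g. now = 2, then = 251 gives 7 */
    }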
@Michael Clift Overflows are well-defined behavior in two's complement number systems, and applications like cryptography rely on this; deliberately overflowing multiplication when doing modular arithmetic is practically vital to achieve performance. That C has tried to be low level, but introduced bizarre undefined behavior concepts all over to capture generality that is useless, is beyond me. The formal concept behind the dial analogy is that a + b is, e.g. for 32-bit unsigned, (a + b) % 2^32, and likewise for multiplication. C does in fact respect this for unsigned numbers; it's the signed ones that are trickier to describe, so they chickened out.
with regards to 34:32, copying the code as written in the video and compiling with just "gcc -O3 t.c -o t" reproduced the result for me on gcc 9.3.0 (ubuntu, wsl)
@@nim64 Thanks nim. I tried it with -O3 and now I see the symptom too (still on GCC 8.3.0). It appears to happen with any optimisation level apart from -O0
30:30 I think here it might be better to define an enum with values 0,1,2,3 and to cast a to that type / to have it that type. With -Wswitch, I would hope that means that the value being outside of that enum should also be UB / unreachable (although I would have to look it up, it also depends on how the compiler warnings work here). I would prefer that since it doesn't depend on compiler intrinsics, and it also doesn't let you skip values in between (at least if it's a sensible enum like "enum value_t {A,B,C,D};" and not something strange like "enum weird_t {A=55, B=17, C=1, D= -1854};").
in C, it is not undefined behavior for an enum to have a value that is not enumerated. Basically enums are just ints or whatever integer type you picked.
@@ronald3836 absolutely, that's why I pointed to -Wswitch, which makes it a warning (hopefully). It's not in the standard, but it is a pretty typical optional limitation of what you can do in most compilers.
Also I should say that I usually use -Werror with lots of warnings turned on. I know many people are not as diligent tho
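A minimal sketch of the approach described above, using the comment's own enum: with -Wswitch, gcc and clang warn when a switch over the enum type is missing a case for some enumerator.

    enum value_t { A, B, C, D };

    void handle(enum value_t v)
    {
        switch (v) {  /* -Wswitch warns if any of A..D lacks a case */
        case A: /* ... */ break;
        case B: /* ... */ break;
        case C: /* ... */ break;
        case D: /* ... */ break;
        }
    }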
39:04
There's a new proposal document, N3128, on the WG14 site that actually wants to stop this exact thing because of how observable behavior (printf) is affected.
Nice vid, just one question: In your union aliasing example around the 52m mark, the union has a compatible type as a member, as per C 2011 6.5 7, is this not valid and defined behavior?
Those white flashes (when changing slides) are hurting my eyes 😕
Don't do the flashing white screen. It hurts my eyes and is annoying in general.
I had to stop watching about 10 mins in because of it. Seizure inducing.
I loved this video, thanks for sharing! As someone who started programming on the x86 processor, which I think has a more forgiving memory model, it's great to review the acquire/release semantics and other little things that may trip me up.
Regarding undefined behavior: do you have an estimate of how often the compiler will raise a warning before relying on the UB to delete a bunch of code? To me it seems most or all of these should be a big red flag that there's an error in the program - even though the C language assumes the programmer knows what they're doing.
34:24 If we have unsigned shorts a, b = USHRT_MAX; then multiplying a and b together produces undefined behavior. Do I understand this example correctly? We might expect unsigned integers to wrap around modulo USHRT_MAX+1, but in fact they do not, due to implicit type promotion to signed integers. And this only applies to types with rank lower than int (i.e. char, short).
Congratulations! You cracked it!
Hi Mister Steenberg! If you happen to read this message, would you consider doing a video about C23? I'd like to hear what you think about the new features coming in C23.
This was fascinating! I had no idea about some of the things happening under the hood. Thanks!
Interesting talk! It’s always fun seeing C code and realizing that it’s undefined :)
One thing I don’t understand is: in what scenario would you ever free an array and then check that you didn’t reallocate the same block? I kind of get it if thread A allocates, thread B does some calculation, thread A frees and reallocates, then thread B checks if it’s already done the calculation for the current block. Seems like a flawed architecture though; if this is the case, then A should trigger B on a reallocation and B will wait otherwise. Maybe I just don’t get it though.
There is a common pattern using a mechanic called "compare and exchange". Let's say you have a counter that is shared and many threads want to increment the value. Each thread wants to access this value and add one to it. To do this you read the value, add one to it, and write it back. The problem with this is that between reading and writing it back, some other thread may have incremented the value. So if thread one reads the value 5 and adds one to it, then thread two reads the value and adds one to it, and then both write it back, the value is set to 6, not 7, even though two threads have added 1 to 5.
To deal with this, processors have a set of instructions called "compare and exchange"; they let a thread say "if this value is X, change it to Y". So our threads use that to say: if the shared value is still 5, change it to 6. If two threads try to change 5 to 6, the first one will succeed, and the second one will fail and will have to re-read the value and try again.
This technique is often used with pointer swaps. So you have a pointer to some data that describes a state; you read that state, create some new state, and then use compare and exchange to swap in the pointer to the new state. In this case you are using the pointer to see if the state has changed since you read it, and this is where an ABA bug can happen, if two states have the same pointer.
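A minimal sketch of that increment loop, using the C11 <stdatomic.h> interface as one possible way to express it:

#include <stdatomic.h>

atomic_int counter;

void increment(void) {
    int old = atomic_load(&counter);
    /* keep retrying until no other thread changed the value between our
       read and our write; on failure, old is refreshed with the current value */
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
        ;
}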
Yes, some kinds of smart pointers can be easily implemented in C.
@@eskilsteenberg why is this advantageous to using a lock? Seems like a rather roundabout way to solve the shared resource problem
@@zabotheother423 Lockless algorithms are generally faster because they don't require any operating system intervention. Mutexes are convenient because, if you use a function that locks them, any thread that gets stuck on a lock will sleep until the lock is available, and the operating system can wake up the thread when the lock gets unlocked. This OS intervention is good, because threads don't take up CPU while waiting for each other. On the other hand, sleeping and waking threads takes many cycles, so if you really want good performance it's better not to have a sleeping lock but just do a spin lock, if you expect to wait only a few cycles for the resource to become available. This means that you can only hold things for very short amounts of time, so it's harder to design lockless systems, but also more fun!
@@eskilsteenberg interesting. I’ve heard of lockless designs before but never really explored them. Thanks
34:00 Would be fun to see this run on an architecture that uses something other than 2's complement for hardware acceleration of signed integer operations
Old video, but people designing new languages should all watch this - I feel like people miss the point of UB in C quite often.
43:00 -- could you provide an example of a platform where this happens? It's certainly not the case on Linux or any Unix system.
It doesn't happen.
At least I definitely know when the slide changes
29:35, I actually have a better way to write that code: make member 0 a default function that does nothing, or at least handles invalid input, then MULTIPLY a by its bounds check. In this example it would be
a *= (a >= 0 && a < 4);
func[a]();
Notice how there's no if statement that would result in a jump instruction, which in turn slows down the code. If the functions are all in the same memory chunk, then even if the CPU assumes a is not 0, it only has to read backwards a bit to get the correct function, and from my understanding reading backwards in memory is faster than reading forwards.
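A minimal sketch of that pattern (handler names are hypothetical; slot 0 doubles as the fallback for invalid input):

typedef void (*handler)(void);

void handle_invalid(void) { /* default: do nothing, or log bad input */ }
void handle_one(void)     { /* ... */ }
void handle_two(void)     { /* ... */ }
void handle_three(void)   { /* ... */ }

handler func[4] = { handle_invalid, handle_one, handle_two, handle_three };

void dispatch(int a) {
    a *= (a >= 0 && a < 4);   /* out-of-range input collapses to 0, the fallback slot */
    func[a]();
}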
Christmas came early this year!
I don't even know C but I find this extremely entertaining
"Misconception: C is a high level assembly language"
Interesting that some years ago the company I worked for (a big, big player in embedded IoT) invited a renowned trainer to give us a presentation about advanced C stuff, and he said exactly that we should treat the C language as a "portable assembler".
The point is, don't set your mind to one side :)
Maybe you need to hire better trainers? 🙂
Working with SunOS/Linux/gcc since the early 90s but it's good to be reminded from time to time about these easy to forget pitfalls.
39:45 Does this happen for this, too?
assert(x && "Error!");
or does it notice that assert will guard the program from dereferencing a null pointer?
Eskil Steenberg was a really kind, hard-working fellow who put his very soul into his work, a fun guy to work with without a dull moment! Eskil Steenberg, you will be missed!
Wait, what? Did he pass away?
@@cavasnel I guess the commenter worked with him at some point, but no, he didn't pass away; he's still active on Twitter.
33:55, um, int is NOT always 32-bit though, sometimes it's 16-bit like short, and in that situation the compiler could easily optimise out the call altogether; better to have used a long, which at least is guaranteed to be bigger than a short. Also (and I'm assuming you're leading up to this) you should've put a in x first and then multiplied x by b; a * b by itself might, and probably will, remain an unsigned short operation and will just lose the upper bits before it even gets to x.
C community love ❤️
Good topics and tips!
Thanks :)
I now have a clearer understanding as to why C is :
a) Fast
b) Dangerous
:D
That's what the video is meant to do! Thank you!
growing up is realizing C is the best programming language
And Pascal
1:07:34 It is my understanding that it is UB to define macros with names identical to standard library functions. Am I mistaken about this?
That is very nice content!! Nice effort
The white flashes whenever the slide changes make this impossible to watch.
38:40 I think this is wrong - the compiler isn't allowed to propagate undefined behaviour backwards past an I/O operation like printf, which might cause an external effect such as the OS stopping the program anyway. (depending on what the output is piped into)
There is nothing in the standard that forbids this, but you are not alone in thinking it does not make sense (many people in the ISO C standard group agree with you). People do file compiler bugs for this behaviour, and some compilers try to minimize it, even though the standard does not forbid it. I think ISO will come out with some guidance on this soon-ish.
The compiler "knows" that *x can be accessed, so x cannot be NULL. If what the compiler "knows" turns out to be false, then that is undefined behavior and anything is allowed to happen, both before and after. The C standard allows the compiler to annihilate the universe if a program exhibits UB.
35:38 C without automatic casting would be nice, I guess, especially given such weird casting rules.
Sorry, I am sort of a beginner, but regarding the example at 9:40: there are many examples of code in the Linux kernel that do this kind of thing without volatile keywords. What's up with that?
Great video Eskil!
Ahah, so basically the compiler is just a YouTube comment troll that looks at your code and responds with "Ahaha, too long, didn't read".
Great video, but the flashes between slides are quite irritating.
The bit with if(x==NULL) printf("Error\n"); not happening makes perfect sense. We are not avoiding the access to the memory at the NULL address, thus the compiler assumes that x is not NULL; otherwise it would create a segfault. If we called goto or put the access to x inside the else block, we would avoid this issue.
But that is not what is happening. His claim that the optimiser is allowed to assume malloc always returns memory is strictly wrong. You can easily check that by looking at things like Compiler Explorer.
The problem is with Linux, as the kernel will return a memory address even if it does not have any more free memory.
Oh, this is definitely underrated.
45:35 is there a reason compilers will avoid overwriting padding in the initialization example, but they can overwrite padding in the case that writing a larger value is faster? or are both examples the same in that compilers *can* overwrite padding, but sometimes choose not to?
That's a really good question! I've been trying to figure this out myself. I think they are scared of overwriting padding because it may break some rare program, but they don't even do it with freshly allocated memory. They do it with memory that has been memset, but not memory that has been calloced. I think it might just be an oversight.
hmm, weird that slides are not text files in Visual Studio (-:
Is there a compiler that warns you when it decides to delete your code? :)
Possible, but realistically it does so all the time in large code bases so it won’t really help most of the time.
Dude, the wrapping... I have a high-performance monotonic clock that determines frame rate based on the time that has passed since the beginning of the program. Eventually I was like, "wait a minute... what if this hits max??". Man, it was like 3-4 days until I was able to fix it. I didn't really think anyone would run my program that long, but it was just the thought of it happening. I switched everything to uint64_t, which is really all I needed to do, but I still went ahead and made it roll over anyway.
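For reference, a minimal sketch of the wrap-safe way to measure elapsed ticks; unsigned subtraction is defined modulo 2^N, so the difference stays correct across a single wrap:

#include <stdint.h>

/* Elapsed ticks, safe across one wrap: the result is (now - start) mod 2^32,
   which is the right answer even after 'now' has rolled over past 0. */
uint32_t elapsed_ticks(uint32_t start, uint32_t now) {
    return now - start;
}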
It's ok, I'm not high, I'm just in a daze; I'm not used to specifying the sizes of everything I work with, like when working with scanf input. I get it: since I allocate the memory to begin with, I need to know the length of everything if I want to do anything at all with the data. On the plus side, I almost stopped using classes in OO unless absolutely necessary.
So many inexplicable behaviors explained!
I have always put both C and C++ code through the same C++ compiler deliberately, so that one is forced to write C code that is going to be C++ compatible from the outset. It may be time for the languages to be harmonised so that C is genuinely a C++ subset, and programmers can incrementally learn C++ by extending what they do in C without impediments.
C and C++ do try to harmonize, but C++ doesn't mind breaking backwards compatibility as much, and C really cares about that. This means that right now it feels like the languages are slightly diverging. If C++ ever wanted to be a superset of C, they would have to make a commitment to that and break backwards compatibility. Unless C++ started to refer directly to the C standard, it would be very difficult from a purely editorial point of view to describe the exact same behaviours in two entirely different specifications written by two different teams. So even if we wanted to, it would be hard to do.
sizeof('a')
I rest my case.
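For readers who missed the reference: in C a character constant has type int, in C++ it has type char, so this single expression already separates the two languages (assuming a platform where sizeof(int) > 1):

#include <stdio.h>

int main(void) {
    printf("%zu\n", sizeof('a'));  /* typically 4 when compiled as C, always 1 as C++ */
    return 0;
}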
@@eskilsteenberg The backwards compatibility of ISO C hardly matters when it's so divorced from C as it is used in practice. Despite the hundreds of implementations of ISO C, it's actually quite exceptional for a C code base to work across more than a handful of compilers; indeed the clang compiler was only competitive with gcc on Linux because it implemented gcc's behavior. By comparison, the ISO C++ standard is a practical target for portable, cross-platform development. In a sense, ISO C gets to pretend it maintains backwards compatibility because they seemingly don't care about divergence among implementations. Honestly the truth seems to be that C++ has effectively smothered C language evolution, i.e. most people interested in improving ISO C eventually gave up and/or found that C++ was far more serious about addressing the needs of users. I mean, after five decades of the C language one would think string processing would be solved, but instead it looks like C will never even have a competent alternative to std::string.
@@69696969696969666 I use a "dependable subset" of C that I know is portable. I want to make a video about it.
Thank you for the video
34:40, Wow, I knew about promotion, but not that small unsigned types promote to a signed int. That's -stupid- really surprising and inconvenient.
One thing I see/hear often regarding C++ is that the compiler defines the behavior of your program in terms of the "Abstract Machine". UB and the "as-if" rule are consequences of this machine's behavior, even if it would be ok on real hardware. Does C have a similar concept? For example, what you say at 55:46: In the C++ Abstract Machine, every allocation is effectively its own address space. This has important consequences: no allocation can be reached by a pointer to another allocation, comparison of pointers to different allocations is not well defined, etc.
Yes, the C language standard also uses the abstract machine concept.
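A minimal sketch of one consequence mentioned above (pointer ordering across allocations is undefined in both languages):

#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    int *q = malloc(sizeof *q);
    /* p == q and p != q are fine, but ordering comparisons such as p < q
       are undefined: each allocation is effectively its own address space */
    free(p);
    free(q);
    return 0;
}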
I am in third year computer science and somehow my program never taught me C. I learned Java, Go, assembly, Scheme, Prolog and more, but not C. I can read it and I understood this video, but I lack fundamentals. I'll look into the resources you mentioned and I'll try to hack on some of the software you wrote.
There's a game called "Stephen's Sausage Roll" that has a minimal tutorial, and its first levels are not trivial; even at the start they require thought. I need that, but for C.
You should write a small game with code reloading. Like Handmade Hero. That'll teach you everything you need to know. You don't need to make the whole game, by the time you draw some textured quads and maybe some text, you will have learned.
@@MagpieMcGraw This is good advice.
Skip C and learn C++. Not only does C++ allow for all of the "low-level" bit-fiddling of C, but it also makes it possible to automate most of the uninteresting busy work required in C. Moreover, C++ is the language of choice for GPU/SIMD programming, as well as far better parallelism and concurrency.
@69696969696969666 calm down big guy its just some text, no need to get worked up and crusade in the comments :)
Thank you vey much for this! Absolutely love your talks
Every single day, I learn about new UB in C/C++.
What puzzles me is that a+b operation takes one clock, but 'if (a
24:42
Is floating point precision also UB? Because I think that would be much more likely to break (with a general multiplier/divider, not with the special case of 2).
No, it is not. C follows the IEEE floating-point standards and most things are defined; the things that are not defined are platform-defined, not UB. Platform-defined means that the platform should define what the behavior is for that platform. That means that it is defined and consistent on that platform, but it may work differently on other platforms. UB means that there is no defined behavior at all and anything can happen.
@@eskilsteenberg Ah good. Yes, most languages follow that one, and that bit of platform dependent behavior is also the reason Rust doesn't do floating point math in const fn (aka constexpr in C++).
Just to expand my knowledge, if you know: this is asymmetric with whole numbers, and in many instances you see floating point numbers get special treatment to follow that standard. Do whole numbers indeed not have a similar standard?
It's certainly much less important for the behavior of code; the general type already gives all the info (minus, for C and C++, platform-specific stuff like the mapping of int etc. to a size), so I can see why that would be the case.
1:03:22 To be honest I don't think such optimizations should be done by the compiler at all.
Instead the compiler should warn the user not to use malloc but the stack here.
At least there should be an option to get warnings where the compiler does fancy optimizations, and I would always turn them on.
The thing I hate the most is strict aliasing. What do you mean, pointers of different types cannot overlap? The whole point of union was to allow these operations. What do these compiler vendors think they can achieve by optimizing a union? Why doesn't MSVC have an option to disable strict aliasing? There is a "restrict" keyword, goddammit. If I am optimizing critical code, I am smart enough to use the restrict keyword to allow these optimizations.
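For context, a minimal sketch of what restrict buys you when you manage aliasing yourself (hypothetical function; the qualifier is a promise from the programmer, not something the compiler verifies):

void add_arrays(float *restrict dst, const float *restrict a,
                const float *restrict b, int n) {
    /* restrict promises dst, a and b never overlap, so the compiler
       may vectorize freely instead of reloading after every store */
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}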
Hi, where should one start with C to have fun? Can you make a video for beginners?
43:04 I don’t think it works the way you explained. Another process cannot obtain the same memory due to virtual memory page protection. It could execute APIs like ReadProcessMemory and WriteProcessMemory to change it, but that is purposeful manipulation of memory.
My favourite is @1:04:04. The compiler assumes malloc can't return null when it literally can!? Am I understanding that correctly!?
I wonder, were there ever compiler wars? Like the browser wars that gave us so much crap.
There's that saying in coding: "You should throw the first one away"; I'm beginning to think it applies to the whole industry. We just need to learn from our mistakes and design a new one.
It just isn't true: the compiler cannot and DOES NOT assume that malloc always returns a non-null value. But malloc performs syscalls to ask the OS for dynamic memory, and the Linux memory allocation scheme is opportunistic: it will always give you a valid address, and only when trying to access that memory will you know whether you can really use it.
But that is not a problem of C, it is a problem of Linux.
I don't understand!? Who highlighted your reply to my post? If it was Eskil Steenberg then he seems to be disagreeing with his own statement at 1:04:04.
What's going on? BTW, I don't claim to know the answer; I was commenting on the statement in the video and assuming it was something Eskil Steenberg had experienced.
@@ABaumstumpf
1:05:37 What about using the preprocessor for common code snippets?
I was very surprised by this advice, given the talk seemed to be targeting fairly experienced programmers. Possibly the main reason I stick with C is its powerful preprocessor. If you know exactly how it works, you can create incredibly powerful abstractions and generic code - all completely safe. I assume he meant people who don't know how to do that; otherwise this would be very poor advice.
@@tylovset I wouldn't use C because of the powerful preprocessor. If I want a language with a low level core and powerful abstractions, I'd rather use Scopes.
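As one small example of the kind of preprocessor abstraction meant above (a hypothetical generic-swap macro; safe as long as the arguments have no side effects and aren't named tmp_):

#include <stdio.h>

#define SWAP(type, a, b) do { type tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int main(void) {
    int x = 1, y = 2;
    SWAP(int, x, y);
    printf("%d %d\n", x, y);   /* prints "2 1" */
    return 0;
}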
Great video, although it would be better not to have those white flashes in between slides. Really hurts my eyes when watching this at night.