This was surprisingly entertaining! I just saw an older UB talk, and you guys are definitely getting better at this.
Yes, I thought so too; it captured my attention with the three-legged animal.
Barbara and Ansel are so chill, I love to hear their talks.
Nice presentation. Your efforts and passion for teaching C++ are amazing, and always give me positive energy while learning C++. Thanks a lot.
I really enjoy your lectures - you guys are a great tag team ;)
Thank you so much!
Best talk of the dozens I've listened to.
15:45, hm... I thought it would always compare 2 different addresses. A negative bit shift does not go to the other side... This is so unC++ish! Why can't a double be converted to a float?! Isn't it just a matter of cutting off digits?
28:32, and what about this: auto var = my_ptr ? *my_ptr : *default_ptr;
Could it dereference a nullptr (from my_ptr)?
45:45, there's a compiler flag that could catch this easily.
47:27, I guess there's another bug here: when index is "pointing" to 'd', the if will be true, and index += 2 will make it "point" to ':'. And if mid(index) takes the interval [index, Input.size()), the result will be ':vector', still with the ':' attached to it. If it were defined behaviour, an automated test could catch it.
48:58, std::vector is a chunk of contiguous memory. So any new element that increases the size will likely require a new memory location, unless there happens to be room in place or enough memory was preallocated via its member function reserve(). And for performance reasons, iterators are not redirected to the new location: they are invalidated, so the old ones _cannot always be reused_.
56:00, I make a point of never applying ++ within an assignment where the variable appears more than once.
Many thanks, that's what I need right now ;)
Undefined behaviour could be eliminated, as is done in Rust.
Rust does not have undefined behaviour, except if you use an unsafe block.
Hey, could you upload your lecture slides?
From a teaching standpoint, it's better to talk about why we need UB at all. For example, why can the compiler remove null accesses?
Hello, I’m a bit late here, but I thought I would answer you.
At the beginning, they said that you usually hear discussion of UB from compiler writers. Your question is something compiler writers often discuss, so I'm sure they didn't want to rehash it here.
The short answer is that leaving certain things undefined is what allows compilers to achieve excellent optimization. Everything that isn't defined leaves wiggle room in how the compiler actually implements it. If the standard instead defined what must happen in those cases, some of the levers for compiler optimization would be frozen by having to match it.
Why do you think that two speakers are better than one?
I wish people understood this stuff way more. Maybe the biggest myth about UB is that you can figure it out by experimenting (i.e. writing code, running it, and observing the result). That is just false. Whatever behavior you observe is NOT reliable; the compiler is allowed to generate something different on the next run. It may appear that you are getting consistent behavior, but that is a false sense of security that people lull themselves into, an illusion that will likely break down the line: perhaps when you (or, more likely, someone else on your team) change an unrelated piece of code, or when you upgrade your compiler version, or when you try to port to another platform. At that point, you will probably have one hell of a time tracing back to your original sin.
A good example is code that seems to work with -O1 but not -O2 or -O3.