33:30 I actually looked that up, but no, Rust as a language does not (currently) have a specification.
Some people think that "The Rust Reference" is a specification, but it's explicitly not.
But a few months back an RFC got accepted to actually create a language specification.
The self-driving car was a fascinating example of safety, and in my opinion it demonstrates the difficulty of defining safety definitively. A self-driving car that can't drive off the road is not the safest car. In an unexpected traffic event the safest option can very well be to drive off the road, even to crash off the road.
I don't know how that would directly apply to software design, but it's a good example for making you think about how sometimes the context is different enough that the perspective on what's safe changes.
Or how some physical safety mechanisms rely on something breaking and giving up because it isn't up to the task. You don't want your fuse replaced by a thick nail, even though that lets you keep using your device without the electricity cutting out. And by that definition it's a good and safe nail, because it can withstand the stresses applied to it.
You recognize safety when you're missing it and something goes wrong, but you don't recognize safety when nothing goes wrong. You might assume safety, but that's not accurate. You think your house is adequately safe until you get robbed and realize that the safeties in our houses are actually really poor if someone really wants to get in and isn't too afraid of the consequences. On the other hand, you might be perfectly safe on an amusement park ride yet have a strong sense of a lack of safety.
These talks by Sean Parent always make everyone fall so quiet until after. It's like you can hear a pin drop, everyone is so focused on what he has to say. Without exception it seems to be important and thought provoking.
It's amazing that the first 20 minutes are a repetition of a lecture I had last week.
Like, how did I manage to do this from a timing perspective?
Great talk Sean!!
50:58 Stepanov’s lecture (I believe): ua-cam.com/video/YlVUzJwN_Xc/v-deo.html
1:14:00ish - thinking in terms of preconditions - those requirements need to be machine-enforceable. That might be more painful to use, but think of Concepts - these kinds of tools ought to help, not hinder us.
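As a rough illustration of what I mean (my own hypothetical sketch, not from the talk; the names LessThanComparable and min_of are made up): a Concept turns a documented precondition like "T must support operator<" into something the compiler actually enforces at the call site.
#include <concepts>
// Hypothetical example: the requirement becomes machine-checked instead of
// living only in the documentation.
template <typename T>
concept LessThanComparable = requires(T a, T b) {
    { a < b } -> std::convertible_to<bool>;
};
template <LessThanComparable T>
T const& min_of(T const& a, T const& b) {
    return (b < a) ? b : a;  // the concept guarantees this expression is well-formed
}
Calling min_of with a type that has no operator< then fails at the call site with a concept error, rather than deep inside the template.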
Was the audience microphoned too? I'm curious why there's so much background noise, when the speaker seems to be nicely mic'ed.
On Chandler's video someone commented about not being able to hear the audience questions, and he replied that they had mic'd up the audience in a way they thought would capture the questions.
Muahahaha. Why doesn't it surprise me that an Adobe employee calls GCC non-commercially-friendly?
It depends on what you're trying to do. RMS has always been hostile to attempts to modularize GCC to make it more embeddable into commercial IDEs, like Clang/LLVM and Xcode, or at least that's what I've heard. Then there's the GPLv3 issue, a license that some companies and groups (Apple, FreeBSD) refuse to use in the core system. That's why Apple stopped at GCC 4.2.1 and an old version of bash. Those were the last GPLv2-licensed versions. Most companies that use open-source projects want to have their changes upstreamed for ease of maintenance, but they hate to be forced to do anything, as GPL does.
FWIW, the GNAT Ada compiler is a front-end for GCC that's commercially-supported and is GPLv3 now, but that project started in the 1990s, and there wasn't any open-source alternative the authors could've used instead.
Every product deals with user information because that is a second form of revenue that companies want to sell.
A robotic vacuum cleaner now transmits its images to a server the consumer doesn't control.
The washing machine now sends your data to the manufacturer, who sells it to third parties, and it's all in the user licence agreement.
The language is not the problem.
By default, smart TVs opt you in to uploading your usage data to the manufacturer.
Every smartphone records your text and voice and sends it to whoever you have your smartphone contract with.
Your modern car keeps all your text messages, and who knows what else, and they cannot be deleted.
The bogeyman is to blame the programmer, so let's see how safe we can keep making the languages.
17:55 VirtualAlloc says hi! :D
48:45 "There's no real downside [to using std algorithms vs raw loops]" - the big downsides remains readability and debbugability. I love std::accumulate and a few others, but it becomes incredibly gnarly stuffing lambda's inside algorithms inside algorithms. With a simple loop, one can easily follow the flow and press F7 through each step.
I feel the same about most of "modern c++!" :p
#include <algorithm>  // std::ranges::all_of
#include <array>
auto my_array = std::array{0, 1, 2, 3, 4, 5};
bool const all_under_10 = std::ranges::all_of(my_array, [](auto v){ return v < 10; });
Sure, maybe you have a point about debuggability, but when in the world do you need to debug this? Why would you have to debug an algorithm? If you do need to, then maybe you shouldn't be using an algorithm.
Readability though? You're beyond wrong: try writing the snippet I wrote in 10 seconds with a for loop, and make sure you don't screw up the condition; that's an easy mistake to make.
Algorithms do EXACTLY what their name says; declarative programming is immensely more readable than for loops. Furthermore, if you're not at least using range-based for loops, then you're the type of programmer that will eventually kill someone. Raw for loops are harmful.
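For comparison, here's a hypothetical hand-rolled version of the same check (my own sketch, not from the thread; all_under_10_loop is a made-up name): the bound, the early return and the final result all have to be written and reviewed by hand.
#include <array>
#include <cstddef>
bool all_under_10_loop(std::array<int, 6> const& values) {
    // Equivalent raw loop: every detail the algorithm handles for you
    // (the loop condition, the early exit, the final result) is manual here.
    for (std::size_t i = 0; i < values.size(); ++i) {
        if (!(values[i] < 10)) {
            return false;
        }
    }
    return true;
}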
Unfortunately, debuggers don't give us much support for navigating C++. It should be possible to tell the debugger to stop at the Nth iteration starting from begin() in a std algorithm.
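A workaround I've used (purely a sketch of the idea, not a debugger feature; the names and the count of 4 are made up): wrap the predicate in a lambda that counts its invocations, so a breakpoint, or a conditional breakpoint on the counter, stops you at exactly the Nth element the algorithm visits.
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>
int main() {
    auto values = std::array{0, 1, 2, 3, 4, 5};
    std::size_t call_count = 0;
    bool const all_under_10 = std::ranges::all_of(values, [&](int v) {
        ++call_count;
        if (call_count == 4) {
            // Set a breakpoint on the next line (or a conditional breakpoint
            // on call_count == 4) to stop at the 4th element visited.
            std::printf("visiting element #%zu: %d\n", call_count, v);
        }
        return v < 10;
    });
    return all_under_10 ? 0 : 1;
}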
@@alexb5594 "Raw for loops are harmful." pray you never see what the compiled code looks like.
You may not like it, but raw for loops are what peak performance looks like.
specifically, for(;;) loops.
@@GeorgeTsiros well obviously they get compiled to raw loops. It's considered harmful because humans write bad code from time to time, and STL algorithms try to remove unnecessary points of failure, like getting the bounds check wrong in a raw for loop.
Haven't all these problems been solved by the adoption of Ada/SPARK in industry? Ada seems to be a suitable answer to every potential safety-related issue thanks to provability and its other compiler/toolchain features.
As far as I can judge, Ada/SPARK isn't well suited for applications that require dynamic memory management. Yet Ada/SPARK is probably the best language to specify, implement and verify correct software.
We can have 150 finds, then AI can help us pick the right one. (As I would pick the wrong one).
We can also have 150 find-finders. Then we can build 150 LLMs that each tell you which find-finder to use.
AI verification, when AI itself is unverifiable?
1500 views, 98 likes; cpp programmers really are introverted
😂😂😂😂
C++ programmers in 2004: "you should move on from C to C++, C++ is so much safer, reduces mistakes and is just as performant"
C++ programmers in 2023: "what even is safety? let me split hairs for half an hour about safety definitons, moving to a safer and just as performant language is not the solution"
The knowledge of programming language design continues to advance, and perceptions about safety have changed; a talk about safety is just part of that reevaluation. Calling this splitting hairs just shows how ignorant you are, but if you don't want to learn about safety, nobody is forcing you to watch this talk.
There's no such thing as a "memory safe language". No, not even GW-BASIC. The _moment_ you make an API call to the OS, all your "memory safety" goes straight out the window. Why does the NSA say it, are they stupid? No idea; they could be, they could be not. _That_ piece of advice, though, is just bad. Besides, "memory safe language" means _nothing_ when your compiler is not proven correct: as far as I know there is only one formally verified compiler, CompCert, for a _subset_ of C99.
I’ve often thought about how ironic it is that there are all these formal verification languages implemented in totally unverified platforms. Inductively, it seems nonsensical. How can you prove that something is sound if the tool you’re using to do it isn’t sound?
ua-cam.com/video/MO-qehjc04s/v-deo.html Divide by zero most commonly gives +/-Inf depending on the signs of the dividend and divisor. Only if the dividend is zero will you get NaN. If floating-point exceptions are enabled you will get a SIGFPE.
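A small self-contained illustration of that behaviour (my own sketch, assuming IEEE-754 doubles with the default floating-point environment, i.e. FP exceptions masked):
#include <cmath>
#include <cstdio>
int main() {
    volatile double zero = 0.0;  // volatile keeps the compiler from folding the divisions
    double const pos_inf = 1.0 / zero;        // +Inf: positive dividend, +0 divisor
    double const neg_inf = -1.0 / zero;       // -Inf: negative dividend, +0 divisor
    double const not_a_number = 0.0 / zero;   // NaN: only a zero dividend gives NaN
    std::printf("%f %f %f\n", pos_inf, neg_inf, not_a_number);
    std::printf("isinf: %d %d  isnan: %d\n",
                std::isinf(pos_inf), std::isinf(neg_inf), std::isnan(not_a_number));
}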