Back in the day (late '80s, in a university environment), a Unix administrator and resident C guru warned me not to be fooled by the small size of the language, since I could spend the rest of my life learning new tricks and exploring its possibilities.
I have been doing a lot of writing of custom libraries, and the amount of advice that is just "use the standard" really gets on my nerves. Pointers take some getting used to, but they're not that hard. I don't always need a std::array or a std::string.
As a C programmer I find myself moving more and more towards assembly for performance and a deeper understanding of the computer. And I hate that assembly with ardor: a horrendous thing full of demonic and unholy stuff from a warp dimension, very taxing on the brain, but an inevitability if you try to really understand OS structure and programming. C++ still remains my preferred language.
I kinda like to use multiple languages all the time. Most of the code I write is C++ though. I like to have an alternative implementation for a component in plain C or some other language (Java, Python, Haskell, Scala). Sometimes other languages pose restrictions which turn out to be useful, and I end up making the original implementation better. Based on my experience, the C++ version is the fastest and the safest. Some things are still kind of hard to accomplish properly: dependency injection (can improve performance at the expense of very complicated code) and reflection (needed for better error messages when something fails). Also, the thread safety checks seem to not be properly supported on some platforms; we still get some thread safety issues. I also like many things in Python: its formatted strings inspire whole libraries in C++.
I first learned C in 1984, shortly after working a long time in assembler. I could not believe that there was a high-level language that was so sloppy, did not check much of anything, did not give any warnings, just let your mistakes cause crashes and, in those days, even bring down the whole machine. Today I love C. It's minimal, the least "high-levelness" required to get one off assembler. It's small; one can write a C compiler in reasonable time. It's best in class for performance. Today I also love Rust. Finally a high-level language that banished UB and checks almost all the silly mistakes I might make, while maintaining the performance of C and avoiding the monstrous complexity and ugliness of C++. And it interoperates with C very well. So I end up loving both C and Rust, for polar opposite reasons. Rust will not replace C for me; they will coexist very well. I'm sure that is true in the wider world as well.
I like Rust, but it's also becoming monstrous, and its syntax is just as ugly as C++'s. Sure, you can write trivial examples with nice concise syntax, but look around the ecosystem: a lot of Rust code is hideous, verbose and very dense.
15 is way low; most things in this video could already be done 24 years ago =D The modern aspect is not about the features, it's about learning how to write good C with the features that have been available in it forever.
Rust is a common language where you can do the fad thing for 15 months and be called "modern," but still be a fad. We would be so much better off if people just learned basic programming skills in C instead of software "engineering" their way into a black hole with every new fad that comes along.
@@totheknee Rust is a memory-safe language without a GC or a runtime, which mostly runs as fast as C/C++, so I would say that's pretty modern and not a fad.
25 year programmer here. I always knew C was a solid language, but I wasn't very productive in it. It gives you so much freedom, that without some knowledge of software design it can be difficult to build and maintain useful software. I learned so many languages over the 25 years, and it wasn't until I learned Golang and Rust that I actually understood how to design and build software correctly. When I came back to C, I had fresh eyes and I'm feeling like I can be productive in C. I'm still convinced C is the best programming language.
Same here. I spent most of my career writing Ruby code, then moved to Go, and it made me want to try C. And I completely fell in love with it! It's now my main hobby language, although I'm sure the way I use it is unorthodox and nonsensical to professional C developers (I mostly write very small programs, sticking to UNIX idioms, and also a lot of CGI programs, for use with my web interfaces). It just clicks for me: back to the simplicity we were aiming for in the very early days of Ruby (we failed quite spectacularly 😅), but this time I understand what's going on under the hood.
I sometimes make the analogy that calling C a bad language is like saying a 2-wheeled bicycle has no balance. It is definitely up to __YOU__ to properly organize your files and namespace your [global / filescope] variables. Other languages force that sort of organization on you.
Designated array initializers are amazing and have a real use. The trick here is that in the brackets you can use any constant expression, like enum values. So you can have a map from enum values to whatever you want.
Thank you very much for mentioning MISRA. I've been programming C (medical devices, automotive, telecom) since working at Bell Labs in the 1970s, and I love it, but your methodology, coding standards, and process determine the literal safety of the outcome.
@@KFlorent13 I'm mainly talking about void pointers, but function pointers are certainly another way, especially if they take and return void pointers. (This is an alt account.)
45:21 Assuming sz can get very big, the argument to malloc can overflow and produce a smaller allocation than expected. There's calloc for this purpose. I thought this was common knowledge.
1:08:46 The semantics associated with flexible array members are a bit weird. They're arrays of an incomplete type, but one can use a compound literal to initialize the structure, so long as the flexible array member remains uninitialized. However, a flexible array member has no presence in the language itself aside from being a named field in a structure: it has no size, and it has incomplete type, so you cannot initialize it, yet you can still refer to it and take its address. Also, the allocation is simplified to sizeof(string_s) + sizeof(char[len+1]) because there is assumed to be no padding between the length field and the flexible array member. If you add another field of type char between the two, you may find that the offset of the arr field is less than the size of the entire struct, due to padding at the end, resulting in allocation of unused bytes. To use every bit of the space, you need to allocate the maximum of either sizeof(string_s) or offsetof(string_s, arr) + sizeof(char[len+1]).
Either I don't understand or I found a bug at @45:45. In the line *(pK + i*sizeof(double[sz]) + j) = _gcoeff(sz, i, j), the sizeof(double[sz]) would only be valid if the pointer were a char*. It won't work here because pointer arithmetic is performed relative to the pointee's size.
46:00 - Is it really that bad? I remember trying to wrap my head around multidimensional arrays in BASIC when I was 14. But it's always `row*stride + col` for 2D. Always. It isn't that hard, right?
Yeah, it's not; he also wrote it down in an unnecessarily complicated way. Instead of *(pK + i*sizeof(double[sz]) + j) = _gcoeff(sz, i, j); it's just pK[i * sz + j] = _gcoeff(sz, i, j);
@@TomPomme Yep, it's actually a bug, since he's using a double* and the compiler automatically multiplies the index by sizeof(double), so he's using the wrong stride. He was probably writing the example using unsigned char* and decided after he'd written it that it'd be better to type it properly, and forgot to delete that part.
Finished Effective C recently and started doing some Advent of Code. Man, does it feel nice to use the more modern features. It reminds me of a more stripped-down Zig.
16:30 Unfortunately this is also wrong; if it compiles at all, then only under C23. auto buffer = (char[42]){};
1) The compound literal has an empty initializer list, which is not allowed by ISO C (until C23 it was only available as a gcc extension).
2) auto buffer: the default type of int results in at least a warning (-Wall or -pedantic).
3) auto buffer: even if the default type were int (with a warning), the types would not match, because the compound literal (a char array of size 42) yields the address of its first element (char *). That would be a conversion from char* to int.
45:35 The thing you're missing here is that these one-dimensional arrays "emulating" multidimensional arrays have several advantages over multidimensional arrays. A big one is that they are simpler and more efficient to allocate, deallocate and memset, since they are guaranteed to be one contiguous block. If you call malloc N times, you get N blocks that are not guaranteed to be contiguous.

As for not being able to understand it... skill issue. If you see the pattern enough times, y * width + x becomes obvious and idiomatic. Case in point: I understand that code perfectly fine. In fact, I find it easier to read and understand than the example you show soon after, where you create a pointer to an array of arrays and then collapse 3 layers of indirection into 1 in a cast at the end. I would not be confident that that would work if I were writing that code.

The one-dimensional pointer, on the other hand, is much easier to reason about, and you can easily write it in a way that is more readable. You could pull the size of a row into a constant called "width", or you could even make a macro to do the transformation from (x, y, width) to y * width + x. But when I see a pointer to array of array of scalar being cast to a pointer to scalar, that's a massive code smell to me. A very dirty cast, IMO.
I'm just gonna say this, and I'm not trying to be offensive or insulting, but a lot of the code shown in this video is code I would call "stinky". I think one of the worst examples would probably be the INVOKE macro. It's clever, sure, but it only works to switch between 2-arg and 3-arg functions. Try it on a 4-arg function and the macro generates text that attempts to call the fourth argument as a function. Try it on a 1-arg function and I'm not even sure it'll compile, because this happens: `INVOKE(arg1, , function`. Note the double comma. I think it's pretty clear that the macro should not be called INVOKE at all. It's too situation-specific and too much of a dirty hack. If it were called something like INVOKE_2or3, I probably wouldn't be complaining, or at least this comment would be seven sentences shorter.

You also seem to think that modern C doesn't need to cast malloc, as opposed to "old" C. Well, I'm here to tell you that C has always been weakly typed, and implicitly converting a void pointer to a non-void pointer has been allowed ever since void was first introduced in C89. Before void pointers were a thing, malloc returned a char*, but I don't think it was casted then, either. Again, C is weakly typed.

Some other examples of patterns you use that I don't like are typedefing structs (never do this, just use the struct keyword, it's clearer and easier to read) and that [static 1] thing, both because it puts brackets in the function parameters and because it's telling the compiler to literally throw the whole program in the trash when a null pointer gets sent to the function. In my opinion, it's the callee's responsibility to handle that case gracefully. And it's not hard to do in a way that is readable, as long as you structure your if statements appropriately. If you are putting 98% of the function in a single if statement, you are doing it wrong. Invert the if and early-return; get rid of that unnecessary indentation and unnecessary complexity.
It was an interesting video, but you and I clearly have very different ideas on what modern C looks like. I shouldn't be surprised because obviously you are a C++ programmer first and a C programmer second. I gather this from you casting malloc in that refactoring segment and calling it "old C" and from you disappointedly saying that C doesn't have name mangling, as if it would be a good thing to add that crap to the language and make compilation slower. It's ironic because earlier in the video you quote Torvalds on VLAs, but if you knew Torvalds, you'd know he hates C++ and frankly, if you submitted code that looks like what you showed in this video (especially that kernel related one that I touched on in the parent comment) to the kernel source tree, Linus would absolutely cuss you out.
int (*arr)[WIDTH] = malloc(sizeof(*arr) * HEIGHT); is also contiguous in memory, you call malloc only once, and you use arr[y][x] instead of arr[y * width + x]. And it's not a new feature; it has always been possible.
I think the VLA syntax is a good idea in theory, but I _wish_, badly so, that it worked better with sanitizers vs. just a pointer. I tested this a bit, and it's better than just a pointer, but it's not perfect, so, eh, I am really conflicted about whether I should use this or not. You could always use macros to index into them, and such a macro is cleaner and a good use of a macro IMO. So basically, I wanna love this, but I can't yet, unless clang and gcc (or is the sanitizer standalone?) add exclusive benefits to this syntax.

I will add, though, that in practice, if you're careful, it isn't as error-prone as you'd think. My use case was casting a void* to int[x][y]. I think something like int a[2][2] = data; was good enough for me, IIRC; this was a while ago.

Edit: I tested this out, and yup, VLA syntax + sanitizer on your compiler catches all the out-of-bounds UB in my tests, while the classic pointer-style syntax doesn't catch any when passed to a function.
Yes, well spotted. Unfortunately, the man doesn't know exactly what he's talking about. The idea is to use it to check whether the address that was passed belongs to an array that has at least that many elements.

void foo(char p[static 5]);
char array[4];
foo(array);

This example would give you a compile warning, both under clang and under gcc. The whole thing has its limits, of course. And of course the function cannot check whether a null pointer has been passed if this was not already known as a constant value at compile time; that claim is nonsense. foo(NULL) results in a warning because a constant value was passed there. In the following example, the compiler has no way of determining at compile time whether NULL has been passed:

int x = 1;
char const * p = "Hi";
if (x) p = NULL;
foo(p); // No warning, even if p should be NULL

The whole thing is not intended for this, but for cases where a "real" array (as the address of the first element) is passed, and not a pointer.
15:42 That int (*pk)[sz][sz] has the *pk in parentheses because the array operator [] has higher precedence. If the parentheses were not used, you would have a two-dimensional array full of pointers to int, which is a totally and completely different thing from a pointer variable; moreover, it would invalidate the malloc call. Further reasoning: *pk is a pointer to int and therefore encapsulates information about the data object, i.e. int, which is 4 bytes (32 bits). This encapsulation/metadata, if you want to call it that, is what allows the pointer arithmetic to be performed. It makes the pointer aware of the data object, so when you add or subtract 1, the address moves by exactly 4 bytes (often computed with a LEA instruction; it is extremely fast).
Interesting talk, but there are a few errors.

Around the 16:30 mark, he said that the program was well formed, but he should have only said that it compiles. Using a magic number for the size of a buffer is a huge problem, but he also didn't check the return value from strftime(), which could lead to weird bugs and/or a segfault.

The slide around the 18:15 mark was all kinds of wrong. The accept() function in that example only took one argument, not multiple. This may be a language barrier issue, but he should get someone to revise the text; and he misspoke with regard to what static meant in the argument, as it requires *at least* 1 value to be in the array. One could attempt to rationalize that as enforcing a C string, as in a nul-terminated array of chars, but it wouldn't do that, as you could pass the address of a single char and it could be set to anything.

The two slides at 23:00 and beyond have a weird comment that seems to equate zero initialization of a char * to the value "", which is incorrect, as that would have it pointing somewhere, and only a tiny percentage of niche systems allow data to be stored at address 0, which is why that was chosen as NULL.

I still don't get why C needs a nullptr constant when we already had NULL, and if it's about casing, why do we need the `ptr`? If it's a type-checked thing, then again, why `ptr`? This also relates to the addition of constexpr, as I'd rather they just change the functionality of const. Also, I'm not averse to defining macros inside functions and regularly do so, especially when it's just for that function; and aside from that, if it's about type checking, constant values can use type suffixes, and there's always the standard #define SZ ((size_t)100) to give a constant a particular type if you don't know the correct suffix.
At 33:00 I actually agree regarding VLAs and generally try to avoid using them, but there are legitimate use cases, since they allocate on the stack, and I would definitely avoid using a maximum allocation size for every use of one. Given that the stack is generally limited, it'd be better if the max value were dynamically checked either way; but if you need so much space that it would blow the stack, then use the heap instead.

Around 46:00 I would just use i[*pK][j] to avoid needing the extra set of parentheses. I still don't understand why so few people use this superior approach to referencing arrays of data to avoid LISP-ing your code.

As for 1:07:00 and flexible array members, I find that it's generally better not to use them. It adds hidden complexity to structs, and it prevents you from making arrays of them, taking the sizeof them, embedding them in other structs, and yes, even directly initializing them. If you're worried about multiple allocations, then use the strategy I employ, where the container is allocated on the stack. While the separation is a valid concern, it's an overblown one for certain types, especially when we're talking about large data sets. The data will dictate how it should be accessed far more than the few pieces of bookkeeping information will, and from an ergonomics standpoint, you shouldn't ever be allocating for a single container on the heap.
A lot of what has been presented here I already use regularly in my personal projects, especially the clearer initialization. My priority is always making the code more understandable, discoverable and debuggable, though, so many creative uses of macros, old and new, are banned from my code.
18:30 Unfortunately, this is not correct. The idea is to check whether the passed address belongs to an array that has AT LEAST (not exactly) this many elements.

void foo(char p[static 5]);
char array[4];
foo(array);

In this example, you would receive a compile warning, both under clang and under gcc. The whole thing has its limits, of course. And of course the function cannot check whether a null pointer was passed if it was not already known as a constant value at compile time. foo(NULL) leads to a warning because a constant value was passed there. In the following example, the compiler has no way of determining at compile time whether NULL was passed:

int x = 1;
char const * p = "Hello";
if (x) p = NULL;
foo(p); // No warning, even if p should be NULL

The whole thing is not intended for this, but for cases in which a "real" array (as the address of the first element) is passed, and not a pointer.

void accept(char const str[static 4]);
char const a[] = "Hi";
char const * p = "Hi";
accept(a); // Warning with clang and gcc
accept(p); // Warning with gcc (not with clang)

Clang and gcc also do different things here to determine whether the limits of the array have been observed. For example, clang no longer warns at all if a pointer is passed, while gcc tries, as far as that is possible at compile time. Unfortunately, the way it is presented in the video is not correct.
"... so you can impress your colleagues!" This is the mentality where things go south usually. If anybody catches themselves writing "fancy", "abstract" code just so that you can impress someone, please take a good hard look into yourself because that's the opposite of what you should be doing.
Yeah, you should be aiming to write your code in such a simple, straightforward, stupid-simple style that people ridicule you for its simplicity. For example:

free(some_pointer);
some_pointer = 0x0;

Those who are proponents of D.R.Y. might tell you that if you are always going to get rid of your dangling pointers like this, you could cut your line count in HALF by making a specialized helper free that does the zeroing-out inside the function:

my_special_free(&(some_pointer));
assert(0x0 == some_pointer);

Problem is, every little thing you do like this adds a bit more to your cognitive load. Same thing with language features. If you keep adding stuff every few years, eventually you will no longer have a language that easily fits within your head. -KanjiCoder
57:01 - Looks like UB. The compiler will either remove our "NULL check" (why even have one, when free does nothing with null pointers?) or remove the dereference of the null ptr at line 7.
Yeah, the clearing should be done in that `if` block where he makes sure that `pa` points somewhere. As for checking that `data` points somewhere before calling free(), I just assumed he's been using it on embedded platforms, where a function call that does nothing would take more processing time than a local branch testing against NULL.
Ooooh! I use designated initializers for arrays all the time, for a couple of things: 1) I often make static tables of data that are indexed by enum values. You just write [MY_ENUM_INDEX] = {.the = data, .for = it}. 2) Graphics code often uses integers for binding slots, so I initialize plenty of complex nested structures, such as .targets[3].blending_mode = BLEND_ADD or something like that, where different shaders need to set data at certain indexes deep within a large, mostly empty structure. Designated init works *great* for this!
Great presentation. I think the presenter should show more real applications of modern initialization in C. I suggest showing how compound literals (CLs) combined with designated initializers can simplify the setup of a complex tree of structures in the Vulkan API. No need to create a bunch of auxiliary objects and vectors; just taking the address of CLs or CL arrays gets the job done.
One change I would like to see in C would be to add a syntax like [*]= in array notation to allow for a "default to something other than zero" mode. Something like `double array[10] = {[1] = 3.14, [2] = 2.78, [*] = 6.02e23};`
If you mean your * has to become 3, then you write this as {[1] = 3.14, [2] = 2.78, 6.02e23}. The elements without indices follow after the last element with an index.
45:35 That's a mess because it's not actually allocated as a multidimensional array. It's allocated as a single dimension, which will work, but it's a mess and/or behaves as a single dimension. If you want a multidimensional array, it really should be allocated multidimensionally. Which is to say: allocate your first array dimension and assign it to a **ptr, then use a for loop to step through each element of the array, assigning each one a pointer to another allocated array of the desired size for your next dimension. Then it will behave as a normal multidimensional array, using standard arr[col][row] notation. No VLAs required. It just has to be freed the same way it was allocated. Example:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, j, col = 5, row = 5, **arr = NULL;

    arr = malloc(sizeof(int *) * col);
    for (i = 0; i < col; ++i)
        arr[i] = malloc(sizeof(int) * row);

    for (i = 0; i < col; ++i) {
        for (j = 0; j < row; ++j) {
            arr[i][j] = 'A' + i + j;
            printf("%c ", arr[i][j]);
        }
        putchar('\n');
    }

    for (i = 0; i < col; ++i)
        free(arr[i]);
    free(arr);
    return 0;
}
It's a very naive approach to multidimensional arrays: access speed is terrible, allocation speed is terrible. You could drop basically anything you were taught in uni regarding C and C++.
That's downright horrific. The better method is to allocate the first dimension and the second in one allocation, then patch up the pointers of the first dimension. It still requires a single loop for 2D arrays, but only one malloc()/free() per 2D array. Also, you should put it in its own function and do some basic error checking. Of course, most would recommend using a 1D array and just indexing by [i*w+j] instead of doing something so insane.
In the refactor example: why keep the check for nullptr when he's using static 1 in the parameter list? Shouldn't that prevent a NULL vec pointer being passed in?
Adding auto with the same meaning as in C++ is a strange choice. I think it's generally a bad feature even in C++, where it at least has some utility due to long type names, iterators and namespaces. But in C? Why?
This talk is very interesting, and I cannot program in C while listening, so I'm gonna watch this later. :P Btw, I've been programming for more than 15 years in 20+ programming languages, and nowadays I'm back to programming in C and Rust.
THANK YOU! I don't care how much I like a new feature: DO NOT ADD IT. Question: if you add 1 new feature to C over an infinite timeline, how many features do you end up with? Answer: infinite features. Are people TRYING TO KILL MY LANGUAGE? Because this fact should be OBVIOUS to programmers, who should have mathematical reasoning. If you want to take the current C23 standard and make it its own language... that is totally okay. I have no issues with JAI, ZIG, ODIN, GO, RUST. Maybe this is hypocritical, as I compile as C11 (because of the memory model), but besides that I just write my code like it is C89 with "//" style comments. -KanjiCoder
C programming language is a tool and the quintessential soul of computer architecture and foundational programming. It is akin to an elemental force that underpins the very fabric of computing, transcending mere syntax or semantics. This soul, deeply ingrained in the computing structure, migrates through various embodiments - from operating systems to embedded systems, manifesting its presence in numerous forms, much like an incarnation. Each new form it takes on, be it a simple microcontroller or a complex operating system, is endowed with the essence of computational logic and efficiency that C embodies. Just as a body may function with limitations but cannot exist without a soul, so too can modern computing operate with various languages and tools, but it cannot escape the fundamental principles and efficiencies ingrained by C. This language embodies more than just a set of instructions; it represents computational thinking and efficiency, acting as a cornerstone that builds the edifice of modern computing.
I still find Fortran's dynamic multi-dimensional arrays (allocate(pK(sz,sz))) and their intuitive access syntax, pK(i,j), much easier than this VLA syntax.
Do you even know what a header file is? You could literally rename all your .h files to .c and they would still do the same thing. They are called "headers" for organizational purposes; header files themselves are C files.
I was taught C in 1981 as a new grad in Computer Science when working for a company called ARBAT by my boss Peter Madams. Best boss I ever had. We used Whitesmiths C on a PDP-11 and later on a Vax. 42 years later I am still using C/C++ in the financial trading industry, so I am very lucky that it has given me my whole career. My boss used to say, "There is no such thing as a correct C program, just one that has never gone wrong yet !!" This is as true today as it was in 1981.
The program at 17:00 is malformed, but for none of the reasons listed: main is a function, and it is required to return at least a 0 or 1 (EXIT_SUCCESS/EXIT_FAILURE) via a return statement.
Check out www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf , section "5.1.2.2.3 Program termination"... It reads: "... reaching the } that terminates the main function returns a value of 0". I think this was added in C99, but may be off on the history...
Why are you assuming that those are the values of those macros? The standard doesn't define them, and they could be 0 and 1, or they could be -22 and 33. In fact, the wording they use for describing how exit() works is grammatically ambiguous and could imply that EXIT_SUCCESS couldn't be 0, but either way, the host system may very well take any value and exit() would be required to convert to that value for a successful exit.
tl;dr: I am looking for good old fashioned (hardcore) C and C++ books that are current (e.g. 20 or 23). I have an MSCS degree (with EE undergrad). I think it is one of the reasons I find efforts to dumb down the material mind numbing. Most of my learning has been through textbooks, and the Java/Salesforce developer guides provide a good textbook-like in-depth coverage of major topics. P.S: Happy holidays. Long version: Can anyone suggest good C and C++ books? I have not really coded in either for over 10 years. It has been Java and Salesforce (Apex) during that time. I post the above question because I am not able to find a good C++ book. I think the textbook-like book from Stroustrup would be OK if it was current. The Murach book seems like it is targeting learners who are new to programming. I find the pattern of material presentation mind numbing. To me it seems that to get a deep understanding of a subject you need to go through 10 mini-tutorials. And the video tutorials are no better.
I haven't programmed in C myself since college, but my friend was recently learning C and Lisp. For C he used "The C Programming Language" by Kernighan and Ritchie. For learning Lisp he used "Structure and Interpretation of Computer Programs" by Harold Abelson, Gerald Jay Sussman, and Julie Sussman. Even if you are not interested in Lisp, that book contains important concepts for understanding and learning programming in general. Also, there are Python and JavaScript versions of "Structure and Interpretation" if you are more inclined to those languages. I hope this helps.
Just do the exercises in K&R with your compiler flag set to C89. Then do some work in C99, then C11/C17. The changes will come naturally because there aren't many of them.
All the hype around Rust, but barely anyone uses it in embedded. I've worked on HDD and SSD firmware, so I know how these things operate: written in C. Worked on SIM cards, credit cards, NFC cards; that firmware is written in C too. My friends from the industry do car head units, POS, and a lot more. C. It's C, C, C.
Better on all points. Memory leaks are overrated, and many tools for memory checking already exist: Leaks and Valgrind, for example. And all memory allocation is the operating system's business. All this talk about the dangers of memory leaks is just crap. The only languages without memory leaks are Java or C#; that's all. Even ancient Pascal. Verdict: marketing crap for smoothie lovers.
If designated initializers work differently in c and c++, but c++ is supposed to compile c code, does that mean that eventually requirements for designated initializers are going to be relaxed in c++?
C++ is no longer just a superset of C. There are things which are different in C++ and C. For example the auto keyword or function declarations with no parameters. Edit: After watching the talk, it turns out that the auto keyword behaves exactly the same as in C++
Maybe, but C++ isn't always able to compile C code so C++ might keep its stricter rules. For example, `auto float` in C is a valid way to declare a float, but in C++ it won't compile because auto always stands in for a type name
No, and the reason why is because C++ is not, never has been, and never will be, a superset of C. Try assigning malloc to a pointer without casting in C++ and tell me how far you get. Better yet, try to compile literally any real-world C program in a C++ compiler. They won't compile. Funnily enough, Python code also won't run in a Lua interpreter. Nor can MASM compile Java. The answer is simple: use a C compiler to compile your C code.
neither does c89 have binary literals. but if you try hard enough, and you turn on compiler optimizations... who's to really say? binary literals are less than an afternoon of work. a few minutes, even!
I've been a JS/Python soydev for the past 5 years, basically because I learned to code specifically to work as a developer rather than to learn about computers. The last 6 months I've been coding multiple personal projects in C and I feel like I have a superpower now, I understand a lot of low level stuff that the high level languages hide and my code skills are much better. Probably learning C was the best decision I could make. I feel like I want to stop being a webdev because I'm sick of it, but I don't think I have the skills yet to apply for a full C/C++ job, that's gonna be a long ride.
I am probably just an old dinosaur. I don't find much of "modern" C attractive. Maybe I am allergic to changes. I think one of the main advantages of C is that it does not change that much, so that both old and new code can be compiled on both old and new compilers (if you are lucky), so that C remains simple to learn, and so that writing a compiler remains simple. Don't get me wrong, there are a lot of things that I don't like in C, for example the #include mechanism, which basically implies slow compilations, some of the undefined behavior stuff, and some aliasing rules which force the compiler to generate instructions to re-read unchanged memory... But adding an initializer that sets non-referenced fields to zero? That sounds against C principles. Non-referenced fields should not be initialized. C is not Java or something. And the sizeof a struct with a VLA inside that returns the size of the struct ignoring the VLA... that sounds weird, almost buggy. And [static 1] for non-null looks awful. It would be nice to be able to define whether a function can or cannot take null in one of its arguments, but it should look a lot more straightforward, like a nonnull keyword or something. Or better, the other way around, a null keyword.
A lot of this talk was slightly horrifying for me... (with due respect and thanks to the presenter for the whirlwind tour of the new features!) but syntax which means different things in different situations, or expressions that are jokingly "confusing" but the compiler accepts... these are the sorts of things that sneak through code reviews and will absolutely cause serious bugs. The C and C++ committees are both aware that these languages have a reputation for complexity and tricky shoot-yourself-in-the-foot gotchas. The goal should be to simplify the murky areas of syntax, or to only introduce syntax that is very clearly distinguished from other & previous rules and does what you would intuitively think it does, not something "surprising" or quizzical (literally). This talk demonstrates the committee is going in the opposite direction, introducing more and murkier syntax rules and surprising implicit/contextual semantics. Also, what if I need to have two VLAs at the end of a struct? 😅
I'm curious as to what aliasing rules you're talking about. If you mean evaluation of macro arguments, well, that's annoying but also a symptom of bad macro writing. If something else, then please expound upon what they are. C99 added the ability to zero initialize entire arrays and structs with { 0 } and extending that only makes sense. You can't use a VLA in a struct definition. The term is flexible array member because the size isn't defined at all. Pre-C99, a lot of code would use [0] for the array size and just allocate the extra space as needed, but you can literally use any non-zero size you want and do the same thing, just with a default minimum. I will agree slightly with the static keyword in an array argument, but I'd prefer no keyword and instead to just use [1] to indicate that it requires at least 1 element. That would change the meaning of that construct, and I don't care. The only alternative that I would accept is to use something like [>=1] or [>0] or maybe even the more ambiguous [1?].
@@VV-rk3wu I'll tell you what I told the other guy, FAM, not VLA. But if the two arrays you wish to place at the end are the same length, then struct the two types and use one array of both together. If they're differently sized, then either one's a pointer and the other a FAM, or they could both be pointers and make things safer. Also, you should be allocating containers either together on the heap or singularly on the stack.
Does int a = {5.8}; work in C or not? In C++ (without the equals sign) it would error out due to narrowing conversion; is it the same in C, or does C do the conversion anyway?
Of course it works. Well, the only reason why this would not work is if you have conversion warnings enabled + warnings as errors, then you would have a compile error. Other than that, it works.
May I ask why line 4 at 15:51 works? I thought an array definition would ask the compiler to automatically allocate the memory on the stack, with a size corresponding to the type and number of elements it has? And can someone also explain line 11 at 17:51? 😂
Because it declares a pointer, not an array. Think about line 4 as `typedef int T[sz][sz]; T* pK = malloc(sizeof *pK);`. The array type is defined and next a pointer to this type which is used to allocate object on heap.
1. pK is just a pointer. The type it points to is an array of 'sz' arrays of 'sz' ints. Since it's just a pointer, it needs malloc to obtain its memory chunk. Before the '=', the pointed-to type has already been defined, so C already knows the size it's going to have; that's why sizeof(*pK) can return that size in bytes. 2. 'auto' is a data type inferred by the compiler from the expression on the right side of the '='. (char [42]){} is a compound literal: an unnamed portion of memory on the stack with room for 42 chars, and the type of buffer is inferred from it.
The type of line 4 is a pointer, so its storage is simply the size needed to store an address, like 32 or 64 bits. But this pointer carries additional metadata about what it points to: we know the element type is "int" and that it points to a 2-dimensional array of equal height and width (sz). This means sizeof(*pK) is asking not for the size of the pointer but of the data it is intended to point to, and thus resolves to sizeof(int)*sz*sz. If we assume, as he did/noted, that an int is 4 bytes, this gives us 400. Line 11 is definitely a weird one; since he says it works, my only rationale is that it is declaring "buffer" as a 42-character array and not specifying initializer values for the elements in the curly braces. Definitely bad style, I hope I never see such code in the wild.
For the first question: it's a pointer to an array. So it won't allocate the array on the stack, just a pointer. Same way that int *x won't allocate stack memory for an int. Second question, it's equivalent to char buffer[42] = {0}; (char[42]) {} is an array literal of 42 chars, initialized to zero. auto buffer = ...; means assign ... to buffer, and infer its type. So in this case the type will be inferred as array of 42 chars, and it will be assigned the value of 42 zeros.
At 15:51 `int (*pK)[sz][sz]` is a pointer to a 2-dimensional variable-size array, so when `malloc`ing you want the size of the 2D array and not the pointer; that's why he dereferences `pK` in the `sizeof()`. At 17:51 in line 11, he creates an `int` variable (no type is specified, so it defaults to int) `buffer` with storage duration `auto` and assigns it the pointer to the first element of the zero-initialized `char` array. (Arrays in C and C++ automatically decay to pointers.)
I am with you on this. C's selling point is that it is a very small language that can be easily memorized. It is what sets it apart from all other languages. If people INSIST on adding to a language FOREVER with no stopping point, the language is eventually going to hit a critical mass of un-maintainability where no one in the world can write a compiler for it. C might be furthest from that future, but doesn't mean we should march towards it.
Standardizing C was a mistake. Language committees only attract self-important people who want to control how other people code instead of problem solvers looking for the truly best way forward. C had existed for almost 30 years when it got handed over to a committee that promised to "only standardize existing practice". It's become a tool for pushing a bunch of folks' crazy untested ideas onto everybody under a well-recognized brand name.
The new features are neat if they help reduce issues without requiring modifications of the existing code. But things like that non-NULL check are wack; if you start introducing those things you might as well recompile your code in C++ with use of references if needed. Not sure why they decided to make the non-NULL argument description so complicated. It makes literally NULL sense.
No, in this case C23 fixes weird behavior, where the new C23 output is what a sensible person would expect and the old output is an arcane artifact. I like this change!
The auto keyword is as old as the C language. It's just the opposite of static and tells the compiler to allocate a local variable on the stack, which is the default option anyway (auto, static, register). It has a very different meaning in C++.
The truth is that programmers have to make a living and the market dictates demand for emerging languages such as Rust. In 5 years' time there's a decent chance that Zig will paint a great picture.
C is the language which created the Cyber War Domain. The Hamburger+Cola of computer science. There have always been much better alternatives around: +ALGOL +PASCAL +MODULA-2 +ADA Plus some newer ones such as RUST and SAPPEUR.
All these new features, but just more confusion. What a missed opportunity! Speaking as someone who worked with C and C++ in the telecom industry for over 10 years: if you have a large team without a very strict coding style, strictly enforced in code review, you end up with a big mess. I have not used these two languages for more than 15 years, and after watching this presentation, I can't say I miss them.
As a C++ programmer I find myself moving more and more towards C for performance and simplicity. C is truly a legendary language.
I tend to move in the other direction. I can write C++ code C-like, I can't write C code C++-like.
C++ sometimes can be as fast if not faster, but you need extra effort for it.
what is your native language?
I like Rust, but it’s also becoming monstrous and its syntax is just as ugly as C++. Sure you could write trivial examples with nice concise syntax, but look around the ecosystem, a lot of the rust code is hideous, verbose and very dense
You should try Zig and C3.
C is one rare language where you can do the same thing for 15 years and be called modern. And I like that.
15 is way low, most of the things in this video could already be done 24 years ago =D
The modern aspect is not about the features, it's about learning how to write good C with the features available in it since forever.
Modern C has the same meaning as modern chess
Rust is a common language where you can do the fad thing for 15 months and be called "modern," but still be a fad. We would be so much better off if people just learned basic programming skills in C instead of software "engineering" their way into a black hole with every new fad that comes along.
Modern C has different syntax from older C
@@totheknee Rust is a memory-safe language without a GC or a runtime, which mostly runs as fast as C/C++, so I would say that's a pretty modern "fad" indeed.
25 year programmer here. I always knew C was a solid language, but I wasn't very productive in it. It gives you so much freedom, that without some knowledge of software design it can be difficult to build and maintain useful software. I learned so many languages over the 25 years, and it wasn't until I learned Golang and Rust that I actually understood how to design and build software correctly. When I came back to C, I had fresh eyes and I'm feeling like I can be productive in C. I'm still convinced C is the best programming language.
Same here. I spent most of my career writing Ruby code, then moved to Go, and it made me want to try C. And I completely fell in love with it! It's now my main hobby language, although I'm sure the way I use it is unorthodox and nonsensical for professional C developers (I mostly write very small programs, sticking to UNIX idioms, and also a lot of CGI programs, for use with my web interfaces). It just clicks for me, back to the simplicity we were aiming for in the very early days of Ruby (we failed quite spectacularly 😅), but this time I understand what's going on under the hood.
I sometimes make the analogy that calling C a bad language is like saying a 2-wheeled bicycle has no balance.
It is definitely up to __YOU__ to properly organize your files and namespace your [global / filescope] variables.
Other languages force that sort of organization on you.
Designated array initializers are amazing and have a use.
The trick here is that in the brackets you can use any constexpr expression, like enum values.
So you can have a map from enum values to whatever you want.
This has been very interesting to watch.
Thank you, Dawid Zalewski, for taking your time to make this presentation.
C is like free will:
If you do good, you're the one to praise.
If you do bad, you're the one to blame.
C is the only one worthy of applause!
Thank you very much for mentioning MISRA.
I've been programming C (medical devices, automotive, telecom) since working at Bell Labs in the 1970s, love it, but your methodology, coding standards, and process determines the literal safety of the outcome.
What a fantastic talk. As someone who hasn't spent much time with C since college, this was very informative and entertaining.
i love c
c is best
all hail c
c doesn't totally lack type safety. C just has this feature called 'manual polymorphism' :)
Are you referring to function pointers ?
@@KFlorent13 im mainly talking about void pointers, but function pointers are certainly another way. especially if they take and return void pointers. (this is an alt account)
I love it.
45:21 Assuming sz can get very big, the argument to malloc can overflow and create a smaller allocation than expected. There's calloc for this purpose. Thought this was common knowledge.
C has changed so much since I learned it in 86. C has barely changed since I learned it in 86!
Do you mean that C++ has changed so much?
C improved radically with minimum change... this is how I also see it :)
1:08:46 The semantics associated with flexible array members are a bit weird.
They're arrays of an incomplete type, but one can use a compound literal to initialize the structure, so long as the flexible array member remains uninitialized.
However, a flexible array member has no presence in the language itself aside from being a named field in a structure: it has no size itself, and it has incomplete type, so you cannot initialize it, yet you can still refer to it, and get its address.
Also, the allocation is simplified to sizeof(string_s) + sizeof(char[len+1]) because there is assumed to be no padding between the length field and the flexible array member.
If you add another field of type char between the two, you may find that the offset of the arr field is less than the size of the entire struct due to padding at the end, resulting in allocation of unused bytes.
To use every bit of the space, you need to allocate the maximum of either sizeof(string_s) or offsetof(string_s, arr) + sizeof(char[len+1]).
Either I don't understand or I found a bug at @45:45:
In the line
*(pK + i*sizeof(double[sz]) + j) = _gcoeff(sz, i, j)
sizeof(double[sz]) is only valid if your pointer was a char*. it won't work because pointer arithmetic is performed relative to the base size.
sizeof(double[sz]) is sizeof(double) * sz
@@the_original_dude Yep, and he's right, it's a bug.
Thank you, I love C
46:00 - Is it really that bad? I remember trying to wrap my head around multidimensional arrays in BASIC when I was 14. But it's always `row*stride + col` for 2D. Always. It isn't that hard, right?
Yeah it's not, he also wrote that down unnecessarily complicated.
Instead of
*(pK + i*sizeof(double[sz]) + j) = _gcoeff(sz, i, j);
it's just
pK[i * sz + j] = _gcoeff(sz, i, j);
@@TomPomme Yep, it's actually a bug since he's using a double* and the compiler would automatically multiply the index by sizeof(double), so he's using the stride wrong. He was probably writing up an example using unsigned char* and decided after he'd written it that it'd be better to properly type it and forgot to delete that.
Are there any reference/resources (i.e., books, blogs...) that describe Modern C (besides the standard itself)?
Seacord's book Effective C
i’d love to know this
Modern C by Jens Gustedt is the best reference. I don't think it covers C23 yet though
Book: Modern C by Jens Gustedt
There's not much in C23 to add to his book. C26 should have proper strings, maybe
Finished Effective C recently and started doing some Advent of Code. Man does it feel so nice to use the more modern features. It reminds me of a more stripped-down Zig.
16:30 Unfortunately this is also wrong, if it compiles, then only under C23.
auto buffer = (char[42]){};
1) the compound literal has an empty initializer list, which is not allowed according to ISO C (until C23 it was available as a gcc feature)
2) auto buffer : default type to int results in at least a warning (-Wall or -pedantic).
3) auto buffer : if the default type would be int (with warning) the type does not match, because the compound literal (char array of size 42) gives the address of the first element (char *)
This would be a conversion from char* to int.
1. the point of the talk is to highlight modern c features including c23
2. wrong
3. wrong
I've been writing C++ for over 5 years, but nothing has come close to C even now. It's a legendary language.
💪
Long live C.... the grandfather of all programming languages!
45:35 the thing you're missing here is that these one-dimensional arrays "emulating" multidimensional arrays have several advantages over multidimensional arrays. A big one is that they are simpler and more efficient to allocate, deallocate and memset, since it is guaranteed to be one contiguous space. If you call malloc N times, then you have N spaces that are not guaranteed to be contiguous.

As for not being able to understand it... skill issue. If you see the pattern enough times, y * width + x becomes obvious and idiomatic. Case in point, I understand that code perfectly fine. In fact, I find it easier to read and understand than the example you show soon after, where you create a pointer to an array of arrays and then collapse 3 layers of indirection into 1 in a cast at the end. I would not be confident that that would work if I were writing that code.

The one-dimensional pointer, on the other hand, is much easier to reason about, and you can easily write it in a way that is more readable. You could collapse the size of a row into a constant called "width", or you could even make a macro to do the transformation from (x, y, width) to y * width + x. But when I see a pointer to array of array of scalar being cast to a pointer to scalar, that's a massive code smell to me. Very dirty cast IMO.
I'm just gonna say this, and I'm not trying to be offensive or insulting, but a lot of the code shown in this video is code I would call "stinky". I think one of the worst examples would probably be the INVOKE macro. It's clever, sure, but it only works to switch between 2-arg and 3-arg functions. Try it on a 4-arg function and the macro generates text that attempts to call the fourth argument as a function. Try it on a 1-arg function and I'm not even sure it'll compile because this happens: `INVOKE(arg1, , function` - note the double comma. I think it's pretty clear that the macro should not be called INVOKE at all. It's too situation-specific and too much of a dirty hack. If it was called something like INVOKE_2or3, I probably wouldn't be complaining, or at least this comment would be seven sentences shorter.

You also seem to think that modern C doesn't need to cast malloc, as opposed to "old" C. Well I'm here to tell you that C has always been weakly typed, and implicitly converting a void pointer to a non-void pointer has been allowed ever since void was first introduced in C89. Before void pointers were a thing, malloc returned a char*, but I don't think it was casted then, either. Again, C is weakly typed.

Some other examples of patterns you use that I don't like are typedefing structs (never do this, just use the struct keyword, it's clearer and easier to read) and that [static 1] thing. Both because it puts brackets in the function parameters and because it's telling the compiler to literally throw the whole program in the trash when a null pointer gets sent to the function. In my opinion, it's the callee's responsibility to handle that case gracefully. And it's not hard to do in a way that is readable as long as you structure your if statements appropriately. If you are putting 98% of the function in a single if statement, you are doing it wrong. Invert the if and early-return, get rid of that unnecessary indentation and unnecessary complexity.
It was an interesting video, but you and I clearly have very different ideas on what modern C looks like. I shouldn't be surprised because obviously you are a C++ programmer first and a C programmer second. I gather this from you casting malloc in that refactoring segment and calling it "old C" and from you disappointedly saying that C doesn't have name mangling, as if it would be a good thing to add that crap to the language and make compilation slower. It's ironic because earlier in the video you quote Torvalds on VLAs, but if you knew Torvalds, you'd know he hates C++ and frankly, if you submitted code that looks like what you showed in this video (especially that kernel related one that I touched on in the parent comment) to the kernel source tree, Linus would absolutely cuss you out.
int (*arr)[WIDTH]=malloc(sizeof(*arr)*HEIGHT);
is also contiguous in memory; you call malloc once, and you use arr[y][x] instead of arr[y * width + x]. And it's not a new feature, it has always been possible.
I think the VLA syntax is a good idea in theory but I _wish_, badly so, that it worked better with sanitizers vs. just a pointer. I tested this a bit and it's better than just a pointer, but it's not perfect, so, eh, I'm really conflicted about whether I should use this or not. You could always use macros to index into them, and such a macro is cleaner and a good use of a macro IMO. So basically, I want to love this but I can't yet, unless clang and gcc (or is the sanitizer standalone?) add exclusive benefits to this syntax.
I will add though, that in practice if you're careful it isn't as error prone as you'd think. My use case was casting a void* to int[x][y]. I think something like int a[2][2] = data; was good enough for me IIRC, this was a while ago.
edit: I tested this out, and yup, VLA syntax + sanitizer on on your compiler catches all the out-of-bounds UB in my tests, while a classic pointer style syntax doesn't catch any when passed to a function.
@18:50 Is it right? Looks like there's no check for the NULL ptr in this example, and only a warning for arrays of [0] size, as expected.
Yes, well spotted.
Unfortunately, the man doesn't know exactly what he's talking about.
The idea is to use it to check whether the address that was passed belongs to an array that has at least that many elements.
void foo(char p[static 5]);
char array[4];
foo(array);
This example would give you a compile warning. Both under clang and under gcc.
The whole thing has its limits, of course. And of course the function cannot check whether a null pointer has been passed if this was not already known as a constant value at compile time. That is nonsense.
foo(NULL) results in a warning because a constant value was passed here.
In the following example, the compiler has no way of determining at compile time whether NULL has been passed.
int x = 1;
char const * p = "Hi";
if(x) p = NULL;
foo(p); // No warning, even if p should be NULL
The whole thing is not intended for this, but in cases where a "real" array (as the address of the first element) is passed - and not a pointer.
15:42 that int (*pk)[sz][sz] has the *pk in parentheses because the array operator [] has higher precedence. If the parentheses were not used, you would get an array full of pointers to int, which is a totally and completely different thing from a pointer variable, and it would invalidate the malloc call. Further, *pk is a pointer to the array type, so it encapsulates information about the pointed-to data object. This metadata, if you want to call it that, is what allows pointer arithmetic to be performed: it makes the pointer aware of the data object's size, so adding or subtracting 1 moves the address by exactly that many bytes, typically computed with a single LEA instruction, which is extremely fast.
Interesting talk, but there are a few errors. Around the 16:30 mark, he said that program was well formed, but he should have only said that it compiles. Using a magic number for the size of a buffer is a huge problem, but he didn't check the return value from strftime() which could lead to weird bugs and/or a segfault. The slide around the 18:15 mark was all kinds of wrong. The accept() function in that example only took one argument, not multiple. This may be a language barrier issue, but he should get someone to revise the text, and he misspoke with regards to what static meant in the argument, as it required *at least* 1 value to be in the array. One could attempt to rationalize that as enforcing a C-string, as in a nul-terminated array of char's, but it wouldn't do that as you could pass the address of a single char and it could be set to anything.
The two slides at 23:00 and beyond have a weird comment that seems to equate zero initialization of a char * to the value "" which is incorrect as that would have it pointing somewhere and only a tiny percentage of niche systems allow data to be stored at address 0, which is why that was chosen as NULL. I still don't get why C needs a nullptr constant when we already had NULL, and if it's about casing, why do we need the `ptr`. If it's a type checked thing, then again, why `ptr`. This also relates to the addition of constexpr as I'd rather they just change the functionality of const but also, I'm not averse to defining macros inside functions and regularly do so, especially when it's just for that function, and aside from that, if it's about type checking, constant values can use type postfixes and there's always the standard #define SZ ((size_t)100) to give it a particular type, if you don't know the correct postfix.
At 33:00 I actually agree regarding VLA's and generally try to avoid using them, but there are legitimate use cases since they allocate on the stack, and I would definitely avoid using a maximum allocation size for every use of one. Given that the stack is generally limited, it'd be better if the max value was dynamically checked either way, but if you need that much space that it would blow the stack then use the heap instead. Around 46:00 I would just use i[*pK][j] to avoid needing the extra set of parentheses. I still don't understand why so few people use this superior approach to referencing arrays of data to avoid LISP-ing your code.
As for 1:07:00 and flexible array members, I find that generally it's better to not use them. It adds hidden complexity to structs and does prevent you from making arrays of them or taking the sizeof them, and embedding in other structs and yes, even directly initializing them. If you're worried about multiple allocations then use the strategy I employ where the container is allocated on the stack. While the separation is a valid concern, it's an overblown one for certain types, especially when we're talking about large data sets. The data will dictate more how it should be accessed than the few pieces of bookkeeping information, and from an ergonomics standpoint, you shouldn't be allocating for a single container on the heap ever.
A lot of what has been presented here I already use regularly in my personal projects, especially for clearer initialization;
my priority is always about how to make the code more understandable, discoverable and debuggable though:
so many creative uses of macros, old and new, are banned from my code.
you can use modern C++ without the classes and templates and still get the modern features it provides
You know there’s a problem when the format for your talk about why C is great is a quiz on how confusing C semantics are.
This is a really fun talk!
18:30 Unfortunately, this is not correct.
The idea is to check whether the passed address belongs to an array that has AT LEAST (and not exactly) this many elements.
void foo(char p[static 5]);
char array[4];
foo(array);
In this example, you would receive a compile warning. Both under clang and under gcc.
The whole thing has its limits, of course. And of course the function cannot check whether a null pointer was passed if it was not already known as a constant value at compile time.
foo(NULL) leads to a warning because a constant value was passed here.
In the following example, the compiler has no way of determining at compile time whether NULL was passed.
int x = 1;
char const * p = "Hello";
if(x) p = NULL;
foo(p); // No warning, even if p should be NULL
The whole thing is not intended for this, but for cases in which a "real" array (as the address of the first element) is passed - and not a pointer.
void accept(char const str[static 4]);
char const a[] = "Hi";
char const * p = "Hi";
accept(a); // Warning with clang and gcc
accept(p); // Warning with gcc (not with clang)
Clang and gcc also do different things here to determine whether the limits of the array have been observed. For example, clang no longer warns at all if a pointer is passed. gcc tries - as far as it is possible at compile time.
Unfortunately, the way it is presented in the video is not correct.
the invoke macro is genius; it made me pause for a minute. The preprocessor magic will never cease to impress me
"... so you can impress your colleagues!" This is the mentality where things go south usually. If anybody catches themselves writing "fancy", "abstract" code just so that you can impress someone, please take a good hard look into yourself because that's the opposite of what you should be doing.
Have you ever heard of these fancy literary things called "jokes"?
Both are true. Funny joke. Better not do it.
Yeah , you should be aiming to write your code in such a straightforward , stupid-simple style that people ridicule you for its simplicity .
For example :
free( some_pointer );
some_pointer = 0x0 ;
Those who are proponents of D.R.Y. might tell you that if you are going to always get rid of your dangling pointers like this , you could reduce your lines of code by HALF if you made a specialized helper free that
did the zeroing-out inside the function .
my_special_free( &( some_pointer ) );
assert( 0x0 == some_pointer );
Problem is , every little thing you do like this adds a bit more to your cognitive load .
Same thing with language features . If you keep adding stuff every few years , eventually
you will no longer have a language that easily fits within your head .
-KanjiCoder
57:01 - Looks like UB. The compiler will destroy our "NULL check" (why do we even need that, since free does nothing with null pointers?) or destroy the dereference of a null ptr at line 7.
Yeah, the clearing should be done in that `if` block where he makes sure that `pa` points somewhere. As for checking that `data` points somewhere before calling free(), I just assumed that he's been using it on embedded platforms where the function call to do nothing would take more processing time than a local branch to test against NULL.
I had to wait 6 months to find this??? I'VE BEEN INITIALIZING STUFF WRONG FOR 6 MONTHS NOW!?!??! The shame, oh the shame!!!
Ooooh! I use designated initializers for arrays all the time for a couple of things:
1) I often make static tables of data that are indexed by enum values. You just [MY_ENUM_INDEX] = {.the = data, .for = it}
2) Graphics code often uses integers for binding slots, so I initialize plenty of complex nested structures such as:
.targets[3].blending_mode = BLEND_ADD
or something like that where different shaders need to set data to certain indexes deep within a large, mostly empty structure. Designated init works *great* for this!
good presentation
at 31:45, section "Static vs. dynamic?"... why not use "int* numbers = malloc(sizeof(int[a_lot]));" instead of "a_lot * sizeof(int)"?
Forget about it, my question was answered at 45:33.
1:22:34 I was unsure if the double cast was needed, but now I know.
Great presentation. I think the presenter should show more real applications of modern initialization in C. I suggest showing how compound literals (CLs) combined with designated initializers can simplify the setup of complex trees of structures in the Vulkan API. No need to create a bunch of auxiliary objects and vectors; just taking the address of CLs or CL arrays gets the job done.
One change I would like to see in C would be to add a syntax like [*]= in array notation to allow for a "default to something other than zero" mode. Something like `double array[10] = {[1] = 3.14, [2] = 2.78, [*] = 6.02e23};`
If you mean your * has to become 3, then you write this as {[1] = 3.14, [2] = 2.78, 6.02e23}. The elements without indices follow after the last element with an index.
then you are looking for a dictionary or a hash table, not an array
Gnu c has that. E.g., you can do: char x[100]={ [0 ... 98]='x', [99]=0 }; int main(){ puts(x); }
How do I define a strong C programmer? It's someone who knows precisely the assembly code that will be generated by each corresponding line of C.
45:35 That's a mess because it's not correctly allocated as a multidimensional array. It's only being allocated as a single dimension, which will work, but it's a mess and/or behaves as a single dimension. If you want a multidimensional array, it really should be allocated multidimensionally. Which is to say, allocate your first array dimension and assign it to a **ptr. Then use a for loop to step through each element of the array, assigning each a pointer to another allocated array of the desired size of your next dimension. Then it will behave as a normal multidimensional array, using standard arr[col][row] notation. No VLAs required. It just has to be freed the same way it was allocated.
Example:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int i, j, col = 5, row = 5, **arr = NULL;
    arr = malloc(sizeof(int *) * col);
    for(i = 0; i < col; ++i)
        arr[i] = malloc(sizeof(int) * row);
    for(i = 0; i < col; ++i)
    {
        for(j = 0; j < row; ++j)
        {
            arr[i][j] = 'A' + i + j;
            printf("%c ", arr[i][j]);
        }
        putchar('\n');
    }
    for(i = 0; i < col; ++i)
        free(arr[i]);
    free(arr);
    return 0;
}
It's a very naive approach to multidimensional arrays. Access speed is shit, alloc speed is shit. You could drop basically anything you were taught in uni regarding C and C++.
That's downright horrific. The better method is to allocate for the first dimension and the second in one allocation then patch up the pointers of the first dimension. It still requires a single loop for 2D arrays, but only one malloc()/free() per 2D array. Also, you should put it in its own function and do some basic error checking. Of course, most would recommend using a 1D array and just indexing by [i*w+j] instead of doing something so insane.
single dimension arrays emulating multidimension arrays are way faster than having to go through pointers to get the data
Cool. I’m getting back into C after a decade or so.
In the refactor example: why keep the check for nullptr when he's using static 1 in the parameter list? Shouldn't that prevent a NULL vec pointer from being passed in?
Can the standards committee please, please, please remove the implied "promotion" of short unsigned types to "signed int"? It only caused bugs
IMO unsigned types should only be promoted to other unsigned types
Great talk!
if a programming language has many tricks, that language's purpose tricks you. RIP C.
after learning programming for 7 years since I was 12 years old this finally convinced me to leave c++ for c for good. thank you.
Adding auto with the same meaning as in C++ is a strange choice. I think it's generally a bad feature in C++, but at least it has some utility there due to long typenames, iterators, and namespaces; but in C? Why?
It leads to many errors because most C programmers also know C++. Also the auto keyword is redundant in C.
Can't wait for it to become common place in 2069
This talk is very interesting and I cannot program in C while listening, so I'm gonna watch this later. :P Btw I've been programming for more than 15 years in 20+ programming languages, and nowadays I'm coming back to C and Rust.
So C23 is basically C++, it looks like C is becoming what he always hated
THANK YOU ! I don't care how much I like a new feature . DO NOT ADD IT .
Question : If you add 1 new feature to C over an infinite timeline , how many features do you end up with ?
Answer : Infinite features .
Are people TRYING TO KILL MY LANGUAGE ?
Because this fact should be OBVIOUS to programmers who should have mathematical reasoning .
If you want to take the current C23 standard and make it its own language ...
That is totally okay . I have no issues with :
JAI , ZIG , ODIN , GO , RUST
Maybe this is hypocritical as I compile as C11 ( because of the memory model ).
But besides that , I just write my code like it is C89 with "//" style comments .
-KanjiCoder
C programming language is a tool and the quintessential soul of computer architecture and foundational programming. It is akin to an elemental force that underpins the very fabric of computing, transcending mere syntax or semantics. This soul, deeply ingrained in the computing structure, migrates through various embodiments - from operating systems to embedded systems, manifesting its presence in numerous forms, much like an incarnation. Each new form it takes on, be it a simple microcontroller or a complex operating system, is endowed with the essence of computational logic and efficiency that C embodies. Just as a body may function with limitations but cannot exist without a soul, so too can modern computing operate with various languages and tools, but it cannot escape the fundamental principles and efficiencies ingrained by C. This language embodies more than just a set of instructions; it represents computational thinking and efficiency, acting as a cornerstone that builds the edifice of modern computing.
😊😮 Bravo! ChatGPT or Bard... or you?
I still find Fortran's dynamic multi-dimensional arrays (allocate(pK(sz,sz))) and their intuitive access syntax, pK(i,j), much easier than this VLA syntax.
Long life to C
Nice! Can we compare it with C3Lang?
when are we getting rid of header files?
Do you even know what a header file is ?
You can literally replace all your .h files with .c and it will still do the same thing.
They are called "headers" for organization purposes; header files themselves are C files.
Hopefully never. Hate that Python doesn't have something similar. 😮 Cluttering up the source code. 😂
Compound literals, don't they also exist in Java for example?
Very informative and entertaining
very good video, and the speaker was amazing :D
Very pleased to hear your appreciative comments.
I was taught C in 1981 as a new grad in Computer Science when working for a company called ARBAT by my boss Peter Madams. Best boss I ever had. We used Whitesmiths C on a PDP-11 and later on a Vax. 42 years later I am still using C/C++ in the financial trading industry, so I am very lucky that it has given me my whole career.
My boss used to say, "There is no such thing as a correct C program, just one that has never gone wrong yet !!"
This is as true today as it was in 1981.
The program at 17:00 is malformed, but for none of the reasons listed. The main function is a function and requires a return statement of at least 0 or 1 (EXIT_SUCCESS/EXIT_FAILURE).
Check out www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf , section "5.1.2.2.3 Program termination"... It reads: "... reaching the } that terminates the main function returns a value of 0". I think this was added in C99, but may be off on the history...
C99 and onward let you imply return 0 from int main.
Why are you assuming that those are the values of those macros? The standard doesn't define them, and they could be 0 and 1, or they could be -22 and 33. In fact, the wording they use for describing how exit() works is grammatically ambiguous and could imply that EXIT_SUCCESS couldn't be 0, but either way, the host system may very well take any value and exit() would be required to convert to that value for a successful exit.
I don't like generics, INVOKE macro trick, and constexpr. I'm not sure about bool. The rest seems worth a closer look.
What! Are they trying to make C safer? Move responsibility from the developer to the compiler? Have they gone mad?
tl;dr: I am looking for good old fashioned (hardcore) C and C++ books that are current (e.g. 20 or 23). I have an MSCS degree (with EE undergrad). I think it is one of the reasons I find efforts to dumb down the material mind numbing. Most of my learning has been through textbooks, and the Java/Salesforce developer guides provide a good textbook-like in-depth coverage of major topics. P.S: Happy holidays.
Long version:
Can anyone suggest good C and C++ books? I have not really coded in either for over 10 years. It has been Java and Salesforce (Apex) during that time. I post the above question because I am not able to find a good C++ book. I think the textbook-like book from Stroustrup would be OK if it was current. The Murach book seems like it is targeting learners who are new to programming. I find the pattern of material presentation mind numbing. To me it seems that to get a deep understanding of a subject you need to go through 10 mini-tutorials. And the video tutorials are no better.
I haven't programmed in C myself since college; however, my friend was recently learning C and Lisp. He used the book "The C Programming Language" by Kernighan and Ritchie. For learning Lisp he used "Structure and Interpretation of Computer Programs" by Harold Abelson, Gerald Jay Sussman, and Julie Sussman. Even if you are not interested in Lisp, the book contains important concepts for understanding and learning programming in general. Also, there are Python and JavaScript versions of "Structure and Interpretation" if you are more inclined to those languages. I hope this helps.
Just do the exercises in K&R with your compiler flag set to C89. Then do some work in C99, then C11/C17. The changes will come naturally because there aren't many of them.
C is just better than rust and I love it.
Better in what sense?
All the hype around Rust, but barely anyone uses it in embedded. I've worked on HDD and SSD firmware; I know how these things operate. Written in C. Worked on SIM cards, credit cards, NFC cards; that firmware is written in C too. My friends from the industry: car head units, POS terminals, and lots more. C. It's C, C, C.
@jakubsebek it's actually used in production
Better in all points. Memory leaks are overrated, and many tools for memory checking already exist: Leaks and Valgrind, for example. And all memory allocation is the operating system's business. All the scary talk about the dangers of memory leaks is just crap. The languages without memory leaks are Java or C#, that's all. Even ancient Pascal. Verdict: marketing crap for smoothie lovers.
If designated initializers work differently in c and c++, but c++ is supposed to compile c code, does that mean that eventually requirements for designated initializers are going to be relaxed in c++?
No. C++ isn't strictly a superset of C, never was. C++ won't change anything just because C2X wants to add a feature or whatever
C++ is no longer just a superset of C. There are things which are different in C++ and C. For example the auto keyword or function declarations with no parameters.
Edit: After watching the talk, it turns out that the auto keyword behaves exactly the same as in C++
> c++ is supposed to compile c code
It's not, C++ isn't a strict superset of C
Maybe, but C++ isn't always able to compile C code so C++ might keep its stricter rules. For example, `auto float` in C is a valid way to declare a float, but in C++ it won't compile because auto always stands in for a type name
No, and the reason why is because C++ is not, never has been, and never will be, a superset of C. Try assigning malloc to a pointer without casting in C++ and tell me how far you get. Better yet, try to compile literally any real-world C program in a C++ compiler. They won't compile. Funnily enough, Python code also won't run in a Lua interpreter. Nor can MASM compile Java. The answer is simple: use a C compiler to compile your C code.
Good stuff!
Pleased to hear that you liked the presentation!
Amazing video with lot's of insights.
Thank you for your appreciative comment!
I wish C had namespaces. I like those...
neither does c89 have binary literals. but if you try hard enough, and you turn on compiler optimizations... who's to really say? binary literals are less than an afternoon of work. a few minutes, even!
I keep trying to watch this because it's a legitimately interesting topic, but every time I do I fall asleep, and it's DRIVING ME NUTS.
I've been a JS/Python soydev for the past 5 years, basically because I learned to code specifically to work as a developer rather than to learn about computers. The last 6 months I've been coding multiple personal projects in C and I feel like I have a superpower now, I understand a lot of low level stuff that the high level languages hide and my code skills are much better. Probably learning C was the best decision I could make. I feel like I want to stop being a webdev because I'm sick of it, but I don't think I have the skills yet to apply for a full C/C++ job, that's gonna be a long ride.
I am probably an old retarded. I don't find much of "modern" C attractive. Maybe I am allergic to changes.
I think one of the main advantage of C is that it does not change that much.
So that both old and new code can be compiled on both old and new compilers (if you are lucky).
And so that C remains simple to learn. And writing a compiler remains simple.
Don't get me wrong, there are a lot of things that I don't like in C, for example the #include mechanism which basically implies slow compilations, some of the undefined behavior stuff, some aliasing rules which force the compiler to generate instruction to re-read unchanged memory...
But adding an initializer that sets non referenced fields to zero? That sounds against C principles. Non-referenced fields should not be initialized. C is not Java or something.
And the sizeof a struct with a VLA inside that returns the size of the struct ignoring the VLA... That sounds weird, almost buggy.
And [static 1] for non-null looks awful. It would be nice to be able to define if a function can or cannot take null in one of the argument. But it should look a lot more straightforward, like a nonnull keyword or something. Or better, the other way around, a null keyword.
A lot of this talk was slightly horrifying for me... (with due respect and thanks to the presenter for the whirlwind tour of the new features!) but syntax which means different things in different situations, or expressions that are jokingly "confusing" but the compiler accepts... these are the sorts of things that squeak through code reviews and will absolutely cause serious bugs.
The C and C++ committees are both aware that these languages have a reputation for complexity and tricky shoot-yourself-in-the-foot gotchas. The goal should be to simplify the murky areas of syntax, or to only introduce syntax that is very clearly distinguished from other and previous rules and does what you would intuitively think it does, not something "surprising" or quizzical (literally). This talk demonstrates the committee is going in the opposite direction, introducing more and murkier syntax rules and surprising implicit/contextual semantics.
Also, what if I need to have two VLAs at the end of a struct? 😅
finally a comment i can fully agree to!
I'm curious as to what aliasing rules you're talking about. If you mean evaluation of macro arguments, well, that's annoying but also a symptom of bad macro writing. If something else, then please expound upon what they are. C99 added the ability to zero initialize entire arrays and structs with { 0 } and extending that only makes sense. You can't use a VLA in a struct definition. The term is flexible array member because the size isn't defined at all. Pre-C99, a lot of code would use [0] for the array size and just allocate the extra space as needed, but you can literally use any non-zero size you want and do the same thing, just with a default minimum. I will agree slightly with the static keyword in an array argument, but I'd prefer no keyword and instead to just use [1] to indicate that it requires at least 1 element. That would change the meaning of that construct, and I don't care. The only alternative that I would accept is to use something like [>=1] or [>0] or maybe even the more ambiguous [1?].
@@VV-rk3wu I'll tell you what I told the other guy, FAM, not VLA. But if the two arrays you wish to place at the end are the same length, then struct the two types and use one array of both together. If they're differently sized, then either one's a pointer and the other a FAM, or they could both be pointers and make things safer. Also, you should be allocating containers either together on the heap or singularly on the stack.
the problem with {} init is that it's bad for refactoring or search etc.
Does
int a = {5.8};
work in C or not? In c++ ( without the equals sign) it would error out due to narrowing conversion, is it same in C or C does the conversion anyway?
Of course it works.
Well, the only reason why this would not work is if you have conversion warnings enabled + warnings as errors, then you would have a compile error.
Other than that, it works.
Yes, it works. `int a = {5.8}` is the same as `int a = 5.8`, C truncates the decimal part and returns the integer part.
a narrowing conversion is definitely a warning, not an error.
It doesn't error on gcc trunk at least, and I don't think the standard requires it to error either.
But there's always -Werror=conversion
May I ask why line 4 at 15:51 works? I thought an array definition would ask the compiler to automatically allocate the memory on the stack, with a size corresponding to the type and number of elements it has? And can someone also explain line 11 at 17:51? 😂
Because it declares a pointer, not an array. Think about line 4 as `typedef int T[sz][sz]; T* pK = malloc(sizeof *pK);`. The array type is defined and next a pointer to this type which is used to allocate object on heap.
1. pK is just a pointer. The type of this pointer is an array of 'sz' arrays of 'sz' ints. Since it's just a pointer, it needs malloc to obtain its memory chunk. Before the '=', the data type has been defined, so C already knows the size it's going to have. That's why sizeof(*pK) can return that size in bytes.
2. 'auto' tells the compiler to infer the type from the expression on the right side of the '='. (char [42]){} is a compound literal: (char [42]) names the type of the unnamed object and {} initializes it, giving a zero-initialized array of 42 chars on the stack.
The type of line 4 is a pointer so it’s storage is simply the size need to store an address, so like 32 or 64 bits. But this pointer is given additional metadata about what it is pointing to. We know the type of the elements are “int” and that it points to a 2 dimensional array of equal height and width (sz). This means sizeof(*pK) is asking not for the size of the pointer but the data that it is intended to point to and thus resolves to sizeof(int)*sz*sz. If we assume, as he did/noted, that an int is 4 bytes this gives us 400.
Line 11 is definitely a weird one. Since he says it works, my only rationale is that it is declaring "buffer" as a 42-character array and not specifying an initializer value for the elements in the curly braces. Definitely bad style; I hope I never see such code in the wild.
For the first question: it's a pointer to an array. So it won't allocate the array on the stack, just a pointer. Same way that int *x won't allocate stack memory for an int.
Second question, it's equivalent to char buffer[42] = {0};
(char[42]) {} is an array literal of 42 chars, initialized to zero. auto buffer = ...; means assign ... to buffer, and infer its type. So in this case the type will be inferred as array of 42 chars, and it will be assigned the value of 42 zeros.
At 15:51 `int (*pK)[sz][sz]` is a pointer to a 2-dimensional variable-size array, so when `malloc`ing you want the size of the 2D array and not the pointer; that's why he dereferences `pK` in the `sizeof()`.
At 17:51 in line 11, he creates an `int` variable (no type is specified, so it defaults to int) `buffer` with storage duration `auto` and assigns it the pointer to the first element of the zero-initialized `char` array. (Arrays in C and C++ automatically decay to pointers.)
At 5 minutes in it looks like an International Obfuscated C Code Contest entry.
i think ill start using k&r after this
I am with you on this. C's selling point is that it is a very small language that can be easily memorized. It is what sets it apart from all other languages. If people INSIST on adding to a language FOREVER with no stopping point, the language is eventually going to hit a critical mass of un-maintainability where no one in the world can write a compiler for it. C might be furthest from that future, but doesn't mean we should march towards it.
yeah, I loved the old function syntax,
I don't understand why they removed it 😭
I'm sticking with C99, actually.
Standardizing C was a mistake. Language committees only attract self-important people who want to control how other people code rather than problem solvers looking for the truly best way forward. C had existed for almost 30 years when it got handed over to a committee that promised to "only standardize existing practice". It's become a tool for pushing a bunch of folks' crazy untested ideas onto everybody under a well-recognized brand name.
The new features are neat if they will help reduce issues without requiring modifications of the existing code. But things like that non-NULL check are wack, if you start introducing those things you might as well recompile your code in C++ with use of references if needed.
Not sure why they decided to make the non-NULL argument description so complicated. It makes literally NULL sense.
Love C
I don't know if I love or hate the fact that the only people using C nowadays are Python devs
c89 is all we've ever needed. repent, sinners!
Every time he says, "It's like in C++", I say, "Oh no"; "Oh no, they will make C look like that".
Best programming language that human kind invented.
my rule of thumb is. If a macro changes the C syntax, you shouldn't use it
@12:53: Wow, so C23 made even more ways to get silent errors. Lovely! Compile your program in the new version, get different answers! Brilliant!
No, in this case C23 fixes weird behavior, where the new C23 output is what a sensible person would expect and the old output is an arcane artifact.
I like this change!
Weird people from the C++ community somehow flowed into the world of C and brought this mess. Long live C17!
Maybe we should migrate to Fortran lol.
3:12 auto is for not specifying the storage type. Lol😂
listen i'm either gonna stay at c99 or just go to zig
I'm trying everything here except for the auto keyword.
The auto keyword is as old as the C language. It's just the opposite of static and tells the compiler to allocate a local variable on the stack, which is the default option anyway (auto, static, register). It has a very different meaning in C++.
The truth is that programmers have to make a living and the market dictates demand for emerging languages such as Rust. In 5 years time there’s a suspicious chance that Zig will paint a great picture.
C is the language which created the Cyber War Domain. The Hamburger+Cola of computer science.
There have always been much better alternatives around:
+ALGOL
+PASCAL
+MODULA-2
+ADA
Plus some newer ones such as RUST and SAPPEUR.
I love C but imagine if Pascal won back in the day.
Turbo Pascal is much, much better 😊
C is a mess; like C++, too many nonsense additions instead of first just fixing the shortcomings, of which there are a lot.
All these new features, but just more confusion. What a missed opportunity! Speaking as someone who worked with C and C++ in the telecom industry for over 10 years: if you have a large team but no very strict coding style, enforced strictly in code review, you will have a big mess. I have not used these two languages for more than 15 years. After watching this presentation, I can't say I miss them.
it's indeed a great video
a great video to keep me away from c...
oh nice a new standard i will never use
I don't know, man, this whole video feels like mocking C, showing what's wrong with it and still saying it's great.