1:02:24 "No open source build system seems to have support for header units" well XMake does (and XMake support all what can be supported for modules as 09/22/2023, and more like package module distribution :) )
CMake has this too as they are developing hand in hand with the developers of microsoft visual studio. But it's still not useable. Hope it will soon because the new Windows App SDK is insane slow with a gigabyte large precompiled header.
The Ada programming language required compiler makers to sort out this problem in the 1980s, since the language had generic packages with private data types since 1983. Some Ada compilers had the concept of a code library directory you'd create for a project and all your compiled code would go there for the linker to assemble into an executable, recompiling as needed. The free GNAT compiler generates metadata in ".ali" files and essentially requires you to use "gnatmake" or "gprbuild" to figure out what needs to be rebuilt when code changes. The toolchain won't allow you to link outdated or mismatched modules together. It's sad that C/C++ have largely stuck with this 1970's model of timestamps and manual or generated Makefile dependencies.
Or do it like Eiffel which was doing complete system compilations in a single process instead of going through single compilation units. I really don't get it why it is so slow on C++. I once wrote an Eiffel Compiler myself and transpiled to C++ and even that blew the C++ compilation speed out of the water.
18:34 "You cannot use the BMI produced by one compiler by another compiler" - sadness, for I want to build with a library (no std types in interface) to share with others, regardless of their compiler. 30:29 "All compilers right now support the dependency scanning in a single output format" - happiness. At least there's that.
Build the library to what to share with what? There are multiple different CPU instruction set architectures in active use, source code will always be the most portable option
Unfortunately once you run the preprocessor, it becomes unportable. That's because many system headers include system-specific macros, and trying to use the generated code on a different platform won't work. System-specific inline functions in headers also break portability. You could have a portable AST / high-level intermediate representation format if you banned including any headers and using any ifdefs, but I imagine we're about 25 years away from that being possible. :)
@@N.... "Share with what?" To share precompiled modules with users of course. Naturally we'd need one for each major architecture (x86 32, x64, arm32, arm64), but at least we wouldn't be exploding out the combos times each compiler too (Clang, MVSC, GCC, EDG, Intel Cpp...).
@@hemerythrin Most libraries I produce are stand-alone and portable, not dependent on system headers anyway. Module BMI's are closer to token streams/AST's than generated code.
Knowing what we know now and considering a world where they could be removed from the standard: Is there a way to change header unit restrictions to make the "half-way" solution work? Or somehow make them even more special, so that non well behaved code would cause a compile error? Examples: make the discover step more explicit by requiring a different syntax for importing modules and header units. Or drop the requirement that header units export macros.
Best C++ talk on Modules I have seen so far! Most talks are either too abstract or too optimistic.
What a coincidence; the length of the video is 1:23:45 (though it shows as 46 on some devices).
1:02:24 "No open source build system seems to have support for header units"
Well, XMake does (and XMake supports everything that can be supported for modules as of 09/22/2023, and more, like module package distribution :) )
CMake has this too, as they are developing it hand in hand with the developers of Microsoft Visual Studio. But it's still not usable. I hope it will be soon, because the new Windows App SDK is insanely slow with a gigabyte-sized precompiled header.
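For readers skimming the thread, here is a minimal sketch of the two import forms the build systems above have to cope with; the module name is invented, and only the standard `<vector>` header unit is assumed to be available.

```cpp
// Header unit: the build system has to discover this dependency by scanning,
// compile the header to a BMI, and make any macros it defines visible here.
import <vector>;

// Named module (made-up name): declared in a module interface unit the build
// system already compiles as an ordinary source file, and it exports no macros.
import mylib;

int main() {
    std::vector<int> v{1, 2, 3};   // usable because the <vector> header unit was imported
    return static_cast<int>(v.size());
}
```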
The Ada programming language required compiler makers to sort out this problem in the 1980s, since the language had generic packages with private data types since 1983. Some Ada compilers had the concept of a code library directory you'd create for a project and all your compiled code would go there for the linker to assemble into an executable, recompiling as needed. The free GNAT compiler generates metadata in ".ali" files and essentially requires you to use "gnatmake" or "gprbuild" to figure out what needs to be rebuilt when code changes. The toolchain won't allow you to link outdated or mismatched modules together.
It's sad that C/C++ have largely stuck with this 1970s model of timestamps and manual or generated Makefile dependencies.
Or do it like Eiffel, which did complete system compilations in a single process instead of going through individual compilation units. I really don't get why it is so slow in C++. I once wrote an Eiffel compiler myself that transpiled to C++, and even that blew C++ compilation speed out of the water.
18:34 "You cannot use the BMI produced by one compiler by another compiler" - sadness, for I want to build with a library (no std types in interface) to share with others, regardless of their compiler.
30:29 "All compilers right now support the dependency scanning in a single output format" - happiness. At least there's that.
Build the library for what, to share with what? There are multiple different CPU instruction set architectures in active use; source code will always be the most portable option.
Unfortunately once you run the preprocessor, it becomes unportable. That's because many system headers include system-specific macros, and trying to use the generated code on a different platform won't work. System-specific inline functions in headers also break portability.
You could have a portable AST / high-level intermediate representation format if you banned including any headers and using any ifdefs, but I imagine we're about 25 years away from that being possible. :)
@@N.... "Share with what?" To share precompiled modules with users of course. Naturally we'd need one for each major architecture (x86 32, x64, arm32, arm64), but at least we wouldn't be exploding out the combos times each compiler too (Clang, MVSC, GCC, EDG, Intel Cpp...).
@@hemerythrin Most libraries I produce are stand-alone and portable, not dependent on system headers anyway. Module BMIs are closer to token streams/ASTs than to generated code.
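As a concrete illustration of the kind of library being described (all names below are invented), a module interface can keep system headers in the global module fragment so that the exported surface itself uses no std types:

```cpp
module;             // global module fragment: plain #includes live here, so
#include <cmath>    // system-specific macros and inline functions stay out of
                    // the exported interface

export module geom; // made-up module name for illustration

export struct Vec2 {
    double x;
    double y;
};

export double length(Vec2 v) {
    // The implementation uses <cmath>, but the exported signature itself
    // involves only built-in types.
    return std::sqrt(v.x * v.x + v.y * v.y);
}
```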
As a developer maintaining a large make-based build system: this is sad
Knowing what we know now and considering a world where they could be removed from the standard:
Is there a way to change header unit restrictions to make the "half-way" solution work? Or somehow make them even more special, so that code that isn't well behaved would cause a compile error?
Examples: make the discovery step more explicit by requiring a different syntax for importing modules and header units. Or drop the requirement that header units export macros. See the sketch below for how the two forms look today.
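For reference (the names here are invented), the two forms are already syntactically distinct today, which is what a more explicit discovery step would build on:

```cpp
import widgets;      // named module: never exports macros; its interface unit
                     // is an ordinary source file the build system knows about

import "legacy.h";   // header unit: the build system must locate the header,
                     // compile it to a BMI, and replay the macros it defines
                     // into every importer, which is where the trouble starts
```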