Reducing C++ Compilation Times Through Good Design - Andrew Pearcy - ACCU 2024

  • Published 21 Dec 2024

COMMENTS •

  • @AndrePoffo 2 months ago

    The separation of protocol and implementation is really helpful. It makes testing much easier, too.

  • @tomkirbygreen 4 months ago +2

    Excellent talk. Pretty much essential material for software at scale.

  • @llothar68 3 months ago +3

    Restrict your use of header-only libraries and templates at API boundaries, use PIMPL and forward declarations, and you are fine.
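
    A minimal sketch of what that advice looks like in practice, assuming an
    illustrative Widget class (the names are made up, not taken from the talk):

      // widget.h -- public header: a forward declaration instead of heavy includes
      #pragma once
      #include <memory>

      class WidgetImpl;  // forward declaration keeps implementation headers private

      class Widget {
      public:
          Widget();
          ~Widget();  // defined in widget.cpp, where WidgetImpl is a complete type
          void draw() const;

      private:
          std::unique_ptr<WidgetImpl> impl_;  // PIMPL: opaque handle to the implementation
      };

      // widget.cpp -- the only translation unit that pays for the heavy includes
      #include "widget.h"
      #include <vector>  // implementation-only dependency, invisible to clients

      class WidgetImpl {
      public:
          void draw() const { /* ... */ }

      private:
          std::vector<int> data_;
      };

      Widget::Widget() : impl_(std::make_unique<WidgetImpl>()) {}
      Widget::~Widget() = default;
      void Widget::draw() const { impl_->draw(); }

    Clients that include widget.h never see <vector>, so edits to the
    implementation no longer trigger recompiles of every file using Widget.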

  • @bfitzger2 3 months ago

    On the "macros are evil" bit where "#define WIDGET 7" messed up other code, we sometimes used wrapping headers that #undef'd macros we didn't want to leak out, or did it in source code. This originally was in the context of making some code cross-platform where Windows or Apple headers liked to define very common names for their constants, so platform-specific code would use the raw header, but public code in our project used the wrapping headers. I think Unreal does this as well, and I wouldn't be surprised to see this in older Unix cross-platform projects.

  • @TalJerome 3 months ago +1

    Does anyone understand what he meant by "more granular" regarding the protobuf issue? (23:45)

    • @ContortionistIX 3 months ago

      instead of including the whole schema, only include the schema for specific endpoints

    • @paulluckner411 3 months ago +1

      I guess he just means to include the smallest headers possible: instead of one big catch-all header, only include the specific ones you actually need.
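
      A small sketch of that "smallest header" idea applied to generated
      protobuf code; the file, message, and field names are all hypothetical:

        // Before: an umbrella header drags in the entire generated schema.
        // #include "api/all_messages.pb.h"

        // After: include only the generated header for the message you use.
        #include "api/user_request.pb.h"

        api::UserRequest make_request(int id) {
            api::UserRequest req;
            req.set_user_id(id);  // assumes a user_id field in the .proto
            return req;
        }

      Keeping each .proto file small has the same effect one level down: its
      generated .pb.h stays cheap for every translation unit that includes it.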

  • @hbobenicio 3 months ago

    Very good talk, thank you!

  • @tlacmen 4 months ago +1

    Will lld or mold improve build times with whole-program optimization enabled?

    • @paulluckner411 3 months ago +1

      mold is supposedly faster in any standard use case. It is optimized for modern hardware and parallelizes as much of the work as possible.
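
      For reference, the usual way to opt in (Clang, and GCC 12 or newer,
      accept the -fuse-ld=mold flag; older toolchains need other routes):

        # select mold as the linker for a single link step
        clang++ -fuse-ld=mold -o app main.o util.o

      Whether that helps under whole-program optimization is less clear-cut:
      with LTO enabled, most of the link step is typically spent in the
      compiler's link-time code generation rather than in the linker itself.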

  • @GeorgeTsiros 4 months ago +3

    Anyone remember Turbo Pascal?
    Remember how fast it was? Remember the _hardware_ that it ran on?
    Yeah. We've got a _lot_ of catching up to do.
    There is zero reason the executable can't be ready some milliseconds after a character has changed in the code.
    That THE ENTIRE SOURCE is worked on, as if it has never been seen before, every time a build is started is comical.

    • @depralexcrimson 3 months ago +1

      Thank the software industry for that one... instead of hiring passionate people, they hire vloggers who do anything but code for 90% of their work day.

    • @maxrinehart4177 3 months ago

      @depralexcrimson the tech industry really screwed itself hard.

    • @allNicksAlreadyTaken 3 months ago +2

      There are actually a thousand reasons, you obviously just don't understand them. If you are so smart, go out and fix it.

    • @27182818284590452354 3 months ago +1

      Turbo Pascal had modules 40 years ago.
      C++ compilers still can't properly implement them.
      It's just mind-boggling.

    • @realhet 3 months ago

      One valid reason is optimization.
      I remember TP, and I also remember Delphi on Win32 up until around 2012. It was still lightning fast, but it generated slightly slower code than LLVM.
      Later I got to work with C++ and got totally sick of the compile times: a 50 kLOC project, and 45 seconds to launch a debugger on unoptimized code. I don't even do big projects, just my one-man project... With optimization it was more like 2-3 minutes. That is fast compared to this presentation, but coming from Borland Pascal and Delphi it's slow as hell. By the time the program starts running, I've already forgotten why I started it.