Love your videos. Thanks to this one I haven't stopped talking with friends about how C# can now run natively without the CLR and JIT.
They all respond with "then you don't know what a managed language is" until I pull up this video.
I think it's a game changer
C# is still a managed language; it's just that the code is compiled directly to machine code.
Having a neat clean F# code compiled into a tiny executable is like a dream come true ❤
Finally!!! Ever since I started the C# course I'm currently part of, and ever since becoming interested in C#, I've hoped to find a way to build natively for every architecture, to avoid relying on the end user having the .NET runtime or having to ship a gigantic amount of supporting files!
There's most likely a good chunk of performance optimization that can be done here, and I can't wait to see what this AoT option evolves into!
The versatility far outshines any memory amount.
Plus the much lower runtime memory use is very welcome.
This is actually great, depending on the project of course; if you're not relying on dynamic code gen, reflection, etc., you're not going to care much about those limitations. But if you're on a 'proper' microservice architecture, or even just a well decoupled one, this is really useful. .NET is getting more competitive in the right ways every day.
I guess you can split your project into the appropriate modules and decide ahead of time which parts you want to use AoT with. It also depends on the usage perspective, beyond performance requirements. A problem with C# IL code is that it can easily be decompiled back to the original source code unless you use some obfuscator, including VM-based protectors, which eventually add extra performance overhead. Even those can be de-obfuscated/de-virtualised back into the original source code, or something extremely close to it. Being able to build some things into native code means someone may still spend time figuring out the logic, but they definitely can't 'steal' your source code. That said, I am mainly into C/C++ and ASM stuff, but I am building a project in C# now for portability reasons and I am happy with it so far.
Is there any reason to NOT use the AOT for traditional desktop applications or DLLs? This looks really promising.
Some reasons are: you need dynamic assembly loading (like a plugin system) or System.Reflection.Emit (runtime code generation), etc.
Incompatibility with UIs, especially with DisableReflection set to true.
On top of what Drazen Šoronda mentions, you saw Nick also targeting a runtime, like "win-x64". When you say traditional desktop applications or DLLs... if one of your use cases is to make sure your desktop application can still be run by people on 32-bit (x86) machines, then you would have to compile another version of the exe.
I mean, if you use reflection in your small typical application then you are doing something wrong. Reflection is bad code design and should only be used when absolutely necessary.
Also, yes, there is partial reflection support if you tell the compiler which classes/types must be compiled.
@@lx2222x "Small typical application" is all relative though. Even if you think the code you write yourself doesn't do/use reflection... even in a simple app you might still 'use' it under the covers, if you use something like dependency injection, for example.
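On the point about telling the compiler which types must be kept, a minimal sketch of one way to do that is the [DynamicDependency] attribute (ReportGenerator is a hypothetical class here, only ever reached via reflection); rd.xml, mentioned further down the thread, is the other option.

using System;
using System.Diagnostics.CodeAnalysis;

// Hypothetical type that is only reached through reflection,
// e.g. resolved by name from a DI container or a config file.
public class ReportGenerator
{
    public string Run() => "report";
}

public static class Program
{
    // Hint for the trimmer/AOT compiler: keep ReportGenerator's public
    // constructors and methods even though nothing references them directly.
    [DynamicDependency(DynamicallyAccessedMemberTypes.PublicConstructors |
                       DynamicallyAccessedMemberTypes.PublicMethods,
                       typeof(ReportGenerator))]
    public static void Main()
    {
        // Without the hint above, NativeAOT could trim ReportGenerator away
        // and this reflection-based activation would fail at runtime.
        var type = Type.GetType("ReportGenerator")!;
        var instance = Activator.CreateInstance(type)!;
        Console.WriteLine(type.GetMethod("Run")!.Invoke(instance, null));
    }
}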
Very nice! Although there are limitations as you mentioned, this looks great!
Very exciting news. About time as well. The limitations, it seems, should be non-issues for many projects.
This seems like it could be a real benefit to WebAssembly/Blazor Client Side. I can imagine the limitations, even the current trimming features often cause me issues, but even as it is now, I think this will be really useful for us in a few cases.
It is usable in Blazor WebAssembly right now. The main problem is that the generated executable is too big.
@@nickchapsas great that it already works, I didn't expect that. Hopefully that will improve with time, it seems like an obvious place where decoupling from the CLR would be an overall win.
Personally though, I have immediate uses for this; we distribute some small command line tools that customers use in their own systems, and none of the deployment options have been ideal. The AoT option seems ideal, and may already be suitable, so you can bet I'll be testing it soon.
Thanks for the video!
Nice, short, clear and simple for understanding! Way to go man!
Hi Nick, great video, the binary size reduction is crazy.
Now I really want to see a video of how NativeAOT works under the hood, especially memory management.
Memory management doesn’t change, the GC code is still inside that executable. The same thing happens if you make a library.
If the memory management would change (at least, in any way that matters to a developer using it), it wouldn't really be C# anymore.
The GC is why the size is still relatively large for small applications.
The reduced memory is due to the fact that the .NET runtime does not need to run to execute the app.
Mindblowing .NET 7 feature. Waiting to see a demo of how to use it in AWS Lambdas. Thx.
Wow, that just blew my mind! Amazing content, Nick
Great to see that this is coming to official C# as well.
The Unity engine has had its own implementation through IL2CPP for quite a while, and Mono had AOT to some extent as well.
Question:
So reflection emit does not work, which is kind of expected (I would be extremely impressed if they somehow made it work), but does regular reflection (e.g. setting private fields, etc.) work?
Could it in theory work faster?
Yes, regular reflection works if you don't strip out reflection information (which NativeAot has as an option). Pretty much everything will work except dynamic code generation. IL2CPP works the same way (IL2CPP had a lot more restrictions in the past which have mostly been resolved since 2021.2, except IL2CPP still doesn't support TypedReference reflection APIs).
@@protox4 What's the reason that dynamic code generation can't be done in native mode? Surely you could package the parser, compiler etc. with it to do that? (Or rely on the .NET Framework being installed.)
@@mrx10001 I'm not sure about other platforms, but iOS specifically forbids it (or at least it used to, not sure if it still does). Which is the whole reason why IL2CPP exists in the first place.
Also, dynamic code generation is the job of the JIT compiler, and the whole point of AOT is to get rid of that. Doing dynamic code generation at runtime goes against the entire premise of AOT.
An alternative is to use an interpreter, which Mono has a mode for (works similar to JIT, but slower).
@@protox4 It does go against the premise of AOT, but can speed up reflection to native speeds.
For example things like serialization and dependency injection rely on code emission in order to get insane performance boosts.
But I guess the alternative would be to implement things using source generators.
@@protox4 is there any reason as to why dynamic code generation _couldn't_ work? Based on my knowledge of computers, couldn't the program write to its own memory, overwriting its own instructions and data, and therefore generating code "dynamically", so to speak?
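On the source-generator alternative mentioned a couple of replies up, here's a minimal sketch using the System.Text.Json source generator, which moves the code emission to compile time and so works under NativeAOT (Person and AppJsonContext are made-up names for the example):

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Hypothetical DTO for the example.
public class Person
{
    public string Name { get; set; } = "";
    public int Age { get; set; }
}

// The source generator emits the (de)serialization code at compile time,
// so no Reflection.Emit is needed at runtime.
[JsonSerializable(typeof(Person))]
public partial class AppJsonContext : JsonSerializerContext
{
}

public static class Program
{
    public static void Main()
    {
        var person = new Person { Name = "Ada", Age = 36 };

        // Uses the pre-generated metadata instead of reflection/IL emission,
        // which is what makes this pattern AOT-friendly.
        string json = JsonSerializer.Serialize(person, AppJsonContext.Default.Person);
        Console.WriteLine(json);

        Person back = JsonSerializer.Deserialize(json, AppJsonContext.Default.Person)!;
        Console.WriteLine(back.Name);
    }
}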
Very interesting. Coming from the Commodore 64 and Amiga, all native machine code, then forward through C/C++, and now... C# and... Docker... and now we can get back to natively compiled code... I still love .NET for its scalability, but it's cool to know we can also bake and compile our code.
Hey Nick, can you make a video on the purpose of .pdb files and how to use them, please?
Also setting up symbol servers.
This is so underdocumented thank you so much for this video!
I dived into this a couple of years ago because I thought I was finally going to get C++ performance (the only real thing I missed when I moved from C++ to C# about a decade ago). But .NET Native turned out not to be any faster... pretty much exactly what Nick was getting.
NativeAOT will have a faster startup as it does not need to JIT the IL into native code, but once that has occurred, both will run at pretty much the same speed, as all NativeAOT is doing is moving the IL-to-native-code compilation to build time rather than runtime. For some intensive code, the JIT may even end up being faster, as it can re-JIT the code based on the specific branches taken, flattening some jumps etc.
@Sam Spencer - I guess I was thinking it was going to be more like C++ where, since it is only compiled once, the compiler spends lots of resources creating extremely optimized native code rather than using the quick-to-compile JIT engine. So when I saw this come out years ago, I was hoping the compiler would spend the extra time compiling, but it seems to just use the JIT. The JIT needs to be quick because of startup costs.
The JIT does have its advantages, as you said. It can use values that have already been set and treat them more like constants. Constants are super fast because the compiler can pre-calculate math and hardcode or eliminate branches (as you mentioned). Hopefully one day the C# engine will offer some optional deep-compile options. I say optional because any deep optimizing compilation would have a very big impact on how long the JIT takes (like slow loads).
I guess the nativeAOT would be useful for items that are spun up - run a task- and close - I recall the video mentioning that.
Thanks for the reply by the way. Sometimes I feel like my comments just go into the ether never to be read.
@@FunWithBits AFAIK it uses the 2nd tier of JIT which is slower to compile and faster to execute. The JIT however has another trick up its sleeve which is that as the program is run, it can see what the hot paths are, and re-JIT based on those. NativeAOT doesn’t get to do that as it doesn’t have that data.
AOT is great! Thanks Nick for sharing. It finally solves the huge size of self-contained exes (with the runtime packaged in them).
Hey Nick, awesome! Thanks for bringing this out. One thing to point out as a great AoT advantage is that the code is way more (though not impossibly) protected against ILSpy, dnSpy or decompilation in general.
One question tho: is it possible to have a class with some calculations and stuff compiled to native (to keep sensitive code protected, *unmanaged*), and then use it as an import in a project and call/use it? Thanks!
I wish there was an option to compile a C# application for bare-metal embedded systems.
This is great! Now what we need is a lightweight web server and framework that doesn't rely on reflection. I'm not a fan of ASP, so I would have wanted that either way - but maybe now there's a more compelling reason to do it? 😄
Love your explanation. Thank you
Very interesting. I've always liked C#, but having to carry around so much baggage (and expecting target machines to have certain files installed) with every app had me go to another language. This will convince me to take another look at C# in the coming months.
Great for cloud native applications
Great video Nick! How does garbage collection work with AOT? Is it included in the 2 MB file size?
It is yes
C# native will still have more safeguards to prevent dangerous errors; it's still garbage collected and it's still primarily heap based. This makes C# a delight to program in, but it also limits how much you can optimize your code at the low level. On the other hand, C# has automatic memory pools which make heap allocations faster in C# than in, for example, C++. C++ is probably faster overall, but it's not usually that much faster unless you actively think about performance. C++ does allow for deeper optimizations which let it run circles around C#; in those cases, however, you'll be writing much more custom code and thinking about CPU cache lines.
So far my experiences are... C# is easy and fast to use. C++ anywhere from medium easy to absolute nightmare depending what you're doing and how much you care about performance. C++ doesn't really have a centralized ecosystem of libraries so you may end up spending a lot of time finding the stuff you need or programming it yourself. It's really about a tradeoff between how much time you're willing to spend programming to make the application spend less time executing.
I usually prefer C++ as I find it more interesting and closer to the metal, but I also like the relative simplicity of C#. I haven't used Go, or Rust yet but I can definitely see the appeal.
We already have a use case for this: in Unity, it is hard to download files (e.g. from object storage) asynchronously, multiple at a time. So we came up with a native plugin, written in .NET 7, which we DllImport from Mono in Unity. Works perfectly lol
@Павел Иванов Well, if you're talking about C# to native: just DllImport.
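For anyone curious, a rough sketch of what the consuming DllImport side of such a plugin can look like; "downloader" and "download_file" are made-up names for the NativeAOT-built library and its export.

using System;
using System.Runtime.InteropServices;

public static class NativeDownloader
{
    // Resolves downloader.dll / libdownloader.so at runtime, exactly as it
    // would for a plugin written in C or C++.
    [DllImport("downloader", EntryPoint = "download_file", CharSet = CharSet.Ansi)]
    private static extern int DownloadFile(string url, string targetPath);

    public static void Main()
    {
        int result = DownloadFile("https://example.com/file.bin", "file.bin");
        Console.WriteLine($"native plugin returned {result}");
    }
}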
Is it smaller? Yes, but it still seems large for such a tiny bit of code compiled to native instructions. I remember fitting several compiled C++ applications on a single 1.44MB floppy disk back in my high school days.
Yeah, there's still a small VM in the application, as well as the whole .NET GC. About 500KB of that size are native runtime bits while the rest is either the managed reflection stack, globalization or other stuff the app indirectly depends upon.
I am not a C++ developer, but I can imagine that even in the C++ ecosystem the compiler would not generate anything much better.
Nick references System.Diagnostics and uses the Stopwatch. What seem like simple method calls still mean that all of the backing code gets pulled in; the same goes for Console.WriteLine().
When you were fitting several compiled applications on a single floppy 20 years ago, you were only compiling 32-bit applications, with pretty much anything you wanted to do having to be coded yourself. And we had never heard of Unicode strings either. No extensive libraries to pull code from. All of these things contribute to a smaller executable.
Needless to say, the architecture of computers/processors has changed drastically and these comparisons can't really be made; after all, if you look at Commodore 64 games, especially the recent ones, it's mind-boggling that people can fit all that into 64KB.
I rewrote the example in Rust and it takes 290KB. I couldn't be bothered to do the same in C++, but I have a fairly similar C++ program that I wrote the other day, and it's 27KB. That's on Linux x64. The difference is that C++ can rely on the OS runtime library (libstdc++, which is always there on Linux), while Rust and .NET have to link their runtimes statically in the binary, and .NET's is still a lot bigger.
@Paul Koopmans I disagree; try explaining that to Go or Rust developers creating sub-200KB programs. There's even an entire subculture of hackers/coders (the demoscene) who try to stuff the most visual/audio art into a 64KB file, mostly built in C++. I'm reminded of the old Bill Gates quote where he supposedly claimed 640K is all the memory we would ever need on a computer. We can't stop at that size; it's still too large for what it represents. We can either make excuses for its size, or we (the collective "we") can continue the great work towards making it smaller.
@@yondaime500 My thoughts exactly. CoreRT with AOT was able to accomplish something similar a few years ago using a test core library, compiling down to under 500KB. That includes static linking, GC, exception handling etc. And Mono did it many years before that.
I have been waiting for this moment for years, ever since it was called CoreRT.
Very cool. Please update this when it is production ready.
Does that affect Attributes, since attributes often make use of reflection to work?
This is amazing! To have this option at least as a choice in many cases. And yeah, those limitations around dynamically loading or generating code that you mention are kinda logical when there's no runtime and no JIT (but why would you even want to, when the whole purpose of ahead-of-time compilation was already to fixate everything from the start).
I have already experimented with the existing ReadyToRun feature, which is a partial version of ahead-of-time compilation but actually makes your binary bigger instead of smaller. :P So a great improvement is coming here. I am going to try it.
In many cases though, what matters more to me is the self-contained single-file option, which is VERY slow at startup; that makes by far the greatest difference for me. Basically it's kind of a self-extracting archive, but a lot can be done with that too.
Are they considering faster compression algorithms? Because that could be very helpful as well.
This will be very nice for containers as soon as it's stable etc.! I have a question, though: does this also mean that the attack surface is lower? I mean, it's all machine code, so decompilation is not as good/fast as with JITed code, right?
Only from a security by obscurity standpoint. The nativeAOT code can probably be reverse engineered to the original source code, much as can be done with IL, but it will take more effort to do so.
@@ironictragedy Gotcha thanks!
Brilliant, just brilliant, thank you for sharing
Very cool. I will try it soon
I guess will have to wait for that aws lambda video. Thanks for great content 🙏🏼
I tried AoT native compilation pre-.NET 5 (iirc) and it was nice but wouldn't work reliably. Looks like it's still in much the same place, but at least it's getting officially included in the runtime, which is something. Definitely something to think about if you're running in containers.
Thanks for the video. I am new to C#; do you have any tutorials/courses that show how to create APIs in C# and deploy the final app to a Linux cloud?
Woah. this is what I been waiting for.
Nice vid. I have found Avalonia to be a great showcase for startup time improvements in AOT scenarios.
Now, when starting a new project in VS and choosing .NET 8, you can enable AOT native publishing by default with a checkbox.
So when you open your project file, it already has the PublishAot property set to true.
This doesn't apply to .NET 7 or earlier.
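For reference, a minimal sketch of what that project file ends up looking like; PublishAot is the MSBuild property the checkbox sets, the rest is just a plain console project.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <!-- The new-project checkbox writes this property -->
    <PublishAot>true</PublishAot>
  </PropertyGroup>
</Project>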
Forgive me if the question is silly. Would it be possible to load PHP core library (written in C) into C# and build extensions with it? Or maybe with a PHP-CPP intermediary layer library?
I guess yes. I don't know exactly how Zend works, and that's why I have not tried it yet. I'm doing the same but for Java; my goal is to reduce the complexity of JNI using .NET.
Would have been cool to compare the startup time too. Even something very simple like a bash script that records the time, runs the exe, prints out the duration.
I intentionally didn't, so I can show it in the Lambda video, where cold starts really make a difference.
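Until that video lands, here's a rough sketch of the kind of comparison suggested here, written in C# rather than bash; the publish paths are placeholders, and for a tiny app the measured time is dominated by startup.

using System;
using System.Diagnostics;

public static class StartupTimer
{
    public static void Main()
    {
        // Placeholder paths to a framework-dependent build and an AOT-published build.
        string[] exes = { @".\publish-jit\AotApp.exe", @".\publish-aot\AotApp.exe" };

        foreach (var exe in exes)
        {
            for (int i = 1; i <= 3; i++)
            {
                var sw = Stopwatch.StartNew();
                using var process = Process.Start(new ProcessStartInfo(exe)
                {
                    RedirectStandardOutput = true // keep the child's output out of the report
                })!;
                process.WaitForExit();
                sw.Stop();
                Console.WriteLine($"{exe} run {i}: {sw.ElapsedMilliseconds} ms");
            }
        }
    }
}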
You should know that the AOT option can't really compete with C or Rust when it comes to speed, startup time, RAM usage and binary size (actual numbers depend on the use case). If your target has such constraints, and your knowledge of memory management & optimization techniques is deep enough, you can consider using a language that was designed to compile to native machine code in the first place.
C# gives you memory management and platform independence for free, not to mention the performance optimizations you also get for free with almost every major release without changing your code. The AOT option is actually great if you need to run your application on systems which do not allow JIT (like iOS & game consoles). It is actually crazy that this works at all, albeit with some features limited.
I thought I'd leave this here as an addition to the comparison with other languages at the beginning of the video. :)
I would think that there are more substantial performance benefits for larger libraries and some apps?
Particularly for startup, memory, and protection from decompilation.
Something I am interested in following.
Does it compile a mini garbage collector within the executable?
yes
No, it compiles the full regular .NET GC into the executable, which makes a few hundred of KB of that size.
Yes, GC is still there. Mainly what’s not there is the JIT
Is there a garbage collector or do we have to manage allocations? Since it's the CLR's job
Code will work the same as before, so still a GC
@@andreakarasho How though? Garbage collection and threads are the CLR's job; that's why C can't do it.
There's still a slimmed down VM written mostly in C# in that executable, but there's the full gc.cpp from CoreCLR in there.
@@suchiman123 oh I think I get it
So AOT apps basically look like?:
int main()
{
while(true)
{
UserApp();
GarbageCollection();
ThreadAndStuff();
}
}
Yes the GC is still there. Technically there still is a “runtime” in there, there is just no JIT.
Hi Nick, will DIE show the app as a .NET app (which it should, I guess)? And at 7:22, what is that 2nd AOTApp process taking 84.6MB?
It's the Windows Terminal process, not the main application.
@@TheRPGminerThanks, I totally overlooked that fact 😅
Cool. Expect to see it in AWS lambdas
Electron apps face the same issue: without a framework preinstalled on the local machine, each application must contain everything it needs to run, which makes each project unnecessarily large.
I'm hoping at some point web browsers adopt the ability to install DLLs which can be used by any site, so that we can get away from having to use JS to interact with the DOM. Assuming Blazor takes hold, maybe we can work towards Wasm modules being cacheable across all domains.
Another argument in favor of native code is power consumption. There's a paper from 2017 showing that non-native languages are quite significantly more power hungry than native languages like C++ and Rust to do the exact same task, and C# was no exception.
True to some extent... It's all about the execution speed, including how fast it is to start up the application, and C# generally isn't very slow. Primarily it comes down to the efficiency of your algorithms. C++ is my favorite but I do admit it can take a very long time to create an efficient program.
I've already been using this feature since .NET 5.
Hi Nick, great video, as usual! Btw. where did you get this awesome-colored console?
Or Swift, which I created a whole shipping iOS app in but had to turn away from to go multiplatform.
Surprised it doesn't run way faster; does it not get optimized by the compiler or something, like with LLVM?
Asking because in Unity we use C# and they have what's called the Burst compiler, which converts C# code to native code and makes a huge difference, like x100 to x10,000 faster (it also sets some strict coding restrictions to enable far better memory management).
Why we had to wait for .NET 7 for this remains a complete mystery.
This is cool. Unity will most likely take advantage of this and replace its IL2CPP tool :)
yeah in another 5 years
@@Yupmoh Yeah, Unity has stated that they are still committed to IL2CPP for the foreseeable future since it's already a mature technology (more mature than NativeAot), but they are keeping an eye on it, so who knows?
@@protox4 honestly they should just ditch everything and port to .net6/7
Hey, that's pretty good
This feature would be amazing. Just yesterday I thought about learning and using Rust to create a native DLL so that my chat client (which also has file transfer and soon data streaming etc.) can be used from Java on Android and Swift on iPhone. If it were possible to code it in C# and make it usable from other languages, that would be great.
Phones have that peculiar problem that they cannot run a 2nd executable for security reasons. Maybe something like services? I tried to find a solution for this problem that does not bother the other two developers too much.
NativeAOT can also generate dynamic and static link libraries with regular extern "C" exports (see [UnmanagedCallersOnlyAttribute]), which would allow them to be just loaded or linked into your apps.
Shared memory (on Android) might be a solution to that. Not sure about iOS.
@@suchiman123 iOS doesn't allow dynamic linking - for security reasons
I'm currently writing a program that is cross platform between Windows and Android, all written in C# using .NET 6 and .NET 6 Mono (Xamarin), you can create all your shared libraries in a .NET Standard 2.1 project and have the same code run on Windows and Android with zero adjustments.
@@ZintomV1 - I know, but I didn't plan to replace 2 developers in the company... yet.
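To make the [UnmanagedCallersOnlyAttribute] suggestion above concrete, here's a minimal sketch of a NativeAOT class-library export that Java (via JNI) or Swift could load like any C library; "add_numbers" is a made-up export name, and the project would be published as a native library per the NativeAOT docs (e.g. via the NativeLib property).

using System;
using System.Runtime.InteropServices;

public static class Exports
{
    // Exposed as a plain C-style symbol in the produced .dll/.so/.dylib.
    [UnmanagedCallersOnly(EntryPoint = "add_numbers")]
    public static int AddNumbers(int a, int b)
    {
        // UnmanagedCallersOnly methods may only use blittable parameter and
        // return types; strings etc. have to be passed as pointers.
        return a + b;
    }
}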
Coooooool, hope this comes to web APIs
No need. .Net 7 is super super fast, faster than Go
Can you talk about the hybrid mode? As far as I understand there is an AOT mode where dynamically loaded code is supported, however it actually runs using an interpreter (the eval is running off of IL directly). I'm really curious about the internals and details of that approach.
PublishSingleFile works well for me (I ignore the giant EXEs!), unless I need to use MySQL. The MySQL client uses the 'CodeBase' property in reflection, which apparently does not get populated in PublishSingleFile mode.
This is probably for Unitys upgrade to CoreCLR... pretty cool!
Oh my god! I always wanted that!
It's great for dockerized command line apps
Thanks a lot for this video! Does it work if code contains reflection? Are there any other restrictions?
Do you know how intrinsics are handled? Is there a way to target certain instructions (SSE, AVX, etc.)?
@@CodeNova I meant in the context of AOT compilation
It is possible to specify a baseline of supported instructions at compile time, if they're not available at runtime, the application will fail to start. For every instruction set above the baseline, dynamic detection at runtime is performed (e.g. Avx.IsSupported will be dynamically evaluated).
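A small sketch of what that looks like in code: IsSupported checks at or below the compiled baseline fold to constants, while anything above it stays a real CPU check in the AOT binary (the baseline itself is configured at publish time, e.g. via the IlcInstructionSet property described in the NativeAOT docs).

using System;
using System.Runtime.Intrinsics.X86;

public static class CpuFeatures
{
    public static void Main()
    {
        Console.WriteLine($"SSE2: {Sse2.IsSupported}"); // part of the default x64 baseline, folds to a constant
        Console.WriteLine($"AVX:  {Avx.IsSupported}");  // above the baseline: evaluated at run time
        Console.WriteLine($"AVX2: {Avx2.IsSupported}");

        if (Avx2.IsSupported)
        {
            // An AVX2-specific code path would go here; the AOT compiler still
            // emits it, guarded by the runtime check above.
            Console.WriteLine("taking the AVX2 path");
        }
    }
}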
It's not exactly a new feature. Compiling C# to native code has been available for quite some time, but limited to UWP with something called .NET Native. It was (is?) amazing. I was building an app for Windows 10 IoT Core on a Raspberry Pi 3 and the difference between JIT and AOT was huugeeee... like a 30-second difference in startup. The AOT version started in a few seconds, while the JIT one took 30+ seconds. I'm happy that they managed to take it beyond UWP.
The concept has been here for a long time. NGen was working like this for years but it was .NET framework specific. Xamarin forms for iOS was using Mono AOT etc
.NET Native was experimental, it became CoreRT (still experimental), and finally became official as NativeAOT. You're talking about an older version of the same technology, so while it's not technically new, it's newly official.
Awesome 🎉, looking forward to these changes being adopted in Azure Web Apps.
Can't wait to see it running in a lambda function
Without support for Reflection.Emit, does that mean EF lazy loading is out?
The single-file option is only slow on the first run, when it unpacks the file; after that it's fast.
If my program depends on a static file, can I bundle that somehow in that executable file too? Or do I have to put that file separately?
Was the previous self-contained compilation NGen?
Hey man, how about writing the exact same thing in C/C++ or Delphi and comparing perf and memory footprint?
Because it's the norm for me to pick a native-code language when anything with client-side high performance is concerned.
Can we convert Windows Forms apps into intermediate language? I'm guessing not, lmk
Ouch, that platform-specific headache... Not again :)
I wonder if this could be used on a Raspberry Pi - the small footprint
Do AOT apps have GC, and if not, how are strings cleaned up?
Yes AOT apps have GC
How does the AoT mix with things like Reflection?
The reduction in size is really amazing, but I guess a lot of information is lost in the process
Reflection is supported, but you can only reflect over what has actually been compiled (NativeAOT implies trimming, i.e. not compiling things it doesn't see as used). MakeGenericType / MakeGenericMethod over a value type / struct only works if the type was either explicitly used somewhere else in the code or specified as needed in the rd.xml file. There's also a hidden switch, `IlcDisableReflection`, which disables the reflection stack and reflection metadata generation and, with a few other flags, can bring the app size down to below 1MB.
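For anyone who hasn't seen an rd.xml before, a minimal sketch of the two pieces mentioned here; MyApp and MyApp.PluginType are placeholder names.

<!-- rd.xml: force reflection metadata for a type the compiler can't see being used -->
<Directives>
  <Application>
    <Assembly Name="MyApp">
      <Type Name="MyApp.PluginType" Dynamic="Required All" />
    </Assembly>
  </Application>
</Directives>

<!-- .csproj: wire up the rd.xml, or alternatively go the other way and strip reflection entirely -->
<ItemGroup>
  <RdXmlFile Include="rd.xml" />
</ItemGroup>
<PropertyGroup>
  <IlcDisableReflection>true</IlcDisableReflection>
</PropertyGroup>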
Imagine if you could AOT-compile your C# program with clang -O3 and AVX2, this would be nuts.
0:28 "...and why it's probably far from something you will realistically use."
I work for an MSP and run a lot of things through our RMM that I would love to do in C#... but can't, because not all of our several thousand endpoints even have .NET installed, let alone up to date. I have a ton of endpoints that don't even have a working PowerShell because some idiot installed something that corrupted the PATH environment variable. (And yes, we're fixing those as we find them, but they keep cropping back up.) So yeah, I'm going to be using AOT a lot.
What is the execution time with PGO? That is probably faster than AOT
nice, thanks
Can't wait for the Lambda examples.
love when they release some real "hhnnnnnghhh"-features
I would love to watch the AOT Lambda video.
It's cool for containers, lambdas, and game engines.
for small container? :D
JIT version should be benchmarked after a warmup. I'm not sure if this will affect the result for this particular benchmark though.
It doesn't have an effect in this particular scenario.
The JIT warmup is part of the difference between the two options, so it definitely needs to be included in the benchmark. Startup time should be too, but Nick will cover that in another video (according to his comment in another thread).
stupid question: do you still have garbage collection? i mean thats the job of the CLR, and with aot you skip clr basically? im confused
The GC aspect of the CLR is still embedded in your output, plus all the BCL code your app uses. What's ultimately removed is the JIT, but the "runtime" is still there (similar to what happens in Go).
Great stuff. Can Native AOT be used to create micro services / minimal API apps with MapGet?
Hi, I'm trying to register on your web page for some of your courses, but I really don't see that option :(, only login. Thx
I've installed .NET 8 and I can't find the C# 12 compiler anywhere in what got installed; where is it? Not the old csc.exe for C# 5 in the Windows directory.
Would it ever be able to use rust code for libraries to get the speed I need?
DllImport works the same as before, so you have to compile the Rust code into a dynamic or static library and link that lib in your csproj. NativeAOT has good docs about it.
I wonder what it does if you depend on some NuGet package that uses features that are not compatible with AOT. Does it just throw a build error, or is there a chance that it will build anyway and then do something weird at runtime? Is there a way to scan the dependencies to see which ones are not compatible?
For what it's worth, I rewrote this example code in Rust (because of course I did) and it took about 2.8s to run with the same input, which is pretty close, although I don't know if your processor is faster or slower than mine, so not that meaningful. But the exe is only 290KB and only uses 100KB of RAM, so there's that.
You will receive a PlatformNotSupportedException at runtime if you come across something unsupported.
What about winforms? Or console only?
For now it is console and class library only
definitely cool
How come pre-jitted isn't any faster?
Or is NativeAOT not actually pre-jitted, just a slimmed-down CLR?
Not every usecase will be faster in runtime. It depends on the complexity of what you’re running
So can it cross-compile, e.g. from Linux to Windows binaries?