I was really hesitant to use procedural animation with traces in my project because I was worrying about the performance impact, but this video gave me the encouragement to dive into it! This was so helpful - your videos are always on point!!
Just found out while watching this. You can also set the LOD Threshold of the Control Rig node in the Animation Blueprint so you can stop evaluating it at a certain LOD. Great video.
You can! But it's a little risky visually to just stop it altogether, depending on the use case. Once it's back to evaluating, it will reset everything (I think?) as if the Control Rig asset had just been created - perhaps running construction again? Or just a really long tick. Unsure, haven't tested.
@LincolnMargison yes, it is a little jarring. Would rather have a way to lerp the Alpha from 1 to 0 when going past the Threshold. Sadly couldn't find how at the moment.
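One possible workaround (untested, just a thought in the thread's spirit): instead of relying on the LOD Threshold, drive the Control Rig node's Alpha pin yourself from camera distance, so the rig fades out instead of hard-stopping. A standalone sketch of the fade curve - the fade distances here are made-up tuning values, and this is plain C++, not an engine API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Alpha for the Control Rig node: 1 while close, 0 past fadeEnd,
// smoothstep falloff in between to avoid a visible kink.
double RigAlphaForDistance(double dist, double fadeStart = 2000.0,
                           double fadeEnd = 4000.0) {
    const double t =
        std::clamp((dist - fadeStart) / (fadeEnd - fadeStart), 0.0, 1.0);
    return 1.0 - t * t * (3.0 - 2.0 * t);  // smoothstep, inverted 1 -> 0
}
```

Feed the result into the node's Alpha each tick and the reset-on-reactivation problem mostly goes away, since the rig never fully stops evaluating until you choose to stop it.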
I really appreciate your content on CR. I haven't seen anyone go as in depth as you have. Please, when you get a chance, shed some light on the Control Rig component for BPs: how to map rig elements in an actor to adapt controllers in the rig to physics bodies in a skeletal mesh. Maybe also do a follow-up to this video exploring the performance benefits of CR component lazy evaluation.
This is amazing timing: I'm literally in the middle of trying to do a fake physics system for finger collisions in control rig. Here I am worrying about doing 30 traces on tick. Great info as always!
I was working on that too today! Making a procedural 'grab'. An interesting solution I came up with was to move the trace point as the hand closes. So: a trace per finger, starting at the first joint of the finger (knuckle), interpolating down the chain towards the tip as the hand closes more. Then you can do fewer traces (rather than 1 per knuckle+tip) and it approximates the same thing, where a larger surface/object will still block the whole hand from closing any further, and if nothing is blocking it, it will be tracing more at the fingertips. You could do this temporally too, where it remembers where it was blocked without redoing the traces and just gradually closes around the object - if your object can't/won't change shape or position relative to the hand. Then during the closing you're only ever doing 5 traces, and zero once it has grabbed as far as it can go.
OR you could decide that 30 traces really isn't hurting anyone and go wild
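The moving-trace-point idea above can be sketched as standalone math (plain C++, not Control Rig nodes - `Vec3`, the joint list, and the 0-1 close alpha are all illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 Lerp(const Vec3& a, const Vec3& b, double t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Slide a single trace origin along the finger's joint chain.
// closeAlpha = 0 -> knuckle, closeAlpha = 1 -> fingertip.
// Requires at least 2 joints in the chain.
Vec3 TraceOriginForFinger(const std::vector<Vec3>& joints, double closeAlpha) {
    // Map alpha onto the chain: pick the segment, then lerp within it.
    const double scaled = closeAlpha * (joints.size() - 1);
    const size_t i = std::min(static_cast<size_t>(scaled), joints.size() - 2);
    return Lerp(joints[i], joints[i + 1], scaled - static_cast<double>(i));
}
```

One trace per finger per tick from that origin gets you the 5-traces-while-closing behavior described above.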
@LincolnMargison Interesting, I was thinking of doing something like that to gate whether the finger collision would run, but I can imagine some scenarios where a finger could be completely inside an object and the sphere trace wouldn't hit anything.
Right now I am starting the trace about half a finger length up from the bone in relative position, then offsetting the hit location by the finger size and turning all of those into points in a spline. The vector math is hurting my head, and there are some edge cases with strange behavior, but it's impressive otherwise! I'm going to throw it up on my VR mocap system GitHub once I figure it out, but I would love to see other approaches!
@Megasteakman You can create a spline from controls or bones, and map a spline TO controls/bones. So maybe just turn it into a spline, trace the individual points and move them, then map it back to the chain.
I think individually, each joint would want to trace the top of that joint, rotationally. So you'd basically want to be tracing the arcs of each finger bone? When one hits, move on to the next and so on, so it wraps around.
E.g. the base of index01 is the rotation origin, but the trace is being done from index02, arcing around until it hits. Then do the same but moved up the chain, etc.
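A rough standalone sketch of that arc sweep (2D for clarity; `isBlocked` stands in for whatever sphere trace you'd actually run, and everything here is illustrative rather than Control Rig API):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Vec2 { double x, y; };

// Rotate `p` around `pivot` by `angleRad`.
Vec2 RotateAround(const Vec2& p, const Vec2& pivot, double angleRad) {
    const double c = std::cos(angleRad), s = std::sin(angleRad);
    const double dx = p.x - pivot.x, dy = p.y - pivot.y;
    return { pivot.x + dx * c - dy * s, pivot.y + dx * s + dy * c };
}

// Sweep the next joint's position through an arc around this joint's base,
// stopping at the first angle where `isBlocked` reports a hit.
// Returns the largest collision-free curl angle (radians), or maxAngle if clear.
double CurlUntilBlocked(const Vec2& pivot, const Vec2& joint,
                        double maxAngle, int steps,
                        const std::function<bool(const Vec2&)>& isBlocked) {
    double lastFree = 0.0;
    for (int i = 1; i <= steps; ++i) {
        const double a = maxAngle * i / steps;
        if (isBlocked(RotateAround(joint, pivot, a))) break;
        lastFree = a;
    }
    return lastFree;
}
```

Run this once per finger segment, base to tip, feeding each segment's resulting curl angle back into the chain before sweeping the next one, and the finger wraps around the surface.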
Very, very helpful, thanks! I was also wondering what the Control Rig impact is, and I'm very glad to see how it handles so far. :)
Excellent, as always!
This is a great breakdown, and some excellent tips. Thanks for sharing!
You can also lower the number of traces based on how far the camera is from the skeletal mesh. The farther an object is from the camera, the less detail it needs.
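That distance-based falloff could look something like this (standalone C++; the distances and trace counts are made-up tuning values):

```cpp
#include <cassert>

// Scale the per-character trace count down with camera distance.
// Full detail inside nearDist, minimum detail beyond farDist,
// linear falloff (rounded to nearest) in between.
int TracesForDistance(double dist, double nearDist = 500.0,
                      double farDist = 5000.0,
                      int maxTraces = 30, int minTraces = 5) {
    if (dist <= nearDist) return maxTraces;
    if (dist >= farDist) return minTraces;
    const double t = (dist - nearDist) / (farDist - nearDist);
    return static_cast<int>(maxTraces + t * (minTraces - maxTraces) + 0.5);
}
```

You could pair this with the LOD Threshold idea mentioned earlier, so distant characters both trace less and eventually stop evaluating entirely.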
Thank you
Hello, saw your course on Udemy. Interested in taking it.
If I want to have the character procedurally touch a door - kinda like in The Witcher when you walk by a door -
is your Control Rig course a good way to learn that?
Or should I be trying to learn something else to approach that?
Cool performance test! Interesting to see it perform so well :) I reckon ControlRig runs on a separate thread and traces are async? Who knows, haven't looked into it much
I think it's all part of the animation evaluation, which runs on a separate thread (I think). It's basically just a regular animation node within the AnimGraph that does some extra stuff.
Shouldn't it be possible to write the procedural math in C++ to make it perform even better?
I don't know what's possible in terms of C++ with Control Rig. I don't think you can do it, but I'm not totally certain. From my understanding it's all highly optimized and running as part of the animation thread, so I don't think you'd get any benefit even if you were able to do it via C++. The math nodes and such aren't causing any issue, so optimizing those won't make a difference. And if you're using traces/set transforms and such, it's probably doing the exact same thing under the hood, so maybe no benefit there either. I think it would be more akin to trying to set up materials via C++. The interface is like Blueprints, but I don't think it's the same as regular BP. Similarly, Niagara wouldn't benefit from C++.
That's what I do, minus the Control Rig part. All the data (from pointers) gets gathered on the game thread at the start of the frame, then the data gets processed and calculated on worker threads, so multiple characters can have their logic done in parallel without waiting for each one to finish. This gives me the values I need, which I then use in the anim graph. Then for the things that utilize traces (specifically firearm collision) I use async sphere traces, and when the result is received I interpolate to it so it's smooth. All the logic in the anim instance adds very little to the frame time (tested with 40 clients: without any logic it was around 13.9-14ms, and with all procedurals it was 14-14.2ms). The anim graph was the most expensive part, which is not a surprise given it's still its own VM running the logic. I imagine you can do something similar, where you take that approach and plug said values into Control Rig to run logic on?
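The "interpolate to the async result" step is essentially a delta-time-scaled interp, in the spirit of Unreal's FInterpTo. A standalone re-implementation sketch (not the engine function):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Frame-rate-aware interpolation toward the latest async trace result.
// speed controls how aggressively we chase the target; the step is
// clamped so a long frame can never overshoot.
double InterpTo(double current, double target, double deltaTime, double speed) {
    if (speed <= 0.0) return target;          // no smoothing requested
    const double dist = target - current;
    if (std::abs(dist) < 1e-6) return target; // snap when close enough
    return current + dist * std::min(deltaTime * speed, 1.0);
}
```

Each frame you call this with the most recent trace result as the target, so a result that arrives a frame or two late still blends in smoothly instead of popping.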
Any chance you could make a video about fish/shark control rigs and animations?
have you found out how or did you cancel your project like a puss?
would help to know your computer specs for comparison
I think it's mainly useful as a general comparison of different approaches, and the relative FPS, rather than absolute numbers, since the situation shown here is unrealistic anyway. In a game scenario you'd likely have more complex character meshes, which would cause lower FPS, and so on. So it's definitely worth testing an approximation/exaggerated version of your actual use-case to get absolute values.
This was on a 4080 and an i9-13900K, but without any engine optimization (e.g. Lumen was probably still on?) and while screen recording at 1440p/60. So it's hard to gauge any exact values from this.
@LincolnMargison I can tell you that I would not be able to do 400 skeletal meshes at that FPS, so it's pretty relevant.
@lifeartstudios6207 This is a comparison showing the impact of Control Rig doing different things, not a hardware benchmark. Even if you get 10% of that FPS with 400 basic skeletal meshes, it would still go down/up/etc. based on the different tests. If you need real-world absolute FPS values, that's something you'd have to test on the target hardware with actual models.
@LincolnMargison Knowing the hardware gives you a ballpark estimate of how much room you have to play with.
The impact of this test will change based on the bandwidth you have to use. If you only dropped 20 FPS, on my machine I might drop way more than that because of the bottleneck.
I don't need a full profile of the scene, I was just kind of curious because I can eyeball it after having worked with Unreal for so long.
Dune 2 vibes. No but seriously thanks for this tutorial ;)
so. the limit is around 20k traces. interesting... (proceeds to make a fully procedural player controller making use of 19,999 traces) XD
I know it's a joke, but for anyone reading, keep in mind these are just the numbers on my system & setup! The absolute numbers will vary, and it's more a case of "is this better than that" rather than precise FPS values.
@LincolnMargison Yeh, it is indeed a joke, but it does have some validity to it. There are a lot of factors that need to be taken into account when making games so they can actually run on the hardware of the time, but the performance cost of our skeletal rigs is effectively trivial in comparison to days past (given that vector intersection testing is one of the more expensive "basic" operations one uses in game dev). Point being, using animation "LODs" in combination with efficient reuse of data makes the performance cost of procedural skeletal animation negligible compared to the early days. While we don't need a full-on Pixar-level animation rig (yet), it is within performance budget to do it... if you wanted to.