More info: another technique to consider is using the Extract Transform node to capture the position and orientation data of moving (not deforming) geometry. By caching only those points, you can significantly reduce the amount of data storage required. Once the points are cached, you can use a Transform Pieces node post-cache.
I've also done some research on using a single Alembic file, and I tested it on the Crag scene. Unpacked, it consumed 1.21 GB (packed: 504 MB), and saving it in USD format gives the same size.
Yes, I wanted to write about Extract Transform for packed objects; it can store PSR, but it only works with non-deforming meshes with a constant point count.
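The Extract Transform workflow above can be sketched in VEX. This is a hedged sketch, not the node's implementation: it assumes the cached point carries the P and orient attributes Extract Transform writes by default, and that the rest geometry's pivot sits at the origin (a real setup would subtract the rest pivot before rotating, which is what Transform Pieces handles for you).

```vex
// Point Wrangle sketch: input 0 = static rest geometry, input 1 = the cached
// transform point produced by Extract Transform. Mimics what Transform
// Pieces does for a single non-deforming piece.
vector  pos = point(1, "P", 0);       // cached position
vector4 q   = point(1, "orient", 0);  // cached orientation quaternion
matrix3 rot = qconvert(q);            // quaternion -> rotation matrix

v@P = v@P * rot + pos;                // rotate, then translate each point
```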
Very informative thanks for sharing👍
Very important stuff, Thanks for sharing.
Very cool! Thank you!
Thanks for sharing!
It's very useful info, thanks....
The problem with this method is it is only helping you save space for data that is going to come back into SOPs.
You cannot use this to render, as you'd need a procedural at render-time to do the transfer.
A single Alembic vs. per-frame Alembic files already does something very similar to what you've shown, and the USD data format furthers this; both of those data types can be rendered directly, either in Houdini or another DCC.
Have you found it usable for final frame rendering, or more an intermediate caching reduction workflow?
Hey Lewis, of course we can use this for the final render; check this: ua-cam.com/video/V7FIKScwC_E/v-deo.html
I've also done some research on using a single Alembic file, and I tested it on the Crag scene. It consumed 1.21 GB, and it saves in USD format at the same size. So I'm importing an optimized version of the scene, which is only 199 MB, directly into Solaris. It's working well, but we still need to recompute the velocity. Other than that, everything looks good!
To export the optimized cache for use in other software, we can try using a Houdini Digital Asset. Although I haven't personally tested this approach, it should allow us to load the optimized cache into other software. If for some reason this doesn't work as expected, we can resort to caching it using the traditional methods we've used in the past.
@@VFXMagics Anything you load into Solaris that is coming live from SOPs is going to be buried in the exported USD stage; it won't be a reference or delayed load. USD export has the same functionality to store and transfer data; we use it for hair.
The problem with making an HDA is that you'd need to inject Houdini Engine into the mix, which will pull memory and a license. I don't know how it would scale with big data sets, either.
@@lewistaylorFX I haven't specifically cached anything additionally for rendering in USD. However, if it's necessary, we can explore the option of using temporary caching as you mentioned for the USD export.
@@VFXMagics Yeah, that's what I mean. If you export the USD stage, that data will be baked in as binary instead of referenced. Then you'd need to call Houdini Engine and run your SOP data transfer; that's where these cache-reduction methods fall over, as they need a process run on them to be usable.
Sir, in my version that node doesn't show up, and it isn't the same. How can I get this node?
Very useful 👍🏼
Does this work for RBD simulations? Like breaking glass?
It works best for RBD, actually. In fact, in RBD you get the chance to save points only, so I recommend saving a single frame of the glass geo plus the full range of sim points; that way you can save about 95% of the data.
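The RBD version of the idea can be sketched like this. A hedged sketch only: it assumes the cached sim points carry the usual name, P, and orient attributes the RBD tools write out, and that each packed piece matches one sim point by name (which is what the Transform Pieces SOP does under the hood).

```vex
// Primitive Wrangle sketch: input 0 = single-frame packed glass pieces,
// input 1 = the cached per-frame RBD sim points. Each piece finds its
// matching sim point by name and takes its transform.
int pt = findattribval(1, "point", "name", s@name);
if (pt >= 0)
{
    vector4 q   = point(1, "orient", pt);
    matrix3 rot = qconvert(q);                       // orientation
    setprimintrinsic(0, "transform", i@primnum, rot);

    // The packed prim's single point carries its position.
    int ptnum = primpoint(0, i@primnum, 0);
    setpointattrib(0, "P", ptnum, point(1, "P", pt), "set");
}
```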
Thanks!!!
❤❤
Thank you
Thank you for sharing. Can we assume that this will speed up render times because the caches are smaller?
Technically, using a Wrangle or VOP, it still takes time to read the data back. While it can save some disk space, it won't necessarily speed up the render.
great tutorial! How can I apply this technique to a FLIP, POP, or RBD sim like you are showing in your teaser trailer?
Hey, in the trailer demo I used constant particle counts only. You can use a Wrangle or VOP inside DOPs, or post-DOP, and read P back after the cache.
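That post-cache read-back can be sketched as a small wrangle. Hedged sketch: the store1, store2, ... attribute naming is an assumption here, so match it to whatever your cache actually stores per frame.

```vex
// Point Wrangle sketch: after loading the single-frame cache, rebuild the
// animation by copying the stored per-frame position back into P.
// Requires a constant point count across the frame range.
string attr = sprintf("store%d", int(@Frame));
if (haspointattrib(0, attr))
    v@P = point(0, attr, @ptnum);
```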
Did you compare with alembic ? I believe alembic is able to do it in the same way
Yes, tested on the Crag geo; .abc and .usd both consumed 1.21 GB.
Yes yes yes yes yes
I followed this guide, but it didn't work for me, so I did it another way. In a Wrangle I wrote
v@storeP = v@P;
then in an Attribute Rename I changed the name to
store`@Frame`
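For the workaround described above, the frame-numbered attribute can also be created directly in VEX, skipping the Attribute Rename step. A hedged sketch; the store prefix simply mirrors the naming used in the comment.

```vex
// Point Wrangle sketch, cooked once per frame (e.g. before a per-frame
// cache): write the current position into a frame-numbered attribute.
string attr = sprintf("store%d", int(@Frame));
setpointattrib(0, attr, @ptnum, v@P, "set");
```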
Which Houdini version are you using?
@@VFXMagics Hi, 18.5
@@VFXMagics I was able to add the frame number to the attribute name, but then I can't read it back using your code in the Wrangle.
@@nyashkov Yes, this code isn't supported in Houdini versions older than 19.5. In this tutorial I'm using 19.5, which is why it works.
@@VFXMagics thank you very much!