Thank you for the great video! How does the process of warranted abstraction in creating a digital twin balance the need for accuracy with the practicality of model complexity? Specifically, how do you decide what aspects of the physical copy machine are essential to include in the digital twin, and what can be left out without significantly impacting the model's performance?
Thanks for the question. At its current level of complexity, with only a few of these machines in an office building, the impact on run time would be minimal. If the building hosted scores of them for production work, scaling would become an issue: one would likely need to set up many different workloads, and the host room would certainly grow in complexity (with the shared surfaces), which could test a solver.

The shorter timestep is the elephant in the room. It creates larger performance results files as well as longer run-times. It would be worth testing whether a 2- or 3-minute timestep degrades the deliverables.

I would make a differently complex model if it were being built for the manufacturer of the kit, who wanted to study heat transfer within the machine. That would be multi-zone and might require an order of magnitude more surfaces, as well as that destructive tear-apart and far more monitoring.

And if one were serious about run-time, creating an even more abstract variant would be worth trying. If a detailed comfort study were not called for, a more abstract DT might be good enough.
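To make the timestep trade-off concrete, here is a back-of-envelope sketch of how timestep length drives the number of solver steps and the results-file size over a simulated year. All of the figures (the bytes-per-record constant, the candidate timesteps) are illustrative assumptions, not values from the video:

```python
# Rough scaling of solver steps and results-file size with timestep length.
# BYTES_PER_RECORD is an assumed size for one timestep's saved state.

MINUTES_PER_YEAR = 365 * 24 * 60  # one simulated year
BYTES_PER_RECORD = 200            # assumption: bytes written per timestep

for dt_minutes in (1, 2, 3, 5):
    steps = MINUTES_PER_YEAR // dt_minutes
    size_mb = steps * BYTES_PER_RECORD / 1e6
    print(f"{dt_minutes}-minute timestep: {steps:,} steps, ~{size_mb:.0f} MB")
```

The point of the sketch is that both run-time and file size scale roughly inversely with the timestep, so moving from a 1-minute to a 3-minute timestep cuts both by about a factor of three, provided the coarser step does not degrade the deliverables.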