Very good video. So do you have any suggestions as to how to approach a big case in parallel? Would it require reconstructing/decomposing after each refinement step, for example? That sounds really time consuming, but it might be the only way to make a huge case manageable.
Since this video I have run some experiments on the damBreakWithObstacle using the interIsoFoam solver in parallel and with dynamic mesh. My results thus far have confirmed my suspicion that the case effectively becomes less parallelized due to the dynamic mesh. It seems to be somewhat case dependent whether the best option would be to reconstruct and decompose the case at each timestep (thus saving on simulation time) or to just initially decompose the case and let it run (thus saving on preprocessing time). Is this clear? I am considering making a future video to explore this in more depth.
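For anyone wanting to try the reconstruct-then-redecompose approach described above, here is a minimal sketch of one cycle, assuming the standard OpenFOAM utilities (`reconstructPar`, `decomposePar`) and an existing `system/decomposeParDict`; the solver name and core count are illustrative, not prescriptive:

```shell
# Hedged sketch of one rebalance cycle between solver runs.
# Run from the case directory after a parallel run has stopped.

reconstructPar -latestTime        # merge processor* dirs back into the case
decomposePar -force -latestTime   # repartition the refined mesh across cores
mpirun -np 4 interIsoFoam -parallel   # resume the run on the new partitions
```

In practice you would script this loop with `stopAt`/`endTime` increments in `controlDict`, trading the preprocessing cost of each reconstruct/decompose against the simulation time lost to imbalance.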
@@interfluo6420 I'm not sure I get what you mean by "less parallelized"? That OF tends to reconstruct and reduce the number of partitions covering the refined region? I will run a test tomorrow on a real-life case to experiment a bit.
But thanks for the video. I was preparing a case today and came to the conclusion that it'll take over 1B elements on a uniform grid. So this might save my life.
All I mean by "less parallelized" is that the distribution of cells across cores becomes uneven. Imagine your initial case has 16 cells distributed evenly across 4 cores; this should effectively run 4x faster than the single-core (serial) case. If at the next step you refine and now have 100 cells on core 1 but 4 on each of the others, the run is limited by the slowest core: 112 total cells but 100 on one core, so only about 1.12x faster than the serial case.
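The arithmetic above can be sketched in a few lines, under the simplifying assumption that runtime scales with the most loaded core and communication overhead is ignored:

```python
# Hedged sketch: estimate parallel speedup under load imbalance.
# Assumes cost per cell is uniform and each timestep synchronizes,
# so the wall time is set by the most loaded core.

def speedup(cells_per_core):
    """Ideal speedup vs. serial: total work / work on the slowest core."""
    return sum(cells_per_core) / max(cells_per_core)

print(speedup([4, 4, 4, 4]))     # balanced:          4.0
print(speedup([100, 4, 4, 4]))   # after refinement:  1.12
```

This is why dynamic refinement without load balancing erodes parallel efficiency even though the total cell count stays modest.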
@@interfluo6420 AMR load balancing has been investigated in the past. Have a look here: github.com/ElsevierSoftwareX/SOFTX_2018_143 — you can also find the related publication on ResearchGate.
Does Strauss's music help to converge OpenFOAM simulations?
It does indeed, in much the same way as flame stickers add 50 hp to any car
Very interesting stuff. Will you go into the details of setting the AMR fields in future videos?
That's the plan!
Sir, in my case the AMR didn't work. Any idea why?