I was just looking for a way to do this easily. You rock man. Thanks for sharing these workflows, I know how much time, effort and trial and error devising these methods takes.
Man, you are GOLD, you freely give out golden information, be blessed bro
You could create a depth pass from the generated image in Comfy and use it as a displacement or bump map back in Blender.
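A minimal sketch of what that could look like with Blender's Python API, assuming the depth map has been exported from ComfyUI as depth.png and the object already has a material (node names and values here are illustrative, not from the video):

```python
import bpy

# Assumes the active object already has a material, and that "depth.png"
# is a depth pass exported from ComfyUI next to the .blend file.
obj = bpy.context.active_object
mat = obj.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Load the depth map as a non-color image texture
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//depth.png")
tex.image.colorspace_settings.name = "Non-Color"

# Drive a Displacement node with it and plug it into the material output
disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.1  # tweak to taste
links.new(tex.outputs["Color"], disp.inputs["Height"])

out = next(n for n in nodes if n.type == "OUTPUT_MATERIAL")
links.new(disp.outputs["Displacement"], out.inputs["Displacement"])
# For true displacement (not just bump) in Cycles, also switch the material's
# displacement method to "Displacement and Bump"; where that setting lives
# depends on the Blender version.
```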
Excellent and inspirational. I am a beginner Blender user with plenty of 360 VR and some immersive VR world building experience. I wonder if I can walk through this and create a base model to iterate on. Just decided to challenge myself, your video was the inspiration. Thank you.
Every new video you release raises great expectations that, of course, are never disappointed. I share your interest in linking 3D modeling with AI. You are always at the forefront in this regard. Thank you very much for the workflows and your excellent tutorials.
It's videos like these that keep me going. There is a lot to learn on how to do all these things. Your workflow was great. Something I can follow. Blender is getting a bit easier, but that isn't saying a whole lot. Thank you for putting the work into this tutorial. You just got a subscribe from me!
This is something I was doing with SD 1.5 and the Latent Labs LoRA, but it was really low-res (no 3D model, prompts only). The 360 panorama looked great in VR (I made anime environments only, so it was easy to cheat the seam line), but projecting this onto a 3D model was something I was missing. This could turn out to be an efficient way to easily make 3D environments for VR games, especially on-rails shooters like The House of the Dead.
Thank you very much for this wonderful tutorial.
Nice one! The only thing missing would be some kind of step that uses the mesh combined with the texture to detect where stretching is happening in certain regions, and then generates the missing texture for those areas so it doesn't fall apart when changing perspective. Then use some displacement maps to get more detail. Binga bonga, nice scene!
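One way to detect those stretched regions (purely a sketch of the idea, not something from the video) is to compare each face's 3D area with its area in UV space after the projection; faces where the ratio blows up are the ones that would need new texture generated:

```python
import bpy, bmesh

# Rough per-face stretch estimate: ratio of 3D area to UV-space area.
# Assumes the active object is the mesh with the projected UV layer.
obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)
uv_layer = bm.loops.layers.uv.active

def uv_area(face):
    # Shoelace formula over the face's UV coordinates
    pts = [loop[uv_layer].uv for loop in face.loops]
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) * 0.5

ratios = {f.index: f.calc_area() / max(uv_area(f), 1e-9) for f in bm.faces}
median = sorted(ratios.values())[len(ratios) // 2]

# Flag faces stretched far beyond the typical face (the factor is a guess)
stretched = [i for i, r in ratios.items() if r > 4.0 * median]
print(f"{len(stretched)} faces look heavily stretched")
bm.free()
```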
This looks insane, can't wait to start testing all these workflows out
Wow!!!! This is exactly what I've been looking for!! Thank you so much.
Do you think it is also possible to mix in the outlines in the Flux workflow?
This tutorial was incredible! Thank you, congratulations, and I wish you lots of success with your projects!
Amazing work and value you are providing for free 🙏🏻
You're amazing. Thanks for all your work!
Fantastic workflow. Thanks for sharing this. I wonder if it would be possible to separate some of the visuals into layers using that depth map? It would then be possible to better simulate the parallax effect due to distance.
Really cool, like your channel and subscribed! However, for this one I don't understand the difference between placing an HDRI backdrop, placing assets in the scene and rendering it out in real time (in Unreal, for example), and this workflow, which imho takes a lot longer? Or am I missing something here?
Great work again Mick, you are a genius my friend.
German brains at work 🙂
Thanks!❤😮
Thank YOU! 😊
Great video, thanks for sharing. I've been trying to wrap my head around using this workflow to create true 32-bit HDRI files (EXR). So far I haven't seen any workflows for this. In theory you could use img2img + ControlNet to generate the panoramas at different exposures and merge them? I'm curious if you've explored this.
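I haven't seen a ready-made workflow for that either, but the merging half of the idea is standard HDR assembly. A rough sketch with OpenCV, assuming you've already generated three panoramas that you treat as a bracket (the file names and exposure times below are assumptions, and OpenCV needs to be built with OpenEXR support):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import numpy as np

# Three generated panoramas treated as an exposure bracket. This is an
# assumption: the AI outputs aren't real photographs, so the "exposure times"
# below are made up and only approximate an EV -2 / 0 / +2 bracket.
files = ["pano_dark.png", "pano_mid.png", "pano_bright.png"]
images = [cv2.imread(f) for f in files]
times = np.array([1 / 60, 1 / 15, 1 / 4], dtype=np.float32)

# Recover a 32-bit float radiance map and write it out as EXR
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times)
cv2.imwrite("panorama_hdr.exr", hdr)
```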
Great work man, thanks so much! I'll get you on Patreon!
This is pure insanity
Nice work, thanks for sharing! I suggest Meshy 4 for 3D gen, it seems like the best of the options atm (it can export characters with animations too, as GLB or FBX).
Oh man. This is insane. Bravo. I'm going to try this. I find Blender painful but I need to push through to see this. I'm on a Mac, so wondering how I might see this inside my Quest 3?
I was really stunned by that waterfall though! I really like how the idea turned out, but can you please give us all a bit of info on how to implement the animated water?
16:16 I used the workflow shown here! It works in a very similar way to the SDXL workflow, for example, but it uses AnimateDiff to create a looped animation! Fog, fire, water and clouds work extremely well!
I'm also hoping for a tutorial video on this animation part 😍
For your upcoming project, you might recreate the Tiger Scene from "Gladiator" set in the Colosseum, featuring an animated tiger restrained by a chain and a crowd simulation
My suspension of disbelief was destroyed when you made the knight two stories tall. Like... come on man... he's towering over all of those archways and openings.... 😅 -- Anyway, this is a super cool process. Thanks for putting these tutorials and workflows together.
He clearly just has a macrophilia fetish and wanted his knight to give the Dwarven dwellers a big steppie. Come on, it's not that hard to believe.
Testing this out, I couldn't get Blender to find the edge of a smoothed sphere using this method. I ended up using Freestyle rendering because my scene is simple, but if you had a complex scene you could use the Line Art modifier in Grease Pencil.
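For anyone else who goes the Freestyle route, enabling it from script is only a couple of settings; a minimal sketch (the thickness value and line set flags are arbitrary defaults, not from the video):

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Enable Freestyle line rendering for the current view layer
scene.render.use_freestyle = True
scene.render.line_thickness = 1.5  # arbitrary, tune for your render resolution

# Make sure one line set exists that picks up silhouettes, creases and borders
fs = view_layer.freestyle_settings
lineset = fs.linesets[0] if fs.linesets else fs.linesets.new("Outlines")
lineset.select_silhouette = True
lineset.select_crease = True
lineset.select_border = True
```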
Top notch sir! Thank you.
This is so good! Is it possible to import the meshes (buildings + sphere) and materials into Unreal? I imagine you have to do some sort of UV projection first? Thank you for sharing your work!
I thought the same. I think if you export the entire object from Blender as .fbx with the texture and material applied, it should work. I need to try it.
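If anyone tries it, a minimal export sketch could look like the snippet below, assuming the projected texture has already been baked to a regular UV map (the file name and options are assumptions, not from the video):

```python
import bpy

# Export the selected objects (projection sphere + buildings) to FBX,
# copying the baked textures and embedding them so the target engine finds them.
bpy.ops.export_scene.fbx(
    filepath="//scene_export.fbx",
    use_selection=True,
    path_mode='COPY',      # copy texture files alongside the FBX
    embed_textures=True,   # and pack them into the FBX itself
)
```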
I really like the fact that your hair is getting shorter and shorter. Great video as always
Can you do this offline with something less involved than ComfyUI? Easy Diffusion or something else? Thanks.
Hi @Mickmumpitz, are you considering a video on a "Sora"-like or Runway ML tutorial and workflow? Would love to try that in ComfyUI
Thanks for all your wondrous workflows.
Thank you
Young Lurch
I "Rang"
And you definitely answered
damn, another subscription 😅really promising
THIS IS AWESOME!!! Saves so much time!!!
Very nice! Do you think you could generate trim sheets with ComfyUI to texture the 3D environment and assets?
What an amazing workflow.
That's really cool!
Yeah, AI is the future of VR. It's going to blow past everything else once we get fast enough GPUs and people bring SD-style image generation into a controllable environment so you can walk around inside it. Then add AI-controlled characters and voice synthesis, and you have the Holodeck. It's going to be INSANE.
Wow, you're a genius!
Fantastic tutorial. Love the pacing as well
Hi, if I have less than 24 GB of VRAM (my GPU has 10 GB), is it still doable?
Once again a super duper tutorial. thx a lot!
Nice tutorial! Thanks again!
Is there any course dedicated to generative AI in depth? I wish I could actually understand what each setting means.
So great!!!
Wouldn't this method only texture the facing side of the objects, forcing the viewer to remain in the middle of the scene? For example, if you were to go to the other side of the pillars, wouldn't you see nothing, as nothing was projected there?
You basically see that in the video, and he also mentions it when looking through the scene in VR.
How can I export this to Unity? It seems that the equirectangular projection works only in Blender.
Damn, Mick. You're brilliant. So many cool things in one short vid. Some of the little Blender shader tips alone are worth the time of this video. And then you pile all the Comfy and Leonardo and other stuff in a tight, crisp presentation . . . excellent.
you're a genius!
Superb tutorial, kudos. ❤️🔥
Great tutorial, thank you for sharing this amazing video
Wow, amazing. I would love to create the most amazing scenes, like dystopian scenes, but I don't see how. I see your tutorials are pretty advanced. Can you teach me, through your Patreon tier, to do something beyond the common Flux images?
Oh My!! That looks really difficult for me, but it seems really easy for you :) 🤕🤤
amazing!
This is crazy
really amazing work! thank you for sharing. subbed! :)
Flux certainly does a great job, but is only suitable for users who don't care about the background at all
What background are you referring to?
@@FranzMetrixs I mean this unfortunate cooperation with Elon Musk. That makes Flux unusable for many users. Unfortunately, unfiltered images generated with Flux appear on his platform X with a very dubious message!
@@FranzMetrixs Same thing I'm wondering.
Nah u 😂trolling
Leonardo textures
He pointed out they weren't from scratch
Amazing!
Is it possible to render passes like the depth and line passes in ComfyUI?
ok this is crazy af.
👋 Looking forward to watching this video
Nice tutorial as usual ;)
The image rendered with the outline shader only displays the object I selected and not all the objects present in the project. Does anyone know the reason for this?
Fantastic! What app are you using to explore the scene in the VR headset? A VR viewing option in Blender?
Leonardo allows you to enable tiling, btw. But anyway, AI equirectangular projection is usually not exactly equirectangular, but it's better than nothing!
Yeah true, but it's more for textures and things like that, so unfortunately it doesn't really work with those images.
Great tutorial!! Thanks!!
Thank you for the tutorial. Looking forward to seeing how you solve the problem of moving around the scene later on.
That's pretty awesome
The exact same prompt didn't give me 360° pictures at all, unfortunately. Any idea about this?
Is there a program where I can just prompt for these outputs? If not, why not make that? Why even program anymore, LLMs should be able to generate the right code for this.
Can anyone recommend the best resources to learn about Stable Diffusion, LoRAs, models, ControlNets, etc.? Any YouTube channel?
Amazing!!!!
Can we use these models in Maya?
I got an error: "InstantX Flux Union ControlNet Loader FluxParams.__init__() missing 2 required positional arguments: 'out_channels' and 'patch_size'". Any help?
Would a 3060 Ti run the local AI to do this?
Are you still using your 2070 Super? I use a 3070 right now and am thinking of upgrading to a 4090, but if you're still using your old GPU I think I'm gonna wait for the 5000 series to upgrade.
Very cool - great work!
So how can we import this HDRI 3D environment into Unity???? It's really important, please help.
Your workflow.json is no longer linked in the Google Docs document?
It's attached to the Patreon post!
Have a look at Ian Hubert's Compify plugin to transform your environment texture from an emission to a Principled BSDF shader
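If you'd rather do that part by hand instead of using the plugin, the core of it is just rerouting the projected texture from the Emission shader into a Principled BSDF; a rough sketch, assuming the material has a single Emission node fed by the projected texture:

```python
import bpy

# Reroute the projected texture from an Emission shader into a Principled BSDF
# so the environment reacts to scene lighting instead of glowing uniformly.
mat = bpy.context.active_object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

emission = next(n for n in nodes if n.type == 'EMISSION')
tex_socket = emission.inputs["Color"].links[0].from_socket

principled = nodes.new("ShaderNodeBsdfPrincipled")
links.new(tex_socket, principled.inputs["Base Color"])

output = next(n for n in nodes if n.type == 'OUTPUT_MATERIAL')
links.new(principled.outputs["BSDF"], output.inputs["Surface"])
nodes.remove(emission)
```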
very cool!
Woow, it's really fresh technology for me
Waiting for the non-stretched texture trick for the 3D environment
Working on that!
So, how did the Oculus Quest part work? Just connecting the Quest? And why do you see a 3D effect? This makes no sense to me.
Did you just throw on an Oculus and zap into the Blender scene? Wow, I didn't know it supported that. I have a bunch of headsets lying around lol
Yeah, you just need to activate the "VR Scene Inspection" add-on, connect the headset (I used SteamVR for this) and click "Start VR Session" in Blender. I was also surprised that it's so easy and will use it more often now!
@@mickmumpitz I've been dying to try that with Unreal Engine, but there's all this conflicting information about how they've changed it, how it works, or that it's not working lol. Unreal Engine is already so confusing, I never even bothered.
@@mickmumpitz It's super powerful, as you can use the headset as a camera. For POV videos you can crawl under things and do all kinds of interesting angles that would be a pain with normal camera animating.
@@resemblanceai niceee. Are you allowed to build out the scene while in VR with your controllers or is it just for viewing??
@@BabylonBaller TBH I am not sure. I have a feeling you probably can. I did the headset-as-camera thing in Unity, with baking. I think you can bake the lights in Blender too. Example: ua-cam.com/video/MSRrpgVrOoQ/v-deo.html
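For anyone who prefers doing the add-on route mentioned a few replies up from script, it's roughly this (a sketch; you still need an OpenXR runtime like SteamVR running with the headset connected, and the session toggle may need to be run from a 3D viewport context):

```python
import bpy

# Enable the built-in "VR Scene Inspection" add-on and start a VR session.
bpy.ops.preferences.addon_enable(module="viewport_vr_preview")
bpy.ops.wm.xr_session_toggle()
```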
Great, great!
Wow wow wow, this is so cool)
Gotta drop Runway Gen-3 video-to-video into your workflow!
Please also give some guidelines for Mac users. A lot of things don't work and require debugging for Mac users with M1/M2 chips.
You are a passionate artist. And it is contagious 😊
Into CogVideoX?
Bruh this needs a full video not 5 seconds 😭 16:28
10/10
I do have a VR headset but my laptop doesn't work with it… kinda sucks…
Such awesome work bro ❤
Amazing... I'll stick with my NVIDIA stock... because I tried Flux and it needs a really high-end GPU; at least, my RTX 2080 is very slow on that model.
So inspiring! Thank you!
Did you play around with Lightcraft Jetset already? Not only the Cinema version but also the free iPhone app. Would be great to learn about a Blender Jetset workflow. 👍
wow
Probably by taking advantage of the procedural nodes in Blender and knowing a little bit about texturing and lighting, you would have waaay more control and better quality. Or, I don't know, just use Unreal with Megascans, etc. lol. A little bit overcomplicated for the result you get... Good case study of a workflow, but a weak one.