I was just looking for a way to do this easily. You rock man. Thanks for sharing these workflows, I know how much time, effort and trial and error devising these methods takes.
This is something I was doing with SD 1.5 and the Latent Labs LoRA, but it was really low res (no 3D model, prompts only). The 360 panorama looked great in VR (I made anime environments only, so it was easy to cheat the seam line), but projecting this onto a 3D model was something I was missing. This can turn out to be an efficient way to easily make 3D environments for VR games, especially static shooter games like The House of the Dead.
Thank you very much for this wonderful tutorial.
Nice one! The only thing missing would be something that uses the mesh combined with the texture to detect where stretching occurs in certain regions, and then calculates the missing texture for those areas so it doesn't fall apart when you change perspective. Then use some displacement maps to get more detail. Binga bonga, nice scene!
Excellent and inspirational. I am a beginner Blender user with plenty of 360 VR and some immersive VR world building experience. I wonder if I can walk through this and create a base model to iterate on. Just decided to challenge myself, your video was the inspiration. Thank you.
Amazing work and value you are providing for free 🙏🏻
You could create a depth pass from the generated image in Comfy and use it as a displacement or bump map back in Blender.
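For anyone who wants to wire that up as a script, here's a minimal bpy sketch of the idea (untested; the material name and "depth.png" are placeholders for whatever you export from Comfy):

```python
# Hook a depth image up as displacement on an existing Blender material.
# "ProjectedMaterial" and "depth.png" are hypothetical names.
import bpy

mat = bpy.data.materials["ProjectedMaterial"]
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Load the depth pass exported from ComfyUI and treat it as data, not color
img_node = nodes.new("ShaderNodeTexImage")
img_node.image = bpy.data.images.load("//depth.png")
img_node.image.colorspace_settings.name = "Non-Color"

# Drive a Displacement node with it and plug it into the material output
disp_node = nodes.new("ShaderNodeDisplacement")
disp_node.inputs["Scale"].default_value = 0.1  # tune to taste
links.new(img_node.outputs["Color"], disp_node.inputs["Height"])

out_node = next(n for n in nodes if n.type == "OUTPUT_MATERIAL")
links.new(disp_node.outputs["Displacement"], out_node.inputs["Displacement"])
```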
It's videos like these that keep me going. There is a lot to learn on how to do all these things. Your workflow was great. Something I can follow. Blender is getting a bit easier, but that isn't saying a whole lot. Thank you for putting the work into this tutorial. You just got a subscribe from me!
Each release of your videos gives me great expectations that, of course, are never disappointed. I share your interest in linking 3D modeling with AI. You are always at the forefront in this regard. Thank you very much for the workflows and your excellent tutorials.
This is pure insanity
This looks insane, can't wait to start testing all these workflows out
Wow!!!! This is exactly what I been looking for!! Thank you so much.
Do you think it's also possible to mix in the outlines in the Flux workflow?
This tutorial was incredible! Thank you, congratulations, and I wish you lots of success with your projects!
Great work man, thanks so much! I'll get you on Patreon!
Great work again Mick , you are genius my friend.
German brains at work 🙂
For your upcoming project, you might recreate the Tiger Scene from "Gladiator" set in the Colosseum, featuring an animated tiger restrained by a chain and a crowd simulation
Oh man. This is insane. Bravo. I'm going to try this. I find Blender painful but I need to push through to see this. I'm on a Mac, so wondering how I might see this inside my Quest 3?
Nice tutorial! Thanks again!
Thank you
Young Lurch
I "Rang"
And you definitely answered
Fantastic tutorial. Love the pacing as well
My suspension of disbelief was destroyed when you made the knight two stories tall. Like... come on man... he's towering over all of those archways and openings.... 😅 -- Anyway, this is a super cool process. Thanks for putting these tutorials and workflows together.
He clearly just has a macrophilia fetish and wanted his knight to give the Dwarven dwellers a big steppie. Come on, it's not that hard to believe.
I really like the fact that your hair is getting shorter and shorter. Great video as always
I was really stunned by that waterfall tho! I really like how the idea turned out, but can you please give us all a bit of info on how to implement animated water?
16:16 I used the workflow shown here! It works in a very similar way to the SDXL workflow, for example, but it uses AnimateDiff to create a looped animation! Fog, fire, water and clouds work extremely well!
I'm also hoping for a video explaining this animation part 😍
Wow, you're a genius!
Thanks for all your wondrous workflows.
That's really cool!
Once again a super duper tutorial. thx a lot!
What an amazing workflow.
Yeah, AI is the future of VR. It's going to blow past everything else once we get fast enough GPUs and people bring SD-style image generation into a controllable environment so you can walk around inside it. Then add AI-controlled characters and voice synthesis and you have the Holodeck. It's going to be INSANE.
Top notch sir! Thank you.
THIS IS AWESOME!!! Saves so much time!!!
damn, another subscription 😅really promising
Great tutorial, thank you for sharing this amazing video
Fantastic workflow. Thanks for sharing this. I wonder if it would be possible to separate some of the visuals by using that depth map. It would then be possible to better simulate the parallax effect due to distance?
ok this is crazy af.
you're a genius!
really amazing work! thank you for sharing. subbed! :)
So great!!!
Great tutorial!! Thanks!!
Great video, thanks for sharing. I've been trying to wrap my head around using this workflow to create true 32-bit HDRI files (EXR). So far I haven't seen any workflows for this. In theory you could use an i2i + ControlNet to generate the panoramas at different exposures and merge them? I'm curious if you've explored this.
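Not something from the video, but if the bracketed panoramas turn out consistent enough, merging them outside ComfyUI is only a few lines with OpenCV. A rough sketch, where the file names and relative exposure times are made up; whether AI-generated "exposures" are radiometrically consistent enough for a true HDR is the open question:

```python
# Merge bracketed panoramas into a 32-bit HDR image with OpenCV's Debevec merge.
import cv2
import numpy as np

files = ["pano_ev-2.png", "pano_ev0.png", "pano_ev+2.png"]  # hypothetical brackets
times = np.array([0.25, 1.0, 4.0], dtype=np.float32)        # assumed exposure times

imgs = [cv2.imread(f) for f in files]   # 8-bit BGR images
merge = cv2.createMergeDebevec()
hdr = merge.process(imgs, times)        # float32 linear radiance

# Radiance .hdr is the least fussy float output; writing .exr needs OpenEXR
# support enabled in your OpenCV build (OPENCV_IO_ENABLE_OPENEXR=1).
cv2.imwrite("pano_merged.hdr", hdr)
```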
Nice tutorial as usual ;)
This is crazy
Oh My!! That looks really difficult for me, but it seems really easy for you :) 🤕🤤
that's pretty awesome
Flux certainly does a great job, but is only suitable for users who don't care about the background at all
What background are you referring to?
@@FranzMetrixs I mean this unfortunate cooperation with Elon Musk. That makes Flux unusable for many users.
Unfortunately, unfiltered images generated with Flux appear on his platform X with a very dubious message!
@@FranzMetrixs Same thing I'm wondering.
Nah u 😂trolling
Leonardo textures
He pointed out they weren't from scratch
Nice work, thanks for sharing! I suggest Meshy 4 for 3D gen; it seems like the best of the options atm (it exports characters with animations too, as GLB or FBX).
amazing!
This is so good! Is it possible to import the meshes (buildings + sphere) and materials into Unreal? I imagine you have to do some sort of UV projection first? Thank you for sharing your work!
I thought the same. I think if you export the entire object from Blender as .fbx with the texture and material applied, it should work. I need to try it.
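A minimal sketch of that export from Blender's Python console (untested; the output path is a placeholder). Keep in mind the camera-projection material probably needs to be baked down to a normal UV texture first, or it won't survive the trip into Unreal:

```python
# Export the whole scene as FBX with textures copied/embedded.
import bpy

bpy.ops.export_scene.fbx(
    filepath="//scene_for_unreal.fbx",  # hypothetical output path
    use_selection=False,   # export everything, not just the selection
    path_mode="COPY",      # copy texture files next to the FBX
    embed_textures=True,   # and embed them in the FBX itself
)
```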
waiting for the 3D environment non stretch texture trick
Working on that!
👋 Looking forward to watching this video
Wow, amazing! I would love to create the most amazing scenes, like dystopian scenes, but I don't see how. I see your tutorials are pretty advanced; could you teach me, through your Patreon tier, to do something beyond the common Flux images?
Amazing!
Hi @Mickmumpitz, are you considering a video on a "Sora"-like or Runway ML tutorial and workflow? Would love to try that in ComfyUI
Superb tutorial, kudos. ❤️🔥
Woow, it's really fresh technology for me
Amazing!!!!
Leonardo allows you to enable tiling, btw. But anyway, AI equirect projection is usually not exactly equirect, but it's better than nothing!
Yeah true, but it's more for textures and things like that, so unfortunately it doesn't really work with those images.
very cool!
Gotta drop runway gen3 vid to vid into your flow!
Wow wow wow, this is so cool)
Very nice! Do you think you could generate trimsheets with ComfyUI to texture the 3D environment and assets?
Thank you for the tutorial. I'm looking forward to seeing how you solve the problem of moving around later on.
Great, great!
Very cool - great work!
Is there any course dedicated to generative AI in depth? I wish I could actually understand what each setting means.
Damn, Mick. You're brilliant. So many cool things in one short vid. Some of the little Blender shader tips alone are worth the time of this video. And then you pile all the Comfy and Leonardo and other stuff in a tight, crisp presentation . . . excellent.
Can you do this offline with something less involved than comfy ui? Easy diffusion or something else? Thanks.
You are a passionate artist. And it is contagious 😊
Hi, if I have less than 24GB VRAM (my GPU has 10GB), is it still doable?
Have a look at Ian Hubert's Compify plugin to transform your environment texture from an Emission to a Principled BSDF shader
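If you'd rather do that swap by hand without the add-on, here's a rough bpy sketch of the idea (this is not Compify itself; the material name is hypothetical, and it assumes a single image link drives the Emission color):

```python
# Rewire a material so the texture feeding an Emission shader feeds a
# Principled BSDF's Base Color instead.
import bpy

mat = bpy.data.materials["Panorama_Projection"]  # hypothetical material name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

emission = next(n for n in nodes if n.type == "EMISSION")
src_socket = emission.inputs["Color"].links[0].from_socket  # whatever fed the emission

bsdf = nodes.new("ShaderNodeBsdfPrincipled")
links.new(src_socket, bsdf.inputs["Base Color"])

out_node = next(n for n in nodes if n.type == "OUTPUT_MATERIAL")
links.new(bsdf.outputs["BSDF"], out_node.inputs["Surface"])
nodes.remove(emission)
```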
How can I export this to Unity? It seems that the equirectangular projection only works in Blender
Did you just throw on an Oculus and zap into the Blender scene? Wow, I didn't know it supported that. I have a bunch of headsets lying around lol
Yeah, you just need to activate the "VR Scene Inspection" add-on, connect the headset (I used SteamVR for this) and click "Start VR Session" in Blender. I was also surprised that it's so easy and will use it more often now!
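The same two clicks as a quick console sketch, for anyone who wants to script it (the add-on's module name is my assumption, and SteamVR/OpenXR must already be running for the session to start):

```python
# Enable the VR Scene Inspection add-on and toggle the VR session from Python.
import bpy

bpy.ops.preferences.addon_enable(module="viewport_vr_preview")  # assumed module name
bpy.ops.wm.xr_session_toggle()  # same as clicking "Start VR Session"
```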
@@mickmumpitz I've been dying to try that with Unreal Engine, but there's all this conflicting information about how things have changed, how it works, or that it's not working lol. Unreal Engine is already so confusing. I never even bothered
@@mickmumpitz It's super powerful as you can use the headset as a camera. For POV videos you can crawl under things and do all kinds of interesting angles that would be a pain with normal camera animating.
@@resemblanceai niceee. Are you allowed to build out the scene while in VR with your controllers or is it just for viewing??
@@BabylonBaller TBH I am not sure. I have a feeling you probably can. I did the headset-as-a-camera thing in Unity, with baking. I think you can bake the lights in Blender too. Example: ua-cam.com/video/MSRrpgVrOoQ/v-deo.html
Please also give some guidelines for Mac users. A lot of things don't work and require debugging for Mac users with M1/M2 chips
Such awesome work bro ❤
Wouldn't this method only texture the facing side of the objects, forcing the viewer to remain in the middle of the scene? For example, if you were to go to the other side of the pillars, wouldn't you see nothing, since nothing was projected there?
You basically see that in the video, and he also mentions it when looking through the scene in VR
Is it possible to render passes like the depth and line passes in ComfyUI?
Would a 3060 Ti run the local AI to do this?
Can anyone recommend the best resources to learn about Stable Diffusion, LoRAs, models, ControlNets, etc.? Any UA-cam channel?
wow
Thank you!❤😮
Thank YOU! 😊
The image rendered with the outline shader only displays the object I selected and not all the objects present in the project. Does anyone know the reason for this?
Still hoping one day we get all of this integrated into an AI program, so all I have to do is make a 3D scene, type some prompts, adjust some values, and bam, a whole finished artwork exactly the way I wanted.
let's hope we'll never get there :P
Is there a program where I can just prompt for these outputs? If not, why not make that? Why even program anymore? LLMs should be able to generate the right code for this.
Amazing... I'll stick with my NVIDIA stock... because I tried it and FLUX needs a really high-end GPU; at least my RTX 2080 is very slow on that model.
Can we use these models in Maya?
Are you still using your 2070 Super? I use a 3070 right now and am thinking of upgrading to a 4090, but if you're still using your old GPU I think I'm gonna wait for the 5000 series to upgrade
The exact same prompt didn't give me 360° pictures at all, unfortunately.
Any idea about this?
Your workflow.json is no longer linked in the Google Docs document?
It's attached to the Patreon post!
So how can we import this HDRI 3D environment into Unity?
It's really important, please help
Probably by taking advantage of the procedural nodes in Blender and knowing a little bit about texturing and lighting, you would have waaay more control and better quality. Or, I don't know, just use Unreal with Megascans, etc. lol.
A little bit overcomplicated for the result you get... Good case study of a workflow, but a weak one.
Bruh this needs a full video not 5 seconds 😭 16:28
Into CogVideoX?
Now Skybox AI by Blockade Labs makes sense. That's an awesome approach
So, how did the Oculus Quest part work? Just connecting the Quest? And why do you see a 3D effect? This makes no sense to me.
So inspiring! Thank you!
Did you play around with Lightcraft Jetset already? Not only the Cinema version but also the free iPhone app. Would be great to learn about a Blender Jetset workflow. 👍
It looks like you forgot: you left space for a follow-up link on your Patreon page for this a month or two ago but never posted it. Did you forget or change your mind?
I do have a VR headset, but my laptop doesn't work with it… kinda sucks…
He must have a good GPU. @_@
Actually, why do you always use Blender for depth maps? I saw you using that in the other video about set extensions or something... You would get much more detailed depth maps with Depth Anything V2, with zero effort. So... WHY
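For reference, running Depth Anything V2 on a rendered frame is only a few lines with the Hugging Face transformers pipeline; a minimal sketch, where the model id is my assumption (check the hub for the exact name):

```python
# Estimate a depth map for a single frame with Depth Anything V2.
from transformers import pipeline
from PIL import Image

pipe = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",  # assumed model id
)
result = pipe(Image.open("frame.png"))
result["depth"].save("frame_depth.png")  # PIL image with the estimated depth
```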