This is so exciting, thanks Stephen for your videos
Trying to find the right software that's the right fit will be the hardest part for me!
This is crazy... thank you for the heads-up, Stephen!
I know, right!? These tools are getting really interesting 🤔
Thank you! Useful tools. Good luck!
Thank you! Hope the tools are useful for you too!
Image to 3D is one of the few AI innovations I am actually still excited for. "Generative" AI is not actually generative or intelligent and amounts to little more than interpolated copyright infringement. But being able to infer a sense of depth from static images? Infinitely more useful for actual artists.
Some may argue that the generative aspect of the image generators lies simply in the iterations, especially when combined with processes such as ControlNet.
I do agree, being able to extract depth and 3-Dimensional geometry will make these explorations more useful 🤘
Thank you, VERY informative and exciting.
Glad it was useful!
Hey Stephen, just sharing some thoughts: the outer side generated from the 360° image got me thinking about the possibility of generating an "inverse" panoramic, which would rotate around an object, showing every side of the object to the 3D generator. I tried with an image of a face, but the results were not as good as expected; I think it is a matter of changing some parameters in the AI algorithm.
That's an interesting thought and approach. Anything to rebuild imagery from multiple perspectives will help in constructing a usable mesh for further development. These tools are advancing so quickly that even some of these thought processes will seem antiquated by the time new AI models are developed.
Excellent! Thanks a lot.
Right on 🤘
Does Midjourney work from different workflows, such as quick massing models, sketches, floor plans, site plans, and prompts all at once?
It can, but there are other programs coming out specifically designed for architecture that can produce better massing models / floor plans. Midjourney is not very site-specific yet.
It looks like when you give Midjourney a prompt plus some sketches and site plan info, the program uses a mind of its own, and you get a beautiful render but with no relevance to the information sent. How can you minimize that?
I wonder if we can upload a 3D mesh or any other 3D CAD file to an AI program to create photorealistic renders instantly. It would save a lot of time!
That would be great. There are some programs that are getting close; have you tried Veras?
@@StephenCoorlas Really cool. Thank you!
Scenemaker ai
Keeps getting better and better... thanks Stephen 🤝
These tools will disrupt the current standards for practice. And this is the worst these tools will ever be!
I think it is more like an image-to-depth-map AI, but it looks awesome!
It is, but the depth map AI generates a 3D model, so it's pretty much there!
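For anyone curious what "the depth-map AI generates a 3D model" means mechanically, here is a minimal sketch (not the tool's actual code) that turns a grayscale depth image into a displaced triangle mesh and writes it as an OBJ. The filename "depth.png" and the 50.0 height scale are placeholder assumptions.

```python
# Minimal depth-map-to-mesh sketch: one vertex per pixel, two
# triangles per pixel quad (matching the triangulated meshes these
# tools export). "depth.png" and the 50.0 scale are placeholders.
import numpy as np
from PIL import Image

depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32)
depth /= 255.0  # normalize brightness to 0..1
h, w = depth.shape

vertices = [(x, y, depth[y, x] * 50.0) for y in range(h) for x in range(w)]

faces = []
for y in range(h - 1):
    for x in range(w - 1):
        i = y * w + x
        faces.append((i, i + 1, i + w))          # upper-left triangle
        faces.append((i + 1, i + w + 1, i + w))  # lower-right triangle

# Write a simple OBJ so the mesh opens in Blender, Rhino, etc.
with open("depth_mesh.obj", "w") as f:
    for vx, vy, vz in vertices:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces:
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # OBJ indices are 1-based
```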
amazing
Agreed 🤘🏼
Hi, good job. Is it possible to export to FBX with textures?
Great question, I have not tried yet, but I believe I have seen others attempting that. I'll need to take a deeper look.
@@StephenCoorlas Or else, is it possible to convert a GLB to FBX with textures?
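I haven't verified this end to end, but one plausible route is running Blender headless: import the GLB, then export FBX with the textures packed in. This is a sketch under that assumption; the file paths are placeholders.

```python
# Hypothetical GLB-to-FBX conversion script for Blender, run as:
#   blender --background --python glb_to_fbx.py
import bpy

# Start from an empty scene so only the imported model gets exported.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.import_scene.gltf(filepath="/path/to/model.glb")

# path_mode='COPY' with embed_textures=True asks the FBX exporter
# to pack the texture images into the .fbx file itself.
bpy.ops.export_scene.fbx(
    filepath="/path/to/model.fbx",
    embed_textures=True,
    path_mode="COPY",
)
```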
Thanks for the video. I have a question: I can't find the download icon for Symmetry-Driven 3D Reconstruction from Concept Sketches. Thanks.
The link to their research is in the video description. Once you're there, scroll down and you should be able to download their research paper and video.
@@StephenCoorlas thanks!
@@StephenCoorlas Sorry again, I saw the video but I cannot find the link to download the program. Thanks...
I make cinematics in Unreal Engine. My latest interest is to take an image of an environment from Midjourney, create a depth map of it, and extrude it a tad more inside of Blender before exporting it over to Unreal Engine, turning specular to zero to prevent blotchy lighting on the image, and film my scene. Sadly, to date, I have yet to pull it off with an environment, but I have had major luck with smaller objects (think statues, walls, bridges, etc.). I am about to run through your videos in the hopes of finding something I can use for this. I am confident that sometime in the next year or two someone will create that which I seek.
Hey - That sounds amazing. This might be the only video I have on the subject of image to 3D workflows, but I do agree that in the very near future there will be apps/online platforms available to convert images to usable 3D geometry.
@@StephenCoorlas I think I actually have something to test out. I don't know how familiar you are with Unreal Engine, but you basically create the depth map with LeiaPix and put it all together in Blender. Export it to Unreal Engine, and then, using the Lattice deform tool in the modeling section of Unreal Engine, you extrude the ground outwards and upwards. I have lost zero quality in my first test with this, and it looks 100% correct.
I'd imagine you'd have to ensure that whatever scene you work with has a decent width to it, and perhaps extrude it in an arc shape so the camera pan looks correct while filming, but I thought I'd share since I had been trying to solve this problem for a few days and I believe this might be the solution.
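For readers who want to try the Blender leg of this workflow, here is one hedged way to script it: a heavily subdivided plane displaced by the depth map, then an FBX export for Unreal. The paths, subdivision level, and displacement strength are all assumptions to tune, and this is a sketch of the general technique rather than the commenter's exact steps.

```python
# Sketch: depth-map-driven displacement in Blender, then FBX export.
import bpy

bpy.ops.mesh.primitive_plane_add(size=10.0)
plane = bpy.context.active_object

# Dense geometry so the displacement has vertices to move.
sub = plane.modifiers.new("Subdiv", "SUBSURF")
sub.subdivision_type = "SIMPLE"
sub.levels = sub.render_levels = 6

# The depth map (e.g. from LeiaPix) drives a Displace modifier.
tex = bpy.data.textures.new("DepthMap", type="IMAGE")
tex.image = bpy.data.images.load("/path/to/depth_map.png")  # placeholder
disp = plane.modifiers.new("Displace", "DISPLACE")
disp.texture = tex
disp.strength = 2.0  # how far the depth pushes the surface; tune per scene

bpy.ops.export_scene.fbx(filepath="/path/to/scene.fbx")  # placeholder
```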
I very much hope the meshes are all clean quads.
They're triangulated... but I'd be curious to see if you can convert them to quads in Blender or Rhino.
@@StephenCoorlas In blender? Can you?
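For what it's worth, Blender does ship a "Tris to Quads" operation that can be tried on these imports; whether it yields clean quads depends entirely on the mesh. A minimal sketch, with threshold values that are guesses to tune:

```python
# Attempt to merge the imported triangles into quads in Blender.
import bpy, math

obj = bpy.context.active_object  # the imported, triangulated mesh
bpy.ops.object.mode_set(mode="EDIT")
bpy.ops.mesh.select_all(action="SELECT")
bpy.ops.mesh.tris_convert_to_quads(
    face_threshold=math.radians(40.0),   # max angle between joined triangles
    shape_threshold=math.radians(40.0),  # max deviation from a flat quad
)
bpy.ops.object.mode_set(mode="OBJECT")
```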
Great tutorial as always
Thanks so much! 🤘🏼
Do you know what happens to the uploaded image that the 3D GLB file was generated from? Is it stored for learning, publicly available, or discarded after I use it? In the world of proprietary and NDA work, is there an issue with what is uploaded in that realm?
@@alastairbattson5123 that I do not know. You would need to reference their terms and conditions.
Hi, the thygate/stable-diffusion-webui-depthmap-script can inpaint the meshes already :)
Oh wow - I'll need to check that out! Thanks for the heads up!
@@StephenCoorlas No worries, the development speed of these tools is insane right now and stuff like this is easy to miss
What software did you use to open the 3D model from ZoeDepth?
That's just 3D Builder, which should come standard with Windows 10 or higher. The 3D files are .glb, which should open in Rhino, Blender, or SketchUp with an extension.
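If the menu route in Blender is unclear, the scripted equivalent is a couple of lines in its Python console (Blender ships with the glTF 2.0 importer enabled by default); the path below is a placeholder:

```python
# Import a .glb and list what arrived, from Blender's Python console.
import bpy

bpy.ops.import_scene.gltf(filepath="/path/to/model.glb")
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)
```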
Once you get your render done by Midjourney, what is the best way to go about the construction drawings and details, and the best way for the construction itself, especially for parametric or free-form designs, so that the cost of the formwork does not become an impediment? The 3D printer can work much better on walls, but how about curved ceilings, cloth formwork, or what?
That's a large gap that hasn't necessarily been solved yet. You would need to break down those processes into several steps, which likely involve much human intervention, to make the design hold true to the AI-generated image through construction.
I'd like to ask: why is the dlb model exported from ZoeDepth missing its textures when imported into Blender, and is there any way to fix it? Sincere thanks if you can answer my question.
Sorry, I meant the exported glb model.
Sorry, I don't know why the textures aren't showing in Blender.
Could you share a video about how we can convert these prototypes into a completely final 3D model?
Yes, this is an important topic in advancing the usefulness of these tools. I'll look into covering that soon.
@@StephenCoorlas Thank you bro, this video will be very useful for me.
Very cool!! This is a very promising step :)!! Question for you: I'm using Blender, and when I import, there are no materials or textures. Any thoughts? Thanks :)
Thanks! I've received that comment several times. I can't say I've experimented with bringing the models into Blender, so I'm not sure why that's occurring. I think it has something to do with how Blender is reading the material mapping.
@@StephenCoorlas Thanks for the reply :) It's definitely very interesting, and it will be cool to see how it develops :)
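For anyone else hitting the missing-texture issue, one way to narrow it down (a debugging sketch, not a confirmed fix) is to check whether the imported materials actually carry an image texture node, and to make sure the viewport is in Material Preview shading, since Solid shading hides textures:

```python
# Inspect imported materials for image textures, then switch the
# 3D viewport to Material Preview so textures can display at all.
import bpy

for obj in bpy.context.selected_objects:
    for slot in obj.material_slots:
        mat = slot.material
        if mat and mat.use_nodes:
            images = [n.image.name for n in mat.node_tree.nodes
                      if n.type == "TEX_IMAGE" and n.image]
            print(obj.name, "->", images or "no image texture found")

for area in bpy.context.screen.areas:
    if area.type == "VIEW_3D":
        area.spaces.active.shading.type = "MATERIAL"
```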
Wow!!
Yes, wow indeed!
Also, traditional (or not so traditional, but in this sense non-AI) workflows will get AI in them soon. I sometimes do contracting for spacedesigner3D, and I wanted to add a feature for AI-enhanced renderings, only to find out I was late and it's already being worked on by another contractor, haha. So one or two versions down the line, you will probably see it.
This means the products you use will have AI features even if you keep your product-based workflow: you can create a traditional floor plan as the "basic idea of what is going on," then instantly create variations if you want, which is very useful when you are right in a talk with a customer, for example.
P.S.: The other contractor who added that feature is also Hungarian, like me, so it feels nice that the feature is still coming from around here, even if not from me personally ;-)
It will inevitably be integrated into mainstream platforms and programs, so it is important to at least be aware that others will be using these tools.
That's amazing. It's super effective with flat boxes and simple stuff, so I can get rid of all the boring images-as-planes cutouts for background elements. It can really help with small kitbashing. Do you know any other software or tools that can do the same with a video? Like photogrammetry... but not photogrammetry...(?)
Glad you found this useful. I'm not aware of anything that uses video to generate 3D geometry, although that's a fascinating thought. You might want to look into NeRF technology, as it seems the most promising for what you're describing.
On the pano/360 to 3D, a basic projection of the image onto a sphere/box produces better results, right? Can you explain why this current approach is a game changer compared to that for pano to 3D? Also, the quality is better with that technique.
The sphere projection doesn't contain actual 3D mesh geometry. This approach allows you to actually pan through the 3D model, not just stand in one place and rotate.
@@StephenCoorlas Not completely true: if you, for example, create a sphere, project a pano image onto it, and make it, let's say, 50m in radius, you could walk freely inside it using ARKit. Adding depth with ARKit or any other rendering engine is quite trivial in this case.
Also, how would you perceive depth in the case above?
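To make the difference concrete, here is a small sketch of the geometry involved: a fixed-radius sphere projection maps every equirectangular pixel to the same distance r, while a depth-driven mesh gives each pixel its own r, which is what produces parallax when you move through the scene. The resolutions and radii below are arbitrary examples.

```python
# Equirectangular pixel (u, v) at distance r -> 3D point.
import numpy as np

def pano_pixel_to_xyz(u, v, r, width, height):
    lon = (u / width) * 2 * np.pi - np.pi   # -pi..pi around the viewer
    lat = np.pi / 2 - (v / height) * np.pi  # +pi/2 at top, -pi/2 at bottom
    x = r * np.cos(lat) * np.cos(lon)
    y = r * np.cos(lat) * np.sin(lon)
    z = r * np.sin(lat)
    return x, y, z

# Sphere projection: r is a constant (say 50 m) for every pixel.
print(pano_pixel_to_xyz(0, 512, 50.0, 2048, 1024))
# Depth-driven mesh: r comes from the per-pixel depth estimate.
print(pano_pixel_to_xyz(0, 512, 3.2, 2048, 1024))
```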
Thank you for the knowledge, Stephen. Do you think 3D Generative Adversarial Network (GAN)-driven 3D model creation from 2D Midjourney image input will be used in the future? This could be revolutionary in architecture, I think.
Hey - Yes, I do believe this is a very early adaptation of this technology. The current models are a bit messy, with many polygons, but it's getting easier to imagine GANs within the actual 3D model creation process that can clean up, simplify, and interpolate usable geometry for architects. There are many avenues this technology can go down, and it's exciting to discuss the possibilities.
❤
🤘
What if AI could turn 2D aerial imagery into a 3D environment?
That's the idea - people are programming AI to be better at interpolating things like this. It's all in the training.
Hi! I'm doing a project on nearly the exact topic you talk about at the end. I feel like I have to stay up to date with these experimental processes, but while doing so I might neglect the more fundamental aspects in the chase to stay ahead of the curve. I wonder what you think about this topic.
There are different types of pursuits in life. Some people always chase the "newest" thing, while others pursue a craft, hobby, or interest while utilizing a tool or technology that suits their needs. The hype can get very exciting and enticing, but if you are always chasing it, there's little time left to settle down and pursue anything deeper. My advice is to think about what really interests you, focus on that topic, and incrementally pursue technologies or philosophies that assist you in developing your own theories while offering new perspectives to keep things fresh 🤘
@@StephenCoorlas Wow, I really appreciate the response. Looking forward to any updates on the topic!
Does anyone know how I can open up that 16-bit raw depth (multiplier: 256) so I can use it in Blender?
Never mind, y'all. The image appears black, but the values are just really spread out. If you condense the levels down in Photoshop, you can see the image.
I've had a few people inquire about the missing image in Blender, but I'm not certain what the issue is or why it doesn't show on the imported models.
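The Photoshop "condense the levels" trick can also be done in a few lines of Python, in case that helps anyone else staring at a seemingly black 16-bit raw depth image. The filenames are placeholders, and dividing by 256 (the multiplier mentioned above) to recover metric depth is an assumption about how the tool packs its values:

```python
# Normalize a 16-bit raw depth PNG so it is visible on screen.
import numpy as np
from PIL import Image

raw = np.asarray(Image.open("raw_depth.png"), dtype=np.float32)
depth = raw / 256.0  # undo the 256x multiplier (assumed metric depth)

# Stretch to the full 8-bit range -- same effect as Photoshop levels.
rng = float(depth.max() - depth.min()) or 1.0
view = ((depth - depth.min()) / rng * 255.0).astype(np.uint8)
Image.fromarray(view).save("depth_view.png")
```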
Why did you make the ending sad?
Just thoughts - they're always evolving!
Thanks for the video. AI looks like it will, very quickly now, wipe out illustration, graphic design, and maybe photography and God knows what else as viable professions. The 800-pound gorilla in the corner in architecture is going to be the same issue. AI will soon be spitting out not just wacky, slick Zaha Hadid concept images in seconds, but complete, ready-to-build BIM models; so where does an architect fit into that? It's going to be a wild ride over the next few years, that's for sure.
Yes - these are great thoughts, and it's up to us to find our place, or rather to control how AI is implemented into our software, workflows, and processes. At least during the beginning of this transition, AI will always need to be monitored by a human, so we need to remain educated and experienced to ensure the tools are developing content to our expectations.