Matt Hallett Visual
Canada
Joined 26 Nov 2016
Learn the most practical and advanced AI techniques for Architectural Visualization. I've been developing methods to advance the rendering process since 2022. Subscribe for expert insights and stay updated in architectural visualization, or check out my dedicated website where I post all my lessons:
www.hallett-ai.com/
ua-cam.com/channels/deflqUJbAsL8k-TT6Pd0OQ.html
Image to 3D to ComfyUI: The Easy Way!
Transform 2D images into quality 3D models with ease! This guide shows you how to use a free Hugging Face space to convert an image to a 3D model and render incredible imagery in seconds using ComfyUI. No special nodes or complicated installs required; just follow along! Workflow link included below for seamless setup. Perfect for creators of all skill levels.
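If you'd rather script the Hugging Face space than click through it, here's a hedged sketch using gradio_client. The endpoint name and parameter below are hypothetical, so check view_api() for the real signature before relying on this.

```python
from gradio_client import Client, handle_file

client = Client("JeffreyXiang/TRELLIS")
client.view_api()  # prints the space's real endpoints and argument names

# Hypothetical call shape; replace api_name and arguments with what view_api() reports.
result = client.predict(
    image=handle_file("facade_photo.png"),  # hypothetical parameter name
    api_name="/image_to_3d",                # hypothetical endpoint name
)
print(result)  # typically a file path to the generated 3D asset
```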
# Links from the Video #
huggingface.co/spaces/JeffreyXiang/TRELLIS
civitai.com/user/MattHVisual
civitai.com/models/1156226/3d-or-live-view-comfyui-workflow
# Contact Links #
Website: hallettvisual.com/
Website AI for Architecture: www.hallett-ai.com/
Instagram: hallettvisual
Facebook: hallettvisual
Linkedin: www.linkedin.com/in/matthew-hallett-041a3881
Views: 3,767
Videos
How I Quickly Created This Stunning Image in 3ds Max Using TyDiffusion
Views: 2.4K · 3 months ago
Discover how to quickly create stunning images using TyDiffusion inside 3ds Max. In this video, I’ll walk you through a real example, showing just how simple your scene can be while still achieving beautiful, accurate results in minutes. # Links from the Video # docs.tyflow.com/tyflow_AI/tyDiffusion/ github.com/LykosAI/StabilityMatrix civitai.com/models/866644/greece-seaside-from-above # Contac...
A Quick 5-Minute Introduction: How I Turn Day into Night Using AI
Views: 2.4K · 4 months ago
You may have seen my post where I transformed a low-resolution image of London’s 30 St Mary Axe (The Gherkin) into a stunning night photo. The heavy lifting is done by the SDXL LoRA model I’m offering. In this quick 5-minute introduction, I’ll show you how to test out the process yourself before committing to any purchase. # Links from the Video # civitai.com/models/119229/zavychromaxl github.c...
WWII Movie Trailer Made with Various AI Tools
Views: 1.3K · 5 months ago
A faux WWII movie trailer made with images generated in Stable Diffusion and Flux, plus a Flux LoRA trained on 100 frames from WWII movies for mood and era-specific details. Image-to-video was done with Runway Gen 3 and local Stable Video Diffusion. These clips are heavily cherry-picked; even Runway has difficulty with this type of action, and Kling produced nothing usable, as all its results had modern artifacts. P...
Multi-Camera Texture Baking with TyDiffusion
Views: 3.8K · 5 months ago
Learn how to master texture baking in TyDiffusion for 3ds Max. I’ll show you how to use a powerful modifier to unwrap and project multiple generations onto a single texture, ensuring seamless coverage across your entire object. # Links from the Video # docs.tyflow.com/tyflow_AI/tyDiffusion/ github.com/LykosAI/StabilityMatrix # Contact Links # Website: hallettvisual.com/ Website AI for Architect...
Practical Introduction for TyDiffusion
Views: 9K · 6 months ago
TyDiffusion is an implementation of Stable Diffusion in 3ds Max. In this video I'll show you the theory and help you understand how Stable Diffusion works in a practical, everyday sense with real-world examples. # Links from the Video # docs.tyflow.com/tyflow_AI/tyDiffusion/ # Contact Links # Website: hallettvisual.com/ Website AI for Architecture: www.hallett-ai.com/ Instagram: hall...
The Easiest Installer and Manager for All Things Stable Diffusion
Views: 1.9K · 7 months ago
Stability Matrix makes installing and managing your various Stable Diffusion apps super easy, and allows you to use all your models from a single directory. I'll show you my suggested settings and how to get started in this quick video. # Links from the Video # github.com/LykosAI/StabilityMatrix # Contact Links # Website: hallettvisual.com/ Website AI for Architecture: www.hallett-ai.com/ Insta...
Achieving Hyper-Realistic Product Renderings in 4K Detail with AI
Views: 1.1K · 11 months ago
Transforming a product background and adding shadows is easy. Let's elevate the basics with professional techniques, adding dynamic lighting, highlights, and shadows. Your client's product should look like it was PHOTOGRAPHED in the space. I'll show you how in this video. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links f...
Accurate Variations using Z-Depth Element and Stable Diffusion
Views: 4.7K · 11 months ago
Skip the preprocessor and use a perfect Z-depth map from your rendering elements. This method works with any rendering engine, is faster, and provides much more accurate results. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/140737/albedobase-xl Collection of S...
Turn 3D Characters Realistic with One Click in Automatic1111
Views: 3.3K · 1 year ago
I'll show you the settings and extension required to make your rendered 3D characters realistic with Stable Diffusion and Automatic1111. If you need Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/132632/epicphotogasm Upscale Model Database: openmodeldb.info/ # Personal Links...
Upscale and Enhance with ADDED DETAIL to 4K + (Better than Topaz)
Views: 18K · 1 year ago
Similar to Krea and Magnific but offline using Stable Diffusion. Just follow these steps and enhance a low resolution image better than you ever thought possible. If you need to install Controlnet and Automatic1111, please check my video and written descriptions here: www.hallett-ai.com/getting-started-free # Links from the Video # Checkpoint Model: civitai.com/models/132632/epicphotogasm Upsca...
Cinematic Text to Video with Stable Diffusion in 2K
Views: 2.6K · 1 year ago
Over one minute of the highest-quality text-to-video animations: a random selection of what's possible to generate locally on an RTX 4090.
Fooocus is Stable Diffusion's Answer to Midjourney | Now with Subtitles in 13 Languages
Views: 10K · 1 year ago
Fooocus is free, open source, and incredibly fast and easy to use. I'll show you how to install, configure model paths, and get started using Fooocus MRE in your architectural workflow. # Links from the Video # Website and Shop: hallettvisual.com/downloads Install Fooocus: github.com/MoonRide303/Fooocus-MRE # Personal Links # Website: hallettvisual.com/ Instagram: hallettvisual F...
Fooocus is Stable Diffusion's Answer to Midjourney [Español]
Views: 480 · 1 year ago
Fooocus is free, open source, and incredibly fast and easy to use. I'll show you how to install, configure model paths, and get started using Fooocus MRE in your architectural workflow. # Links from the Video # Website and Shop: hallettvisual.com/downloads Install Fooocus: github.com/MoonRide303/Fooocus-MRE # Personal Links # Website: hallettvisual.com/ Instagram: instagra...
Introduction to ComfyUI for Architecture | The Node Based Alternative to Automatic1111
Views: 28K · 1 year ago
ComfyUI is free, open source, and offers more customization than Automatic1111. Now with subtitles in 13 languages. # Links from the Video # Website and Shop: hallettvisual.com/downloads Install ComfyUI: github.com/comfyanonymous/ComfyUI ComfyUI Manager: github.com/ltdrdata/ComfyUI-Manager Git for Windows: gitforwindows.org/ # Personal Links # Website: hallettvisual.com/ Instag...
UPDATED: Getting Started with AI for Architecture
Views: 2K · 1 year ago
Mountain Lake House Stable Diffusion Animation
Views: 469 · 1 year ago
Getting Started with Stable Diffusion AI for Architecture
Views: 6K · 1 year ago
Love the content. Keep it up.
Thanks!!! The tutorial is wonderful!
Now look at Enrico's nodes, Compositor V3. It would be great if you could create a scene with multiple 3D objects (uploaded in a batch).
hello, where can we download the workflow?
Links in the description, but here you go. civitai.com/models/1156226/3d-or-live-view-comfyui-workflow
Hey Matt. I'm really curious about your model testing and selection process. I've been spending a lot of time trying out models, and I'm no closer to method than madness :)
In Forge or A1111 there's a script called XYZ Plot. Replace SEED with CHECKPOINT and select all the checkpoints you want to test. It will process each checkpoint one by one with your prompt and a fixed seed, and produce a single wide image with all the results labeled by checkpoint. I use this on a 1K image in the img2img tab with a denoise of around 0.5. What you want to look for are the changes you're after: does the image contain the style and elements you want? You can do this for sampler types, denoising, anything where you need an objective comparison. You'll never find the ultimate checkpoint unless you build one yourself, but find one that works for the project and stick with it for a while. Eventually you'll have five or so you experiment with. It's not an exact science, but nothing about AI seems to be. If you'd rather script the comparison, see the sketch below.
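A minimal sketch of scripting that same comparison outside the XYZ Plot UI, assuming A1111/Forge is running locally with the --api flag; the prompt and file names are placeholders.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"

with open("base_render_1k.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

# Ask the running instance for every installed checkpoint.
checkpoints = [m["title"] for m in requests.get(f"{URL}/sdapi/v1/sd-models").json()]

for ckpt in checkpoints:
    payload = {
        "init_images": [init_image],
        "prompt": "modern lakeside house, golden hour, photorealistic",  # placeholder prompt
        "seed": 123456789,          # fixed seed so only the checkpoint changes
        "denoising_strength": 0.5,  # the ~0.5 denoise suggested above
        "override_settings": {"sd_model_checkpoint": ckpt},
    }
    result = requests.post(f"{URL}/sdapi/v1/img2img", json=payload).json()
    safe_name = ckpt.replace("\\", "_").replace("/", "_").split(" ")[0]
    with open(f"test_{safe_name}.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))
```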
Thank you Matt, keep up the good work!
Great work Matt! 👏
I don't get the option to download models, except for the standard one. What should I do? Please tell me.
Thanks for posting your video. Do you know of a way to get to a 32K upscale? What kind of hardware would I need to do it?
32K is half a billion pixels, or 500 megapixels. If a client asks you for that, you can push back and ask why. Unless you want some kind of interactive zooming digital art piece, there's no print application that would ever take advantage of it, even on a close-up billboard 10 feet high. And that's speaking from experience. Assuming you still want to produce a 32K image, you can do it with this technique, even on an average GPU. You just have to keep isolating the upscaled image into manageable crops and stitch the final image together in post (there's a crop-and-stitch sketch at the end of this thread). Eventually, however, large smooth areas will hallucinate and start to produce weird, freakish imagery in the sky or on smooth walls, for example. My suggestion is to upscale as I've shown to 10-12K using several steps, each adding roughly 2.5K: 2K to 4K to 7K to 10K (approx). After that, focus on upscaling only the areas where you need further detail, not the sky, water, or smooth areas; you can, but those have to be at a lower denoise, like 0.25. Now that you've got me thinking about this, email me and I'll help you further. I love a good challenge!
@@matthallettai Thanks for your reply. I'm very familiar with working with very large multi-gigabyte files; I do it on a daily basis. I produce very large murals, 10+ feet tall at 150 dpi, that are viewed at less than an arm's length. I have custom scanning cameras that can scan much larger than 32K. Just trying to figure out how far we can push AI-generated images and what hardware is required to do it.
@@matthallettai Yes, it seems like large smooth areas become problematic.
You'd be the exception then! I have to mention it because so many clients expect high resolutions but don't know why. If you can handle 32K files comfortably, you have all that's required. The only consideration is your GPU, but you can get around that by renting GPUs online by the minute. I'm interested in this; email me if you'd like help outside of UA-cam comments. Just find my website in the details; my email is on the contact page.
@@matthallettai Thanks. I'll send you an email.
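For anyone following this thread, here's a minimal sketch of the crop-and-stitch idea in Pillow. upscale_tile() is a placeholder for whatever img2img upscale step you run on each crop, and real use needs overlapping tiles blended at the seams to hide them.

```python
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow 500+ megapixel images

def upscale_tile(tile: Image.Image, factor: int) -> Image.Image:
    # Placeholder: run the crop through your SD upscale workflow here.
    return tile.resize((tile.width * factor, tile.height * factor), Image.LANCZOS)

def tiled_upscale(src: Image.Image, tile: int = 2048, factor: int = 2) -> Image.Image:
    out = Image.new("RGB", (src.width * factor, src.height * factor))
    for y in range(0, src.height, tile):
        for x in range(0, src.width, tile):
            crop = src.crop((x, y, min(x + tile, src.width), min(y + tile, src.height)))
            out.paste(upscale_tile(crop, factor), (x * factor, y * factor))
    return out

tiled_upscale(Image.open("render_16k.png")).save("render_32k.png")
```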
Would love a ComfyUI tutorial for this. I intend to buy your AI course over Xmas.
Hey, this is Matt, but using my personal account. I don't cover much Comfy, and even after all this time I still don't use it if I can accomplish the same thing with Forge. I've made the most insane Comfy workflows, and when I go to revisit them I can never remember all the little adjustments. You can easily enough replicate this with Comfy; you just have to install the Ultimate Upscaler node, and there are a few free workflows out there that do the same thing I present. You can also email me and I'll send you my Comfy workflow collection.
@matthallett4126 Cheers Matt. I work in-house at an arch practice as a visualiser, and they have pushed ComfyUI (we had to license it), so no access to Automatic1111. I think it would be much simpler to use that instead, but no one listens to me. Anyway, cheers for the reply. I'll reach out over Christmas via email 👍
Not sure what I'm doing wrong, but I keep getting an error on the LeReS node saying: Comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\lllyasviel\Annotators\.cache\huggingface\download\latest_net_G.pth.50ec735d74ed6499562d898f41b49343e521808b8dae589aa3c2f5c9ac9f7462.incomplete. Any help would be appreciated.
Clear your Hugging Face cache; it's under C:\Users somewhere. Then delete that subdirectory in \custom_nodes\ and start over. Happens all the time!
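For reference, a small sketch of that cleanup, assuming default cache locations; adjust both paths to your install.

```python
import shutil
from pathlib import Path

# Typical locations; on Windows the HF cache lives under C:\Users\<you>\.cache\huggingface
hf_cache = Path.home() / ".cache" / "huggingface"
annotators = Path(r"ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts")  # adjust to your install

for folder in (hf_cache, annotators):
    if folder.exists():
        shutil.rmtree(folder)       # the node re-downloads what it needs on the next run
        print(f"removed {folder}")
```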
it is awesome!!!
Thank you! I'm going to make a new video showing my custom night-conversion script, if you want to subscribe and see it when it's ready. It will be free to use.
@@matthallettai I was wondering how you learned LoRA training. Did you make a video on this as well?
There are many tutorials on how to train a LoRA. I use OneTrainer, but you can use whatever is new. You can start with Civitai; they have a really easy online method for training SDXL and FLUX models. Start with something simple; don't try using 1,000 images like I have, it will melt your brain and your GPU.
Hey Matt, I'm loving your content on AI for archviz! Did you manage to create good-looking animations/videos from images yet? I've tried a few tools like Kling AI and Runway, but I'm not convinced any are ready yet. Thanks!
Thanks man! Appreciate that. Right now the best image-to-video tools are Runway, Kling, and Minimax, which are all online services. For local generation, Cog is very popular, LTX just came out, and Stable Video Diffusion is a year old now but still an option. If you don't like the online generators, you won't like the local stuff! There are no frame-to-frame methods right now that I know of.
Hi, I have a question. I can't uninstall a package from Stability Matrix, and I don't know why. Can you help me?
You can delete the entire package folder from \Data\Packages\ and that will work.
Not working on my AMD GPU
There are a few solutions for AMD GPUs; each package has a command-line argument you need to add. Stable Diffusion and much of the other AI architecture out there has been built on the CUDA platform developed by Nvidia. AMD GPUs are great, affordable gaming cards; I used to only buy AMD GPUs, and I still only buy AMD CPUs. It's too bad they're not useful for the AI tech that's come out in the past few years. That's why Nvidia stock went through the roof, by the way.
It's strange, but when I try to set up the masks like yours, everything turns black. Why does this happen?
Oh, there could be so many reasons. Check the logs in the Comfy TyFlow dialog box; I can't remember where that is at the moment. Make sure it can "see" the depth of the model, and check your resolution and model checkpoint. Also try installing Comfy or A1111 on its own and test your Stable Diffusion workflow outside of Max.
Thank you. So for now we can just generate, not edit the final render image that we have?
Not sure what you mean, but I have other videos, and there are more on UA-cam, about using SD to edit your renderings in post.
@@matthallettai I mean, can I render a photo, put it into TyDiffusion, and have it generate just the grass, for example?
@@ibdesign8207 I would only use TyDiffusion for 3D models you want to quickly turn into an AI image. What you're talking about is inpainting. It's best to take that image outside of 3ds Max and use dedicated tools for AI photo editing like Forge, Comfy, Fooocus, Invoke, etc.
Thank you for this tutorial!!!
Glad you found it useful!
Good video, thanks!
So nice of you
Thanks a lot for the video. I get an error when I try to upscale this way: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)". Do you know what it means and how to fix it?
That error means your ControlNets are mismatched with your checkpoint: the 2048 in the error is SDXL's text-embedding width hitting a model that expects SD1.5's 768. You have to use SDXL ControlNets with SDXL checkpoints.
Awesome Trick
Cheers!
Thanks for this instructive tutorial 👍 Do you think it's possible to customize the path for launching ComfyUI from TyDiffusion? I already have ComfyUI installed with LoRAs, checkpoints, etc. It's annoying for me because I have no free space left on my C: drive, so I can't use it properly.
Just found something I think works; I'll post it on the Facebook group so I can include visuals, assuming you're there as well. If not, execute 'run_ComfyUI.bat' in the root directory of the TyFlow install. In the CMD window, look for the host address and copy and paste it into your web browser. It looks like http://127.0.0.1:8188, but UA-cam won't let me paste a link into the comments.
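For anyone who can't find it in the CMD window, a two-line sketch that simply opens ComfyUI's default local address:

```python
import webbrowser

webbrowser.open("http://127.0.0.1:8188")  # ComfyUI's default host and port
```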
Always with more amazing content!! And for free!!! Thank you so much Matt!! 👏👏👏👏
Thanks Alessio! My production quality could be greatly improved, but I do my own editing, and by the time I get to that stage, I'm over it :)
Seems like the objects' wire color could drive the segmentation ControlNet; it could be quite a bit more accurate, I'd imagine. Thoughts?
I've tried that before in A1111, using an "object color" render element like you're describing and removing the preprocessor in ControlNet with the Segment model. It kind of works, but this was last year and the model was shit at straight, clean lines and small detail, so in a case like this, where the subjects are far away, it won't help. I was testing it while making a kitchen, controlling single elements like floors and cabinets with a segment mask output from Max. I've always come back to using some type of base image with the colors I want in img2img and adjusting the denoise, like I did here. But things change so quickly, and often it comes down to the quality of the ControlNet model. When Flux has a segment model I'll try it again; Flux is amazing at straight, clean lines.
Very nice, thanks)
Thanks! Glad you liked it.
Is it possible to have the algorithm generate the depth data? If we could choose a Color ID or Object ID, we could later change just the desired area with the AI.
I think you're talking about ControlNet? Instead of rendering a Z-depth pass, Stable Diffusion can make one with a preprocessor and a depth model. It's actually the most common method, since most AI users don't start from a rendering program. Just turn ControlNet on and select Depth; the defaults should work. You need an SDXL ControlNet for SDXL checkpoints. That's the quick UA-cam comment reply; it can get a lot more complicated. There's a scripted sketch of both routes below.
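A sketch of both routes via the A1111 API (--api flag): let the preprocessor estimate depth, or set the module to "none" and feed a Z-depth pass straight from your renderer. The unit field names vary slightly between ControlNet extension versions, and the model filename is an example; match it to your checkpoint family.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

unit = {
    "enabled": True,
    "image": b64("zdepth_element.png"),    # your Z-depth render element
    "module": "none",                      # use "depth_midas" to estimate depth instead
    "model": "control_v11f1p_sd15_depth",  # example SD1.5 depth model name
    "weight": 1.0,
}

payload = {
    "prompt": "lakeside house at dusk, photorealistic architectural rendering",
    "steps": 25,
    "alwayson_scripts": {"controlnet": {"args": [unit]}},
}

result = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload).json()
with open("controlled.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```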
Hey Matt, thanks for the video! I was wondering if there are ways of producing images from multiple reference images to generate ideas for projects? Would love help on this, or a video on it would be great!! Thanks, Jake :)
Yes, you can fill a batch with images and use a generic prompt and a denoise of around 0.55 (see the sketch below). There's a batch node in Comfy somewhere, but last time I tried to find it, I couldn't figure out which one. I mostly use Forge; its batch generation is dead simple. If you can find my email address, send me a message and maybe we can figure it out together. It also sounds like you could be asking about IPAdapter, which is another beast.
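A minimal sketch of that batch idea, assuming A1111/Forge is running with --api; the folder names and prompt are placeholders.

```python
import base64
from pathlib import Path
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
out_dir = Path("variations")
out_dir.mkdir(exist_ok=True)

for img_path in Path("reference_images").glob("*.png"):
    payload = {
        "init_images": [base64.b64encode(img_path.read_bytes()).decode()],
        "prompt": "concept image of a cultural pavilion, varied materials",  # generic prompt
        "denoising_strength": 0.55,  # the value suggested above
    }
    result = requests.post(URL, json=payload).json()
    (out_dir / img_path.name).write_bytes(base64.b64decode(result["images"][0]))
```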
How can I upload a photo as a reference or image input?
Depends on your app! This is just Matrix, an app manager for image generators.
How do I keep the viewport map on channel 2?
It works like any other channel: just click "Show Shaded Material in Viewport" and make sure you have your bitmap and your UVW set to channel 2.
First time on your UA-cam channel, but I liked and subscribed because it's quality content. Thanks for the useful video, especially since it covers 3ds Max and AI.
Thanks man! I appreciate the comment.
Hi Matt, with your latest knowledge, do you think there is a way to use the original scan photos to train swaps onto the identical 3D characters? To me that seems like the cleanest way to go about this issue...
Yes, we can do that now. Remember, we tried last year with your 3D characters. There are easier tools to achieve this now, without a LoRA for each character.
The "download resources" button does not seem to work for me on the site; it just opens a new tab and reloads the current page. I've tried 2 different browsers as well.
It only works for me with Chrome, and that's because I have all popup blockers disabled. Try the regular ADetailer, not the UDetailer; I think it's better now.
There are a lot of stolen frames from Saving Private Ryan in there.
As I mentioned on Facebook, they're inspired frames. I used img2img to create a few frames because I couldn't get Flux to create anything like what I wanted with just text-to-image. I was more interested in seeing how those images would animate with Runway.
Thanks, very interesting using Z-depth as a "guide" for new renders.
Glad it was helpful!
Thank you for the tutorial. Where can I find the model for ControlNet (control_v11f1e_sd15_tile)? My dropdown list is empty, and I can't find this model anywhere.
You can download the required models from Hugging Face or Civitai; then you have to place them in the appropriate models directory. If you use a local AI manager like SD Matrix, it has a download menu you can use for automatic placement. Best to start with a getting-started video for Forge or Automatic1111. There's a download sketch below.
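For the tile model specifically, a sketch of the manual route using huggingface_hub; the repo and filename are the standard ControlNet 1.1 release, and the destination path depends on your install.

```python
import shutil
from huggingface_hub import hf_hub_download

# Pull the tile model from the official ControlNet 1.1 repo into the local HF cache...
cached = hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11f1e_sd15_tile.pth",
)
# ...then copy it into the webui's ControlNet models folder (adjust to your install).
shutil.copy(cached, r"stable-diffusion-webui\models\ControlNet\control_v11f1e_sd15_tile.pth")
```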
thanks
You're welcome!
I keep getting "Package Modification Failed" while installing packages... any ideas?? (pip install failed with code 2)
It's a late reply; hopefully you have it figured out. If all your packages get that error, then it's a problem with Matrix, or perhaps Windows security. If it's just one package, that happens during some updates; there's usually a fix posted quickly on GitHub, or you have to revert to a previous version.
Gold tutorial !
Thank you!
DOPE
AWESOME
SWEET
Awesome!
Great!
The issue is that all I see are the movies it has taken its inspiration from. That's most likely because of the limited data.
I extracted frames from some movies and built a Flux LoRA; the base Flux model has very limited WWII data.
unbelievable!
Thank you!
You are a genius! Thanks a lot!
Thanks, but it's Tyson Ibele who's the genius!
Great video, thanks. Wish Blender had this addon.
There are AI addons for Blender, but I don't know if any has this cool unwrap feature.
Great video. Gotta try this.
Brilliant stuff!