ControlAltAI
India
Joined 5 Jan 2008
Welcome to ControlAltAI. We provide tutorials on how to master the latest AI tools like Stable Diffusion, Midjourney, BlueWillow, ChatGPT, and everything AI, basically.
Our channel provides simple, practical tutorials and information that are easy to understand for everyone, whether you are a tech enthusiast, a developer, or someone curious about the latest advancements in AI.
We are committed to sharing our knowledge and expertise with our viewers. So if you're looking to stay informed on the latest AI tools and news and expand your knowledge, subscribe to our channel.
ComfyUI: Flux Region Spatial Control (Workflow Tutorial)
This tutorial focuses on a custom set of nodes and a pipeline we developed, which give you complete control over Flux regions. The use case goes beyond simply placing objects within defined spatial coordinates.
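To illustrate the core idea of region spatial control, here is a minimal sketch (not the actual ControlAltAI node code; the function name, shapes, and coordinate convention are illustrative assumptions) of how a normalized bounding box can be rasterized into a binary attention mask at latent resolution:

```python
import numpy as np

def bbox_to_mask(bbox, latent_h, latent_w):
    """bbox = (x1, y1, x2, y2) in normalized 0..1 image coordinates."""
    x1, y1, x2, y2 = bbox
    mask = np.zeros((latent_h, latent_w), dtype=np.float32)
    # Scale normalized coordinates to the latent grid and fill the region.
    mask[int(y1 * latent_h):int(y2 * latent_h),
         int(x1 * latent_w):int(x2 * latent_w)] = 1.0
    return mask

# Left half of a 64x64 latent grid gets full attention weight.
m = bbox_to_mask((0.0, 0.0, 0.5, 1.0), 64, 64)
print(int(m.sum()))  # 2048 active cells (64 rows x 32 columns)
```

A mask like this is what a regional prompt can then be restricted to inside the attention computation.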
------------------------
Reference Implementation From:
Attashe: github.com/attashe/ComfyUI-FluxRegionAttention
------------------------
JSON File (YouTube Membership): www.youtube.com/@controlaltai/join
Flux Workflow Part 1: ua-cam.com/video/6ZUJ18wR_Bo/v-deo.html
Flux Workflow Part 2: ua-cam.com/video/4_1A5pQkJkg/v-deo.html
Links for Models:
Flux.1 [dev]: huggingface.co/black-forest-labs/FLUX.1-dev
t5xxl: huggingface.co/comfyanonymous/flux_text_encoders/tree/main
GitHub:
ControlAltAI Nodes: github.com/gseth/ControlAltAI-Nodes
CivitAI LoRA Used:
civitai.com/models/562866?modelVersionId=735063
civitai.com/models/633553?modelVersionId=740450
XFormers:
xformers is now required for the Flux Attention Control node. Go to your python_embeded folder and check your PyTorch and CUDA version:
python.exe -c "import torch; print(torch.__version__)"
Check whether xformers is installed:
python.exe -m pip show xformers
Check the latest xformers version that is compatible with your installed PyTorch version:
github.com/facebookresearch/xformers/releases
You can install the matching version of xformers using this command:
python.exe -m pip install xformers==PUTVERSIONHERE --index-url download.pytorch.org/whl/cuVERSION
Example, for PyTorch 2.5.1 with CUDA 12.4:
python.exe -m pip install xformers==0.0.28.post3 --index-url download.pytorch.org/whl/cu124
As of 8th December 2024:
Recommended:
xformers==0.0.28.post3
PyTorch 2.5.1
CUDA version: cu124 (for CUDA 12.4)
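The version-matching step above can be sketched as a small helper (a hypothetical convenience function, not part of any tutorial file): the CUDA tag embedded after the "+" in torch.__version__ is the same tag the PyTorch wheel index URL expects.

```python
# Hypothetical helper: build the xformers install command from the
# string printed by `torch.__version__`, e.g. "2.5.1+cu124".
def xformers_install_cmd(torch_version: str, xformers_version: str) -> str:
    # The local version label after "+" is the CUDA tag (e.g. "cu124").
    cuda_tag = torch_version.split("+")[1] if "+" in torch_version else "cpu"
    return (
        f"python.exe -m pip install xformers=={xformers_version} "
        f"--index-url https://download.pytorch.org/whl/{cuda_tag}"
    )

print(xformers_install_cmd("2.5.1+cu124", "0.0.28.post3"))
```

Running it with the recommended versions reproduces the exact command shown above.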
ComfyUI (Official): www.comfy.org/
------------------------
TimeStamps:
0:00 Intro.
02:00 Requirements.
05:01 Workflow.
21:24 Understanding Regions.
31:43 Flux Parameters.
37:52 Style & Color Manipulation.
42:04 Simple Split & Blend.
47:49 Complex Blends.
52:38 Token Limit.
------------------------
Views: 2,067
Videos
ComfyUI: Flux Part 2, ControlNet, Preserve Details Upscale (Workflow Tutorial)
5K views, a month ago
This is part 2 of the Flux Workflow Comfy UI tutorial. In this video, we introduce new Nodes like Noise Plus Blend and Flux Control Net Union Pro (InstantX Model). The video focuses on the logical integration of preserving details on the skin and other textures during 5x upscale, which results in realistic facial skin and landscape textures. The video also covers the complete Control Net integr...
ComfyUI: Flux with LLM, 5x Upscale Part 1 (Workflow Tutorial)
14K views, 3 months ago
The video focuses on Flux.1[dev] usage and workflow in Comfy UI. The workflow is semiautomatic, with logical processing applied to reduce VRAM usage. It entails Image Reference, Image2Image, Text to Image, and consistent upscaling techniques. Preserving the text during upscale was challenging. The workflow achieves upscaling with text retention up to 5.04x, approximately the original generatio...
ComfyUI: Imposing Consistent Light (IC-Light Workflow Tutorial)
21K views, 4 months ago
The video focuses on implementing IC-Light in Comfy UI, specifically for product photography. IC-Light is based on SD1.5, and we use a reference background and a product/object photo to regenerate the background and re-light the object. Images are generated in SDXL resolution, then upscaled by 4x. A number of unique techniques are used to transfer details to the final generation and even on the...
ComfyUI: nVidia TensorRT (Workflow Tutorial)
6K views, 5 months ago
nVidia TensorRT is officially implemented for Comfy UI and Supports SD 1.5, SD 2.1, SDXL, SDXL Turbo, SD3, SVD, and SVD XT. Using ComfyUI, you gain 14% to 32% faster image generation in Stable Diffusion. I explain how TensorRT works for Stable Diffusion in Comfy and provide a comprehensive workflow tutorial to generate TensorRT .engine files. JSON File (UA-cam Membership): www.youtube.com/@cont...
ComfyUI: CosXL, CosXL Edit InstructPix2Pix (Workflow Tutorial)
6K views, 7 months ago
This tutorial focuses on CosXL and CosXL Edit InstructPix2Pix workflows for ComfyUI. The workflow tutorial video includes all parameters explained, advanced model merging in comfy UI for converting any model to CosXL, the upscaling technique used with CosXL and CosXL edit, and some tips and tricks to get the best-desired outcome. JSON File (UA-cam Membership): www.youtube.com/@controlaltai/join...
ComfyUI: Scaling-UP Image Restoration, SUPIR (Workflow Tutorial)
29K views, 8 months ago
This tutorial focuses on SUPIR for ComfyUI, some core concepts and upscaling techniques used with SUPIR. Image restoration, enhancement, and some mixed techniques are used with the workflow to achieve the desired results. JSON File (UA-cam Membership): www.youtube.com/@controlaltai/join SUPIR Comfy UI: github.com/kijai/ComfyUI-SUPIR SUPIR GitHub: github.com/Fanghua-Yu/SUPIR SUPIR Model Download...
ComfyUI: Yolo World, Inpainting, Outpainting (Workflow Tutorial)
38K views, 9 months ago
This tutorial focuses on Yolo World segmentation and advanced inpainting and outpainting techniques in Comfy UI. It has 7 workflows, including Yolo World instance segmentation, color grading, image processing, object/subject removal using LaMa / MAT, inpaint plus refinement, and outpainting. For error: "cannot import name 'packaging' from 'pkg_resources'" The solution: Ensure that Python 3.12 o...
Distillery by FollowFox.AI (LoRA Training, IP Adapter in Discord)
2.8K views, 9 months ago
Distillery is a new Discord-based generative text-to-image AI based on Stable Diffusion. This is part 2 of the Distillery AI tutorial. They have launched new features like LoRA training in under 6 Minutes, IP Adapter, In Painting and more, all within the discord interface. Relevant Links: Channel Support (UA-cam Membership): www.youtube.com/@controlaltai/join Distillery Part 1 Tutorial: ua-cam....
ComfyUI: Animate Anyone Evolved (Workflow Tutorial)
21K views, 10 months ago
This is a comprehensive tutorial focusing on the installation and usage of Animate Anyone for Comfy UI. With Animate Anyone, you can use a single reference image and animate it using DW Pose Motion Capture. The tutorial focuses on the best techniques to achieve the desired results as well as uses IP Adapters, Animate Diff, and Segmentation to maintain & fix facial consistency in the animation. ...
ComfyUI: Batch Apply Watermark to Images (Tutorial)
3.3K views, 10 months ago
This tutorial focuses on masking techniques to apply your watermark or logo on AI-generated images or existing images in batches. The workflow tutorial focuses on a cornered watermark and repeating watermarks that cover the entire image. The workflow is automated but customizable, which allows you to make the watermark transparent as well as greyscale. JSON File (UA-cam Membership): www.youtube...
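The core compositing step of a watermark workflow like this can be sketched in a few lines of NumPy (an illustrative stand-in for the ComfyUI node graph; the function name, array shapes, and the 0..1 value convention are assumptions):

```python
import numpy as np

def apply_watermark(img, mark, opacity, pos):
    """Alpha-blend `mark` onto `img` at top-left `pos` = (row, col).

    img, mark: float arrays in 0..1 of shape (H, W, 3); opacity in 0..1.
    """
    out = img.copy()
    r, c = pos
    h, w = mark.shape[:2]
    region = out[r:r + h, c:c + w]
    # Linear blend: opacity 0 leaves the image untouched, 1 pastes the mark.
    out[r:r + h, c:c + w] = (1.0 - opacity) * region + opacity * mark
    return out

base = np.zeros((8, 8, 3))   # black canvas
logo = np.ones((2, 2, 3))    # white watermark tile
result = apply_watermark(base, logo, 0.5, (0, 0))
print(result[0, 0, 0])       # 0.5, i.e. a 50% transparent watermark
```

Repeating the call over a grid of positions gives the tiled, full-coverage variant described above; converting `mark` to grayscale first gives the greyscale option.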
ComfyUI: IP Adapter Clothing Style (Tutorial)
19K views, 11 months ago
This tutorial focuses on clothing style transfer from image to image using Grounding Dino, Segment Anything Models & IP Adapter. Masking & segmentation are automated, and the workflow includes masking control, in-painting, ControlNet, and iterative upscale technique. JSON File (UA-cam Membership): www.youtube.com/@controlaltai/join Realistic Vision 5.1 InPainting: civitai.com/models/4201?modelV...
ComfyUI: Style Aligned via Shared Attention (Tutorial)
14K views, 11 months ago
This tutorial includes 4 Comfy UI workflows using Style Aligned Image Generation via Shared Attention. The tutorials focus on workflows for Text2Image with Style Aligned in Batches, Reference & Target Image Style Aligned along with the use of multi ControlNet with Style Aligned. JSON File (UA-cam Membership): www.youtube.com/@controlaltai/join Brian Fitzgerald GitHub: github.com/brianfitzgerald...
ComfyUI: Face Detailer (Workflow Tutorial)
53K views, 11 months ago
This tutorial includes 4 Comfy UI workflows using Face Detailer. The workflow tutorial focuses on Face Restore using Base SDXL & Refiner, Face Enhancement (Graphic to Photorealistic & Vice Versa), and Facial Details (including hair styling). Unique techniques are used to automate the workflow for auto-detection, selection, and masking. JSON File (UA-cam Membership): www.youtube.com/@controlalta...
ComfyUI: IP Adapter Workflows (Tutorial)
29K views, 11 months ago
This is a basic tutorial for using IP Adapter in Stable Diffusion ComfyUI. I showcase multiple workflows using Attention Masking, Blending, Multi Ip Adapters, Subject Positioning, Condition Combine, ControlNet, Image Variations, and Mask Conditioning. JSON File (UA-cam Membership): www.youtube.com/@controlaltai/join IP Adapter Models: huggingface.co/h94/IP-Adapter/tree/main Save File Name Codes...
ComfyUI: Stable Video Diffusion Clip Extension (Tutorial)
11K views, 11 months ago
ComfyUI: Style Transfer using CoAdapter & ControlNet (Tutorial)
14K views, a year ago
A1111: nVidia TensorRT Extension for Stable Diffusion (Tutorial)
11K views, a year ago
ComfyUI: Stable Video Diffusion (Workflow Tutorial)
40K views, a year ago
ComfyUI: Image to Line Art Workflow Tutorial
22K views, a year ago
ComfyUI: Area Composition, Multi Prompt Workflow Tutorial
43K views, a year ago
InvokeAI 3.30 for Stable Diffusion (Tutorial)
10K views, a year ago
ComfyUI ControlNet Tutorial (Control LoRA)
24K views, a year ago
A1111: ADetailer Basics and Workflow Tutorial (Stable Diffusion)
39K views, a year ago
Distillery by FollowFox.AI (LoRA, ControlNet in Discord)
2.1K views, a year ago
Dall-E 3 with Chat GPT Plus: No Prompting (Workflow Tutorial)
3.5K views, a year ago
A1111: IP Adapter ControlNet Tutorial (Stable Diffusion)
67K views, a year ago
Random Technique in MidJourney Prompts using Chat GPT
641 views, a year ago
Stable Diffusion Lora Training with Kohya (Tutorial)
47K views, a year ago
ComfyUI: Upscale any AI Art or Photo (Workflow)
12K views, a year ago
Do you know how to fix this: "Sizes of tensors must match except in dimension 1. Expected size 104 but got size 112 for tensor number 1 in the list."? I keep getting this error in KSampler when trying to edit using another picture.
Hi, thanks for your informative video, but I got this error in SamplerCustomAdvanced: "xformers_attention() got an unexpected keyword argument 'mask'". Everything installed well, and I also updated the ControlAltAI nodes. How can I solve it?
You have to update Comfy, then update the custom node. Close everything and then restart. Check "Fetch Updates" again to ensure you are on the latest version of the node.
@ thanks I’ll try again
Somehow ComfyUI does not find the FluxRegionBBOX and RegionAttention nodes, even though everything is up to date. Any ideas?
Those node names are from a different GitHub repository. Install the ControlAltAI nodes from Comfy Manager: github.com/gseth/ControlAltAI-Nodes
Been waiting for your new videos, very helpful as always 👍
Me watching this for the third time and still finding new information 😂
Could you lend me a hand? "FluxAttentionControl Xformers is required for this node when enabled. Please install xformers."
Please read the description. I have explained how to install xformers in the Requirements chapter of the video.
Can regional prompting be combined with ControlNet depth? For example, I'll generate a background, then in the center of it I'll use a ControlNet and generate my subject.
This is the initial release. I have to explore integration of ControlNet; I mentioned all this in the video. The current pipeline was complicated to get to this level of stability. I have to test and see whether ControlNet can be added or not.
I'm wondering if this will work with redux and image manipulation?
I know that image-to-image doesn't work in this pipeline, so I guess Redux would behave the same way. I haven't tried Redux directly, as I was in the middle of this when they released the tools.
All good, I just thought that would have so much potential and if it could be done. Nice video by the way, really nice work. 😊
Thanks. It was very hard to get this thing stable. Will research further what else can be added, like ControlNet, image-to-image, etc., and how.
Update 20 Dec 2024: Update the ControlAltAI nodes to work with the latest version of ComfyUI, in case you get the following error: "TypeError: FluxAttentionControl.xformers_attention() got an unexpected keyword argument 'mask'". Update 18 Dec 2024: Inspired by the original reference from Attashe on GitHub: github.com/attashe/ComfyUI-FluxRegionAttention
You used my code, but you didn't mention my repository with the original implementation anywhere, neither in the video nor in the code. Those are Apache 2.0 requirements and a rule of good manners.
You are talking about this? github.com/attashe/ComfyUI-FluxRegionAttention Your code is not exactly used; the pipeline is completely different. I checked your repository, and there is some overlap for the xformers attention, but it goes beyond that. BBox and mask feathering are in the attention, conditional strength is there, and the attention handling is different taking all of this into account. Masks and BBox both go into the attention, and attention cleanup is done. We worked on the BBox and masking for over a month and a half. Unfortunately I cannot update the video, but I can update the description with your GitHub link as "Base Implementation from Attashe: github.com/attashe/ComfyUI-FluxRegionAttention", and I can update my GitHub page. Let me know if that's acceptable to you, and apologies if there was an oversight on our part. Edit: Please check, I have made a pinned post, updated the video description, and included acknowledgments on the GitHub page.
Thanks, I see that you made a lot of changes, and obviously I am not against copying code parts.
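The mask feathering mentioned in this thread can be sketched as a separable box blur over a binary region mask (an illustrative approximation only; the node's actual feathering may differ):

```python
import numpy as np

def feather(mask: np.ndarray, radius: int) -> np.ndarray:
    """Soften mask edges with a separable box blur of the given radius."""
    if radius <= 0:
        return mask
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blur = lambda v: np.convolve(v, kernel, mode="same")
    out = np.apply_along_axis(blur, 1, mask.astype(np.float32))  # rows
    out = np.apply_along_axis(blur, 0, out)                      # columns
    return out

hard = np.zeros((8, 8), dtype=np.float32)
hard[:, :4] = 1.0            # hard left/right split
soft = feather(hard, 1)
print(round(float(soft[4, 4]), 3))  # 0.333: the boundary is now a gradient
```

Feathered edges let two regional prompts blend at their boundary instead of producing a visible seam, which is the "feathering" effect discussed above.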
Ugh, why isn't this working anymore?
Everything has changed in Comfy. CLIPSeg is not supported. Ultralytics was compromised; it now requires a separate install. A whole new workflow has to be created.
@@controlaltai Hey, thanks for your reply. Is there any similar mechanic for face enhancement which is working? 13:50 is what I need... please make a new video for us.
Yes, SAM 2 or Grounding DINO. There is a custom node called Layer Style; it has multiple segmentation options. Will try to make a new face detailer video.
@@controlaltai Awesome. Looking forward to it :)
Would it be possible to do this with a low-resource PC? Is there a website where this can be done for free?
SUPIR requires a license. If you don't have a powerful system, you can rent a GPU and run workflows there, probably on RunPod or something similar.
Is this better than supir upscaler?
Depends on the use case. Some of the stuff I showed here SUPIR cannot do, but there are some things SUPIR can do. For Flux, most corporates use it with some brand text etc., or they want very high consistency, like 95%. SUPIR cannot handle text; Flux handles consistency very well without adding too much detail. Choose whatever suits your use case.
@ I see. For preparing a dataset for a real-person LoRA, which do you think I should use? My problem is that the full and half body images of this person are not that great, so I was thinking to upscale them.
For the LoRA, try SUPIR and see if the skin texture and everything is realistic and it's not adding too much. If satisfied, stick with SUPIR; if not, switch to Flux. I suggest trying SUPIR first since it's easier and quicker; Flux is not that straightforward.
@ thank you so much!!!
Thank you for this tutorial! Can you help me? I got the error "KSampler: The new shape must be larger than the original tensor in all dimensions". What does that mean?
Something is wrong with the model. Ensure the checkpoint is SD1.5 and not Flux or SDXL.
Thank you. I am editing soft-light makeup, very soft coral pink lipstick.
I made an image in PicLumen, but it was very different from Midjourney images. PicLumen's Flux Schnell doesn't produce images similar to Midjourney.
Flux Schnell is way behind Midjourney. You need Flux Dev or Pro, which are far better than Midjourney.
@controlaltai both are paid
How can I get this workflow?
Hi friend, thanks for your video, it's very useful. Please tell me how to remove an extra selected object when working with eyes? Sometimes I have two objects selected, but there is no numbering, as with hands or faces, so I cannot change the detection model's confidence threshold. I would like to figure this out to save time.
really great, thx
"[AnimateAnyone] Load UNet3D ConditionModel: no weights file found in F:\ComfyUI_Ai\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateAnyone-Evolved\pretrained_weights\stable-diffusion-v1-5\unet" - I solved this issue: I was missing diffusion_pytorch_model.bin in the unet folder. The new issue is "up_blocks.1.attentions.2.transformer_blocks.0.norm1.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.3.resnets.0.norm1.bias, up_blocks.1.resnets.2.conv2.weight, up_blocks.3.attentions.1.transformer". Any idea what that could be?
Not sure; it could be the resolution of the image. Check that, and check that the folder structure is correct, as it's a bit tricky: multiple models have to go in multiple places.
@@controlaltai the new error I get is about weights? Would that be linked to the image? "up_blocks.1.attentions.2.transformer_blocks.0.norm1.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.3.resnets.0.norm1.bias, up_blocks.1.resnets.2.conv2.weight, up_blocks.3.attentions.1.transformer"
Let me do some research and get back to you within a day. Image dimension problems give tensor mismatch errors, but this is giving transformer attention block errors. Can you tell me if you are using legacy Comfy or the latest version?
@ i am using the portable version. I updated about a month ago. Python 3.10.
@@Nibot2023 ok thanks, let me check and research for you what is this error. Will get back shortly.
I cannot use YoloWorld, not even with the solution you provided in the description / comments. This is a mess, what a pain in the ass.
Yeah, Comfy changed everything, and it requires the developers of the custom nodes to update them. The custom nodes from ZhoZhoZho are never updated, to be honest; I stopped using them. You can get an alternative YOLO from the Layer Style node. Basically the tutorial concepts are still valid; YoloWorld can be replaced with SAM 2, for example. Anything that can do prompt-based detection would work.
The best Comfy tutorial!
The ZHO BiRefNet module is no longer available for download via ComfyUI Manager
Use the BiRefNet from the Layer Style custom node. It's basically the same.
Cannot download the JSON workflow from anywhere; can anyone tell me where I can download it?
It's only for paid channel members. You can, however, watch the tutorial and create the workflow yourself. Everything is shown.
Any news on the 2024-10-28 update? BTW, I also have to watch this video many more times, since it is very detailed with compressed content. Well done, ControlAltAI.
Not yet, give me some time. I will release the fix but won't be able to make a full tutorial on it. The basic thing is that you can replace MediaPipe with SAM 2, and there is no need for CLIPSeg. BLIP is outdated, and Florence can be used instead.
@@controlaltai Sure, please don't feel stressed about that. Thanks for your response. BTW, do you know how to install YoloWorld successfully? It throws an error while installing YoloWorldESAM: "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'?" Do you have a reliable link on how to set it up successfully? Thank you for your detailed and explanatory tutorials on ComfyUI; I really appreciate them. Regards, Roger
I was just about to sit and watch this and saw that Black Forest Labs have released their own tools. Hope you do a tutorial on them :) Fill, Depth, Canny, Redux
Yeah, those are different. Will explain them in a tutorial. However, I highly recommend you watch this one for the logical-flow understanding.
@@controlaltai Yes! I am excited to see what they can do. While I am grateful for the efforts of other groups who have been making canny, depth, inpaint, and IP adapters, I have not been impressed; I have much higher expectations for these. I took a brief look at the ComfyUI site, and it looks like they have native support and a few examples to play with.
I have done a Union Pro tutorial. The next video is about Flux regions; we made custom nodes for that, a very cool concept. After that video I will make a full workflow tutorial using these tools as well. We can do some amazing restoration and conversion using these tools and Union Pro.
I'll be sure to check them both out. Thank you! :)
And how do you create a watermark only on an item using Flux Redux?
Dude, just explain about cosxl...
bless you
I can't find the IPADAPTER folder
ComfyUI\models\ipadapter: if there is none, create it, or install any IP Adapter model from the Manager and it will be created automatically.
Thank you for this amazing workflow. Just a silly question: how do we download the Ollama and LLaVA GGUF models, and where do we place them?
Please watch part 1 of the video on how to install Ollama. GGUF is already explained in this video.
Awesome, the best explanation of SVD is here. Good luck!