ControlAltAI
ComfyUI: Flux Region Spatial Control (Workflow Tutorial)
This tutorial focuses on a custom set of nodes and a pipeline we developed, which give you complete control over Flux regions. The use case goes beyond simply placing objects within defined spatial coordinates.
------------------------
Reference Implementation From:
Attashe: github.com/attashe/ComfyUI-FluxRegionAttention
------------------------
JSON File (YouTube Membership): www.youtube.com/@controlaltai/join
Flux Workflow Part 1: ua-cam.com/video/6ZUJ18wR_Bo/v-deo.html
Flux Workflow Part 2: ua-cam.com/video/4_1A5pQkJkg/v-deo.html
Links for Models:
Flux.1 [dev]: huggingface.co/black-forest-labs/FLUX.1-dev
t5xxl: huggingface.co/comfyanonymous/flux_text_encoders/tree/main
GitHub:
ControlAltAI Nodes: github.com/gseth/ControlAltAI-Nodes
CivitAI LoRAs Used:
civitai.com/models/562866?modelVersionId=735063
civitai.com/models/633553?modelVersionId=740450
XFormers:
xFormers is now required for the Flux Attention Control node. Go to your python_embeded folder and check your PyTorch and CUDA versions:
python.exe -c "import torch; print(torch.__version__)"
Check whether xformers is already installed:
python.exe -m pip show xformers
Check for the latest xformers version that is compatible with your installed PyTorch version:
github.com/facebookresearch/xformers/releases
Install the matching xformers version with this command:
python.exe -m pip install xformers==PUTVERSIONHERE --index-url https://download.pytorch.org/whl/cuVERSION
Example, for PyTorch 2.5.1 with CUDA 12.4:
python.exe -m pip install xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/cu124
As of 8th December 2024:
Recommended:
xformers==0.0.28.post3
PyTorch 2.5.1
CUDA version: cu124 (for CUDA 12.4)
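After installing, you can verify the build by printing the xformers build info (this is the same diagnostic that xformers error messages point to):
python.exe -m xformers.info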
ComfyUI (Official): www.comfy.org/
------------------------
Timestamps:
0:00 Intro.
02:00 Requirements.
05:01 Workflow.
21:24 Understanding Regions.
31:43 Flux Parameters.
37:52 Style & Color Manipulation.
42:04 Simple Split & Blend.
47:49 Complex Blends.
52:38 Token Limit.
Views: 4,116

Videos

ComfyUI: Flux Part 2, ControlNet, Preserve Details Upscale (Workflow Tutorial)
Views: 6K · 3 months ago
This is part 2 of the Flux Workflow ComfyUI tutorial. In this video, we introduce new nodes like Noise Plus Blend and Flux ControlNet Union Pro (InstantX model). The video focuses on the logical integration of preserving details on the skin and other textures during 5x upscale, which results in realistic facial skin and landscape textures. The video also covers the complete ControlNet integr...
ComfyUI: Flux with LLM, 5x Upscale Part 1 (Workflow Tutorial)
Views: 15K · 4 months ago
The video focuses on Flux.1[dev] usage and workflow in ComfyUI. The workflow is semi-automatic, with logical processing applied to reduce VRAM usage. It entails Image Reference, Image2Image, Text to Image, and consistent upscaling techniques. Preserving the text during upscale was challenging. The workflow achieves upscaling with text retention up to 5.04x, approximately the original generatio...
ComfyUI: Imposing Consistent Light (IC-Light Workflow Tutorial)
Views: 24K · 6 months ago
The video focuses on implementing IC-Light in ComfyUI, specifically for product photography. IC-Light is based on SD1.5, and we use a reference background and a product/object photo to regenerate the background and re-light the object. Images are generated at SDXL resolution, then upscaled by 4x. A number of unique techniques are used to transfer details to the final generation and even on the...
ComfyUI: nVidia TensorRT (Workflow Tutorial)
Views: 6K · 6 months ago
nVidia TensorRT is officially implemented for ComfyUI and supports SD 1.5, SD 2.1, SDXL, SDXL Turbo, SD3, SVD, and SVD XT. Using ComfyUI, you gain 14% to 32% faster image generation in Stable Diffusion. I explain how TensorRT works for Stable Diffusion in Comfy and provide a comprehensive workflow tutorial to generate TensorRT .engine files. JSON File (YouTube Membership): www.youtube.com/@cont...
ComfyUI: CosXL, CosXL Edit InstructPix2Pix (Workflow Tutorial)
Views: 7K · 8 months ago
This tutorial focuses on CosXL and CosXL Edit InstructPix2Pix workflows for ComfyUI. The workflow tutorial video includes all parameters explained, advanced model merging in ComfyUI for converting any model to CosXL, the upscaling technique used with CosXL and CosXL Edit, and some tips and tricks to get the best desired outcome. JSON File (YouTube Membership): www.youtube.com/@controlaltai/join...
ComfyUI: Scaling-UP Image Restoration, SUPIR (Workflow Tutorial)
Views: 31K · 9 months ago
This tutorial focuses on SUPIR for ComfyUI, some core concepts, and upscaling techniques used with SUPIR. Image restoration, enhancement, and some mixed techniques are used with the workflow to achieve the desired results. JSON File (YouTube Membership): www.youtube.com/@controlaltai/join SUPIR ComfyUI: github.com/kijai/ComfyUI-SUPIR SUPIR GitHub: github.com/Fanghua-Yu/SUPIR SUPIR Model Download...
ComfyUI: Yolo World, Inpainting, Outpainting (Workflow Tutorial)
Views: 40K · 10 months ago
This tutorial focuses on Yolo World segmentation and advanced inpainting and outpainting techniques in ComfyUI. It has 7 workflows, including Yolo World instance segmentation, color grading, image processing, object/subject removal using LaMa / MAT, inpaint plus refinement, and outpainting. For the error "cannot import name 'packaging' from 'pkg_resources'", the solution: ensure that Python 3.12 o...
Distillery by FollowFox.AI (LoRA Training, IP Adapter in Discord)
Views: 2.9K · 11 months ago
Distillery is a new Discord-based generative text-to-image AI based on Stable Diffusion. This is part 2 of the Distillery AI tutorial. They have launched new features like LoRA training in under 6 minutes, IP Adapter, Inpainting, and more, all within the Discord interface. Relevant Links: Channel Support (YouTube Membership): www.youtube.com/@controlaltai/join Distillery Part 1 Tutorial: ua-cam....
ComfyUI: Animate Anyone Evolved (Workflow Tutorial)
Views: 22K · 11 months ago
This is a comprehensive tutorial focusing on the installation and usage of Animate Anyone for ComfyUI. With Animate Anyone, you can use a single reference image and animate it using DW Pose motion capture. The tutorial focuses on the best techniques to achieve the desired results and uses IP Adapters, AnimateDiff, and segmentation to maintain & fix facial consistency in the animation. ...
ComfyUI: Batch Apply Watermark to Images (Tutorial)
Views: 3.4K · 11 months ago
This tutorial focuses on masking techniques to apply your watermark or logo to AI-generated images or existing images in batches. The workflow tutorial covers a cornered watermark and repeating watermarks that cover the entire image. The workflow is automated but customizable, which allows you to make the watermark transparent as well as greyscale. JSON File (YouTube Membership): www.youtube...
ComfyUI: IP Adapter Clothing Style (Tutorial)
Views: 19K · 1 year ago
This tutorial focuses on clothing style transfer from image to image using Grounding Dino, Segment Anything Models & IP Adapter. Masking & segmentation are automated, and the workflow includes masking control, in-painting, ControlNet, and an iterative upscale technique. JSON File (YouTube Membership): www.youtube.com/@controlaltai/join Realistic Vision 5.1 InPainting: civitai.com/models/4201?modelV...
ComfyUI: Style Aligned via Shared Attention (Tutorial)
Views: 15K · 1 year ago
This tutorial includes 4 ComfyUI workflows using Style Aligned Image Generation via Shared Attention. The tutorials focus on workflows for Text2Image with Style Aligned in batches, and Reference & Target Image Style Aligned, along with the use of multi-ControlNet with Style Aligned. JSON File (YouTube Membership): www.youtube.com/@controlaltai/join Brian Fitzgerald GitHub: github.com/brianfitzgerald...
ComfyUI: Face Detailer (Workflow Tutorial)
Views: 56K · 1 year ago
This tutorial includes 4 ComfyUI workflows using Face Detailer. The workflow tutorial focuses on Face Restore using Base SDXL & Refiner, Face Enhancement (graphic to photorealistic & vice versa), and Facial Details (including hair styling). Unique techniques are used to automate the workflow for auto-detection, selection, and masking. JSON File (YouTube Membership): www.youtube.com/@controlalta...
ComfyUI: IP Adapter Workflows (Tutorial)
Views: 29K · 1 year ago
This is a basic tutorial for using IP Adapter in Stable Diffusion ComfyUI. I showcase multiple workflows using Attention Masking, Blending, Multiple IP Adapters, Subject Positioning, Condition Combine, ControlNet, Image Variations, and Mask Conditioning. JSON File (YouTube Membership): www.youtube.com/@controlaltai/join IP Adapter Models: huggingface.co/h94/IP-Adapter/tree/main Save File Name Codes...
ComfyUI: Stable Video Diffusion Clip Extension (Tutorial)
Views: 11K · 1 year ago
ComfyUI: Style Transfer using CoAdapter & ControlNet (Tutorial)
Views: 14K · 1 year ago
A1111: nVidia TensorRT Extension for Stable Diffusion (Tutorial)
Views: 11K · 1 year ago
ComfyUI: Stable Video Diffusion (Workflow Tutorial)
Views: 42K · 1 year ago
ComfyUI: Image to Line Art Workflow Tutorial
Views: 23K · 1 year ago
ComfyUI: Area Composition, Multi Prompt Workflow Tutorial
Views: 45K · 1 year ago
InvokeAI 3.30 for Stable Diffusion (Tutorial)
Views: 11K · 1 year ago
ComfyUI ControlNet Tutorial (Control LoRA)
Views: 25K · 1 year ago
A1111: ADetailer Basics and Workflow Tutorial (Stable Diffusion)
Views: 41K · 1 year ago
Distillery by FollowFox.AI (LoRA, ControlNet in Discord)
Views: 2.2K · 1 year ago
Dall-E 3 with Chat GPT Plus: No Prompting (Workflow Tutorial)
Views: 3.5K · 1 year ago
A1111: IP Adapter ControlNet Tutorial (Stable Diffusion)
Views: 68K · 1 year ago
Random Technique in MidJourney Prompts using Chat GPT
Views: 656 · 1 year ago
Stable Diffusion Lora Training with Kohya (Tutorial)
Views: 49K · 1 year ago
ComfyUI: Upscale any AI Art or Photo (Workflow)
Views: 12K · 1 year ago

COMMENTS

  • @MohamedAli-hz1cn · 13 hours ago

    How do I download the workflow?

  • @Goger_ · 14 hours ago

    Do you plan to create a similar workflow, or modify this one, so that you can use Flux or SDXL to generate backgrounds? As far as I know, there was a demo version of IC-Light that works with Flux.

    • @controlaltai · 14 hours ago

      Not yet released. Once it is released for Flux, the entire workflow will need to be modified and I will make a new video. SDXL requires only minor changes, but the person who released the model was working on a Flux release, not an SDXL one.

    • @Goger_ · 14 hours ago

      @controlaltai Thanks for the info, so I'll wait for the release. Anyway, thanks for the great workflow ❤️

  • @aleeesashaaa · 1 day ago

    Hello, your tutorials are very good, but on this one I have problems:
    (IMPORT FAILED): D:\Users\Desktop\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Vextra-Nodes
    (IMPORT FAILED): D:\Users\Desktop\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture
    (IMPORT FAILED): D:\Users\Desktop\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-YoloWorld-EfficientSAM
    I successfully ran python -m pip install supervision. The embedded Python is 3.12.7 (not sure if that's OK... if not, how can I downgrade?).
    I successfully ran py -m pip install inference==0.9.13 (Successfully installed inference-0.9.13 onnxruntime-1.15.1) and py -m pip install setuptools==65.5.1 (Successfully installed setuptools-65.5.1).
    If I use setuptools-75.8.0, I get only this error: (IMPORT FAILED): ComfyUI-YoloWorld-EfficientSAM. ComfyUI-Vextra-Nodes and comfyui-art-venture are then fine, but YoloWorld doesn't work ("cannot import name 'YOLOWorld' from 'inference.models'"). 😞

  • @ReneBerwanger · 2 days ago

    Hi, nice videos, and amazing nodes. Why SD3 Latent?

    • @controlaltai · 2 days ago

      Hi, the Empty Latent node in ComfyUI is only compatible with SD1.5/SDXL. When SD3 came out, they made an SD3 Latent node. At the code level, the empty-latent requirements are the same for SD3 and Flux, so they just kept it at that.
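      To illustrate why the SD3 latent works for Flux, a minimal sketch (the 4- vs 16-channel layout is standard for these model families; the code is illustrative, not the node's source):
      import torch
      h = w = 1024 // 8  # latent spatial size for a 1024x1024 image
      sd15_latent = torch.zeros(1, 4, h, w)   # SD1.5/SDXL latents have 4 channels
      flux_latent = torch.zeros(1, 16, h, w)  # SD3 and Flux latents have 16 channels
      print(sd15_latent.shape, flux_latent.shape)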

  • @Nick-q4o · 5 days ago

    I'm new to Flux and keen to try the Flux Resolution Calculator node shown in your video. However, I notice that below 1.0 megapixel it currently only supports 0.1 and 0.5, but 0.6 MP is what I need. Do you have a similar node that allows manual megapixel input (so I can enter 0.6), or is there any workaround I can explore? Thanks

    • @controlaltai · 5 days ago

      No, I should update the node to give that option. If you want a workaround, you have to use a bunch of math nodes for multiplication and division and compute the whole formula yourself, as sketched below.
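      A minimal sketch of that math, assuming dimensions should snap to a multiple of 64 (the rounding rule is an assumption; the node's exact formula may differ):
      import math
      def flux_dims(megapixels, aspect_w, aspect_h, multiple=64):
          # total pixel budget, e.g. 0.6 MP -> 600,000 pixels
          pixels = megapixels * 1_000_000
          # ideal width/height for the aspect ratio, before rounding
          width = math.sqrt(pixels * aspect_w / aspect_h)
          height = pixels / width
          # snap both sides to the nearest multiple of 64
          return (round(width / multiple) * multiple,
                  round(height / multiple) * multiple)
      print(flux_dims(0.6, 16, 9))  # (1024, 576), about 0.59 MP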

  • @oohlala5394 · 6 days ago

    At ~56:10, why are we using Florence2Run + Sam2Segmentation? Couldn't we just use the mask output from Florence2Run?

    • @controlaltai · 5 days ago

      Ermm, no. We need SAM segmentation; it's more accurate. Florence just creates bounding boxes; the masking is done via SAM.
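      As a rough sketch of the hand-off being described (node names as used in the video; the socket details are assumptions):
      Florence2Run (phrase grounding) --> bounding boxes --> Sam2Segmentation --> precise per-object masks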

  • @ZiadHayes · 7 days ago

    omg, I will never watch a one-hour video made with an AI voice

    • @controlaltai · 7 days ago

      Noted. This is a technical tutorial aimed at helping users with ComfyUI. If the AI voice isn't your preference, the focus is on the content and learning, so feel free to skip it if it's not for you.

  • @valentynshumakher5842 · 9 days ago

    Hi man, thanks for this great video. Any chance to connect with you via email or some other way?

    • @controlaltai · 9 days ago

      Hi, thank you. You can connect via email: mail @ controlaltai . com (without spaces).

  • @Elainewuuuu · 9 days ago

    Hi! Really nice video and really helpful! BUT I got an error from the Flux Sampler:
    ComfyUI Error Report
    Node ID: 46, Node Type: FluxSampler, Exception Type: NotImplementedError
    Exception Message: No operator found for memory_efficient_attention_forward with inputs:
    query/key/value: shape=(1, 4976, 24, 128) (torch.float32), attn_bias: <class 'torch.Tensor'>, p: 0.0
    fa3F@0.0.0 is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.bfloat16, torch.float16}); attn_bias type is <class 'torch.Tensor'>; operator wasn't built - see python -m xformers.info for more info
    fa2F@0.0.0 is not supported because: device=cpu (supported: {'cuda'}); dtype=torch.float32 (supported: {torch.bfloat16, torch.float16}); attn_bias type is <class 'torch.Tensor'>; operator wasn't built - see python -m xformers.info for more info
    cutlassF-pt is not supported because: device=cpu (supported: {'cuda'})
    Could you please help me with this issue? Thank you so much

    • @controlaltai · 9 days ago

      Hi, you need an nVidia GPU to run this. Flux Region Control uses xformers memory-efficient attention, which requires a CUDA GPU. What are your GPU and system specs?
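      If you want to check your setup, here is a minimal reproduction of what the sampler is attempting, assuming a CUDA build of PyTorch with xformers installed (the shapes mirror the error above):
      import torch
      import xformers.ops as xops
      # memory_efficient_attention requires CUDA tensors in fp16/bf16;
      # on CPU or fp32 it raises the NotImplementedError shown above
      q = torch.randn(1, 4976, 24, 128, device="cuda", dtype=torch.float16)
      out = xops.memory_efficient_attention(q, q, q)
      print(out.shape)  # torch.Size([1, 4976, 24, 128])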

  • @amirmhmdart · 10 days ago

    Amazing.

  • @scarfcorner9171 · 12 days ago

    Which model does this work with? (Flux.1, Stable Diffusion 3.5...)

  • @rocren6246 · 12 days ago

    Where do I put ip-adapter-plus_sdxl_vit-h.bin?

    • @controlaltai · 11 days ago

      Firstly, it should be a safetensors file, not a .bin. And it goes here: ComfyUI\models\ipadapter
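      For reference, the expected layout (the .safetensors filename is the converted version of the questioner's file, shown for illustration; only the folder is confirmed above):
      ComfyUI\
        models\
          ipadapter\
            ip-adapter-plus_sdxl_vit-h.safetensors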

    • @rocren6246 · 11 days ago

      @controlaltai Thanks for replying! I figured it out.

  • @AndresFernandez-m9o · 13 days ago

    I am a member now and downloaded the workflow. I get "Missing Node Types" for ImageResize and ShowText|pysssss.

    • @controlaltai · 13 days ago

      @AndresFernandez-m9o Hi, you have to install the missing custom nodes. Check the requirements chapter of the tutorial to see which nodes to install.

    • @AndresFernandez-m9o · 13 days ago

      @controlaltai It is working now; I was using an old JSON. But now I can't find the IPAdapter model even though I have already downloaded it. Where should I put it? It does not say in the video.

    • @controlaltai · 13 days ago

      Goes here: ComfyUI\models\ipadapter

    • @AndresFernandez-m9o · 13 days ago

      @controlaltai Thank you! It is working now! Is it normal to have just one image in the four image outputs (the other three are red, like errors)?

    • @controlaltai · 13 days ago

      You have to enable them; they are the upscale ones. If you don't need upscaling, then just the one is enough. Normally you generate one image, and if it's fine, you proceed further by enabling the others and queuing the prompt again.

  • @Howcouldieverstop · 15 days ago

    Get my money! Hey, I'm a photographer, and I've been watching your videos. I want to naturally composite backgrounds into photos I've actually taken, but when generating images in SDXL and Flux environments using the T2I method, the quality of the backgrounds isn't very good. I'm considering using MidJourney to create backgrounds and naturally matching the product with them. Do you have any related videos, or is this something you plan to cover?

    • @controlaltai · 15 days ago

      Hi, you will require a custom workflow. Midjourney won't solve the problem; blending would require too much post-processing. IC-Light is only supported by SD 1.5; you cannot use Flux or SDXL with IC-Light. We can achieve a natural blend with Flux, but the lighting will be pseudo. Again, it would be a custom workflow. There won't be a YouTube tutorial for this, as it's a very specific use case.

  • @pedrosoldado90 · 19 days ago

    Could I run it on Google Colab?

  • @yuueiji7779 · 20 days ago

    Sorry, but I can't find the basic Apply ControlNet node, only the advanced one.

    • @controlaltai · 20 days ago

      It's built into Comfy; it's a native node.

  • @pateltushar53471 · 20 days ago

    I got this error: "FluxAttentionControl: Xformers is required for this node when enabled. Please install xformers." Can you please help me?

    • @controlaltai · 20 days ago

      You need to install xformers; Regional Control uses xformers for the attention override. I have explained this in the video. If you go to the ControlAltAI nodes folder under custom_nodes, there are xformers instructions there as well.

  • @lazlo342 · 21 days ago

    Thank you for producing this video. Can you please explain how it works for supporters of your channel? If I subscribe, do I get access to your workflows? Are your workflows kept current? I've subscribed to a few other creators on Patreon, and what I find is that many of the workflows no longer work because they are outdated.

    • @controlaltai · 21 days ago

      Hi, welcome. AI moves very, very fast; Comfy updates break almost everything every now and then. All my workflows are still valid except for 2-3. For example, when IP Adapter changed, I updated everything. IC-Light changed almost twice, and it was updated. Mostly, when a user faces an issue that is breaking for the entire community, I will update the workflows and post them in the members channel. Our main goal was always to maintain the workflows, because what's the point of the video otherwise? Due to time constraints, updating the workflows may take a month or two sometimes, but someone has to mention that a workflow is broken. Currently only two workflows are broken: the watermark workflow and the Face Detailer. These are pending on my side. The rest of the time, the workflows just require a node change, since Comfy updates and that specific node is no longer supported by the node dev. So if you tell me something is broken, I will fix it. There should be no issue with that; it may take some time, but it will be done.

    • @lazlo342 · 21 days ago

      @controlaltai Thanks for your prompt reply. I'll try to figure out how to subscribe now. Thanks for all your work.

    • @lazlo342 · 21 days ago

      I have joined. I see your workflows now. Thank you!

    • @controlaltai · 21 days ago

      Welcome. If you need any help, just write on the post or email.

  • @letitbedogs · 21 days ago

    This is fake as F....

  • @stu75854 · 25 days ago

    Next time, turn the music down or off. It made me stop watching.

    • @controlaltai · 25 days ago

      @stu75854 Yeah, lesson learned; I've already done that in all the latest videos.

    • @stu75854 · 25 days ago

      @controlaltai Sorry, it's just that it makes it harder to hear you.

  • @TimesNewRomanAI · 26 days ago

    Has anyone received this error: "ICLightAppply: Cannot copy out of meta tensor; no data!"

    • @controlaltai · 26 days ago

      Not sure what the error is; I've never heard of it. I suggest you first try this locally, then read the documentation on how to set up RunPod.

    • @TimesNewRomanAI · 26 days ago

      @controlaltai The funny thing is I'm running on a pod because I don't have a PC to run it locally.

    • @controlaltai · 26 days ago

      Unfortunately, I have never worked with RunPod, so I can't advise. Please look at some other tutorials on how to load checkpoints, workflows, etc. on RunPod. Once loaded, if there are workflow issues, I can help you with that. Most of the clients (companies) I work with have me do the workflows locally.

  • @fernandrez · 28 days ago

    Where is the JSON?

  • @TimesNewRomanAI · 28 days ago

    Hello and Happy New Year to all. I have loaded juggernaut_aftermath.safetensors from Hugging Face into the Checkpoints folder, but I can't get it to appear as an option under... I also get this error, 'down_blocks.3.resnets.0.norm1', which I understand has to do with the wrong model. Can you help me? By the way, I'm running ComfyUI on RunPod; the checkpoints folder doesn't show its content, but how do I delete the models I don't need?

    • @controlaltai · 26 days ago

      Happy New Year to you. Unfortunately, I don't know how to do this on RunPod. Check their documentation for that.

  • @rivalraeval7322 · 29 days ago

    Great video! Could you explain the values you put in the prompt? I'm still confused about how the values can change the image.

    • @controlaltai · 29 days ago

      Thank you. This is a very old video and things have changed, but those values are called weights, meaning one keyword carries more weight than another. Midjourney 6 doesn't require this; however, it has image weights when using images in prompts. For Midjourney 6, just use natural language, because the AI now understands detailed natural language better.
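      For illustration, Midjourney's multi-prompt weighting uses a double-colon syntax (a made-up prompt; applies to Midjourney versions that support multi-prompts):
      /imagine prompt: misty forest::2 lone castle::1
      Here "misty forest" carries twice the weight of "lone castle".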

  • @AndroKarpo · 1 month ago

    Is the goal of this video to confuse the viewer as much as possible so that he doesn't understand anything? If so, you did a great job. Otherwise, why mix everything into one big heap? CosXL Edit is a model whose purpose is to change ready-made images according to a hint, so why was it necessary to combine it into a single common workflow with CosXL and Upscale so that the viewer would finally go crazy? What is your goal: to show how smart you are and how well you understand complex workflows, or to teach something to someone who doesn't understand it? If the latter, then there was no need to mix everything into one heap.

    • @controlaltai · 29 days ago

      Hi, thanks for taking the time to watch and share your thoughts. I'm sorry the video felt confusing. My goal was to show how CosXL Edit, CosXL, and Upscale can fit into a single workflow, but I understand it might feel overwhelming if you're new to these tools. I really appreciate your honesty; it helps me see where I can improve. I created this single, comprehensive tutorial because the workflow is unique and is my own creation. CosXL and CosXL Edit go hand in hand, so I start with basic CosXL concepts before moving on to CosXL Edit; this way, you can understand how each parameter affects the output instead of guessing. The upscale method I use is specific to CosXL and CosXL Edit, which is why splitting it into a separate video would feel disconnected. However, you can skip the CosXL/Upscale section if you only want to focus on CosXL Edit; the video has chapters, so you can jump right to whatever interests you. This way, you have the entire advanced process in one place but can still pick and choose what you watch. If there's a specific part you'd like me to clarify, please let me know. I want to make sure this content is as clear and useful as possible. Thanks again for your feedback!

  • @augustronic · 1 month ago

    I'm using "Preserve Details Mask" on a face, filling around 5% of the image. Even at "0.01 noise_scale" and "1 blend_opacity" blotchy artifacts appear. Eyes aren't clear/sharp at all. As other details outside the details mask get too much modified/distorted, too, I rather suspect the upscalers, than "Noise Plus Blend". For example, smaller structures like text in an open book becomes really distorted. Is it like it is or do I have some controls?

    • @controlaltai · 29 days ago

      There is clearly something else wrong here. The workflow is designed to maintain consistency in the upscale: if your 1024x1024 image is clear, the upscale should be clear. The upscale is specifically designed to maintain text and not add any details beyond what is already in the 1024x1024 raw generation. If you are using any upscale model other than the ones showcased, you will get weird results, as those models were specifically chosen and tested to maintain text consistency. If some other settings have changed in the upscale process, or something is not getting triggered, a wrong prompt switch, etc., that can cause this. I cannot help you out here on YouTube; the only way to help is if you send me the workflow along with the generated image that comes out wrong. I need to see the workflow and your image or prompt and study the entire flow to understand what is going wrong and which settings to change to get the desired output. You can email me at mail @ controlaltai . com (without spaces) and I will be happy to help you out over email.

    • @augustronic · 29 days ago

      @controlaltai My 1024x1024 images aren't clear either. I suppose it's because I'm using photos as the I2I source, with faces covering a small area, and these aren't high quality. Even if I prepare them with SUPIR beforehand, the results don't change. I can imagine the sampler produces worse quality if the source isn't an SD image.

    • @controlaltai · 29 days ago

      It is imperative for the 1024 generation to be clear and proper. If you are using image-to-image, there are plenty of settings; image-to-image in this workflow is different. It is not supposed to give you the exact same image. The image-to-image function is there to generate something Flux is trained on, using your image as a reference. You can use image-to-image to change style, etc., but in any case it won't be identical; it will change, and the objective is to generate something new from your reference image, no matter what resolution. Image-to-image will generate a proper Flux 1024 output depending on the prompt, and your prompt has to be detailed. Once you get a proper 1024x1024, the upscale will maintain it. If you want to enhance the 1024, that's a different workflow. If I can see what image-to-image result you are trying to achieve, I can help with the settings and how to go about it, because it's a bit tricky (too many minor settings to play around with).

  • @augustronic · 1 month ago

    All my I2I generations receive a gloomy look during the "Flux Upscale 5.04x" process. (I'm using the "Preserve Details Noise Plus Blend" step.) Do you have an idea why?

    • @controlaltai · 1 month ago

      Hi, check the post-processing section at the end. If it's dark and poorly lit, bypass post-processing; that should remove any added leveling or contrast. You can then use those post-processing nodes to increase brightness, do color correction, etc. Preserve Details Noise Plus Blend has zero effect on the gloominess; it's most likely the auto-correct lighting post-processing node.

    • @augustronic · 1 month ago

      @controlaltai It's the "Color Match" spoiling the highlights. Thanks!

    • @augustronic · 29 days ago

      @controlaltai The Tile ControlNet is making the final output gloomy, too. I don't use it anymore.

    • @controlaltai · 29 days ago

      The ControlNet is extremely difficult and tricky to use. May I know what you are using the Tile ControlNet for? What we used to use Tile for in SDXL doesn't actually apply or work with Flux; it's a different architecture. Tile can be used for, say, enhancing the image or generating 3D out of 2D, and most of the time Tile has to be daisy-chained with something else. Also, ControlNet strengths in Flux do not behave like SDXL: the step percentages play a more critical role, that is, Flux is more sensitive to the start/end step range than to ControlNet strength, which is very different from how ControlNet works in SDXL. For example, to get a pose, I prefer to use Depth rather than DW Pose, or if there are hands, then I use both. In SDXL we would just use DW Pose.
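      To make that concrete, a hypothetical starting point with ComfyUI's native Apply ControlNet (Advanced) node (illustrative numbers, not values from the video):
      strength: 0.4        (Flux usually tolerates less than typical SDXL values)
      start_percent: 0.0
      end_percent: 0.4     (ending the ControlNet early often matters more than lowering strength)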

  • @nomoreterban · 1 month ago

    Whoa, thanks for the video. I've been searching for regional LoRA; it looks complicated for my level.

    • @controlaltai · 1 month ago

      There is no multi-region LoRA support with this or any other current solution at the moment. You can apply a LoRA, but it will apply to the whole region.

  • @TimesNewRomanAI · 1 month ago

    Any advice on installing Ollama on RunPod with ComfyUI? By the way, is there a JSON file to download the workflow?

    • @controlaltai · 1 month ago

      Hi, expand the post; the workflow link is given... Here it is again. Ollama might not work on RunPod; I have no experience with that. Instead of Ollama, you can install a Gemini node or any Claude AI node which uses an API; an API-based LLM would easily work on RunPod. For the instructions, use the same ones I have given in the workflow. All you have to do is replace Ollama with your own API custom node (whatever custom node you choose) and ensure the connections are all correct. Workflows: drive.google.com/file/d/1WH5Exmzij-shWnQ7MQOagE_jTTL8pEtx/view?usp=sharing Workflow Images: drive.google.com/file/d/1z_9my1bzxNEEArGWsFzHWlINl60BCtdY/view?usp=sharing

    • @TimesNewRomanAI · 29 days ago

      @controlaltai Thank you very much. I'll try.

  • @Gaitchs · 1 month ago

    I came for a tutorial on the clip extension, not how to install ComfyUI; there are other tutorials for that.

  • @Ozstudiosio · 1 month ago

    Thanks again. My question now: how can I use multiple regions to add multiple LoRA-trained characters to an image? In which part can I adjust it to be able to do this?

    • @controlaltai · 1 month ago

      Welcome. Not possible in the current pipeline. Comfy recently added the Flux attention mask natively in the Flux model, and I am working on a new pipeline called "Flux Attention Mask". That should allow adding multiple LoRA-trained characters to an image. The current pipeline will only support LoRAs which apply to the entire region, not multi-region. It will take some time to make that pipeline, as you can imagine it's a bit complicated. However, a lot of progress has been made on it. Once everything is ready and stable, first the code will be released, and shortly after, the video tutorial.

    • @Ozstudiosio · 1 month ago

      @controlaltai Thanks, hopefully it can be done soon :)

    • @controlaltai · 1 month ago

      @Ozstudiosio Yup, I am at the sampling stage. But I am not sure if Flux itself supports regional LoRA; whatever research has been done so far suggests it doesn't. However, there might be image-to-image support or something after using a single LoRA; I will know more in a few days. If the model weights support it, it's straightforward.

    • @Ozstudiosio · 1 month ago

      @controlaltai Thanks, but is there any other method that would allow me to compose multiple trained characters in a scene?

    • @controlaltai · 1 month ago

      Not in Flux. Inpainting can be a solution, but I haven't explored it yet. The last option is face swap, if it's only a face LoRA. Regional conditioning is the best bet, if it's supported. InstantX does it with PuLID, with regional faces maintained by PuLID, but that PuLID code is not migrated to Comfy. Someone else is working on that and has ported the InstantX research to ComfyUI (region only, without multi-LoRA, as of now). The InstantX team said multi-LoRA is not possible in their system as well.

  • @tdothot1 · 1 month ago

    This dude is flying through the screens he is trying to show us.

    • @controlaltai · 1 month ago

      Slow down the playback. Some people find it fast and some complain it's too slow; I can't adjust to everyone's speed, but you can, with slow and fast playback on YouTube. Sorry if the tutorial is too fast. I have slowed down to a consistent speed in the latest videos (learning from feedback).

  • @wencho3616 · 1 month ago

    Do you know how to fix this: "Sizes of tensors must match except in dimension 1. Expected size 104 but got size 112 for tensor number 1 in the list."? I keep getting this error in KSampler when trying to edit using another picture.

    • @controlaltai · 1 month ago

      There is some issue with the models. Use an SD 1.5 checkpoint, not SDXL or Flux (they are not compatible). If you still get that error, then check that the ic_light models are correct and the IP Adapter models are correct.

    • @wencho3616 · 27 days ago

      I already checked, and I'm using the SD 1.5 version of everything, but I still have the same problem in the KSampler.

    • @controlaltai · 27 days ago

      You can email the workflow to mail @ controlaltai . com (without spaces). I cannot help further without looking at the workflow. Make sure to include your input product photo and the reference image you are using.

  • @Ozstudiosio · 1 month ago

    Hi, thanks for your informative video, but I got this error: SamplerCustomAdvanced "xformers_attention() got an unexpected keyword argument 'mask'". Everything installed well, and I also updated the ControlAltAI nodes. How can I solve it?

    • @controlaltai · 1 month ago

      You have to update Comfy, then update the custom node. Close everything and restart. Check "fetch updates" again to ensure you are on the latest version of the node.

    • @Ozstudiosio · 1 month ago

      @controlaltai Thanks, I'll try again.

  • @kagawakisho4382 · 1 month ago

    Somehow ComfyUI does not find the FluxRegionBBOX and RegionAttention nodes, even though everything is up to date. Any ideas?

    • @controlaltai · 1 month ago

      Those are from a different GitHub repository. Install the ControlAltAI nodes from Comfy Manager: github.com/gseth/ControlAltAI-Nodes