Thanks so much! With your tutorial I was finally able to understand style transfer using IP Adapter.
As a beginner, I also want to say that your videos are really friendly; thank you very much. Because of my professional needs and Flux's steep learning curve, I had been using MimicPC to run Flux: it can load workflows directly, I only need to download the Flux model, and it handles the details wonderfully. But after watching your video, running Flux on MimicPC finally feels different. I feel like I'm starting to get the hang of it!
awesome tutorial thanks ...
thank you
5:55 But why do you have a positive/negative prompt in the middle of the workflow and then another prompt box on the far left-hand side of the workflow?
Thank you for asking.
The prompt on the left is for creating a realistic, black and white style photo, while the prompt on the right side of the workflow is for instructing how to create an illustration-style image. I could reuse the prompt on the left side and connect it directly, but this would cause conflicting results between the instruction prompt and the IPAdapter, which follows the illustration style. You can use the same prompt, but in such a case, you should remove the specific description of the photo style and only describe the object itself. This approach will help avoid conflicts and ensure consistency in the final output.
You can also try the CPDS ControlNet and add a LoRA for Flux. You can use Florence-2 and Ollama LLMs to remove the style from the first text box. BTW, I'd really like to see a comparison of the GGUF (let's say Q8) and NF4 Flux models using Union!
It just doesn't work... I can get Krita and the checkpoint models. It would be more helpful if you put links to all the necessary assets.
I'm stuck on the Anyline Preprocessor. I just can't import it; it says (IMPORT FAILED). Tried the "Try Fix" button, tried uninstalling and reinstalling, but no go. Any tips?
EDIT: Okay, fixed it with the AIO Aux Preprocessor.
But now I get an error with ClipVision.
I downloaded the IP-Adapter models into an IPAdapter folder, but still no go.
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "C:\COMFY-UI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFY-UI\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFY-UI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFY-UI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus-main\IPAdapterPlus.py", line 559, in load_models
raise Exception("ClipVision model not found.")
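For the ClipVision error above: the IPAdapterUnifiedLoader looks for the CLIP Vision encoders in ComfyUI/models/clip_vision, not in the IPAdapter models folder. Here is a minimal shell sketch to check whether they are in place (the file names are the renamed ones the IPAdapter Plus README documents; `COMFY_ROOT` is an assumption you should point at your own install):

```shell
# Assumption: COMFY_ROOT points at your ComfyUI install directory.
COMFY_ROOT="${COMFY_ROOT:-$HOME/ComfyUI}"
CLIP_DIR="$COMFY_ROOT/models/clip_vision"
mkdir -p "$CLIP_DIR"

# The unified loader matches CLIP Vision weights by these (renamed) file names.
for f in CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors \
         CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors; do
  if [ -f "$CLIP_DIR/$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```

If both show as missing, download them from the links in the ComfyUI_IPAdapter_plus README into that folder and restart ComfyUI.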
When I open the workflow it says "Set Node, Get Node, and AnyLine Preprocessor" are missing, so I go to "Install Missing" and it prompts me to install "Anyline". The program restarts, I hit refresh in the browser, and it's still missing.
Alternatively, you can use the "AIO Aux Preprocessor" node from the "ControlNet Auxiliary Preprocessors" extension (it has similar functionality).
imgur.com/qBdKLcH
github.com/Fannovel16/comfyui_controlnet_aux
Which LLM did you use to convert the image to a prompt? Is it open source? Please guide.
I use monica.im (a multi-AI integration service, similar to Poe). For open-source LLMs, you can refer to the following nodes; they all have similar functions:
github.com/gokayfem/ComfyUI_VLM_nodes
github.com/kijai/ComfyUI-Florence2
github.com/pythongossss/ComfyUI-WD14-Tagger
github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger
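As a rough illustration of what the Florence-2 nodes above do, here is a hedged Python sketch of image-to-prompt captioning outside ComfyUI. It assumes the `transformers` library's Florence-2 support via `trust_remote_code`; the checkpoint name `microsoft/Florence-2-base` and the helper functions are illustrative, not the nodes' exact internals.

```python
def build_caption_task(detail: str = "more") -> str:
    """Map a detail level to Florence-2's caption task tokens."""
    tasks = {
        "brief": "<CAPTION>",
        "detailed": "<DETAILED_CAPTION>",
        "more": "<MORE_DETAILED_CAPTION>",
    }
    return tasks[detail]

def caption_image(image_path: str, detail: str = "more") -> str:
    """Generate a caption/prompt for an image with Florence-2 (sketch)."""
    # Heavy imports kept inside the function so the sketch loads without them.
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    model_id = "microsoft/Florence-2-base"  # assumption: any Florence-2 checkpoint
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    task = build_caption_task(detail)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=256,
    )
    text = processor.batch_decode(ids, skip_special_tokens=False)[0]
    parsed = processor.post_process_generation(text, task=task, image_size=image.size)
    return parsed[task]
```

The resulting caption can then be pasted (or wired) into the positive prompt box, minus any style wording you want the IPAdapter to control instead.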
Excellent tutorial!!
Krita diffusion tutorial
Yes!
Thank you for the tutorial. The AnylinePreprocessor by TheMisto does not load even after updating all of ComfyUI and a manual installation to pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Anyline
After installation, try restarting Pinokio to see if it resolves the issue. Since I don't use Pinokio, I'm not sure about the exact cause. Alternatively, you can use the "AIO Aux Preprocessor" node from the "ControlNet Auxiliary Preprocessors" extension (it has similar functionality).
imgur.com/qBdKLcH
@AINxtGen8 Thank you for the quick reply! I'll verify this soon!