Reposer = Consistent Stable Diffusion Generated Characters in ANY pose from 1 image!
- Published 29 Sep 2024
- Stable Diffusion Reposer allows you to create a character in any pose - from a SINGLE face image using ComfyUI and a Stable Diffusion 1.5 model! Highly consistent generation is possible thanks to IPAdapter, which allows for easy, prompt-free image generation.
No fine-tuning or LoRA training required = massive time savings.
No need for Roop, ReActor or any other face swap software which can’t be used commercially. On top of that, any face can be used - not just “realistic” ones. Want a comic art style face? No problem! Can’t install roop? Not an issue 😉
All you need is 1 image and this FREE, ready-to-use ComfyUI workflow to keep both the face and the image style in your generations! Prompts can be used too for those extra little details, should you wish.
Enjoy :)
Get the very latest workflow versions via Patreon!
/ nerdyrodent
Example with clothing too:
Stable Diffusion - Face + Pose + Clothing - NO training required!
• Stable Diffusion - Fac...
Workflow + extra docs:
github.com/ner...
github.com/ner...
How to install ComfyUI:
• Install Stable Diffusi...
Need even more help? No worries - here is a whole playlist!
• ComfyUI Tutorials and ...
== More Stable Diffusion Stuff! ==
* ControlNet Extension - github.com/Mik...
* ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
do you have any suggestions on fixing the IP adapter not being found?
Why do people like ComfyUI? It's so messy and hard to follow. "Comfy" isn't the right word for the UI - more like MessyUI.
I am getting an error that is actually driving me nuts:
Error occurred when executing IPAdapter:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
Thank you, this is a game changer. Roop struggles with non-realistic faces, so this workflow is an awesome addition!
Glad you like it!
@@NerdyRodent is there a setting for output numbers or batch? I see image or batch node but that's it
Finally we can create comics! Wow!❤
Seeing all these nodes and things, of which I know nothing, I wonder how insight ai works. I would imagine it's a similar process but with different parameters and models, but just visualizing it the way you showed has sparked my curiosity about how these AI things work. Great video, thanks
Thanks for this! Works great in 1.5, but I'm having the damnedest time figuring out what is dependent on 1.5. When I load it up with SDXL, the first KSampler throws an "Error occurred when executing KSampler: The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1". Anyone know what's the cause?
Thx 4 Your hard Work. This is amazing =D
Glad you enjoy it!
Mate, this looks absolutely amazing!!! Can't wait to try it... One question...
Is it able to copy clothing as well? If I have a character I want to remain consistent, can I use that full-body character and then have it come out in a new pose, or is it just for faces?
This one is consistent faces, though it will use clothing influences from the face image also. For clothing swaps, see Stable Diffusion - Face + Pose + Clothing - NO training required!
ua-cam.com/video/ZcCfwTkYSz8/v-deo.html
Truly amazing! Thanks for the fast reply... However, I'm stuck with an error: NNLatentUpscale missing, and it doesn't seem to be working... Is there a fix to this that I'm missing? @@NerdyRodent
Hi there, thanks a lot for the video. I'm completely new to this and I'm still finding out how everything works. I'm getting an error trying to import the Allor Plugin; I'm using a Mac and wanted to know if it has something to do with it. Hope you can help me out.
wanted to see more examples with the side angle thingy
I get an error trying to use the workflow, something about a size mismatch in IPAdapter. Any ideas what's up with that?
Make sure to follow the instructional video and also update everything
Same thing happened to me. I'm still working on a solution. I think it has something to do with the SD 1.5 face model. This is my error: Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Did you ever get this solved?
I figured it out. It's the CLIP vision model. If you used the Manager to download it, it places the model.safetensors in the base of the clip_vision folder.
@@knoughlbawdy This is the node I can't figure out how to fix - what model do I need to put where?
@@darkestmagi The size mismatch is caused by the CLIP vision model. You can see it in the video at 6:43.
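For anyone else debugging this size mismatch: the numbers in the error tell you which CLIP vision model you actually have loaded. A minimal sketch of how to check, assuming the safetensors library is installed - the path below is an example, point it at whatever file is in your clip_vision folder:

```python
# Minimal sketch: print the tensor shapes inside a CLIP vision
# .safetensors file so you can match them against the size-mismatch
# error. The path below is an example - use your own model file.
from safetensors import safe_open

path = "ComfyUI/models/clip_vision/model.safetensors"  # example path
with safe_open(path, framework="pt") as f:
    for name in f.keys():
        if "proj" in name or "embeddings" in name:
            print(name, f.get_tensor(name).shape)
```

As a rough guide, a hidden size of 1024 suggests a ViT-L model, 1280 suggests ViT-H, and 1664 suggests ViT-bigG. If the printed shapes don't match what the IPAdapter model expects, swap in the SD 1.5 CLIP vision model shown in the video.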
Hey, awesome video, tysm! Is there any shortcut to find all the checkpoints and safetensors to test this, or is it highly dependent on use case, so I have to manually download and import them?
Any chance on doing a video on Deep Floyd? Not much out there.
Sure, here you go - Deep Floyd - AI Generated Text In Images!?
ua-cam.com/video/139f-gbj9ko/v-deo.html
What do I need to start doing this? I’d want to start a comic strip using this software but have no idea where to start. Do I download Stable Diffusion to my laptop? If so, How do I even do that? Is Reposer like a preset? So many questions 😔
I'm still confused about which models are required just to run Reposer. Please help out, it's kind of urgent 😅😅
It's wonderful, but I'm too dumb to use it xD I just keep getting errors, and Manager won't update/download any of the OpenPose / CLIP vision models.
You can drop me a dm on patreon if you need more help!
Can you please tell me how to get a consistent outfit or clothes? How can I maintain the same outfit? It could be really useful, thank you.
Stay tuned 😉
Do you have the workflow.json? I tried copying the visual by hand for learning, but I get loop errors, as there might be a bad node or something. I know that if I load the .json that I can use the manager to find the missing nodes.
You can drop me a dm via patreon for help!
@@NerdyRodent I have! Thanks!
If I add a skinny person as the input image and set a fat person for the ControlNet pose, will the output be a fat person, or will it just detect and adjust the pose?
What workflow is this??
Is there a way to generate a pic from SD and then programmatically generate another pose for the character using the seed or something? Not using the UI.
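Not with the seed alone, but ComfyUI itself can be driven without the UI: it runs an HTTP server, and you can POST a workflow saved in API format ("Save (API Format)" in the UI) to its /prompt endpoint, changing inputs between calls. A rough sketch, assuming a local server on the default port 8188; the node IDs "3" and "6" are hypothetical and need to match your own exported file:

```python
# Rough sketch: queue a ComfyUI workflow programmatically. Assumes
# ComfyUI is running locally on the default port and workflow_api.json
# was exported via "Save (API Format)". Node IDs are hypothetical.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

workflow["3"]["inputs"]["seed"] = 12345            # e.g. the KSampler node
workflow["6"]["inputs"]["text"] = "standing pose"  # e.g. a prompt node

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```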
Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'
can someone please help me with this error
Thanks! I got it working now - I uninstalled and reinstalled.
Win!
So is comfyui the interface I need for this?
Yup, this is a workflow for ComfyUI! You can drop me a dm on patreon if you need more help 😀
The Reposer workflow isn't there on the GitHub.
Drop me a dm on patreon and I can help out! www.patreon.com/NerdyRodent
The "Load IPAdapter SD1.5 Face" node shows up in red for me, and it doesn't appear as a missing node. Could you help me? Cheers!
You’ll need to click “install missing”
@@NerdyRodent Hi Nerdy, it doesn't show up as missing. I already installed everything that appeared as missing, but that node still doesn't show up. I think you changed its name, no?
@@NerdyRodent "Install missing" shows nothing is missing, and half a dozen nodes are undefined with no names.
I get warnings in yellow in my console: load_custom_node
12:59:34.507 [Warning] ComfyUI-0 on port 7821 stderr: module_spec.loader.exec_module(module)
It’s a bit weird that the clothes change, but the face and hair stay the same.
Try Stable Diffusion Face + Pose + Outfit - NO training required!
ua-cam.com/video/ZcCfwTkYSz8/v-deo.html
Upset - why no Auto1111?
But the hairstyle stays the same no matter what my prompt is.
It's not good for keeping the same face but changing the hair.
If you want things to change more (or less), you can alter the weights.
@@NerdyRodent I did, and it didn't work, it changes the hair a tiny bit but if you want to change the hairstyle completely it's a no no
@@deadlyrobot5179 I can make hair different so 🤷♂️
Reposer for SDXL needs fixing - a lot of glitches there. Can't properly use it on Colab; the VAE Encoder spikes RAM usage and Colab dies.
Hi! Please help, this error pops up when I press "Queue Prompt":
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
When I loaded the rodent image it gave me an error :(
Check the troubleshooting guide for more info!
I already have an Auto1111 install and haven't been able to correctly input the paths to my existing files so that I don't need to reinvent the wheel. Can you possibly spell it out in a bit more detail than just "add path here"...? The mess I have created attempting to swap files between the 2 UIs is ugly and a space hog. Thanks! I still can't get a basic Reposer to work. My red "Load IPAdapter SD1.5 Face" box is preventing me from creating any new images.
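For anyone in the same boat: ComfyUI ships an extra_model_paths.yaml.example file in its root folder for exactly this. Rename it to extra_model_paths.yaml and fill in your A1111 location. A sketch of the relevant section, based on the bundled example file - the base_path below is an example, use your own install location:

```yaml
# extra_model_paths.yaml - rename from extra_model_paths.yaml.example
# in the ComfyUI folder. base_path is an example; use your own install.
a111:
    base_path: C:/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
    embeddings: embeddings
```

Restart ComfyUI afterwards and the A1111 models should appear in the loaders without copying any files.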
Hello, can I implement the things you are doing with RunPod? If you could make a video tutorial about RunPod or something like it, I would really appreciate it.
Pretty sure ComfyUI will work on remote computers too, yes
@@NerdyRodent Hello Nerdy Rodent, do you know what might be the issue?
TypeError: widget[GET_CONFIG] is not a function
This may be due to the following script:
/extensions/core/widgetInputs.js
I have errors popping up like...
When loading the graph, the following node types were not found:
DynamicThresholdingFull
CR Batch Process Switch
Nodes that have failed to load will show as red on the graph.
Nothing is working for me... I give up...
some more errors... Prompt outputs failed validation
ControlNetLoader:
- Value not in list: control_net_name: 'depth-diffusion_pytorch_model.fp16.safetensors' not in (list of length 54)
LoadImage:
- Custom validation failed for node: image - Invalid image file: NRSDXL_00169_.png
0:56 - by changing the background colour, the clothing changed too. So the whole character is not consistent; characters are not just the face and 'more or less the clothing'.
That’s right! So if you want red clothing, include red in the colour palette 😉
Alright, show me the boots (shoes?) and the back now! @@NerdyRodent
@@audiogus2651 lol 😂 *shows shoes* But yes, don’t try to do hands. Enjoy the workflow!
Consistent shoes? Every shot...? Color me skeptical until the expanded triple deluxe hardcore edition of this video comes out @@NerdyRodent
Woah, I started with the first video and got that rodent druid to work, but now I am trying to make those Reposer workflows work and somehow I end up getting errors like this:
"Error occurred when executing IPAdapterApply:
'NoneType' object has no attribute 'patcher'"
I downloaded at least 5 different IP Adapter things - some by hand, some via the ComfyUI Manager, some are .bin, some are .safetensors... I am so confused by now, and I feel like I need an in-between video that explains all the different kinds of models, checkpoints, IPAdapters, and what these errors even mean. Where can I get some help?
Same deal. Did you ever figure it out? On a deadline and getting desperate.
If you are just starting ComfyUI, WATCH THIS VIDEO!
This answered so many questions. I've been dragging my feet for weeks and this solved so many problems.
Thanks so much!
😮This is the biggest incentive to install that spaghetti interface.
😂
Omnomnom spaghetti!
Not! That interface looks even worse than Automatic1111.
Ehem, it only took me 6.78 hours to install this. All the custom nodes were about as fun to download and install as a root canal treatment. When I first loaded it, my screen was as red as the bridge of the NCC-1701 under red alert. Still not working optimally.
@artisans8521 Yeah, got it to function, and it's just not great at recapturing the essence of the face. I feel I'm probably still doing something wrong, ugh 😑
Bro, I only learnt about Stable Diffusion a couple days ago and came across your tutorial. It's just other-worldly stuff what you're doing. I'll forever be grateful to you for your efforts. I tried several times in vain after watching this tutorial, but then realized I wasn't using OpenPose model. Once I did that, the output image that came before me almost took my breath away. Outrageously good and thanks from the bottom of heart. I can't thank you enough for this video and the references ❤
Glad you like the things 😊 It’s amazing what you can make with Comfy!
@@NerdyRodent Thanks for the response ❤️ I even tried making a workflow of my own in ComfyUI to get face expression from a reference image and apply to any character. I used MediaPipe FaceMeshProcessor but it isn't really working out😅. Too much to learn I guess before I start making workflows. Do you have a video for the same by any chance so I can look up and get some insight on the facial expression aspect?
How do I load the workflow? Where shall I find the .json file in order to load your workflow? Please tell me how to load your exact workflow into my ComfyUI.
Check the video description for info!
@@NerdyRodent So basically I have to load the image you provided in your GitHub?
Can you please make a tutorial on how we can do this in Automatic1111? 🙏
you are like a magician..thanks for everything
It's my pleasure. Thank you for watching!
Can you please make a tutorial (SDXL and ForgeUI for those of us with rusty old machines) to show how to combine interaction between multiple characters via img2img and ControlNet?
For example: two characters hugging, shaking hands, kissing, or putting a head on the other's shoulder - whatever interaction we can CONTROL via a "duo" OpenPose. I have no idea how to do such a thing, but I'm talking about img2img specifically.
I hope that you will consider that idea, thanks ahead 🙏
I spent weeks in search of such techniques. I'm fortunate to get it here. Thank you very much.
Great video!! Please provide the JSON file along with the image to make the import process easier. Sometimes images don't work, so there is nothing better than a JSON file. Many thanks in advance, amigo!!!
ComfyUI is more and more becoming the standard, it seems.
It’s fun to experiment with stuff, for sure!
Dear Mr. Rodent, IP Adapter has been updated and the workflow does not work anymore. Are you planning to update this one? I am still a noob and now need to figure it out 🙂
Yup! Reposer2 was updated a while back already :)
Amazing ! Will try for sure !
Thanks for creating this awesome tutorial, but after installing all the custom nodes step by step, I have some problems. I'd appreciate your help.
When I first open this workflow file, the browser window pops up this information:
1
When loading the graph, the following node types were not found:
CR Batch Process Switch
Nodes that have failed to load will show as red on the graph.
__
2
after I click the "Queue Prompt" button in the browser,
the window pops up a message: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
__
3
and my terminal shows this error:
File "/home/young/Downloads/ComfyUI/ComfyUI/execution.py", line 598, in validate_prompt
class_ = nodes.NODE_CLASS_MAPPINGS[prompt[x]['class_type']]
KeyError: 'class_type'
Holy shit bro, I would read the comic book you had going in the initial shots.
I think she may have to face… Cthulhu!
That's really useful and amazing. Thanks and blessings from the Pope :D
Glad it was helpful!
Can we get an updated version of this that uses the new IPAdapter Advanced node, since the IPAdapterApply node is deprecated? I can't figure out how to get the Advanced node to work in this workflow. I'd also appreciate explicit links to the models that must be used together for IPA and CLIP vision. The troubleshooting page for IPAdapter Advanced is not clear enough to be helpful.
I’ve swapped the node from the old new one to the new, new one 😉 Direct model links are in the “description” column so all ready to go!
@@NerdyRodent I appreciate the swift reply! However, I think I forgot to mention that I'm using SDXL. The SDXL reposer image in your github repo still produces a workflow with the old node. It shows up bright red and labeled "undefined" - I have the latest versions of all custom nodes. There are also no links describing any models for the SDXL reposer. Are you referring exclusively to the SD1.5 version of the reposer workflow?
Yup, I’m referring to the sd 1.5 version this video covers. Same as I did in Reposer2, any workflow with ip adapter apply simply needs it replaced!
Great stuff! After obtaining all nodes and models, I got the following error: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]). Thanks!
Never seen that. Maybe not an SD 1.5 checkpoint?
Unfortunately, that's not the case. I tried several checkpoints based on SD 1.5
Ah, that extra info helps! Make sure to use the right checkpoint in there.
Same error. Doesn't work
@@alexs1681 For my part, I initially loaded the wrong IPAdapter and got this error. Strictly follow Rodent's instructions and make sure you load the right resources. All the necessary files for this workflow can be found at the bottom of the page in the link in the description.
Hey! Awesome tutorial. I've got a problem though: I get errors on the DWPose Estimator. I have found that, without fail, it always turns out to be a matter of changing the models. What .onnx files should be in the bbox_detector and pose_estimator fields?
I use the comfyui extension for A1111, and it keeps everything in one place, super practical for that.
Wait, what? Can you use comfyUI inside of Automatic1111?? I'm confused O_O
SPEAK PERSON!!!! How... WHERE...
I don't think I can post a link here apparently. Last time I tried my comment got deleted. Search "model surge a1111", or "sd-webui-comfyui"
You've changed my perspective on everything.
I'm glad I am researching so much before diving in.
What generative model are you using?
1.5? SDXL? SDXL-Turbo?
Any thoughts on what you would recommend for someone that is just starting out learning?
Is it any good?
Can anyone help me understand why my image isn't loading the face? I have everything set up exactly as instructed, but when I generate an image it only does a headless body :(
Hi! I'm trying to repeat everything according to the guide, with the same settings, but the program still makes a lot of variations that differ from each other. I can't get the same character. I've been struggling with this problem for a year now, but nothing comes of it.
Do you know how I can find (by name) and uninstall the old version of NNLatentUpscale? This workflow refuses to work unless I do.
A problem has developed. The WAS node face detect system seems to be offline due to an error caused by a fix in an upgrade. I finally got it to install - dependencies (Git) on dependencies (PIT or such and so) on dependencies (Python 11.7) - but then the face crop node still didn't play ball. Merde (pardon my French). I found a solution online involving a piece of software that my index search did not find. Oh dear, oh dear. This AI thing needs a Blender Institute to streamline things; this is the biggest hodgepodge of software I've stuck my nose into in a long time. Not your fault, of course - the soup is cooked up by others.
Would it be able to generate consistent sprite sheets for game animation like running and fighting? Or in addition to IP adapter and openpose it's better to also train a lora? I'm thinking of 1 sprite 1 image, not all sheet at once.
Every image I generate is auto stored somewhere? Just to know so I can delete after I generate a lot of images
Btw, I've had a lot of problems installing the requirements to run the workflow, but it was all my fault and I managed to make it work - except for the ip-adapter part. It looks like that part needs to be updated in your tutorial; everything else is good, thanks a lot!
This seems like it would be incredible for keeping videos consistent
Seems like an idea 😉
Yeah really. If a script could keep feeding each frame from the reference video back in, it should be amazing. It might not be able to keep the backgrounds consistent though.
@@NerdyRodent Can you try creating a video?? ❤❤
Excellent work and tutorial!
Complete character consistency is probably one more iteration away.
Unfortunately I'm getting this unholy mess when I drag the reposer.png into a clean ComfyUI screen: "Loading aborted due to error reloading workflow data". "This may be due to the following script: /extensions/core/widgetInputs.js". I ran ComfyUI Manager and updated everything, but it didn't work.
My only guess would be that you’re using an old version of ComfyUI
@@NerdyRodent Aaaaaaand.... you were absolutely right of course. I never suspected that would be it as I updated it only last week. Thank you!
Lol. I update mine every few hours for reasons… 😆
I have the exact same error, but I'm all updated via the Manager.
I'm on a Mac M2. Any guess?
I had the exact same error on a fresh ComfyUI installed today, and I managed to fix it. Try it at your own risk.
1. I installed the node IPAdapter-ComfyUI using ComfyUI Manager. Restarted ComfyUI.
2. After this I got another error when loading the workflow, but now ComfyUI Manager would enable "Install Missing Custom Nodes" to install the remaining three nodes, which was not possible initially. Restarted ComfyUI. After this the workflow loaded without errors.
3. Install/copy a bunch of required models for IPAdapter and Clip Vision to run the workflow. (Read the notes on IPAdapter-ComfyUI and watch the video).
Beautiful workflow I must say, very cool!
Absolutely Amazing... Just what I've been looking for. Thank you so much!!!
Glad it was helpful!
The Nerdy Rodent is becoming the ComfyUI Workflow master of the Internet!
Lol. Just playing 😉
Thanks for your inspiration and workflow. I have tested it to create an animation character, and I am combining this workflow with AnimateDiff to play around. :)
Sounds great!
Cool stuff, thanks Nerdy.
😉
Amazing workflow! I'm trying to achieve this result with SDXL, but the quality is not even close to SD 1.5. Do you know if it has to do with the specific IP adapters for SDXL?
Nothing yet with face for SDXL that I’m aware of. Do let me know if you find anything!
This is incredible
No matter what I try, it always comes back to missing certain models or nodes. Is there a place where I can look this up, from beginning to end?
You can drop me a dm on www.patreon.com/NerdyRodent if you need more help!
Well you've done it. I hope you're happy with yourself. I'm trying to figure out the spaghetti-hell that Comfy UI looks like to me! WELL PLAYED.
Most sincerely, well done and thank you.
Heh 😆 Yay! As long as something something, success is inevitable!
can u do an automatic 1111 version?
I have downloaded the models, updated ComfyUI to the latest version, and ran "Install Missing Nodes", yet after six hours of trying to fix this I still get "Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'". I've Googled and can't seem to find any hint of a solution. ???
I'm getting the same error :/ Any luck?
Make sure to use the SD1.5 clip vision models indicated!
I am suffering from this error as well. I have 1.5 models down the line, as well as IP adapter models from Hugging Face. The issue appears to be at the CLIP vision model. I attempted to use both the safetensors file and the .bin. I noticed your GitHub links to the Hugging Face space for those models. Here is a .json of the flow as it appears on my machine. Maybe you see something I don't? @@NerdyRodent
drive.google.com/file/d/1_3P4Tf_MX0IbejAWIAGuzCzRiwqCMZpd/view?usp=drive_link
Manual install seems to have done the trick. No more 'processor' issues. I was then bouncing around between installing the right versions of torch and xformers. Updating now and... HOLY BALLS BATMAN, it works! Enjoy the sub.
Installing ComfyUI using the manual method fixed this. The portable version doesn't have full compatibility, it seems. @@NerdyRodent
Is this to be installed locally on our home system, or accessed via a cloud/matrix? I'm sorry, but I'm a total beginner.
You can run ComfyUI anywhere, but best run at home!
Hi! Your videos seem to show that you separated your checkpoints into subfolders within the comfyui structure. I can't figure out how to do this. It would be great to have sd15 and sdxl subfolders for checkpoints, loras and embeddings. If you haven't covered this already can you explain how to do this? If you already have, just a point to the video where you explain it would be great, too!
You can open the graphical file manager for your operating system, and then from the context menu create a new directory
@@NerdyRodent I must have labeled them poorly in the past. This time it worked!
Is there no tutorial for correcting all the node errors? I don't know what I'm doing or what resources I need to get this working, the video just skips over this.
You can drop me a dm at www.patreon.com/NerdyRodent if you’re getting errors on your computer!
@@NerdyRodent How do I do that? There's no visible option to DM you on Patreon.
I am having a very strange problem. Every time I try to make an image, no matter what prompt or image I upload, it gives me a photo of the same woman - naked! No matter what. Is anyone else facing this problem? I am using Realistic Vision as the base model. It is as if it is following the negative prompt: she is naked, NSFW, and wearing a necklace.
I found the problem. The negative node is actually a positive node. Try typing "(((cowboy hat)))" in the negative node and it will generate a picture with a cowboy hat.
I would love to learn how to make this workflow step by step; I just don't wanna copy-paste.
If you prefer to make workflows (rather than have them ready made for you), then check the links in the video description!
Excellent! Always wanted chars to be consistent, and now it's possible. Thank you :)
Me too! Is there a setting for the number of images you want generated in a batch somewhere - or am I just missing something?
Hello Nerdy,
many greetings from Berlin, Germany. Thank you very much for your great work, which helped me a lot with the realisation of my ideas. Do you see a possibility of creating two characters - for example in Reposer? You'd then have one pose, but with two people who are then replaced?
Hi @NerdyRodent, I haven't found the JSON file for the workflow; the only thing provided is a PNG image, which is a little bit confusing for me when recreating your workflow. Can you please provide the working JSON file for this workflow? I really want to try this...
Thank you for creating such amazing video tutorials.
You can load the PNG workflow, and then click save if you want it in JSON format instead!
@@NerdyRodent Thank you, I am new to ComfyUI and still learning.
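As an aside, the workflow JSON is embedded in the PNG's metadata, so it can also be pulled out with a few lines of Python - a minimal sketch, assuming Pillow is installed; the filename is an example:

```python
# Minimal sketch: extract the workflow JSON that ComfyUI embeds in the
# metadata of the PNGs it saves. "reposer.png" is an example filename.
import json
from PIL import Image

img = Image.open("reposer.png")
workflow = img.info.get("workflow")  # PNG text chunk written by ComfyUI
if workflow:
    with open("reposer_workflow.json", "w") as f:
        json.dump(json.loads(workflow), f, indent=2)
    print("saved reposer_workflow.json")
else:
    print("no embedded workflow found - was this PNG saved by ComfyUI?")
```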
The workflow looks robust, although when I've tried implementing it, the KSampler pre-scale keeps giving this error: "Error occurred when executing KSamplerAdvanced: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead". How do I resolve this?
I keep having issues getting the NNLatentUpscale node to load. I understand that this is due to Efficiency Nodes no longer being supported. Has anyone found a workaround for this yet? I'm new to ComfyUI, so I'm having some difficulty troubleshooting it.
You can search for the node in Manager
@@NerdyRodent Even in Manager, no dice. I have uninstalled, reinstalled, and then updated, but still can't make a go of it. Getting desperate.
I get an error when dragging the png to Comfy UI. Are there specific custom nodes that need to be installed? Error is "TypeError: Cannot read properties of undefined (reading '1')"
Same here, no idea why?
Make sure to update Comfy!
Thank you! How do we update when using the standalone NVIDIA build that we just unzip? Is that a Git method, or do I just have to redownload the standalone and overwrite? @@NerdyRodent
There is an update folder in the ComfyUI folder with a .bat file, I believe @@alex_jasper
I also got the same error - updated to the latest version and tried replacing vision models of all kinds.
I think I'm so close to getting this working, but I keep getting this: Error occurred when executing ControlNetApplyAdvanced: 'NoneType' object has no attribute 'copy'. Is there any way to fix this?
How do I install ControlNet for ComfyUI, please?
when I open up the workflow, I get this error: "TypeError: widget[GET_CONFIG] is not a function"
Any ideas how to fix?
Make sure you're running the latest version of ComfyUI
@@NerdyRodent Updated and tried a fresh install with no luck :/
@@killercraig8241 As well as the current version of ComfyUI, also make sure to keep all custom nodes too. The ComfyUI GitHub issues may also help.
Nerdy, a question: do I need to have IPAdapter for this to work? Because I have IPAdapter Plus.
So long as you use the suggested models, any IP adapter will do!
@@NerdyRodent OK, then here is my problem: the IP model loader goes to either null or undefined. When I left-click on it to load a model, it does not allow me to. Any thoughts?
Dumb question, but I can't find the IPAdapter Image (FACE) anywhere. The closest one is "Apply IPAdapter FaceID", but that one doesn't let me upload an image under it.
Thanks for uploading this, it's exactly what I'm looking for! Issue: it keeps saying it's missing CR Batch Process Switch even though I installed the Comfyroll Custom Nodes. Pressing Queue Prompt yields an "unknown error". I'm new to this world - do you have any suggestions as to how I can troubleshoot? (I do see the Comfyroll custom nodes in my nodes folder and have located CR Batch Process Switch in the logic.py file; I'm just not sure why ComfyUI can't seem to find it.) I've also updated Python to 3.11.6.
Getting closer to a usable traditional 2D animation "in-between" program. Once this happens, shizz gonna pop off.
Thank you for your hard work! I'm a big fan of your talent, but I'm having this issue, and many people say that's up to the devs...
Conflicted nodes: Image Overlay [comfy_kepliststuff], Latent Upscale [comfyui latent upscale], and Latent Upscaler [sd-latent-upscaler]
Hi sir, could you have another look at the SDXL version of this?
I'm getting an issue with the SDXL version of this workflow (the SDXL version of Reposer using the SDXL "IPAdapter Plus Face" model):
ERROR: IPAdapterApply: 'NoneType' object has no attribute 'encode_image'
I have a feeling it is an issue with the model used in the IP Adapter Model loader, maybe the Load CLIP Vision too.
After changing the model and CLIP vision to 'ip-adapter-plus-face_sdxl_vit-h' and 'CLIP-ViT-H-14-laion2B-s32B-b79K',
I now get:
Error occurred when executing KSamplerAdvanced:
Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead.
It’s best to use the suggested models - I can’t say if it will work with any others. You can drop me a dm on www.patreon.com/NerdyRodent for more info!
@@NerdyRodent Hi, I managed to fix it. I tried to use the models in the workflow but it didn't work, so I downloaded the models found in the IPAdapter GitHub (named in my comment above) and then fixed the next error I got by using the "--flat 16" thing in the executable (I'm not at my PC, I don't remember the name), as my 1080 Ti processes things in a different way to newer cards, I guess.
Does this wf still hold up with all the recent changes?
Yup! The plus face one is still the best that isn't for research-only use :)
What is this error I'm getting? TypeError: widget[GET_CONFIG] is not a function
First guess would be an old version of ComfyUI?
I'm getting the same error, and the Manager is telling me I've got the latest ComfyUI version.
@@ak_rd444 possibly check the ComfyUI issues. This is just a workflow so zero code…
Is there a reason why I am getting a solid yellow colour?
It was the VAE model.