Stable Diffusion - Face + Pose + Clothing - NO training required!
- Published 29 Sep 2024
- Building on my Reposer workflow, Reposer Plus for Stable Diffusion now has a supporting image, allowing you to incorporate items from that image into your AI generations! Find a nice jacket, find a pose, pick a face and in just seconds your character has both a BODY and an OUTFIT!
No training, no roop, no visual studio bloatware - just rock with the images you’ve got!
Note: This video shows the original IP Adapter, as does the workflow. Newer IP Adapter nodes are different, so see the other Reposer workflow for an example that uses the newer nodes. This one will remain unchanged so you get the best of both worlds.
Available for FREE from the AVeryComfyNerd web page -
github.com/ner...
Reposer Installation Guide -
• Reposer = Consistent S...
How to install ComfyUI:
• Install Stable Diffusi...
== More Stable Diffusion Stuff! ==
* ComfyUI Zero to Hero! -
• ComfyUI Tutorials and ...
* ControlNet Extension - github.com/Mik...
* How do I create an animated SD avatar? - • Create your own animat...
* Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
* Dreambooth Playlist - • Stable Diffusion Dream...
* Textual Inversion Playlist - • Stable Diffusion Textu...
This is freaking amazing, what a time to be alive!
Unbelievable, I've never seen anything like this. You don't even need a supporting image, yet if you use face restoration with ReActor it's just perfect. Thank you very much.
You are amazing.
You are achieving what many of us are trying to do: "Consistency in character creation."
Thank you for sharing your progress with us.
Yep, this is the Holy Grail of AI design this year-- consistency.
You are so welcome!
The New Stable Nerdy+ Diffusion. Genius.
😉
this is amazing man ! the only thing missing is modifying the facial expression.
Ip adapter changed everything for me and you have made Ip adapter even more useful...Thanks so much..
Glad I could help
Thanks a lot , your videos are always so helpful.
Glad to hear that! Thanks for watching 😃
Powerful stuff! I liked the variations of the dragon t-shirt. Thanks!
This is amazing! Any chance you have an updated version for SDXL?
I'm getting an error and can't see the final image. The error is 'NoneType' object has no attribute 'shape'
Amazing!! But can we apply this if we have different characters with different poses in one image?
It's a great workflow, but the Load IPAdapter node with a CLIP Vision output is hard to find. Could you tell me how to install it, please? 🎉🎉🎉
You are a bloody legend
Thank you very much again !
This is SOOO GOOOD!
For me, through the manager or available within ComfyUI already (perhaps through previous node installations), there is a SAMLoader and a SAM Model Loader. Also, I find an InvertMask and Mask Invert. Unfortunately, I find nothing under a search for grounding or dino. I haven't cloned anything to this install as the manager has provided everything needed up to this point other than missing models for custom nodes. If cloning (DL a custom node outside the manager) is the case here, please provide links to those modules... or if I have this all wrong, perhaps a suggestion as to where to look for an answer to this dilemma. It appears that there are a few people with this same issue. TYIA
BTW, your Reposer is brilliant!!! ... as are you my nerdy compatriot.
Also, as a side note: is it possible that you could also provide a snapshot of the layout, alongside the layout-loading image on your Git page? It would be quite helpful. Without it, for example, as I mentioned above, even though there are now nodes that will load the SAM - either because the node referenced in your workflow has been merged, or because an equivalent node was installed by a different custom node pack that performs the same task - the fact that the node referenced in your workflow is no longer available (having been merged or deprecated) means it shows up as blank with a red background in ComfyUI. We may be able to see input and output connectors, but we cannot see what the contents of that node would have been, such as parameter values or referenced files. Since this is the case, everyone with this same dilemma is forced to search your entire video to see if they can find those details. I watched this video and could not see the contents of some items, as they were never focused on... and even if they were, there may not have been enough clarity to decipher them. Providing those captures would alleviate these issues. This way, when we get a blank red missing-node block, we can quickly and easily determine the parameters within that node when switching to compatible nodes. TA
I would also like a complete screenshot of the full node layout. I'm having the exact same problem with Reposer 1.
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
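If you'd rather update outside the manager, here's a rough sketch of what that looks like (paths are illustrative and it assumes ComfyUI and each custom node pack are git clones; ComfyUI Manager's "Update All" does the equivalent):

```python
# update_all.py - rough sketch, assuming ComfyUI and each custom node pack
# were installed as git clones. Adjust the path to your own install.
import subprocess
from pathlib import Path

comfy = Path("ComfyUI")  # illustrative path

# Update ComfyUI itself
subprocess.run(["git", "-C", str(comfy), "pull"], check=True)

# Update every git-managed custom node pack
for node_dir in (comfy / "custom_nodes").iterdir():
    if (node_dir / ".git").exists():
        subprocess.run(["git", "-C", str(node_dir), "pull"], check=False)
```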
I've found that "ComfyUI Impact Pack" also has to be installed. I think there are some missing SAM components that aren't in storyicon's "segment anything" node pack.
Is there any way to change facial expression like angry, yelling, etc.?
Thank you so much. My English is limited, but I really want to say "thank u!!!!!!!!!!!!!!"
So many red nodes listed as undefined, and "install missing" says everything is loaded. Unfortunately I just started playing with ComfyUI today, but I have used Auto1111 for months.
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
Awesome! Thanks for this, qq, how do you increase the batch number?
Ok this is crazy, I think that's all we need for complete designing of graphic novels with consistent characters. I need to figure out how I can use the API to achieve that
:D
There is a comic book generator on hugging face already you know.
right but there is no consistency or characters selection (yet)@@KINGLIFERISM
Can you please list which custom nodes you use. When I try the work flow I always have missing nodes. Thanks!
I use ComfyUI manager, making a single click to install all missing nodes! I’ve also added the full list of used and unused custom nodes.
Thanks for the Reposer Plus workflow. I had trouble getting it working: I'm stuck with the Segment Anything nodes all being red (I checked in the manager that the pack is installed and tried reinstalling, but to no avail). Is there something I'm missing?
Use ComfyUI Manager to install missing custom nodes.
Be sure to keep ComfyUI updated regularly - including all custom nodes.
@@NerdyRodent I actually did a clean installation, and installed all custom nodes indicated by ComfyUI manager, and it's the Segment Anything nodes that are red, while the rest are okay, so I was wondering if it was due to a version conflict (which I need to manually install a particular version).
@@vtchiew5937 you can install it via the normal install in manager if somehow install missing fails. I’ve added a full list of both used and unused custom nodes.
Fantastic. could you give us the .json file? Do you have a membership registration or a paid account?
Gonna have to make a Patreon thing, aren’t I? 😆
@@NerdyRodent Of course. We want a better benefit to be easier to follow.
How do I load the workflow? Where shall I find the .json file in order to load your workflow?
What an amazing tutorial. I got it working with 1.5, and tried with SDXL, but I got an error due to the IPadapter SDXL models. Have you been able to get SDXL working with this workflow?
There is an Sdxl version on my website too, yes
@@NerdyRodent Oops, I glossed right over that. Thank you.
@@NerdyRodent I was hoping to get the full clothing workflow working with SDXL, but I couldn't. Also, one issue I'm facing in general is that after the clothing mask is created, the black parts of the image really want to stay black into the final generation. Don't know how to fix that.
@@u-N16z0rz Lower the strength ;)
@@NerdyRodent Okay, I got an SDXL workflow including clothing running! And lowering the strength seems to help a bit, but I found that increasing Ksampler (base) steps seems to have a much larger effect. Just curious also, what exactly are the Base and IPA Ksamplers? I can't find any documentation on them. I also can't figure out what the "Step End/Start" that links to them does. Does it override the step count option inside the nodes?
Also I didn't load checkpoint and controlnet
This is awesome, but I can't get it to work. In 'Positive_Prompt' there's a red circle, and in ComfyUI I get an error. I tried reinstalling everything and updating everything, but still haven't figured out what the problem is.
ERROR:root:Failed to validate prompt for output 158:
ERROR:root:* CLIPTextEncode 30:
ERROR:root: - Required input is missing: clip
ERROR:root:Output will be ignored
You’ll need to make sure you’ve installed all the required nodes before you can run the workflow. Update ComfyUI itself as well as all custom nodes. Check the troubleshooting guide at the top for a full set of steps!
how can we use this with SDXL models? like which clipvision model to use?
There’s an example SDXL version on the site 😃
@@NerdyRodent which site? and is there any place where I can chat with you?
I've tried it, but the segmentation filter is returning a black image. I tried all three options.
With the black image as the final segmentation result, I'm getting only the pose ControlNet result; it's ignoring the face and the clothes.
Please help
You can just bypass the segmentation if you can't think of any prompts which work
@@NerdyRodent I was using an empty prompt, as in the video, but I got a black segment. I also tried bypassing it, but then the final output is only the ControlNet skeleton image
@@astolfoemprendedor2194 ah, that would be the issue then! Just like in the video, the segmentation area needs to have a prompt in order to know what to segment.
I have this working fantastic, however whenever I change the supporting image after a previous successful run, it crashes with the dreaded out of memory error. Any thoughts?
Error occurred when executing GroundingDinoSAMSegment (segment anything):
Allocation on device 0 would exceed allowed memory. (out of memory)
For lower end hardware, you could try going with much smaller images though nothing is really a replacement for hardware 🫤
Thank you, this was running like a dream on my 3070 8GB video card, no issues. I didn't change anything, swapping images in and out without a problem, including the supporting image. Now every time I change the supporting image it fails, even when going from a larger image to a smaller one. If I restart ComfyUI and run with the same setup, no issues; if I change the supporting image again, I get the same error. It seems like ComfyUI is not releasing my memory, and because I changed the image it reruns the entire process and the GroundingDinoSAMSegment (segment anything) exhausts the memory. I'm a newbie to this - is there any way to manually release/refresh the memory without restarting ComfyUI?@@NerdyRodent
@@EmmaFitzgerald-dp4re should work fine in 8GB, though it could be node updates have changed the memory usage 🫤
I know, I was even using ReActor face swap with a FaceRestore model as the final step to close out the process; it was working so well@@NerdyRodent
Here's another weird thing: if I swap the supporting image, run, and get the out of memory error, it happens consistently unless I restart ComfyUI. However, I just discovered that if I swap the checkpoint model it will run successfully until I swap the supporting image again. My workaround for now is just to change the checkpoint. It's really strange, but 100% reproducible and consistent. I assume changing the checkpoint model releases the memory@@NerdyRodent
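For anyone hitting the same thing, one workaround worth trying - an assumption, since it depends on your ComfyUI version - is asking the server to unload models between runs rather than restarting. Newer builds expose a /free endpoint for this; a hedged sketch:

```python
# free_vram.py - hedged sketch: assumes a newer ComfyUI build that exposes
# the /free endpoint for unloading models and freeing cached memory.
# Older builds will simply return 404, in which case a restart is still needed.
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8188/free",
    data=json.dumps({"unload_models": True, "free_memory": True}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```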
When loading the graph, the following node types were not found:
InvertMask (segment anything)
GroundingDinoSAMSegment (segment anything)
GroundingDinoModelLoader (segment anything)
SAMModelLoader (segment anything)
Nodes that have failed to load will show as red on the graph.
I have already installed (segment anything) and I'm still getting this error. Please help
Hi, did you see the solution on the segment anything GitHub page regarding updating the requirements?
@@jo-e-tv8zv I did, and tried a lot of solutions. Do you know anything about this?
@@ManishKumar-885 There seems to be a solution involving updating the requirements file. You should take a look at the Issues section of the comfyui_segment_anything GitHub repository by storyicon, with the title "i have already installed (segment anything) but following node types were not found while using". For me it did not work when I use ComfyUI on RunPod, but the workflow runs on RunDiffusion without errors after installing segment anything via the manager. If you find another solution, please let me know. Best wishes
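For anyone stuck on the same red Segment Anything nodes, the requirements fix usually boils down to installing that node pack's dependencies into the same Python interpreter ComfyUI runs with. A minimal sketch (paths are illustrative; on the portable Windows build, run it with the bundled python_embeded interpreter):

```python
# install_sam_requirements.py - minimal sketch; run it with the SAME Python
# that launches ComfyUI so the packages land in the right environment.
import subprocess
import sys
from pathlib import Path

# Illustrative path - adjust to wherever your ComfyUI lives
reqs = Path("ComfyUI/custom_nodes/comfyui_segment_anything/requirements.txt")

subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", str(reqs)])
```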
Isn't it better to create a LoRA?
Better for what? Speed? Ease of use? Pose? Face?
@@NerdyRodent avoiding complex workflows.
I'm still using A1111, but I think I will change to ComfyUI.
It's just that some workflows look complex
@@astolfoemprendedor2194 you can turn the wires off in settings so that they don’t look complex anymore 😉
Hey, love the hard work you put in for this! I'm getting a LONG KSampler error - wondering if you could help?
Error occurred when executing KSampler:
'NoneType' object has no attribute 'shape'
ComfyUI is up to date, and I don't have the Fooocus KSampler installed... any thoughts?
Could you create a tutorial specifically for anime characters? For example, using one character's arm, another character's face, and body parts from yet another character, then putting them into difficult poses, like foreshortening?
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
WHAT ABOUT PERSPECTIVE? It'd be nice if it could mimic the camera angle as well?
Can you publish this as a SeaArt workflow, please?
Just *WOW* :-) But... I am using A1111 and tried to set up a ComfyUI installation like this. Hell, I missed :/ Any how-to out there?
This would all be a lot of manual steps in automatic 1111
This is insane. Thank you from the whole community. Subscribed 🌟
🌟 I have a query: how do I automatically change the loader image on each new queue?
For instance, suppose I have a bunch of clothes and a bunch of poses, but I want a fixed face. How can I auto-queue random clothes and poses for a single face?
Please do help us
Not done batches like that, as I usually just queue them all up!
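If you do want to script it rather than queue runs by hand, here's a rough sketch against ComfyUI's HTTP API - not the workflow author's method. It assumes the workflow was exported with "Save (API Format)", a local server on 127.0.0.1:8188, and that nodes "12" and "27" are the clothing and pose LoadImage nodes; those IDs and the filename are hypothetical, so check your own export:

```python
# queue_batches.py - rough sketch, not the workflow author's method.
import itertools
import json
import urllib.request

with open("reposer_plus_api.json") as f:  # hypothetical filename (API-format export)
    workflow = json.load(f)

clothes = ["jacket_01.png", "jacket_02.png"]  # files already in ComfyUI/input
poses = ["pose_a.png", "pose_b.png"]

for outfit, pose in itertools.product(clothes, poses):
    workflow["12"]["inputs"]["image"] = outfit  # hypothetical clothing LoadImage node
    workflow["27"]["inputs"]["image"] = pose    # hypothetical pose LoadImage node
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # each POST queues one generation
```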
Damn, this is absolutely phenomenal for storytelling. I've been searching for a workflow/method to get consistent characters in consistent clothing in a pose, and this is just perfect. The only thing that would make it better would be the ability to add multiple characters to the same image, each character having their own consistent clothing. That would be revolutionary for using AI image generation for storytelling.
I'm trying to figure that out as well, my current workflow for this is messy but gets the job done. In short, you have to do a lot of messy compositing, and do a final pass using img2img.
Thank you for sharing your work! I am getting an "Error occurred when executing ArithmeticBlend: The size of tensor a (3) must match the size of tensor b (6) at non-singleton dimension 0". The exact same error was posted in github discussions recently, which makes me think that a recent update to one of custom nodes broke something, maybe?
How is everybody not stuck at the checkpoint? ComfyUI reports: Prompt outputs failed validation CheckpointLoaderSimple: - Value not in list: ckpt_name: 'sd15/realisticVisionV51_v51VAE.safetensors'. I downloaded the fabled Stable Diffusion 1.5 but it doesn't seem to count. What is the difference?
You're an absolute magician. Thank you for your effort sir.
You are very welcome
That's genius! For ip-adapter plus face, fp32 gave me better results. Is there a way to add two ip-adapters, one for the front and another for the style, like a painting style or comic? It would be awesome. Another question: is it better to use transparent png regarding faces? Thank you!
Yup, you can keep chaining IPAdapter like in this one
Not sure where I should install these?
The ControlNets are in the right spot, I think - just wanted to confirm
@@KINGLIFERISM you can check the installation video for complete and detailed installation instructions
omg, this is so extensive and so well made, thank you for sharing this
You're so welcome!
WOW! I watched the Ai-trepreneur video about LoRA clothing before, and that is absolutely complicated! ... Thanks a lot for this information, I will try it :)
Check my twitter for other examples. Food makes a great jacket too 😉
I cannot fix the "Error occurred when executing DWPreprocessor" even after updating everything. Has anyone like me found a solution?
Click “update all” in the manager
❤❤❤ You are simply great, you deserve a lot of subs
That’ll never happen, but thanks! 😆
Can you please make a tutorial on how to apply a style to an image? Something like: grab a photo portrait and make it a 3d cartoon or anime style?
You could do it using this 😉
Hey man, having issues with Loop (441,424) with broadcast (423) - not submitting the workflow. Looks like the Anything Everywhere loops are causing the issue. Any way to fix this?
Backgrounds are the next logical step, yeah? Thanks for the awesome workflow!
I can't get this to work now after the latest controlnet update. The DWPreprocessor node cannot be found. I've tried uninstalling controlnet and deleting all of the controlnet folders like one reddit post suggested but that didn't help. if anyone knows of a fix please help. thanks
Make sure you are using the standard version of ComfyUI and that everything is up to date
@@NerdyRodent I'm using the portable version, which was working fine with your previous version of this workflow. I updated everything tonight and that's when this broke. Looking through the Fannovel16/comfyui_controlnet_aux GitHub files, I see that they updated some of the DWPose files as of yesterday and an hour ago. Maybe they broke something?
@@NerdyRodent Looks like it's a controlnet import issue. I'll contact the devs. thanks for the suggestions.
0.0 seconds (IMPORT FAILED): C:\Users\Big Bane\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
Just checked and yes - as of an hour ago they broke _all_ their preprocessors
They used an integer instead of a string in __init__.py - just put quotes around the 1 ("1") for MPS fallback until they fix it ;)
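For anyone who'd rather not wait for the upstream fix, a minimal sketch of the kind of edit being described, assuming the offending line sets the MPS fallback environment variable in comfyui_controlnet_aux/__init__.py (os.environ only accepts string values, so an integer there makes the whole pack fail to import):

```python
# comfyui_controlnet_aux/__init__.py - illustrative; the exact line may differ.
import os

# os.environ values must be strings; assigning the integer 1 raises
# "TypeError: str expected, not int" and the pack's import fails.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # was: = 1
```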
Can we apply this if we have different characters with different poses in one image? Please, can anyone tell me?
Dear Mr. Rodent, I looked into your workflows for the new Reposer Plus workflow, but I only see Poser and Poser 2 with updates from last week. The only other one is Reposer Plus with the bypass image option, and that is still from 4 months ago.
I am replacing the IPAdapter Apply nodes with IPAdapter Advanced nodes as I write this, assuming this will do the trick.
Hey Nerdy, thanks for helping us to learn more. I'm using this workflow a lot, but unfortunately, I tried to use it this week and have this error: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). What could it be? Thanks a lot.
Will this work with guys?😐
I'm joking. I forget how many straight guys there are until I watch stable diffusion videos. 😆🤪
So I finally dug my PC out of storage and got the SDXL Reposer workflow, and after some searching I realised that there isn't a video covering that wonderful workflow. What's better, I got it working on my 12GB GPU, which was a surprise, as with XL workflows things can easily get out of hand VRAM-wise. I am shocked by that workflow.
My output looks a bit different from the face image, there is only a bit of similarity in the eyes, and that's it
Is this possible in A1111 or Fooocus? Or should I just bite the bullet and learn ComfyUI?
You don't need to learn ComfyUI to use it - just import the nodes following the tutorial and voilà
I would like to have your instant lora.... take the completed image and place it into this workflow. Is there a way to combine them?
Great job. Could you explore using IPAdapter to stylize images? Like turning a real photo into Rick and Morty style?
Sure thing!
This is beyond crazy. I feel like all these tools have just been created recently and they are already THIS powerful. Just crazy.
Subscribed. Great content.
Should see more fun in the future too!
Thank you: super useful and you are amazing. Are you aware of this free tool for: perfect hands (with depth) and perfect feet (with canny) and pose, all driven by video game animations or custom-pose? "Character bones that look like Openpose for blender _ Ver_94 Depth+Canny+Landmark+MediaPipeFace+finger"
Sounds interesting, I’ll check it out!
first
Win!
It would help if you told us which ComfyUI workflow from your GitHub you use in the video. I am massively confused :D I can't find the correct workflow - they all look different from the one in your video.
Feel free to drop me a dm on patreon if you need more help! 😀
Excellent work!
Thank you! Cheers!
is it ok to use the OpenPose character rig thing (multi-colored bone structure poseable rig over black background) as the input for the pose here? or does it have to be a photo of a person?
The pose can be anything 😁
Dude, this is Fn incredible. Will be diving in after work!!!!
Glad you like it! 🤓
Thanks for sharing this amazing workflow.
Where is it best to add the ReActor node for face swap, as I cannot get the exact same face for realistic images?
You can add it in before or after the open pose control net. Setting face weight to 1 or more will make the generation more like the face image.
Hello, I've been having trouble - do you know where the NNLatentUpscale node is?
The easiest way is to use ComfyUI manager. You can also dm me on patreon if you need more help!
Top tier content! And I didn’t even reach the end of the video! Please keep up the good job!
Nerdy I bypassed the DW openpose processor and used openpose skeleton for pose reference and it worked beautifully and wasn't influenced because the skeleton has no clothes.
Nice!
ahh new error,
Error occurred when executing IPAdapterApply:
'NoneType' object has no attribute 'encode_image'
Remember to check the troubleshooting section 😉 90% of the time all you need to do is update both ComfyUI and your custom nodes
@@NerdyRodent I did update Comfy and the nodes as per your advice to others in this thread. I believe I am experiencing a conflict with the Allor plugin: when I drag your workflow into my interface, I receive the message "When loading the graph, the following node types were not found: ImageBatchGet", but the node is installed with a conflict. I've checked many resources and I'm not sure what to do next. I've deleted the nodes from the workflow, manually added them back without any resulting conflict, and then I end up with the error message previously mentioned. I've tried to reinstall the Allor plugin nodes through the automatic install in Comfy, through Git, and by manually downloading. Not sure what else to do
Teach us how to install this properly - I don't know how to install the files from your links. Make it simpler for those of us in the back
As mentioned in the video, this is the usage video. For installation you can check out the installation video. Being the first time in, however, the better thing to do is go through my ComfyUI playlist and build up to this more advanced workflow. Links are in the video description :)
Hi!
I have a question/challenge. I have been trying to recreate this ability, but purely for a face. So, I have used controlnet for generating the faces in the same pose (being a head and shoulders, facing forward pose), but then, I want the ability to add glasses, hats, earrings, etc., and have them be the SAME every time. I would then want to extend this to hair, face shapes etc., so that I could have 10 different faces, all wearing the same pair of glasses, or have the same face, showcasing 10 different types of glasses. Is this possible? Believe me, I have been trying...
Thanks!
Pure Gold
Yeaaaa! Thanx! Was waiting for this!
Hope you like it!
So I got this working, with NO errors! And it does very well with the pose and face, but it seems to have a hard time with the outfit. It will take slight hints from the outfit image, but it will completely change the color, add bits, change how it fits, and sometimes just change the outfit entirely.
Do I need a different model? Change a weight? How can I tweak this so that it listens to the clothing image a bit more? I have tried using the input to help as well, but I am not getting much success with outfits. Any tips?
Thank you for the video!
Am I the only one getting this error? "Currently DWPose doesn't support CUDA out-of-the-box". It gives me a grey image T.T
Hi. I'm getting a No module named 'midas.dpt_depth' error from the zoe depth map on the SDXL face and pose version of this. Any idea what's causing this or how to fix?
You may have an old version of controlnet support installed
@@NerdyRodent EDIT: Nevermind. I deleted an old controlnet preprocessors folder in the custom nodes folder called comfy_controlnet_preprocessors and it worked, but now I can't get the DWPose to work for some reason. The whole thing renders but it ignores the pose of the character in the pose jpg and the preview window for DWPose Estimator stays black.
Thank you so much for the cool content. Did you perhaps update this workflow to work with the new IPAdapter? The v2 nodes aren't backwards compatible
There are a bunch of updates available via Patreon 😉
@@NerdyRodent thanks! I'll make sure to check it out. thank you again for your great work!
Any idea about this error, something stuck at ImageCompositeMask "Error occurred when executing ImageCompositeMasked: tuple index out of range"
This is awesome! Thanks for the amazing workflow.
I have one question: I am trying to have the image generated in a comic style. Since we are using SD 1.5 and can't just rely on a prompt like "comic book style", what are my options for doing so while staying on SD 1.5? I tried introducing a LoRA as well as adding positive prompts with models like DreamShaper etc., but it seems like the prompt does not have a lot of weight, OR it simply isn't going to work. Any idea?
I can't get my head around this and I don't have a powerful PC that can run SD. I can't figure out how to do it through Google Colab either. Can anyone who has learned how to do this help me with the character I am working on? I need to change its clothes and pose; it's a mascot for my brand. I would be grateful if anyone could help.
🤩 You are amazing, but it does not work for me - I believe I have bad luck. Any tip that can help me use your amazing workflow? 🙇🙇🙇
The best way to repair your local ComfyUI is to follow the troubleshooting section. Just start with the first step, and then work through each one in order. Hope that helps!
I am using Colab, as my computer is not good enough for Stable Diffusion. I will follow your instructions exactly 💡. One thing, teacher: I can't find "NNLatentUpscale" to reinstall. Do you have any tips for this problem? 🙇🙇🙇 @@NerdyRodent
The images I'm generating all have slightly scrunched/shorter legs than they should. It might be because my clothing reference just focuses on the top of the body? I don't want to change my image input--is there a clever alteration that can just make the legs look more normal?
Error occurred when executing IPAdapter:
'ClipVisionModel' object has no attribute 'processor'
can someone please help me with this error
This is nothing but amazing! I'm gonna buy you a few cups of coffee that's for sure! I've been waiting for this since forever.. Will there be an XL version of clothes, face and pose ?
This is too powerful! You always surprise me with your amazing ideas. Thank you so much for making and sharing these tutorials! :D
Thank you! Cheers!
Could you share this workflow please?
Your tutorials are exceptional (thank you).
The reposer workflows on your drive don't match these videos. While there are new updates since your video, I still think yours hold a lot of relevant value.
If you want to update the workflows, I wouldn't complain either!
Being such an old video, the nodes which existed then no longer work. I did, however, use both of your suggestions, so people have the option of using the newly updated version or doing any updating themselves! - github.com/nerdyrodent/AVeryComfyNerd?tab=readme-ov-file#list-of-workflows-available
@@NerdyRodent Thanks for your prompt reply.
I can't seem to get the output to be the same as the original picture. They are close but very obviously different.
Would you be so kind as to tweak my workflow so I can better understand what to tweak in the controlnet to get the desired results?
I've been tweaking and adjusting variables. The only thing I've achieved is a headache.
Sure! You can drop me a line on Patreon and explain what you're looking for :)
Your version has a ControlNet for tile; the one on your website doesn't. Has there been an update? I am struggling to get the likeness you are getting... trying to find out why.
Yup, the ipadapter changed so it’s slightly updated.
@@NerdyRodent Some clothing won't get detected no matter what I do. Have you noticed this, or do all pieces of clothing work for you all the time?
What's the location for the Load IPAdapter Model?
The current version differs slightly from the video as it’s using ipadapter plus now, but feel free to drop me a dm on www.patreon.com/NerdyRodent if you need more help!
The one node I am unable to find is Prompts Everywhere. I have looked everywhere, but to no avail. Please help.
Search for “everywhere” in manager
Big thanks, got it. Thank you Rodent 😆 @@NerdyRodent
Can't find the workflow for this video on your github. Please upload!
Direct link to the old workflow - github.com/nerdyrodent/AVeryComfyNerd/blob/main/workflows/SD15/Reposer_Plus_bypass.png
You'll just need to replace any deprecated nodes, given how old this is now :)
@@NerdyRodent Thank you for the quick reply :)
Ctrl+m to re-enable disabled nodes for those trying to do so.
Awesome!
I'm gonna be that guy - is it at all possible to run IPAdapter on 6GB of VRAM?
I gave your previous Reposer workflow a whirl recently and, after resolving some missing nodes and Torch/pip upgrade errors, immediately ran out of VRAM. Darn it.
Could be pushing it a bit, but maybe with all your low vram settings and such!
I get import errors for two required node packs: ComfyUI-Allor and comfyui-art-venture
And this appears: "When loading the graph, the following node types were not found:
SAMModelLoader (segment anything)
GroundingDinoModelLoader (segment anything)
Image Remove Background (rembg)
AlphaChanelAdd
GetImageSize
GroundingDinoSAMSegment (segment anything)
AlphaChanelAsMask
ImageNoiseGaussian
DWPreprocessor
KuwaharaBlur
ImageNoiseBeta
ImageApplyChannel
ImageEffectsAdjustment
ImageFilterMedianBlur
ImageAlphaComposite
AlphaChanelRemove
CR Batch Process Switch
CR Image Input Switch
IPAdapter
Prompts Everywhere
UltimateSDUpscale
Nodes that have failed to load will show as red on the graph."
See the step by step installation video linked in the description and github.com/nerdyrodent/AVeryComfyNerd#troubleshooting for more info!
How do I load the workflow? Where shall I find the .json file in order to load your workflow? Please tell me how I can load your exact workflow into my ComfyUI.
Check out the links in the video description!
I keep getting checkpoint errors: 'model.diffusion_model.input_blocks.0.0.weight' Which checkpoint are you using?
This is for SD 1.5 models
@@NerdyRodent that's what I'm using. I've tried several 1.5 models to no avail. I've had to work through several hours of fixing countless console errors to get to this point and now I'm at a standstill.
@@NerdyRodent So you're not sure what could be causing that error? I'm using the exact same checkpoint you are in the video. I've even tried similar ones, and several other 1.5 checkpoints.
drop me a DM on patreon if you need more help!