Free workflows are available on the Pixaroma Discord server in the pixaroma-workflows channel discord.gg/gggpkVgBf3
You can now support the channel and unlock exclusive perks by becoming a member:
pixaroma ua-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Check my other channels:
www.youtube.com/@altflux
www.youtube.com/@AI2Play
Does it have to be installed in ComfyUI, or does Forge work as well?
@jonrich9675 I don't think the Forge team has updated the interface to support it yet; that usually takes days or weeks. Only ComfyUI offers day-1 support. That was one of the reasons I switched to ComfyUI, it was taking too long to be able to use new technologies.
A knowledgeable person who actually knows how to put together a proper tutorial! Fantastic stuff. Thanks for putting this together.
Glad it was helpful 🙂
Love the format of your channel, and I always recommend it to anyone learning SD. Thank you for not putting workflows behind paywalls, and I hope your generosity in turn rewards you for the effort. You and Latent Vision are at the top.
Thank you so much, yeah, I like Matteo's videos too :)
This is a very good tutorial channel.
Thank you for making such an informative and detailed guide-your hard work is truly appreciated! 🙏✨
Thank you Uday ☺️
Thank you 🙏 So much exciting new content in this episode - it is like drinking from a firehose!!
Thank you so much sebant, it was a busy week 😁
Amazing one. Thanks for the workflows
so cool!!!! thanks a lot sir, u are the best
Thank you ☺️
nice!
Thanks again for this useful guide. I noticed that the models provided by Black Forest are so large; why should we switch to those when there are some alternatives, like Flux IPAdapter?
It depends on the PC configuration. I test them all, keep only the ones I am happy with, and delete the rest, so for some systems it isn't worth it. I use dev Q8, for example, because it works OK for me. Probably in a few days or weeks smaller models will appear, so we can use those if they work well. So far I like Flux Fill, so I will use that; the Canny LoRA also works nicely, and the Redux model is small.
I was hoping you were going to do this. Thank you!
Hope you enjoyed it ☺️
So cool! Thank you! Just tested it, and you really need a GPU with 16 GB to run it (4070 Ti Super or 4080).
Yeah, they are quite big; not sure what the minimum is, but it's about the same as the full Flux model, I think.
Hey pixaroma, I think you're the best when it comes to new workflows and reviews of new tools. I have a couple of questions but wasn't sure where to ask them.
1. I have an interior scene, and I’d like to change the lighting to different times of day like night, morning, etc. Is that possible to do?
2. I have a cream tube, and I want to place it against a beautiful background in a way that doesn’t look photoshopped but keeps all the labels intact.
Do you have any reviews or workflows that cover something like this?
You can try with a ControlNet, but it will not be identical; you will have some differences. You get similar interiors, but some things will differ, like maybe a vase in one will be a jar in another, and so on. As for the cream tube, you can use Flux Fill and inpaint everything except the tube, so you change the background without touching the tube. But I have to do some experiments when I get some time, maybe using the background-removal node to get a clean mask so we can inpaint only the background more accurately; I need more time to test it, and it wasn't a priority.
@pixaroma thank you for the answer. The thing with the tube is that I want the lighting on the tube to change as well, like shadows casting on it. I think this is a little too difficult. But I will join your Discord channel; I see there is so much useful information!
@AndreyJulpa Inpaint the background first, then run it through image-to-image to get a variation of it, but that will probably change the text and whatever else you have. Maybe a combination of Photoshop with AI, not sure.
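The background-removal idea mentioned above boils down to inverting a subject mask: the removal node gives you the product, and flipping that mask marks only the background for inpainting. A minimal NumPy sketch of just that inversion step (real ComfyUI masks are image tensors and the node names differ; this only illustrates the logic):

```python
import numpy as np

def background_inpaint_mask(subject_mask: np.ndarray) -> np.ndarray:
    """Invert a subject mask (1 = product, 0 = background) so that
    only the background is marked for repainting (1 = inpaint)."""
    return 1.0 - subject_mask

# Toy 3x3 example: the center pixel stands in for the cream tube.
subject = np.array([[0, 0, 0],
                    [0, 1, 0],
                    [0, 0, 0]], dtype=float)
inpaint = background_inpaint_mask(subject)
print(inpaint)  # center is 0.0 (protected), everything else 1.0 (repainted)
```

In a real workflow you would feed the inverted mask into the inpaint sampler, so the tube's pixels (and its labels) are never touched while the background is regenerated.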
Do a search on these words, it's something new and might work for what you need: "In Context LoRA".
There are a couple of ways to control the style transfer strength. The easiest is with KJNodes' Apply Style Model Advanced node. The other is to use ConditioningSetTimestepRange or ConditioningSetAreaStrength and combine the conditionings.
Does it work with KSampler? Or does it need the other workflow, like the one using the full dev model?
@@pixaroma It should work fine with the regular KSampler. I also just found the Advanced Reflux control nodes, which look like they may be even better.
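Conceptually, the strength controls discussed above blend how much the Redux style conditioning contributes versus the text prompt's conditioning. A toy NumPy sketch of that idea (actual ComfyUI conditionings are token-sequence tensors with metadata, not single vectors, so this is only the spirit of the blend, not the node's implementation):

```python
import numpy as np

def apply_style_strength(base_cond: np.ndarray,
                         style_cond: np.ndarray,
                         strength: float) -> np.ndarray:
    """Linear blend between the prompt conditioning and the style
    conditioning; strength=0 keeps only the prompt, strength=1
    keeps only the style reference."""
    return (1.0 - strength) * base_cond + strength * style_cond

base = np.array([1.0, 0.0])   # stand-in for the text prompt conditioning
style = np.array([0.0, 1.0])  # stand-in for the Redux image conditioning
blended = apply_style_strength(base, style, 0.25)
print(blended)  # [0.75 0.25] -> mostly the text prompt
```

The ConditioningSetTimestepRange route achieves a similar effect differently: instead of scaling, it restricts the style conditioning to part of the denoising schedule, so early steps follow the reference and later steps follow the prompt (or vice versa).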
Does the Flux inpaint model work with the turbo LoRA?
Well, it didn't give me an error when I tried Turbo Alpha, but the result was not great; it looked like generating without the LoRA at 8 steps. With or without the LoRA at 8 steps I got slightly pixelated artifacts on the mask, so I'm not sure it has an effect. You can just reduce the steps of the normal model to go faster, so instead of 20 try 16 or so; at 8 the image degrades. But maybe I didn't combine some nodes right, though I would have gotten an error, I guess.
Hi, how do I train jewellery as a Flux LoRA, and then use that LoRA (like a necklace) to inpaint with?
I think you need photos of that necklace from different angles on different backgrounds. I used Tensor Art, for example, to train a person or a style, but I haven't tried an object yet. I saw somewhere that someone trained some sneakers, so it should work in theory. I was able to inpaint a face onto a different photo.
@pixaroma I just have hi-res pics of the products on a bust from different angles. With SDXL it was never accurate, but using FluxGym I trained it to good accuracy. It works as a LoRA, but since there are no reference pics of models wearing it, size mismatches can happen. Hence I was wondering if I can use the trained LoRA and inpaint over an accurate mask. Also, most pics it generates are from the nose down, since there are no people in the training images.
@@MrDebranjandutta I have never done something like that, so unless you try different things, I'm not sure what will work or not, since with AI everything is random :)
what's the new resource monitor?
Go to Manager, then Custom Nodes Manager, and install the node called crystools; restart ComfyUI and it will appear.
@pixaroma I have crystools but it won't show up after the new UI changes.
@@nekola203 Go to Settings (that gear wheel), look on the left for crystools, and on the right where it says Position (floating not implemented yet), make sure it says Top, and check that the other toggles there are not deactivated.
@@pixaroma tried all that it's not working. thanks anyways
@@nekola203 I have ComfyUI on 2 PCs and it works on both; maybe try a clean install of ComfyUI.
Uses a mask to generate a mask, lol :D
Who, where, when? 😂