Advanced Style Transfer with the Mad Scientist Node
- Published 28 Jul 2024
- We are talking about advanced style transfer, the Mad Scientist node and Img2Img with CosXL-edit. Upgrade the IPAdapter extension to be able to use all the new features. Workflows are available in the example directory.
Discord server: / discord
Github sponsorship: github.com/sponsors/cubiq
Support with paypal: www.paypal.me/matt3o
Twitter: / cubiq
00:00 Intro
00:23 Style Transfer Precise
02:03 Mad Scientist Node
05:35 Advanced Blocks Tweaking
07:27 CosXL Edit
Since I made this video I added a "precise style transfer" node to the IPAdapter. You can use that instead of fiddling with the Mad Scientist. It also works with SD1.5 (to some extent).
Also, since I've been asked quite a few times now... sorry, we do not have exact data on what each block does. Blocks 3 and 6 are pretty strong, so those were easy, but other layers also have some impact on both the composition and the style. Some seem to affect text, others the background, others age. But at the moment there doesn't seem to be a "definitive guide". I would have told you otherwise 😅
thanks a lot! So in SD1.5, which block is for style and which for composition?
This intrigued me; I'm going to do a lot of tests to see what the blocks besides 3 and 6 do
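A quick way to run those tests is to enable one block at a time and compare the outputs. A minimal sketch, assuming a hypothetical generate() helper that queues the workflow with a given index:weight string (generate() is a stand-in, not a real API):

    # Hypothetical probe loop: one image per block, all other blocks at 0.
    # generate() is a stand-in for running the ComfyUI workflow with the
    # Mad Scientist node's index:weight string set to `spec`.
    for block in range(12):
        spec = f"{block}:1.0"
        generate(spec)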
This guy is to ComfyUI what Piximperfect is to Photoshop.
I doubt even the people who worked on SDXL had any idea this much control could be gained over the models.
Like seriously, wtf ????
Amazing work.
I would say more like what Video Copilot is to After Effects
nnaah I guess that the difference is just that I actually share what I find
@@latentvision lol
@@latentvision Share, and explain. You're like that one teacher that didn't just show you the math formula, but showed why it was important and how to use it practically.
Best ComfyUI channel on YouTube.
x4096 agree
Matteo, your work is amazing! You are our Dr. Brown. Our mad scientist who will give 1.21 Gigawatts to the AI to take us to the future. We love you!!! 😄😄😄
just doing my part!
@@latentvision and we are doing our part loving you and being grateful 🎉
Yeah you are our mad scientist 😂 ❤ Thanks Mateo!
When it comes to teaching and concise explanations, you are the GOAT!!!! Thank you so much, please keep doing this. Thank you!
As always amazing work Matt3o!
For those interested in the cross-attention block indexes, this is what they target:
1) General Structure
2) Color Scheme
3) Composition
4) Lighting and Shadow
5) Texture and Detail
6) Style
7) Depth and Perspective
8) Background and Environment
9) Object Features
10) Motion and Dynamics
11) Emotions and Expressions
12) Contextual Consistency
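If this mapping were accurate (see the pinned comment above: nothing beyond blocks 3 and 6 is verified), you could target those qualities through the node's index:weight string. A purely illustrative sketch with made-up values:

    # Illustrative only: the block functions listed above are unverified
    # and these weights are invented. Damp composition (3), add a little
    # lighting (4), push style (6).
    weights = "3:-0.5, 4:0.6, 6:1.0"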
Where did you get this info?
@@stefansotra2934 Research really, nothing more...
wow cool... thanks!
Is 12 the 0.0 index? If there is a clearer description for all of these, please link it
Great work as usual, M! I am happy to see that the group experimentation with the UNET layers has led to the development of a node that will give us more control over our generations. Thank you for your continued efforts in this field!
You are a mad scientist haha thank you so much Matteo
mad for sure, scientist not so much 😅
@@latentvision haha 😂 keep up the great work I love your content.
Matteo, you are really incredible with your work... 🎉
"While keeping up with the influx of new features is important, I'm reminded again of the value of in-depth understanding of a single function. Thank you as always."
I love the idea of conditioning targeted layers and being able to direct them with this kind of control in the cross attention. Thank you Matteo for your continued work and expertise. You give us a lot to play with and work with. The implications of the kind of control we can have in image creation and manipulation will last for years. Continued blessings and appreciation to you, good sir. 🙏🏾👍🏾
Image gen is a tech that seemed like science fiction a couple of years ago, but to have refined it to the point that people in their homes can casually do generations like 7:19 is nothing short of outstanding. Thanks as always.
Love what you're doing for the community - thank you for your time and for sharing :D
Thanks a lot for this new node, really appreciate it.
Just found the node today and was wondering about its use - thanks for sharing the knowledge!
God of IPAdapter
Incredible content! Your work is undoubtedly the best!
Thanks a thousand, Matteo. Your last statement is something I say time and time again: we use so little of the potential in what's already out there. Brilliantly proving that point.
Thank you. Exactly, we become conditioned to chase the new shiny toy rather than fully learning and enjoying the old ones. So much can be done with this, looking forward to...
Your work is amazing!
This is so incredibly cool! Thank you very much. I can't even imagine how nerve-wracking and exciting the coding was for this. :)
genius hacking of cross attention and perfect explanation of the indexing.
Absolutely amazing 😮
Always waiting for your great videos, they help me a lot! Thanks
Great node, thanks a lot 😁
It was amazing, thank you for the work you have done for the community, I really appreciate it
Thank you again, my lord
most welcome, my liege
Thanks Matteo, this is so good!
This is awesome! ty!
love love love this, going MAD!!!!
Again, blowing minds !!!!
Thanks, that's cool, amazing findings that will help the community
Insanely cool, also just realized you are Italian as well 😂🔥
Keep up the good work man
I try
Awesome, Bro
Thank you!
This builds on your previous experimental node where you asked for some help from the community. Glad to see they helped you decipher the layers
not to take anything away from the wonderful community, but you've been distracted 😄 Style and Composition was released months ago, way before the prompt injection.
@@latentvision I was speaking about block weights, this one: ua-cam.com/video/OrST6Nq1NUg/v-deo.htmlsi=VyhskRDQS5m8JFMX
Anyhow, it's nice to see the two combined, regardless of whether it is a new feature or not. Good stuff, in either case 🙂
My understanding is that Precise generally weakens the weights of more layers, but style has always been a mystery in neural networks, although you have done so well already. I hope you can bring us more surprises. Thank you for your contributions! The name 'Mad Scientist' is simply fantastic
this is awesome thank you
so cool and you are our mad scientist
Very cool, I need to play with IPAdapter more often, but I am often too busy just improving prompts and upscale workflows!
7:20 my jaw literally dropped
Thank you so much!
GENIUS
Cos-XL is so tight, I'm a huge fan
Very cool! Would love to see some coding sessions. Maybe you could explain your code a bit. More info about the vector sizes, layers etc :)
I was thinking about that... not sure how much interest there would be in that though
@@latentvision Yeah maybe, but your "ComfyUI: Advanced Understanding (Part 1)" video actually performed really well I think, where you went into more detail. That, plus some code examples of what is going on behind the scenes with your knowledge, would be awesome!
Maybe a small poll could show if it's worth your time :)
amazing, thanks a lot
W O W!!! AMAZING!
I'll take the blue pill!! 😁 Thanks so much for this one!! 💊
Thank you, Dr. Matteo.
I think I need one of your pills to make my days shine.
Again, extraordinary work.
Until I use this a *lot*, I will have no idea what the different UNet blocks do. Maybe you could put a Note node in the pack that contains an estimation of the relative contribution of each block to style, composition, and anything else that might be useful.
A++ work as always. Best SD channel around.
unfortunately we don't know exactly what the blocks do
Nice❤❤
Nice work! ❤ It's awesome to see real progress on the UNet layers. But having too many parameters can make it tough to get started, even for someone like me who's been at it for over a year. It's just too challenging for ordinary people. If we changed the fill-in parameter to four simple options like ABCD, it might be easier to promote. Ordinary people aren't into the process; they're all about the end result.
Amazing video. You do a great job at explaining complex ideas. I've learned so much from your videos.
HOLY SHIT, this is powerful!
IKR?!
Impeccable naming, we're all a little mad by now 🤣
Unbelievable.
believe it!
@@latentvision )))
I dove into it headfirst. I feel like a Mad Scientist :)
This is awesome! Are you planning on making a version that works with embeds?
why not :)
@@latentvision sweet!
Awesome work! Do you have the info on the other 10 control index points?
you're the man, thanks for all these tutorials!
Mateo, this is amazing work with the Mad Scientist node. My only question (not a criticism) is whether you plan to convert the index:weight string into widgets for ease of use, or is there something that prevents that?
yeah I can do that :)
you have so many secrets matteo :D
Matteo, thank you for bringing us IPAdapter, which provides solid ground for us to combat the uncertainty generated by large models. I personally like your explanations of the basic theories. Although your lesson is less than 10 minutes, I have studied it repeatedly for several hours. If you have time, please explain in detail the specific functions and applications of the 12 cross-attention layers. Thank you very much for your efforts, thank you!
07:20 that stuff is fucking insane.
How about a widget setting in the IPAdapter node to set the strength of each layer, with a short label of its function?
we don't know exactly what the function of each layer is, unfortunately
Very good as always Matteo.
Can you explain all the indexes please?
I've noticed only these 3:
3: Reference image
5: Composition
6: Style
Great video, thank you! Where can we find this node?
And one more question: where can I find an explanation of the indexes/cross attention?
Is there any similar way to apply a LoRA style to only a specific layer? Maybe we could apply a negative weight to the composition layer (e.g. layer 3) and a positive weight to the style layer (e.g. layer 6)?
🤯
Damn that's impressive.
Could the same logic be applied to a LoRA node in the future?
Incredible. Apart from style and composition, has the community reached consensus on what specific qualities of the image the other indexes affect?
not really unfortunately
You named that node after yourself, right? You're truly a mad scientist bringing us the best discoveries! Thank you Mateo
Amazing and insightful work! A question wrt sponsorship: do you have a preference between GitHub and Patreon? I'm getting so much value here that I want to meaningfully support you, and will default to GitHub support if there's no preference
hey thanks! I don't use Patreon because I don't have time to push updates. Either GitHub or PayPal at the moment!
Wow... You should make the layers into weight handles and name them for what they are :D
🤩🤩
Hello man, thanks for sharing this amazing improvement on control! Did something change in the style transfer and composition between 2 days ago and this release? I can't seem to reproduce the same results :(
Or, is there a way to reproduce the exact same layer weights of that previous release within the Mad Scientist node?
no, style and composition should be the same. if you have issues please post an issue on the official repository, possibly with before/after images
🙏
UPDATE ALL THE NODES!!!!
thanks Matteo
A newbie question (maybe): index 3 is composition and 6 is style, but what are the others? I don't remember if you have already talked about them in your other IPAdapter videos
Look at his video from a few weeks back about prompting the individual UNet blocks; that's what's going on here. There's still a lot to figure out, and some blocks may depend on others, so it's not as clear cut as these lists suggest.
@@rhaedas9085 thanks
Hey Matteo, Just finished your ComfyUI tutorial - seriously impressive stuff! 👍❤
Your breakdown of advanced features with practical examples is super motivating. I'm excited to put these into action and unlock the full potential of ComfyUI. Thanks for sharing your knowledge!
Great video. Top notch content, as always
Do you have a list of what the other index layers are? We are experimenting with this now
no, it's difficult to understand. some are subject specific for example (e.g. they work with people, not with landscapes)
"We fail to understand what we already have" - cries in GLIGEN conditioning
so true
Woooow, wow 🎉 you are amazing. This is just soooo cool.
Why doesn't the negative prompt go with a minus? It would be 3:-2.5, 6:1, and this way the syntax could be consistent everywhere. And people would be able to pass positives and negatives as much as they want.
I need to think about it, technically you can send a negative value to the positive embeds so it's not that simple
@@latentvision then it could be a letter like 3:n2.5, 6:1 or 3:2.5n, 6:1 or 3:neg2.5, 6:1 (to make it 100% transparent)
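For what it's worth, either spelling would be easy to support. A minimal sketch of a tolerant parser for the proposed syntax (this is not the extension's actual implementation, just an illustration):

    # Sketch only -- NOT the IPAdapter extension's real parser. Accepts a
    # plain minus ("3:-2.5") as well as the proposed "n"/"neg" prefix.
    def parse_weights(spec):
        weights = {}
        for pair in spec.split(","):
            idx, val = pair.split(":")
            val = val.strip()
            if val.lower().startswith("n"):   # "n2.5" / "neg2.5" -> "-2.5"
                val = "-" + val.lstrip("negNEG")
            weights[int(idx.strip())] = float(val)
        return weights

    print(parse_weights("3:n2.5, 6:1"))   # {3: -2.5, 6: 1.0}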
please is the prompt injection node out yet???
Thanks, that's really cool 🙏🙏.
but is this just me? I found almost everything too advanced and couldn't understand what's going on, but I would really love to understand it in depth so that I can add my own work to it and share. I do have some knowledge of ComfyUI, but this is...
check the "basics" series!
... It feels like SD3 is going to have a very hard time
Did anyone ever figure out what each block of the UNet does? When I was obsessively trying to understand how Stable Diffusion works, I went deep into it but could never get a straight answer. Also, what processes are involved in each block? If I remember correctly each block has layers within it, with ResNets and other things above my pay grade. If anyone can point to a resource I'd appreciate it 🙏
Is this an evolution on the prompt block by block thing? I remember you saying on that video that nothing stopped you from using images.
the technology is the same but technically we did this before the prompt injection. Visual embeddings are easier to evaluate
Thank you! I discovered CosXL recently - it struggles on my 4080 - and this was released with perfect timing.
Hi, thanks, wonderful! I just don't understand the point of this custom node having a "weight_type" field if we modify the layers' weights in the bottom input field. Is "weight_type" overridden by the values in the input field?
"style transfer precise" uses a different strategy to apply the embeds. You need to use it only if you want to do the style transfer thing. If you want to experiment with blocks you can select whatever and it will be overwritten (except again "precise")
@@latentvision Thank you Matteo, that's awesome! Thanks again!
Amazing work! Has anyone tested Mad Scientist with SD1.5? How does injecting attention into a specific block work there?
I made a new "precise style transfer" node that should work with SD1.5 and makes the whole process simpler
How do I use Mad Scientist? I can't find it :/
Hi Matteo! I wonder: if I choose different weight types and set all layers to 0 except the sixth layer at 1, I found the result is always the same as the default style transfer. Does that mean style transfer is the sixth layer at 1 with the other 11 layers at 0, and style transfer precise is the third and sixth layers at 1 with the other 10 at 0??
precise is negative composition (layer 3) and positive style (layer 6)
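In index:weight terms that is roughly the string below. Note the magnitudes are a guess (only the signs follow from the reply above), and Matteo says elsewhere in this thread that "precise" also applies the embeds with a different strategy, so this only approximates the idea:

    # Rough approximation of "style transfer precise": negative weight on
    # the composition block (3), positive on the style block (6).
    # Magnitudes are illustrative, not the node's actual values.
    weights = "3:-1.0, 6:1.0"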
Your 666th like is from me. I don't know what I'd do without your brilliant work. Thank you.
Hi, thanks for this great tutorial. I'm getting an error while executing: "IPAdapter object has no attribute 'apply_ipadapter'". I tried using SD1.5 checkpoints as well as SDXL, but I get the same error.
maybe it's an older version, an old workflow, or simply browser cache
dying to know what the other index blocks are!
don't we all?! 😄
Mateo, hi and thank you. I'm using the Mad Scientist node, and thanks for the clarification; I've become more aware of how to use it. I also have one question about the "IPAdapter Encoder" node: it has an input for a mask. The point is that both the input image and the mask should be connected to this node. When using only the input image in the "IPAdapter Encoder" node, the output image adopts the style/whatever. But when I also connect an input mask (I tried just a colored map, an image, a half-painted image), the IPAdapter Encoder node has no effect on the generated image at all. Could you please explain how to use the mask in the "IPAdapter Encoder" node?
I'm sorry I'm not sure I completely understand, maybe join my discord or post a discussion with some screenshots in the IPAdapter repository
@@latentvision Yeah, I already wrote to L2 (quick help).