Update: The Searge node cannot provide descriptions of images; use it for prompt generation instead. Use Florence 2 for generating descriptions from images. In short: use Searge LLM for generating prompts from text instructions, and Florence 2 for generating prompts from images.
Join the conversation on Discord discord.gg/gggpkVgBf3 or in our Facebook group facebook.com/groups/pixaromacommunity.
You can now support the channel and unlock exclusive perks by becoming a member:
ua-cam.com/channels/mMbwA-s3GZDKVzGZ-kPwaQ.htmljoin
Update: Some people have had problems installing the Searge node. A member, Ivo, came up with a solution: install ComfyUI in a different folder using this installer, which automatically installs the nodes I used in my episodes, including Searge: github.com/Tavris1/ComfyUI-Easy-Install
@@ESheridan I am not sure; I mean, that node should just download the model and use it. Maybe it needs a different version of some dependencies, but it is hard to tell. If nothing works, just install a new ComfyUI in a different folder.
@@ESheridan I used a few from there, but now there is a Llama 3.2, so I will probably need to do some research on that. I still use ChatGPT when I want something more exact.
This error appears for me: DownloadAndLoadFlorence2Model - Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`. Please help!
You could try what they recommend, I guess, and install that package to see if it helps. Navigate to the folder where your python_embeded folder is located; in the address bar at the top, where the path is shown, type cmd and press Enter, which will open a command window in that exact folder. Paste this command and press Enter, then restart: python.exe -m pip install accelerate
Or you can also try to fix dependencies: go to the update folder and run the .bat file that has "dependencies" in its name.
@@pixaroma Holy shit, I could spend the entire thirty minutes just listening to the way you talk rather than what you say, just for how realistic it is. It's scary. What tool is that?
@@pixaroma I am getting the following error while trying to install llama_cpp_python: ERROR: llama_cpp_python-0.2.89+cpuavx2-cp311-cp311-win_amd64.whl is not a supported wheel on this platform. This is because the current version of ComfyUI ships with Python 3.12.7: ** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]. So for now I gave up; I will just use ChatGPT for help with creating good prompts and wait for a Searge update to 3.12.
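For anyone hitting this: the "not a supported wheel" error means the wheel's Python tag (cp311, i.e. Python 3.11) does not match the embedded interpreter (3.12). A small illustrative Python check of that mismatch (a hypothetical helper, not part of ComfyUI or pip, which uses a more complete tag-matching scheme):

```python
import re
import sys

def wheel_python_tag(wheel_name):
    """Extract the CPython version tag (e.g. 'cp311') from a wheel filename."""
    m = re.search(r"-(cp\d{2,3})-", wheel_name)
    return m.group(1) if m else None

def matches_interpreter(wheel_name):
    """True if the wheel was built for the Python version currently running."""
    current = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return wheel_python_tag(wheel_name) == current

wheel = "llama_cpp_python-0.2.89+cpuavx2-cp311-cp311-win_amd64.whl"
print(wheel_python_tag(wheel))     # cp311
print(matches_interpreter(wheel))  # False on the embedded Python 3.12.7
```

This is why the workarounds in the thread are "install a ComfyUI that ships Python 3.11" or "wait for a wheel built for cp312".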
I like this series and just subscribed to your channel, but could you please change your Windows theme to dark mode (including the browser and file manager)? When you navigate from the ComfyUI interface to the file manager or open another website, it's like flashing a flashlight in my face LOL 🤣🤣🤣 It makes me uncomfortable. Thanks btw.
I understand, but I cannot read white text on black for too long; my eyes hurt. I use a lot of interfaces in dark mode, but for the browser my eyes just cannot get used to it. I tried, but it didn't work for me 😂 Sorry. I will remember to add a transition between them.
You provide so many great basic skills for building in ComfyUI… I have to watch a 2nd time to take notes. Great stuff!
Somehow, other AI YouTubers are prominently promoted by YouTube and Google, but you're truly the only tutor one needs to have, imho.
Truly amazing! There are not many videos on using text-based models, but this comes close. Excited to try it out!
I can only say: you are unbelievable. Your tutorials are out of this world 👏👏👏
A lot of useful information in one video without any fluff. Thank you!
Another great video!
As MrBeast says, the next step after 10k subscribers is not 20k, but 100k !!!
Keep it up!
I do not have words to describe the amount of value this video provides. Congrats!!
I am here to say: You're the best explainer out there when it comes to ComfyUI. Keep it up!
Thank you ☺️
Great video! Thank you for the diversity in your videos!
Thanks for your free content. You always bring a lot of value to the community for free. I hope that will never change.
Very well done my friend!! I've been using the Searge LLM for a few days while eagerly awaiting your video. And, as usual, I learned a BUNCH of new ideas and tech tips from this video. I love that you included the Img2Text options!! Thank you! You earned FiDolla!! 😍
Great video. It's always fun to see my LLM node in action :)
If you want to have more control over the seed on the LLM node, you can also turn the random_seed into an input and connect either a primitive node or another seed generator to get the option to use different seeds every time you run the workflow.
I've also tried the GGUF versions of Phi-3.5-Mini, for example from bartowski/Phi-3.5-mini-instruct-GGUF on Hugging Face, and those have great results with smaller LLM models and less VRAM use.
Wow!!
Thanks, I already did that on the simple workflow at the end of the video that only generates prompts, by adding seed as an input :) I left it without a random seed on the Flux workflow just in case people want to generate more images from the same prompt, so it is faster. Thanks for the models, I have to check them out. Do you know anything similar to Florence that gets better descriptions from images?
Can you also add an image input to the node? Or a new node that has that input? So I can get a description from an image. It kind of worked with a path to the image, but I didn't want to add extra nodes just to get the paths in string format and then use instructions on it.
@@pixaroma No, the node and models don't support images directly.
It also can't load files from your drive. In your video it didn't actually look at the images when you pasted the file paths, it picked up the subject from the image filename.
What I usually do is combine what you have here:
1. load image
2. get caption with florence2
3. refine caption with searge-llm
4. ???
5. profit
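As a rough sketch, the pipeline above is just function composition; here is a toy Python version with stand-in functions (the real steps are ComfyUI nodes backed by Florence 2 and an LLM, not these placeholders):

```python
def caption_image(image_path):
    """Stand-in for the Florence 2 node: the real one describes the pixels."""
    return f"a photo ({image_path})"

def refine_caption(caption):
    """Stand-in for the Searge LLM node: the real one expands a caption
    into a detailed image prompt based on your instructions."""
    return caption + ", detailed lighting, sharp focus, high quality"

def image_to_prompt(image_path):
    # 1. load image + 2. get caption with Florence 2
    caption = caption_image(image_path)
    # 3. refine caption with Searge LLM
    return refine_caption(caption)

print(image_to_prompt("portrait.png"))
```

The point is the wiring: the image-to-text model produces the caption, and the text-to-text model only ever sees that caption, never the image itself.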
Your lessons are really very detailed and useful. I'm doing well, it's fantastic, thank you for your work
Glad you like them 🙂 and thanks for supporting with a membership.
Excellent work, very detailed on Discord. Well organized and easy to follow.
Thank you so so much for these videos. You're doing an amazing job at explaining! ❤
Straight to the point and useful stuff as always, keep going!
Another well explained and easy to follow tutorial. Looking forward to giving these a try. Thanks.
Thank you ☺️
Thank you. The best tutorial I've found.
Your videos are awesome and the explanations are accessible. Thank you.
Thank you so much! Really useful, as always. Greetings from Perú
Great tutorials so far. The use of a path to an image for a LLM model to see it is not how those work, but you probably figured that out.
Yes, I found out later :) it needs a vision model.
Honestly baffling given the quality of the other parts of the video.
Great video!! Always learning something from you!!
Thanks!
Thanks again for continuous support ☺️
The best ComfyUI tutorial series and tutor on YouTube and anywhere else. All the workflows so far have worked on my very low-spec 2GB VRAM / 16GB RAM system too, which has been great for me.
Could you do one for consistent character generations with character sheets and upscaling (pose, clothing, facial expressions etc) and Lora training for low vram too? I think that's a challenge for some people like me. Would be most appreciated.
Also have you tried running ComfyUI from an external hard drive on a Mac or Windows computer? The size of the ComfyUI folder gets large real fast. Would be great to save space and still be able to connect to any computer anywhere and work off it without needing to carry your computer everywhere or install it and download models all over again on a new system when you travel. If you've tried it and it works, could you share how?
Thank you 🙏🏽.
I didn't try an external drive, but maybe it works; if it is the portable ComfyUI, theoretically it should work. As for LoRAs, I only trained them online; locally I didn't find an easy-to-use, error-free solution. FluxGym is what most people recommend. I used the Tensor Art and OpenArt websites to train LoRAs.
@pixaroma Ok, I'll check that out. What about a consistent character for the dataset for LoRA training?
I usually just prompt for a character sheet with the same character in different poses, and then I use Photoshop to get more images: the same character with one crop, another with a different crop, and a changed background, just to give some variation and get more images.
@pixaroma What if you already had a character you really liked, say from your own drawings? How would you generate that with the input image?
I don't know, maybe IPAdapter or something, but I didn't try it. Probably I would use inpainting and just keep the head, or the head and body, and inpaint everything else.
22:40 On Windows 10 you can hold Shift and right click and the right click menu will have an option to 'Copy as Path', so no need to go to the File Explorer ribbon menu.
thank you for the shortcut 🙂
A new enjoyable day with a new video, that's all :)
Thanks ☺️
Very very good tutorial
thanks, very useful tutorial
Awesome tutorial like always.
Thank you so much, man. Your great work guided me to explore the ComfyUI workflow from zero to a little understanding. By the way, your opening and closing animations are great too! Are they Kling as well? How did you do that? Can you do an episode about this? Menu, cursor, animated background, etc. Very attractive! Thanks!
Depending on the animation I use multiple tools; if I get some free time I will do a video on that. I use Kling, CapCut with keyframe animation, and also software called DP Animation Maker.
I like your videos, so I became a member of your YouTube channel.
thank you so much 🙂
Been absolutely loving this tutorial series and the accompanying Discord channel. I had been following without any issues or setbacks until this episode. The Searge node cannot be installed, as it requires a maximum of Python 3.11; however, ComfyUI currently runs on 3.12.7. I have tried everything, including backing up the python_embeded directory and installing a 3.11 version in a new python_embeded dir. When I go to run this, it tells me there is no module called comfy, so it crashes on line 1 of main, which is import comfy.options. If anyone else has experienced this or can help me with this frustration, I would be very grateful indeed!! 🙂 Keep up the great work; I've learned so much and am always grateful for the workflows you include 💯👍
A member of our Discord channel made a version that works, which comes with those nodes and a few others installed. You can use that installer and install ComfyUI in a different folder: github.com/Tavris1/ComfyUI-Easy-Install
I have the same problem with Searge for Python 3.11 vs Python 3.12.7 that is shipped with the current version of ComfyUI. I was just about to downgrade my python_embedded to tackle that, but apparently it won't work either. Did you manage to find a solution for that issue?
@@MarekCezaryWojtaszek You can install this ComfyUI in a different folder; it comes with all the settings and those nodes installed. A member of Discord, Ivo, made this installer. I didn't find another solution than this yet: github.com/Tavris1/ComfyUI-Easy-Install
best tutorial!!
This video is so good *-*
Thank you for the tutorial, it helps me so much.
I get "import failed" on the Searge node install; tried multiple times. The folder is created, and the file to put in that folder is where it should be. Any ideas?
Some had problems on certain configurations; it has something to do with dependencies. Several people posted about it on Discord, and the only solution found was to install ComfyUI with the node already installed. Someone from the community made an installer: github.com/Tavris1/ComfyUI-Easy-Install
Hi, thanks for the great tutorial. Regarding image -> prompt: sometimes I want to use an existing image (and its prompt), but with my own LoRA.
Is there a way to 'insert' my trigger word inside the prompt generated from an existing image?
You can add a Text Concatenate node that will combine your existing prompt with your defined text, just like I do in episode 7, where I combine my styles, which are in fact multiple prompts, with the actual prompt.
Well explained tutorial! Thanks a lot! Is there a way to add my own input that would always be appended to the text generated by Florence? Thank you!
You can use a Text Concatenate node from WAS Node Suite; it can combine any text you want from different sources. Add a Primitive node, or a Positive node from the Easy Use custom nodes, connect it to Text Concatenate, do the same with the output from Florence, and the result is both texts together.
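Conceptually, what the Text Concatenate node does is plain string joining. A minimal Python sketch of the idea (not the WAS node's actual implementation, and the trigger word here is a made-up example):

```python
def concatenate_texts(parts, delimiter=", "):
    """Join non-empty text segments with a delimiter, mimicking how a
    text-concatenate node merges its inputs into one prompt string."""
    return delimiter.join(p.strip() for p in parts if p and p.strip())

trigger = "myLoraTriggerWord"                   # your fixed text (hypothetical trigger word)
caption = "a portrait of a woman, soft light"   # e.g. the output from Florence 2
prompt = concatenate_texts([trigger, caption])
print(prompt)  # myLoraTriggerWord, a portrait of a woman, soft light
```

Because empty inputs are skipped, the same wiring keeps working when one of the sources produces nothing.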
@@pixaroma Thanks for quick reply. It works! :)
Amazing as always! But I've got a problem I haven't been able to solve for 2 years! Here you generate text and copy it to a text node with a switch, so you can look, edit, and use it. But I want 20+ text generations, to look them over, and then use them for batch generations. Copying 20+ manually is no good. The text is in the node! You would have thought some node or other could switch and use it without repopulating or complaining of no input, but no, I haven't found one.
Maybe use iTools from episode 15 to save the prompts in a text file, then edit the text file and use it to generate images for all the prompts.
@@pixaroma Thanks, yep, I've been doing it with text files all year, but a node should be able to do what I described! Is there an inherent problem with the way it works under the hood, I wonder?
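The text-file workflow mentioned in this thread can be sketched outside ComfyUI too. A minimal Python example, assuming one prompt per line and skipping blank lines:

```python
from pathlib import Path

def save_prompts(path, prompts):
    """Write one prompt per line so they can be reviewed and edited by hand."""
    Path(path).write_text("\n".join(prompts), encoding="utf-8")

def load_prompts(path):
    """Read the edited file back, skipping blank lines."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip()]

save_prompts("prompts.txt", ["a red fox in snow", "", "a castle at dusk"])
for prompt in load_prompts("prompts.txt"):
    print(prompt)  # each line would feed one generation in the batch
```

The review step is just editing prompts.txt in any text editor between the save and the load.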
You have probably figured out something about creating a prompt from a path. I used a Mistral Q8 GGUF model and placed a path to a .jpg image of Mt. Rushmore. I also reworded the prompt instructions you had to "Generate a prompt from the image, but don't use the path or the file name" and got a very nice image, almost exactly like the picture. So unless they updated the model, perhaps it's the wording or the image extension type, I'm guessing.
thank you for your tutorial
Amazing videos as usual! If I try to add LoRA keywords in the prompt, they get lost while going through the LLM. Any way to work around that?
used your concatenator idea, worked out well :D
Yes, Text Concatenate should work; you can add extra things :)
Thank you.
How would we do this with a caption generator that doesn't have its own dedicated node? For example, Joy Caption Alpha Two? I have a Load Checkpoint node with Joy Caption Alpha Two's .safetensors file, but I don't know how to connect that to images to generate captions. Thank you!!!
Not sure; only if you find a node for that. That was the problem for me also: there are many models, and they are hard to integrate. In another episode I also did the Ollama version, but that only works with Ollama models.
Thank you! I'm sure this feature will help a lot of people in creating interesting designs and creative solutions.
(Still waiting for your Upscale videos).
I didn't forget about upscaling; it will be ready this month, I just need to do more tests.
I built an Ollama version of this before, but I'm having issues with the caption getting output to the Anything node or a text node... I can see the prompts being generated in the ComfyUI terminal, but nothing appears in the Show Text node. Tried a bunch of models from the list, no dice. Anyone know what's amiss here? Searge LLM also failed to load/install on Mac, ugh.
For those on Windows where Searge failed to install, this version worked for them; it installs ComfyUI in a different folder with all the nodes: github.com/Tavris1/ComfyUI-Easy-Install. Recently, ComfyUI also got some updates with the registry that might cause some problems until all the node creators register their nodes.
After installing Searge LLM, I have a "node missing" problem. How do I fix it?
Someone from the Discord community made this ComfyUI installer you can try; it automatically installs all the nodes I used. I didn't find a way to fix it manually, but this worked for many who had the same problem as you, so you can give it a try: github.com/Tavris1/ComfyUI-Easy-Install
Class apart!
I have a question. which one is better? Ollama node or this Florence 2? 🤔
Well, for me Florence was faster; Ollama let me choose from a bigger variety of models, so it depends on the PC specs, but Ollama took more VRAM to run, so I don't use Ollama too much. I mostly just use ChatGPT, so it doesn't take any of my VRAM and I can generate faster.
@pixaroma thank you for your help!
@@pixaroma After installing everything, I have problems with ComfyUI_Searge_LLM about "llama_cpp_cuda". I followed the instructions and nothing happened. Do you maybe have another solution? Thank you
@@FabioAI_Oficial Some people had that problem on certain PC configurations, but I don't know what triggers it. You could try to install this version that a member of Discord made; it already has Searge installed with the right dependencies. Just install this ComfyUI in a new folder: github.com/Tavris1/ComfyUI-Easy-Install
Hello! I tried to do everything described, but when I start the queue, it says "Failed to import transformers.models.mega.configuration_mega because of the following error (look up to see its traceback):
No module named 'transformers.models.mega.configuration_mega'". Where am I going wrong?
When you start ComfyUI, check if the Searge node actually got installed; most people had that problem because it didn't install the node completely. You can try to use this installer for ComfyUI in a different folder, which installs those nodes automatically. A member of Discord made that installer, and it worked out for people who could not install it: github.com/Tavris1/ComfyUI-Easy-Install
@@pixaroma , thanks a lot! You are № 1
Seems like Searge LLM doesn't work on the portable version of ComfyUI, because the portable version uses Python 3.12 instead of Python 3.11.
You can try this installer; everyone said it worked with this one: github.com/Tavris1/ComfyUI-Easy-Install
Are you sure the Mistral LLM model is able to see an image when given a path? I think it would need some additional programming to do that. It may recognize the file name; that's why it produces a subject like "woman portrait" or "architecture", but the image is not similar. I tested with a random file name, and the model was not able to "see" the image for me. :)
Thanks for the video!
No, it seems it doesn't see it. I talked with the creator of the node, and he said it just invents a prompt from the image info, so use Florence for that. I will try to see if I can find another model and node that is better for image captioning. LLaVA from Ollama seems to know how to read images, and there is also something I saw with "Joy" in the name. I will do more research. So the Searge node is better for text to text, Florence for image to text.
@@pixaroma Great, waiting for more videos! :)
Nice, mate. Make an inpainting tutorial some time, that would be nice.
Yeah, I plan to do one, maybe this month. Right now I am working on upscaling.
This does not work anymore. I cannot install the nodes; I get an import error when I try to install the Searge nodes. Please suggest a solution.
You can try this ComfyUI installer in a different folder. It installs all the nodes I have used so far in my episodes, including that Searge node that causes problems: github.com/Tavris1/ComfyUI-Easy-Install
can't install Searge LLM. I tried to install llama-cpp as they said on github, but I always get an error in the manager and when I start ComfyUI
some still have that problem; it is something with dependencies needing a certain version, I think, but I have no way to test it since it is different for each system. An alternative is to use Ollama like in episode 13
@@pixaroma thanks, I'll check this
Hello! I have a problem. When I try to download Florence-2-base Prompt I get the error: DownloadAndLoadFlorence2Model
The checkpoint you are trying to load has model type `florence2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
How can I solve this issue? Thanks in advance
You can try to see if updating transformers works. Maybe you have an old version of ComfyUI and your manager didn't update successfully; you can go to the update folder and run the update and update-dependencies bat files. Or you can go to the python_embeded folder, type cmd in the address bar, press Enter, then run this command: ./python.exe -m pip install --upgrade transformers
@@pixaroma Hello! Thanks for your help, but after the update I still have the error: DownloadAndLoadFlorence2Model
The checkpoint you are trying to load has model type `florence2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
@@ESheridan I am not sure. I mean, that node should just download the model and use it. Maybe it needs a different version of some dependencies, but it is hard to tell. If nothing works, just install a new ComfyUI in a different folder.
@@pixaroma I see! Thank you for your help. I have a question: what is the best Florence model in your opinion?
@@ESheridan I used a few from there, but now there is Llama 3.2, so I will probably need to do some research on that. I still use ChatGPT when I want something more exact.
This error appears for me:
DownloadAndLoadFlorence2Model
Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
Please help
You could try what they recommend and install it to see if it helps. Navigate to the folder that contains your python_embeded folder, type cmd in the address bar at the top where the path is and press Enter; that will open a command window in that exact folder. Paste this command, press Enter, then restart: python.exe -m pip install accelerate
You can also try to fix dependencies: go to the update folder and run the bat file that has "dependencies" in its name.
@@pixaroma It worked perfectly, thank you very much!!! 🙏🏼
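The error above is just transformers refusing to use `low_cpu_mem_usage` or a `device_map` when Accelerate is absent. Before reinstalling, you can confirm whether the module is importable at all with a small stdlib sketch (the module name `accelerate` is the only assumption here):

```python
import importlib.util

def module_available(name: str) -> bool:
    """True if the current interpreter can find and import the module."""
    return importlib.util.find_spec(name) is not None

# False means the pip install command from the reply above is needed.
print("accelerate importable:", module_available("accelerate"))
```

As with the transformers fix, run this with the interpreter in python_embeded so you are checking the environment ComfyUI actually uses.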
Great explanation and workflows! There is a hot comment for both you and the algorithm :-) DEECEEHAWK
Bro, I am just curious, are you using an AI voiceover?
Yes, it is an AI voice generated from my text
@@pixaroma Holy shit, I will spend the entire thirty minutes just listening to the way you talk rather than what you say, just for how realistic it is. It's scary.
What tool is that?
@@Valket elevenlabs
love your vids man, is there any way to support you? You got a Patreon or smth?
Thank you ☺️ You can join the membership; there is a join button on the channel ☺️ Also under videos there is Super Thanks, the heart icon with the dollar sign.
sorry, it's not the Searge LLM itself, it's the required llama-cpp-python that won't work past Python 3.11
check my other comment; the only solution for many people was that installer I linked. Not sure why that cpp package causes so many problems
@@pixaroma I am getting the following error while trying to install llama_cpp_python:
ERROR: llama_cpp_python-0.2.89+cpuavx2-cp311-cp311-win_amd64.whl is not a supported wheel on this platform.
This is because the current version of ComfyUI is shipped with Python 3.12.7:
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
So, for now I gave up. I will just use ChatGPT for help with creating good prompts and wait for Searge update to 3.12.
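The mismatch in that error can be verified directly: a wheel built for cp311 will never install under a 3.12 interpreter, because pip compares the CPython tag in the wheel filename against the running interpreter. A minimal stdlib sketch of that check (the filename is the one from the error above; the helper itself is illustrative):

```python
import re
import sys

WHEEL = "llama_cpp_python-0.2.89+cpuavx2-cp311-cp311-win_amd64.whl"

def wheel_python_tag(name: str) -> str:
    """Extract the CPython tag (e.g. 'cp311') from a wheel filename."""
    match = re.search(r"-(cp\d+)-cp\d+-", name)
    if not match:
        raise ValueError(f"no CPython tag found in {name!r}")
    return match.group(1)

# The interpreter's own tag, e.g. 'cp312' on Python 3.12.
interpreter_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(wheel_python_tag(WHEEL), "vs", interpreter_tag)
```

If the two tags differ, the options are a wheel built for your interpreter's tag or a ComfyUI install that ships Python 3.11, as mentioned in the thread.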
Anyone having the same error as I am? ComfyUI Easy Use (import failed) error. I've done everything but no luck.
Many seem to have it. You can install this in a different folder; it comes with that node: github.com/Tavris1/ComfyUI-Easy-Install
@pixaroma love you man, hats off, god bless you. Only one request: I can't connect to your discord server, I get a "link expired" error.
The link in the UA-cam channel header should always work
can't get Searge to work, too many errors. I kept trying and it just fails every time; going to have to skip this one
Check the pinned comment for the alternative installer that installs Searge as well, but I rarely use it anyway; I use ChatGPT for prompts most of the time
I like this series and just subscribed to your channel,
but can you please change your Windows theme to dark mode (including the browser and file manager)?
When you navigate from the ComfyUI interface to the file manager or open another website, it's like flashing a flashlight in my face LOL 🤣🤣🤣
It makes me uncomfortable. Thanks btw.
I understand, but I can't read white text on black for too long, my eyes hurt. I use a lot of interfaces in dark mode, but for the browser my eyes just cannot get used to it; I tried, but it didn't work for me 😂 sorry. I will remember to add a transition between them.
@pixaroma my suggestion: change the ComfyUI interface to light mode 😂 so everything is consistent.
@pixaroma Thank you so much for your detailed explanations