Great channel! Super clear instruction for the beginners! This lesson is very inspiring! Try to connect ollama gemma to rewrite the prompt created by llava vision model to stylize and add power lora in addition - this is insane!
"wow, this video was a game changer!🔥 integrating LLM models and how incredibly useful they are for enhancing workflows. The detailed explanations and step-by-step guidance made everything super clear. Thank you so much for sharing this valuable knowledge - it’s really helped me level up my understanding of AI! 🙌 Keep up the amazing work!"thankyou
Thanks for support 😊
Thanks for this tutorial and your Discord channel. The knowledge we gain from your efforts is so welcome.
Thank you for all the help 🙂 ⚔ Legends
I got some really cool results with Ollama vision and controlnet.
Thank you! So many gems and nuggets of wisdom in this video and on Discord.
Thank you so much for support, glad it helps 🙂
Around timestamp 16:00: I tried adding a second CLIP Text Encode (Prompt) node and a Conditioning Multi Combine (KJNodes) with two inputs. I connected the original CLIP Text Encode and the new one to it to combine them, and now it works pretty well.
I ended up using ChatGPT more than Ollama because I work with Flux and ControlNet, which already take a lot of VRAM, so I just prepare prompts in ChatGPT and save some VRAM.
@@pixaroma Yeah, I have the same VRAM problem with a 4090 :D Hopefully Santa Claus will bring me an H100 ;)
Another amazing video. I'm always learning something new, and this is one of the best AI videos out there!
Hi, consider setting the keep alive setting to 0 for Ollama in the node once you have it incorporated into an image-producing workflow; Ollama is then not kept in memory. This may help. Pretty certain the node's GitHub page explains this.
If you replace run with pull, Ollama will just download the model without running it.
Enjoyed the video.
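For anyone wiring this up outside the node, the keep-alive idea in the comment above maps to a field on Ollama's local HTTP API. Below is a minimal, hedged sketch of building such a request; the model name and prompt are just example values, not something from the video.

```python
import json

# Default local Ollama endpoint; keep_alive=0 asks Ollama to unload the
# model from memory right after answering, freeing VRAM for image steps.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str, keep_alive: int = 0) -> dict:
    """Build the JSON body for a one-shot, non-streaming generate call."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,  # 0 = free memory immediately after the call
    }

payload = build_generate_payload("llava", "Describe this scene for an image prompt.")
print(json.dumps(payload, indent=2))
```

To actually send it you would POST the payload to `OLLAMA_URL` (e.g. with `requests.post(OLLAMA_URL, json=payload)`); and as the comment notes, `ollama pull <model>` on the command line downloads a model without running it.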
Thank you ☺️ Did you create the node? Your name sounds familiar 😁
@@pixaroma No, just an LLM node user. I've commented before... 😅
yet another great video pixaroma!
(this is not sponsored comment)
thanks 😊
Hi, I follow your channel and it's the best for AI. An inpainting tutorial in ComfyUI with both SDXL (maybe incorporating IPAdapters and ControlNets) and Flux would be amazing! There are a lot of people on YouTube doing it in completely different ways.
I will see what I can do. You can do the same thing in many ways in ComfyUI, so people usually find what works for them and their system.
Thanks. I used it before, but now I'm using Searge LLM + Florence. Easier to use, no need to install an additional service.
Yes, that is what I was using in episode 11; it's just that some people could not install Searge.
Hearing your AI voice pronounce it "Olah-mah" instead of "O-Llama" is so stinkin' cute! 😂 It's like when kids say "pasghetti" 😊❤
What strikes me most is how unfit English is to represent phonemes: as a native speaker (I suppose) you have to resort to adding characters to represent different vowel sounds and dashes to show where the stress goes. In Italian I would write those words as "olama" and "ollama", and no one could misinterpret the spelling or stress.
No way! All this time I thought it was a real voice! It can't be an AI voice?
☺️
Great tutorial, thank you very much
nice episode! (your BEST one for now!!!) TY sooo much for it ;)
Great work again👏👍
Thanks for this tutorial
Thank you for your tutorial.
Hi, how can I get the hardware performance bar on the right under Queue Prompt? That's really cool!
Go to Manager > Custom Nodes Manager, search for Crystools, and install that custom node.
❤
awesome!
Hi, I found your older video on how to convert sketches to AI art on your channel using A1111, but most of your new videos use ComfyUI and my friend also said ComfyUI is better. Is there an updated way to do it on ComfyUI or should I just stick with the method on the A1111 video?
I will try to do a video on that for ComfyUI; I plan to redo in ComfyUI all the things I did in A1111 and Forge.
@@pixaroma Woah that was fast thanks
Do you have knowledge on Lora Trainings for Comfy UI?
I don't have enough knowledge; I made some LoRAs for fun a few months ago. You can also train online, for example on Civitai or Tensor.Art, and for local Flux training there is FluxGym. Search those terms.
Is there any alternative open-source model like Midjourney for graphic designers, for design inspiration?
With the Flux.1 dev model you can do pretty much what you can in Midjourney.
I suggest using the Groq API / Flowise too =)
I assume that is not free; I mean, I could use ChatGPT too ;) I was just looking for a free local version.
I would be ever so grateful if you could create a workflow video that demonstrates how to change the production photo background, adjust lighting, and choose a suitable background image in ComfyUI. It would be incredibly helpful to see how to use MultiLatentComposite and Light Vector to effectively use a lantern image size and move the product around to follow the rule of thirds. If it's possible, could you also show how to apply these techniques to similar backgrounds? A video covering these topics would be truly valuable and I would really appreciate it.
@@longsyee Not sure if I can do something that advanced, but I need something like that for mockups, so if I am able to do it in the future I will make a video about it.
@@pixaroma I have some findings on BiRefNet, ViTMatte, IC-Light, and ResAdapter; not sure if it helps.
Even though it was installed, it still doesn't recognize the model in the model section of ComfyUI. Why? (7:47) I also refreshed.
Restart ComfyUI. If Ollama is running and the model is installed, it should appear; if you add a new Ollama node, see if it shows up in the list.
@@pixaroma I restarted ComfyUI and Ollama is running, but it didn't work again. Maybe I should turn the computer off and on?
@@mr.entezaee not sure, i only recently played with it and didn't run into any problems yet
@@pixaroma My problem was solved. I found out that I shouldn't be using a VPN... wow, how lucky I was.
@@mr.entezaee good to know, learned something new 😁 glad it worked out eventually
"Unfortunately, I am unable to access visual information or images, and am unable to provide any description or analysis based on them."
gemma 7b
It only works with vision models.
@@pixaroma Which one is the best?
@@dan323609 I didn't find one that is really good; for getting prompts from an image I still use ChatGPT, but you can try a few from here: ollama.com/search?q=vision&c=vision and check the ones with the most pulls.
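As a hedged sketch of how the vision models discussed in this thread are queried: Ollama's generate endpoint accepts base64-encoded images in an `images` list. The model name `llava` comes from this comment section; the image bytes below are a placeholder, not a real picture.

```python
import base64

def build_vision_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a /api/generate body that attaches one image for a vision model."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # Ollama expects raw image bytes encoded as base64 strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

fake_image = b"\x89PNG placeholder"  # stand-in for real file contents
payload = build_vision_payload(
    "llava",
    "Write a detailed text-to-image prompt describing this picture.",
    fake_image,
)
```

In practice you would read the bytes with `open("photo.png", "rb").read()` and POST the payload to the local Ollama endpoint.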
Why does it say " Preview " for windows ? What do they mean by that ? Is it like a demo version ?
The term "Preview" for Ollama on Windows means it's an early version of the software that is still being tested and refined. It's not exactly a demo, but rather a pre-release that allows users to try out the software and provide feedback before the final stable version is released. This version is fully functional and includes features like GPU acceleration and access to the Ollama model library, but it may still have bugs or incomplete features as the team gathers user feedback and makes improvements.
@@pixaroma oh, I got you !!!
A word of warning about using LLMs to generate prompts and captions for training: every time you query the LLM with an image, it will give you a slightly different answer. It is not consistent.
Yes, it changes. I use ChatGPT for captions.
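One way to reduce the run-to-run variation described above, assuming you are calling Ollama's HTTP API directly, is to fix the sampling options in the request. This is a sketch, not a guarantee of identical captions (different model versions can still answer differently); the model name and prompt are just examples.

```python
def deterministic_payload(model: str, prompt: str, seed: int = 42) -> dict:
    """Build a /api/generate body with sampling pinned for repeatability."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0,  # always pick the most likely token
            "seed": seed,      # fix any remaining sampling randomness
        },
    }

payload = deterministic_payload("llava", "Caption this image for LoRA training.")
```

With temperature 0 and a fixed seed, repeated queries against the same model are far more consistent, which matters when captions feed a training set.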
Downloading this app totally jacked my computer. I couldn't get into my Downloads folder at all; it was not responding. This was just from downloading the app, not even installing it. I had to optimize my Downloads folder, so if this happens to you, I would go to this link for the fix: ua-cam.com/video/8yIqbKvE9jI/v-deo.html Hope it's cool that I posted this; I was pulling my hair out, so I thought I would share the fix.
Never had a problem, but I never download to the Downloads folder; I set Chrome to always ask where to put files. That probably could have happened with another download too.
@pixaroma Not sure, but it's the first time it ever happened to me, and I couldn't get into the folder. It's an easy fix when you know what to do, though, so I wanted to share it to save someone time 👍
Thanks ☺️