It's getting hot! Looks like it's a new round of awesomeness!
Thanks, and I'm working on more, coming soon
Profiles are really fun! Thank you 😊❤
Yes, it will become more useful down the line as these nodes grow. Thanks for leaving a comment 🙂
This is what I was waiting for......thank you.
Fantastic love to hear when people like the nodes❤
First!!! keep up the great work ImpactFrames
Thank you 🙂
Such an underrated channel. Keep it up man
Thank you, wish I had time to make more videos :D
You're the beast! Let's get Llama to generate images for AnimateDiff, then LLaVA to check video consistency and generate a new loop of animation with a plot that Mistral will dictate... Something like an animation studio driven by AI and without human control 😆
Thanks, there are many things to explore; we're just scratching the surface here. The next update of the nodes is coming, wish I had more time to release things faster :D
mad respect, thank you very much
Thank you 🙂
Nice! keep it up ! :)
Thank you so much 🙂 I am improving it: I am adding Kobold and Groq, and will add the others as soon as I can. I am also refactoring the code to make it easier to push updates 👩💻
Hi IF! Would it be possible to run it with google colab? I do not really know how to install Ollama and models there. thx!!!
Yes, it is possible, but it takes a lot of setup: you need to install Ollama on Colab, and it will take time and precious tokens. You could also use a Groq API key by setting up the environment variables, but you need to delete them before you finish or people might steal them; I am not sure if Colab will store those after the session. There are services like SaltAI that have these things ready to work already. I might do a tutorial on how to use it on cloud instances in the near future; I think Colab is not the best way.
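The environment-variable approach mentioned above can be sketched like this in a Colab cell (a minimal sketch; `GROQ_API_KEY` is the conventional variable name for the Groq API, but check the node's documentation for the exact name it expects):

```python
import os

# Set the key for this session only; never hard-code a real key
# into a notebook you plan to share.
os.environ["GROQ_API_KEY"] = "gsk_your_key_here"

# ... run your ComfyUI workflow / node that reads the key here ...

# Remove the key before the session ends so it is not left behind.
os.environ.pop("GROQ_API_KEY", None)
```

This only clears the variable from the running process; if the key was ever pasted into a cell, also clear the cell and the notebook's output before sharing it.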
@impactframes Wow! It would be amazing if you could share this knowledge. We are waiting. Thank you IF! :)
Hello. Do you have any hardware recommendations for generating images/videos?
hello, i am using comfyui with google colab. is there any way to use this on the cloud server? thank you very much in advance :^)
@ImpactFrames sorry for bothering but I'm stuck on this issue. Can you pls clarify?
how do i use groq in the prompt to prompt node
It's only on the chat node for now; I will add it to the prompt and image prompt nodes, it's just that I have been busy and I am making the code more modular. I will be shooting to have that sorted this week, as well as the new memory RAG systems.
@@impactframes awesome!!! but one request, please make the API key adding process easier... thanks anyway
I am currently using alltalk to create bios for characters in a guild and for storytelling; I have wav and mp3 files of those speech files. I also have an image for what the narrator looks like. Am I right in thinking I can plug that into the dreamtalk node to have it create the avatar speaking the words? If so, can it change the emotion on the face based on the current line (most audio is around 5 minutes long, so one emotion may look strange)?
Reluctant to install and try, as currently I have an LLM node, so wondering if there will be clashes. (I used Plush-for-comfy, as I can feed the instruction in as text, which helps guide what I expect from the prompt.)
On a side note, I didn't realise whisper could be used to generate speech; as I say, I use alltalk for that, but it's interesting.
There is a new one called LivePortrait; DreamTalk can't do automatic emotion based on audio as it is. You can only start with some of the bases: there are some expressions for anger, joy and so on, but once you select one it sticks to it and doesn't change. I haven't tried plush-for-comfy.
@@impactframes Plush for comfy is pretty awesome for LLM use. Last night I did install all 3 of your node packs in the manager (using the author search), but they do not seem to include the Dreamtalk node? I figured stuck on one emote is better than none.
I also tried hedra, which, although it is a closed-source site, did really well... for 30 seconds, then cut the rest... damn limits!
Live portrait sadly needs a driver video; if I was just doing one or two of these I could record me lipsyncing to the audio, but that would become a pain very fast. The more I look, the more I realise that no one has managed to replicate what "Crazy Talk" did over a decade ago lol (a now discontinued product). We need an open-source alternative to hedra... image and audio in, animated avatar out. Perhaps there is just no demand for such a thing?
@@DaveTheAIMad here is the link to the dreamtalk node, but it's a complex install because it needs an ML package written in C that has to be compiled for your specific machine: github.com/if-ai/ComfyUI-IF_AI_Dreamtalk
@@DaveTheAIMad you could also try V express github.com/tiankuan93/ComfyUI-V-Express
@@impactframes interesting. Will give that a look.
The devs of echomimic (found when trying to get sadtalker to work) have said they plan to release a ComfyUI node too... so that option is in the works as well.
I'm using the portable version of ComfyUI and it gives me the following error, pls help!
When loading the graph, the following node types were not found:
IF_DreamTalk
Nodes that have failed to load will show as red on the graph
Right, it might be the C library for the ML stuff that needs to run; it is hard to install. You can maybe try the pointers on the repo github.com/if-ai/ComfyUI-IF_AI_Dreamtalk, but this is hard to install. Otherwise there is a new node that might be better, called V-Express; maybe have a look with the ComfyUI Manager. It also does talking avatars.
do you have discord community
I have a Discord but didn't have the expertise to get it moving
@@impactframes I'm good with Discord, will help you