Hi Nissim. Where do I get that OpenAI Prompt generator node please?
I wrote this one myself; you can find it here:
github.com/nisimjoseph/ComfyUI_OpenAI-Prompter
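In case it helps: custom nodes like this are typically installed by cloning the repo into ComfyUI/custom_nodes/ and restarting ComfyUI. Below is a minimal sketch of how an OpenAI-backed prompter node is usually structured; it is not the actual code from the repo, the class and field names are just illustrative, and it assumes the openai Python package (v1.x) plus an OPENAI_API_KEY environment variable.

```python
# Minimal sketch of a ComfyUI custom node that asks OpenAI to expand a short
# idea into a detailed prompt. NOT the actual ComfyUI_OpenAI-Prompter code;
# names here are hypothetical. Assumes the `openai` package (v1.x) is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI


class OpenAIPrompterSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI reads this to build the node's input widgets.
        return {
            "required": {
                "idea": ("STRING", {"multiline": True, "default": "a cat in the rain"}),
                "model": ("STRING", {"default": "gpt-4o-mini"}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"
    CATEGORY = "text"

    def generate(self, idea, model):
        client = OpenAI()  # picks up OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "Expand the idea into a detailed video-generation prompt."},
                {"role": "user", "content": idea},
            ],
        )
        return (response.choices[0].message.content,)


# ComfyUI discovers custom nodes through these module-level mappings.
NODE_CLASS_MAPPINGS = {"OpenAIPrompterSketch": OpenAIPrompterSketch}
NODE_DISPLAY_NAME_MAPPINGS = {"OpenAIPrompterSketch": "OpenAI Prompter (sketch)"}
```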
@@NissimsTutorials-l6g WOW Nissim! That's so awesome of you! Thank you for all you do.
@@noNumber2Sherlock With pleasure. I thought others would need it as well. If you see any improvements needed, please write to me here or in the GitHub repo.
@@NissimsTutorials-l6g Hi my friend! I want to help others who follow your great work and may not know this. I made the mistake of assuming that my ChatGPT Plus account would cover OpenAI API usage. The OpenAI Prompter node kept telling me that my quota was depleted although I hadn't even used the new API account. ChatGPT and API accounts are not the same. Also, I did not get any free credits, even for a new account, at least in my case, so I bought some credits; it wasn't expensive.
Also Congrats! Over 1K views already Nissim! Keep the momentum! L'Chaim
@@noNumber2Sherlock Thank you for the kind words. I think you get some free credits for the OpenAI API per month. I will also check again with another free account and will update here on the results.
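For anyone hitting the same confusion: a quick way to tell whether an API key actually has usable credit (independent of a ChatGPT Plus subscription) is to make one tiny test call and look at the error class. This is a minimal sketch assuming the openai Python package (v1.x); the error class names below are from that library.

```python
# Minimal sketch: check whether an OpenAI *API* key has usable quota,
# independent of any ChatGPT Plus subscription on the same login.
# Assumes the `openai` package (v1.x).
import os

from openai import OpenAI, AuthenticationError, RateLimitError

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

try:
    # Smallest possible request: one short completion.
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
    print("API key works and has available quota.")
except AuthenticationError:
    print("Key is invalid or belongs to a different OpenAI account/project.")
except RateLimitError as err:
    # An 'insufficient_quota' message here means the API billing has no
    # credit, even if ChatGPT Plus is active on the same account.
    print(f"Quota problem: {err}")
```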
Hello sir. Is there a workflow to turn a product image into a professional promotional video?
Hi, I know there is an option to set an image as the start image for the video. I am experimenting with that model now for the next video.
I will upload a video about image-to-video soon.
Thx! Trying the 14B now. Have a 4090, 7950X3D & 96GB (2x48) RAM on an X670E-E.
How were you able to load 192GB of RAM?
I tried 4x32GB and burned out the memory controller on the 7950X3D.
Edit: Each step takes more time, the opposite of what you listed... My attempt: Step 1 - 1100s, Step 2 - 1400s, Step 3 - 1800s (that's as far as I've gotten so far).
Hi, and you are very welcome.
About the memory: I have an i9 14th gen, which supports 192GB of memory; not all CPUs support it. For example, the i9 13th gen supports only 128GB. You also need to make sure the motherboard supports it. I did deep research on motherboards before buying mine.
Please use the next video I did on the 7B version of that model: ua-cam.com/video/X7HKpTBfSI4/v-deo.html&ab_channel=Nissim%27sTutorials
It will run in 15 minutes with that setup.
Please update me if you need any other assistance.
Which GPU do you use?
Hi @nirdeshshrestha9056,
I have an Nvidia 4090 Founders Edition, 192GB of memory, and an i9 14th gen.
Also, I am running another render right now that has been going for the last 2.5 hours and is still running... it will finish in ~1 hour.
@NissimsTutorials-l6g omg I was planning to test on my 3060
@NissimsTutorials-l6g thanks for sacrificing your time
Please try it and tell me how long it took you for 121 frames... I think that model is super nice but not practical for most users/computers.
BTW, I also have four 2TB Samsung 990 Pro SSDs connected in RAID 0 as one 8TB drive, so it writes/reads at ~32 GBit speeds... so even loading the model takes around 1 minute on my machine.
@@nirdeshshrestha9056 Please use the 7B model and not the 14B. It will run on your GPU and finish in around 1 hour on the 3060.
If you use the 7-billion model at a resolution of 1024x768 with 72 frames and 30 steps, the video is created on the 4090 in about 7 minutes. Same settings but 120 frames: approximately 12 minutes.
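Those two numbers suggest render time grows roughly linearly with frame count, which gives a rough way to guess other lengths (for example the 121-frame case asked about earlier). This is only an estimate from those two data points, not a measured benchmark.

```python
# Back-of-the-envelope sketch: assume render time scales roughly linearly
# with frame count, based on the two runs quoted above
# (72 frames ~= 7 min, 120 frames ~= 12 min on a 4090 with the 7B model).
def estimate_minutes(frames, f1=72, t1=7.0, f2=120, t2=12.0):
    """Linear interpolation/extrapolation between two observed runs."""
    rate = (t2 - t1) / (f2 - f1)  # ~0.1 minutes per extra frame
    return t1 + (frames - f1) * rate

print(round(estimate_minutes(121), 1))  # ~12.1 minutes for 121 frames
```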
Thanks, I will do a video on that model soon.
I just completed rendering another video; it took 4.5 hours and the results are not so good.
I also ran it with the 7B model and it is a 15-minute render. The quality is almost the same as the 14B model, but it takes 17x less time. I was planning to create another video for the 7B anyway, so I will make it earlier than I thought.
Just released the video on the 7B model, see it here ua-cam.com/video/X7HKpTBfSI4/v-deo.html&ab_channel=Nissim%27sTutorials .
You were OOM, so no wonder it took ages :) It's currently not possible to use the large model locally. If this model is only partially loaded, it's just not useful at all... Surely in some time some clever people will work around the limitations, like always... Quality will be the cost. But I guess there will be no miracle. It's pretty clear that we need to upgrade our 4090s soon to get the needed 32GB of VRAM :D, for this model or for anything coming next.
Yes, but for now I released the video on the 7B model so everyone can use it; see it here: ua-cam.com/video/X7HKpTBfSI4/v-deo.html&ab_channel=Nissim%27sTutorials