You can probably output the generated code into your blend file through the Gemini API. But it would require you to write an extra script that's running on your computer to take in that data and put it into that blend file. You can probably ask Gemini about it and figure it out! 😉
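A rough sketch of what that extra glue script could look like (everything here is an assumption, not something shown in the video — the fenced-reply format and the function names are illustrative): pull the Python source out of the model's reply, then execute it inside Blender, where `import bpy` works. The actual Gemini API call and add-on wiring are left out.

```python
import re

def extract_code(reply: str) -> str:
    """Pull the Python source out of a fenced ```python block in a model reply."""
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else reply.strip()

def run_in_blender(script: str) -> None:
    """Execute the generated script; inside Blender, `import bpy` is available,
    so the script acts directly on the open .blend file."""
    exec(compile(script, "<gemini-reply>", "exec"))
```

You'd run something like `run_in_blender(extract_code(reply))` from Blender's scripting workspace after fetching `reply` from the API.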
As a boy in the mid 70s, I had a vivid dream of being in a wondrous snowy white landscape of snow-covered trees, walking with this woman hand in hand. I was aware it wasn't real and that we were both in some sort of machine that let us be any age and anywhere. This will happen, and I'm now 56 and have met the lady in the dream!
Brilliant! Suggestions: 1. Make your commands to Gemini less conversational — just commands. 2. Have Gemini record all its output to a continuous log, i.e. a text file, which you can watch during your minimal use of TinyTask, so that you have a control view of the scripts and can give precise correction commands.
Yeah, that is already more than I can do in Blender. Hopefully the Blender geniuses maybe add a Gemini API or otherwise make it work how you suggested. Please post another video when you figure out more. Subbing.
This is amazing. Luckily it is well trained for Blender; for specialized CAD solutions such as CATIA it is not very good, and for VFX software like Houdini it is also not very well trained. But I am sure it will get better and better, fast!
Hm yeah, well. Nice proof of concept, but still not very useful, because it's incredibly hard to precisely describe what you want. But nice to maybe have an assistant for switching tools and adding modifiers, just to save some keystrokes or walks through menus. But imagine this in an office environment — your colleagues will kill you if you're talking all day XD
Wow. google is really spreading the payola dollars around huh. Ask it to give you the approximate position of the buttons, you know, something that would actually be useful........
It's very cool, it's like magic!! But like magic, it's useless. You can create these scenes faster with shortcuts and no AI. I think it can be good for more repetitive and boring tasks though. But in that case, you'd create an add-on so you can always run it for other scenes. I think this will be useful when it can actually model something (clean) from scratch, or when it can kitbash scenes or prefabs from an asset browser.
I think this is a brilliant proof of concept of how useful these things can be in the future. Really excited about this - thanks for the demonstration
LLMs struggle with physics. You should've asked it to create a landscape or something with various elements
This is the wildest thing I have seen someone do with AI
They haven't even started yet 😎
This is awesome! Early stages, but very impressive how you got this to work a little bit.
FINALLY A USEFUL VIDEO OMG, THE OLD YOU IS BACK
This is an amazing proof of concept. Keep up the good work. 👍
I already like playing around in Blender a lot, this just makes it a thousand times more fun.
Now that was a sci-fi episode.
Nice update here, yeah that can be so helpful for artists who don't know much about scripting. Quite impressive, though, for such a simple demonstration!
Almost like magic. This will have many uses for us 3D Artists. Thanks for sharing!
❤❤❤ you are so clever! beautiful test! you are the best!
Another banger, like usual
Damn, that's amazing! I'll try that and let you know if I come up with something amazing!
Something went right!
I was already thinking about this when we got "OK Google"
Very impressive!!
Holy! That's a brilliant setup for voice command control😲
Imagine integrating this directly into Blender.
excellent video and really exciting to see what is already possible
Can't be long, surely, before this is turned into some kind of add-on?
hahaha just got on the TinyTask part, you are wild bro :D
I like your vids
Brother you are awesome 👍
We are gonna have a personal Blender 'Jarvis'.
Me every time I watch a Polyfjord video:
How did he even think of that??????
The future is scary
You found all the lazy ways of designing the model, I was laughing half the way through.
wow this is genius, thx for the video
I've been doing this for a year now and this is the only model that does it this well
very cool!
I was literally clapping my hands..
Bro is lazy af and extremely talented at the same time 😂
Genius.. ❤ from Zambia 🇿🇲
You're so smart, man :)
Am loving this video.. tons of info too 😅😅😅
Crazy 😮👍
This AI sounds like he's so over his purpose lol
I wonder how it would do if you described something more complex than the basic meshes. Like if you describe a scene to it.
Well this is both fascinating and terrifying 🫣😳😁🤣🤓 futures changing fast!
They, sometimes continued people, had to create all that stuff for something different
BTW, remember Gemini isn't looking at the render screen when you are giving it commands to create the Python ... there is a way to, but ...
nice
As always, Google makes a half-baked model — the "something went wrong" errors are very annoying. That's the reason I still pay for ChatGPT.
❤
Enhance..
I think you just need to be much more precise when describing the task. Many of the commands you gave it were a bit messy 😊
That's a good point yeah!! I agree
It always falls asleep and restarts on me too
😮🤯
I did the same a year ago with only ChatGPT 😂 it's crazy
wow
This sounds tedious
Maybe for now, but imagine this as a Blender add-on
Anyways.. nice..
interesting
Not really useful xD But an exciting concept nonetheless, just one step closer to an AI assistant like Jarvis or EDI :D
Agreed!! Thought I’d make this video today so we can laugh about it in 6 months
Or just give it full mouse and keyboard control... then it can write or point and click, like we do... take away the burden of our existence, please, AI
Yeah, AI is not taking over — someone could do all this by hand in like two minutes
is this legal?