The fear of novelty is a common human experience. In the 19th century, photography was opposed by traditional artists who feared it would replace their work. In the 1950s, traditional illustrators resisted the introduction of the airbrush, and in the 1990s, many artists who had relied on the airbrush switched to the mouse.
I am an illustrator who has seen many changes in technology over the course of my career. I studied painting for six years at the School of Fine Arts, and then worked as an illustrator in advertising agencies for over 30 years. Today, at the age of 74, I am retired but I continue to work, learn, and research. I have used photography, airbrushing, computers, and now I am exploring artificial intelligence (AI).
I believe that AI has the potential to be a powerful tool for artists. It can be used to create realistic images, generate new ideas, and automate tasks. However, I also believe that AI should be used in a way that respects the human element of art. AI should be used to enhance creativity, not replace it.
I am not afraid of the future, and I do not want to be left behind. I am excited to see what AI can do for art, and I am committed to learning how to use it effectively. I believe that AI has the potential to revolutionize the art world, and I am excited to be a part of that revolution.
WOW! I salute you for being open-minded and still willing to learn new things every day! Most 70+ year olds will not even engage with a smartphone, let alone a desktop PC or Mac!
Search YouTube and you will find a bunch of tutorials on this.
I've been feeding texture maps into ComfyUI, and with the new IC-Light addon it essentially lets me generate normal maps. The only issue now is turning those "texture" maps into diffuse maps by removing the shadows.
Yep! I'm looking into that as well... there is a whole bunch of cool things you can do...
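IC-Light does that kind of relighting with a trained model; as a much cruder fallback, here is a minimal Python sketch (Pillow and NumPy, file names are placeholders) that divides a texture by its heavily blurred luminance so low-frequency shadows flatten out. It is only a rough approximation of de-lighting, not a substitute for a dedicated shadow-removal model.

```python
# Rough de-lighting sketch: divide a texture by its blurred luminance to
# flatten low-frequency shadows. This is an approximation, not a substitute
# for a trained shadow-removal / albedo-estimation model.
import numpy as np
from PIL import Image, ImageFilter

def approximate_diffuse(path_in: str, path_out: str, blur_radius: int = 64) -> None:
    img = Image.open(path_in).convert("RGB")
    rgb = np.asarray(img, dtype=np.float32) / 255.0

    # Estimate low-frequency shading from a heavily blurred luminance map.
    luminance = img.convert("L").filter(ImageFilter.GaussianBlur(blur_radius))
    shading = np.asarray(luminance, dtype=np.float32) / 255.0
    shading = np.clip(shading, 0.05, 1.0)[..., None]  # avoid division by ~0

    # Divide out the shading, then renormalize to the original mean brightness.
    albedo = rgb / shading
    albedo *= rgb.mean() / max(float(albedo.mean()), 1e-6)
    Image.fromarray((np.clip(albedo, 0.0, 1.0) * 255).astype(np.uint8)).save(path_out)

# approximate_diffuse("texture_with_shadows.png", "texture_diffuse.png")
```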
That's exactly what I have been telling artists on Facebook. Use your skill set, while taking advantage of AI, to secure your position, because otherwise you become obsolete. And the artist's skill set is still an advantage the average user does not have.
Exactly! As I mentioned, a 3D artist knows texture mapping, lighting, and topology inside and out! All AI does is give you much faster variations to work with. Also, not everything "AI" is shiny gold :) lots of outputs are worthless. That's where know-how and skill come into play.
Isn't it pretty obvious that this is nothing like the industrial revolution, as people keep saying? Someone made a new hammer: you use it, you don't walk around it like it has the plague. Anyone who doesn't see this advice as obvious really needs help. It's like giving someone paper after watching them draw on the floor, only for them to run away screaming.
I started using finalRender in 2002-2003. It was the best render engine at that time. Then V-Ray came out and everyone shifted to it, and Corona came out in the early 2010s and everyone left V-Ray for Corona. I have shifted to open source since last year. Blender and ComfyUI are the future. Good to see you on the AI train. Welcome aboard. thinkingParticles with Blender and AI sounds like something I could use.
Blender is flooded with so many plugins and different workflows that many of them never even get discovered. Even quality products for Blender are often ignored because there are ten other plugins that do a similar job for free. The opposite is true in 3ds Max: if you build a quality plugin that offers a unique feature, people will come to you because you are the only place to get that feature.
@@PandaJerk007 Great, you do your thing.
Cinema 4D + V-Ray/Arnold/Redshift is much better than that Blender shit. But anyway, it's pretty good and free.
I suspect a later iteration of ComfyUI will add a window where you paint on the model to fix artifacts and define retopology. It already has voxel retopology. If the window has mark sharp, mark flat, smooth, delete tabs & spikes, then most of the repair work is not needed. Mark symmetrical could be a powerful tool but mark asymmetrical in some cases may work better.
Automatic retopology is already available in most software; adding it to ComfyUI should be quick.
Really skilled artists (that's not me) can also quickly identify images that are going to give topology problems. Those who can draw can create 2D sketches that produce minimal problems. If you have a camera that can take pictures from all six directions, then this software lets you digitize almost anything.
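For context, voxel remeshing and a crude form of automatic retopology are already scriptable in Blender today; the sketch below is a minimal example of that existing route (the object selection and parameter values are assumptions), not a ComfyUI feature.

```python
# Minimal Blender sketch: voxel remesh + decimate as a crude automatic
# retopology pass on the active object. Run inside Blender's scripting tab.
import bpy

obj = bpy.context.active_object  # assumes a mesh object is selected and active

remesh = obj.modifiers.new(name="AutoRemesh", type='REMESH')
remesh.mode = 'VOXEL'
remesh.voxel_size = 0.03          # smaller value = denser mesh

decimate = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
decimate.ratio = 0.2              # keep roughly 20% of the faces

# Apply both modifiers so the result is a plain, editable mesh again.
for mod_name in ("AutoRemesh", "AutoDecimate"):
    bpy.ops.object.modifier_apply(modifier=mod_name)
```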
We're at the dawn of the people industries; it's definitely a renaissance. I am beyond enthusiastic for these new technologies to hit. I cannot wait for 3D software like Maya and Max to get with the program; I've been asking them for years to take that road, with no success.
It's all crap... we're nearing the end, bro... everything is insane and corrupt... and Billy is coming with a second pandemic... because he is like God and he knows the future... there is no bright future for humanity, only more darkness... Jesus is close.
I met you at SIGGRAPH 1996 in New Orleans, I guess it was, or maybe 1998 in Orlando. I don't remember well, but... Cebas, man... I haven't heard that name since my days using 3D Studio on DOS... very good plugins indeed... legends and pioneers in 3D long before almost anyone else out there... Anyway, nice to see you around, guys! Wish you the best! We used one of your plugins for a shot in the film Deep Impact... awesome!
Thanks! We try our best.
One thing people miss about any revolutionary tool is timing and resources. For AI tools, I think the resources are, in this order: cheap silicon (for now, NVIDIA), cheap energy, and a solid amount of good data. If you remove one of these elements, improvement of the technology will slow. That's why it's difficult to predict what the future will be. So it's important to take your time, if you have it, and make improvements.
Edwin, check out texture generation and upscaling; it's very promising in our tests. Also, a renderer that can execute a prompt per primitive would be interesting.
Yes, been doing this for some time now.
@@cebasVT Is it working? We're now setting it up with ComfyUI and Cryptomatte; we hope to have some results by the end of next week.
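Not their actual setup, but one hedged way to approximate a "prompt per primitive" today is to use a per-object matte from the renderer as an inpainting mask. The sketch below assumes the Cryptomatte layer has already been extracted to a plain black-and-white image and uses the diffusers inpainting pipeline; the file names and the prompt are placeholders.

```python
# Sketch: restyle a single object in a render using a per-object matte as an
# inpainting mask. Assumes the Cryptomatte layer was already extracted to a
# plain black/white PNG; file names and prompt are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

render = Image.open("beauty_render.png").convert("RGB").resize((512, 512))
mask = Image.open("object_matte.png").convert("L").resize((512, 512))  # white = repaint

result = pipe(
    prompt="weathered bronze statue, film lighting",
    image=render,
    mask_image=mask,
).images[0]
result.save("render_restyled_object.png")
```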
Using a prompt or a photo capture to generate a seamless texture with a normal map is already a reality in Substance Sampler/Designer and Adobe Firefly, so it's a matter of time until we can generate anything from a prompt and a simple photo capture. It adds some value as a kitbashing tool for pre-concept work, but you still need to finish the art direction, retopologize, and pose the object for animation.
Yes, this is the way I see it coming as well.
@@cebasVT I'm dazzled by the woman-and-robot-toy example. How the hell did the algorithm extrapolate what it couldn't see behind, from just one picture?
@@Meteotrance Isn't that crazy?
It gives me motivation that you are optimistic about AI. Like maybe I can use AI in my own 3D workflow! -- I am excited to see your tools and tutorials, and how they work in modern workflows! -- I do think short videos can get more views, like 5-8 minutes is a sweet spot that many viewers look for.
Thanks for your feedback!
I would definitely like to see a tutorial video on how to get ComfyUI up and running with Stable Diffusion (if that's what you're using). How many models did you have to train the AI on as well?
Thanks for sharing!
I'll think about it!
Blacksmiths still exist, mostly for shoeing horses and repairing decorative ironwork. They almost all use truck-mounted forges with a gas bottle fueling the forge, so they can go to the stables or the building-restoration site. The anvil is bolted to the back of the truck, with a jack/foot that drops down to the ground to take the forces off the chassis. They are three or four times faster with a gas or charcoal forge on the truck bed and an annealing furnace beside it.
Comparing AI to the replacement of horses probably misses the point that AI is the first "tool" to replace decision making and ideas. During the industrial revolution, technology got better at replacing muscle power (from steam to oil), until the only jobs left were the ones that needed flexible human decision making or ideas. What's left of your job or creative industry when all creative decision making and creative ideas are automated?
AI is not making decisions yet; possibly that will take a few more years. It is a new tool that does many things faster, better, and easier, just like in the industrial revolutions we have seen. Users still need to install this software and learn it. It does not install itself and create images on its own.
Thank You.
Really. That is all I can say.
The problem with life is that you constantly have to adapt, it's part of life/death's dance, you are caught in the middle with no escape but to follow the tune and fade away.
And it's never magic, it's only simplicity escaping our understanding.
I'm a hobbyist, but aspiring. I had been feeling that A.I. was mostly a time sink, another marble in the "attention economy" to chase around ;)
But this is the sort of thing I was hoping for: a base-mesh assistant. I thought it might work best in the form of a modifier. When active, you see an alternate mesh in the same position as the one you are sculpting, but shown in x-ray shading. I would ignore it deforming (in the viewport as I said, or perhaps better in the modifier preview window) until it catches my eye, then turn on solid shading in the modifier tab and examine this alternate mesh. If I like it, I hit "renew mesh" and restart with this newer base mesh. It would be nice if it auto-saved the previous mesh. And yeah, I could drag and drop images into it; it sounds very nice.
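Even without a dedicated modifier, a rough version of that overlay can be scripted in Blender today. The sketch below is an assumption-heavy illustration (the file path and object naming are placeholders, and the importer operator differs between Blender versions): it loads an AI-generated base mesh at the sculpt's position as an in-front wireframe and provides a small helper to adopt it as the new working mesh while keeping a backup.

```python
# Blender sketch: overlay an AI-generated base mesh on top of the current
# sculpt as an in-front wireframe, plus a helper to adopt it as the new base.
# Run inside Blender; the file path is a placeholder, and older Blender
# versions use bpy.ops.import_scene.obj instead of bpy.ops.wm.obj_import.
import bpy

def load_ai_suggestion(filepath: str) -> bpy.types.Object:
    sculpt = bpy.context.active_object          # the mesh you are sculpting
    bpy.ops.wm.obj_import(filepath=filepath)    # Blender 3.2+ OBJ importer
    suggestion = bpy.context.active_object      # importer selects the new object
    suggestion.name = "AI_Suggestion"
    suggestion.matrix_world = sculpt.matrix_world.copy()  # same placement
    suggestion.display_type = 'WIRE'
    suggestion.show_in_front = True             # x-ray-style overlay
    return suggestion

def adopt_suggestion(sculpt: bpy.types.Object, suggestion: bpy.types.Object) -> None:
    """Keep the old sculpt as a hidden backup and continue on the suggestion."""
    sculpt.name += "_backup"
    sculpt.hide_set(True)
    suggestion.display_type = 'TEXTURED'
    suggestion.show_in_front = False
    bpy.context.view_layer.objects.active = suggestion

# suggestion = load_ai_suggestion("/path/to/ai_base_mesh.obj")
```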
That's amazing! Thanks for sharing. Can you do a tutorial on ComfyUI, please?
Sir, can you make a video on how to set this up on our own devices?
This is super cool, thanks for creating this video. If you make a tutorial I will definitely be excited about it; we need more open-source tutorials and know-how to be able to do things locally.
I'd like to have the ability to use more than one photo, say for a building: a side and an end elevation. I use photogrammetry, but I have to take all the photos myself; if I could use images found online that show more than one view of the same subject, that would make a more accurate model possible.
Great video , I have subscribed
Cheers
Yes, that's one thing we are looking into: whether there is a solution where you supply multiple views yourself. It would be great to get an AI system working that could automatically handle multiple views.
@@cebasVT Exactly my thoughts. And also, as a next step from the kind of ragged first result, have it optimized into a clean object.
It is true that these technologies are very useful and it is necessary to learn and use them, but I do not agree on the long-term philosophical perspectives. The exponential increase in productivity necessarily creates an overproduction of goods, and since markets cannot physically expand without limits, this leads to a fall in the rate of profit, which historically has disastrous consequences. In fact, we always forget that the capitalist economy regularly leads to immense destruction of fixed and variable capital in order to survive, and this is always set in motion by the development of productive and technological forces. There is no reason to think that things could be different this time.
Yes, it is hard to predict the effect it will have on society. There are many pitfalls, and some may have devastating "side effects". Besides the main theme: "If fewer people produce more goods, who will be able to afford those goods?"
Human nature is biologically wired to conserve energy; this will definitely lead to humans no longer being able to do trivial stuff in the long run. Why would you learn anything when you can just ask for it?
Strange times are ahead of us!
Very cool. Could you share the ComfyUI config file you used?
The closest it can get is to high-end photogrammetry in a year or two, depending on marketplace demands. Let's presume one person prompts tons of concepts for a character and another one does the 3D prompts. At least two different types of skill sets and mindsets are needed even when using the deep-learning tools. Tons of work still follows further down the pipeline, even for low- to mid-range production. Not only are retopo, UVs, and rigging far beyond the capacity of all available deep-learning tools, but more serious departments such as groom, CFX, and VFX are another obstacle for such tools entering the pipeline. There is a lot of potential, and I prefer to believe it will actually contribute for the better, despite the difficult situation of the CGI/VFX sector worldwide. They say VFX is dying when it is actually transforming. I cannot say the same for gaming, where the only hope will be meta and real-time pipelines.
What you are describing is an AI "agent" - effectively a hive mind of differently skilled AI, connected through an AI project manager. No single AI will do it all, but a user would only need to feed their input to a single point and the PM AI would delegate tasks to reach a finalized output.
There might not yet be an agent set up for this specific use case, but there are plenty of proficient agent setups out there you might be interested in checking out for the current progress.
TL;DR - that's exactly how AI is being leveraged in large-scale endeavours: by building AI "agents".
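To make the pattern concrete, here is a toy sketch of that delegation idea; every name and the routing logic are invented for illustration, and real agent frameworks add LLM-driven planning, tool use, and feedback loops on top of this skeleton.

```python
# Toy sketch of the "project manager" agent pattern: a router delegates
# sub-tasks to specialist handlers. All names here are invented placeholders;
# real frameworks add LLM-driven planning and feedback loops on top.
from typing import Callable, Dict, List

def concept_artist(task: str) -> str:
    return f"concept image for: {task}"

def mesh_generator(task: str) -> str:
    return f"rough 3D mesh for: {task}"

def retopo_specialist(task: str) -> str:
    return f"clean topology for: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "concept": concept_artist,
    "mesh": mesh_generator,
    "retopo": retopo_specialist,
}

def project_manager(request: str, plan: List[str]) -> List[str]:
    """Delegate each planned step to the matching specialist, in order."""
    results = []
    for step in plan:
        handler = SPECIALISTS[step]
        results.append(handler(request))
    return results

print(project_manager("a rusty robot character", ["concept", "mesh", "retopo"]))
```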
Can you provide it with the six views, or maybe improve the six views with an AI upscaler and then re-upload them? I was thinking of reverse-engineering old 3D files to see if this would work.
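On the upscaler idea, here is a rough sketch of how the six views could be batched through a diffusion-based 4x upscaler before re-upload; the model ID and file names are assumptions, not part of the workflow shown in the video.

```python
# Sketch: upscale the six generated views with a diffusion-based 4x upscaler
# before feeding them back into the image-to-3D step. File names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

views = ["front", "back", "left", "right", "top", "bottom"]
for view in views:
    low_res = Image.open(f"view_{view}.png").convert("RGB")
    upscaled = pipe(prompt="sharp, detailed object render", image=low_res).images[0]
    upscaled.save(f"view_{view}_x4.png")
```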
My doubt: how is that topology working? How does it handle moving parts?
Houses do not move, and gears only rotate; not everything needs to be movable. But yes, you are right, at the moment it is restricted.
@@EdwinBraun I just wanted to know if they solved the problem; also the topology one, because there was a point where it was better to start over again manually.
This could be used for static assets; the problem, in my opinion, is that if I have to retopo / mirror / use all the tricks on everything it processes, it loses its time-saving value.
Depends on the project, I guess, and on the amount. But yes, right now it is a very restricted workflow.
Welding is one of the professions that came from blacksmithing.
There are many welders, and the pay is good.
Yeah just think: back in the Ancient Greek times we used to have Actors who would portray human emotions for the enjoyment of crowds. Now we have… oh wait…
I tried some 2d to 3d like that some months ago, and the results in UV mapping and topology were so terrible, that I assumed it would be better to make models manually. From your presentation, I can not see how the UV and topology is handled, but from what you say about it, I must assume it is still terrible.
Yes, it is terrible; however, you could use the six views and create a better texture out of those.
How come you didn't show the nodes that calculate the mesh?
Ohh, I did not want to do a tutorial on ComfyUI, just concentrate on what AI can do. I can do a ComfyUI tutorial as well. The workflow is the default CRM one from the Git repository.
@@cebasVT He didn't ask for a ComfyUI tutorial; he asked why you hid the node that you claim does the work.
@@charlesreid9337 Ohh, I tried to explain: this was not about the ComfyUI workflow. This was about converting an image to 3D.
I know some compositing artists are using ComfyUI to make custom nodes in Nuke. You will not lose your job if you can make your own tools AND if you learn and adapt to the changes.
AI is not the same as replacing horse-drawn carriages with cars. Firstly, AI uses other people's property for training (and training means storing this property and only the big AI companies earn money from this in the long term) and secondly, it requires far fewer people and less talent thanks to continuous automation.
This is not to say that we should ignore the technology, but there is absolutely no reason to talk it up. If you ask around the job market even a little, you can already observe an extreme decline in value. It would be smart to limit the market power of AI companies instead of rolling out the red carpet for them.
Yes, a valid point. It is very likely that at some point the legal system will shut down all AI based solutions that harvested "public" available data. Similar to Google in the past, scanning books without consent of the authors, they created facts and in the end it was accepted. Crazy times!
But be aware that not all AI is using "public" data, many are using manually created and legal content. So AI tools will prevail.
Can you share the workflow please?
This is the standard workflow example that came with the CRM set. I just added some image scale factors and nodes to it. The main issue is getting this thing to work on Windows.
@@cebasVT yes, thank you, I found the workflow and then tried to get it to work for a few hours. Seems like some Nvidia module is hard to get running
@@rothauspils123 Yes, that is necessary to get this working. I guess a tutorial might be needed for a Windows install.
excellent presentation
LOL I came here hoping for a new AI breakthrough, but it's just old stuff. Generative 3d is not production level yet, and only marching cubes produces sculptable bases. The next gen tools will be amazing though! :-)
Yes, it is a start. Still, artists need to learn this! Thanks for your feedback!
what's this tool called?
I used multiple "tools". the base AI system is Stable Diffusion, that is the AI creating images out of random dots, then there is a tool called comfyUI that allows you to connect multiple things together like this CRM so you could create images with Stable Diffusion and then process this image with another AI (CRM) to get a 3D mesh out of one image.
We are trying to do a SIMPLE one click install/run assembly to allow more people to test this out.
@@cebasVT Can't wait. Awesome!
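For readers who want to try a similar chain outside ComfyUI, here is a rough Python sketch using the diffusers library; the model ID is a common example rather than the exact model used in the video, and the image-to-3D (CRM) step is shown only as a hypothetical placeholder, since that part runs through its own ComfyUI nodes.

```python
# Sketch of the pipeline described above: Stable Diffusion turns random noise
# into an image, then an image-to-3D model (CRM in the video) turns that image
# into a mesh. run_crm_image_to_3d() is a hypothetical placeholder for the
# ComfyUI/CRM step, which has no simple one-line Python API here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a small toy robot on a plain white background").images[0]
image.save("robot_concept.png")

# In the video, this image is handed to the CRM nodes inside ComfyUI, which
# first generate six orthographic views and then reconstruct a textured mesh.
# mesh_path = run_crm_image_to_3d("robot_concept.png")  # hypothetical helper
```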
Is this a regular green screen or AI magic for the background?
hahah, no greenscreen. I usually use a greenscreen but not this time. I used NVIDIA Broadcast with background removal and then in OBS I chose the chroma filter to get a simple image of my office as a background.
With the speed AI video is developing, I think 3D rendering might not even be needed in the future... You could just tell the AI things like: add a meteorite to the scene that crashes into the skyscraper... Add a dragon in this color and style that makes this movement... Maybe just film at home in a room with one actor and let everything be replaced by AI, even the actor and the voice. It will need 5-10 years, but it will definitely move in that direction.
Yeah that's the goal I guess :)
It would be nice if it took the lighting info out of the final object. Baking the lighting in really is pointless.
Word processing had the same effect and disasters were predicted. Some people lost their jobs. However it did not cause mass unemployment or anything like that.
Yes, true; however, look at the printing industry as a whole. It is going down and will at some point no longer exist. Putting ink on paper is dead. I remember we had printed manuals for our software!! This is no longer the case.
@@cebasVT And the printed software manuals were often wrong or out of date by the end of the month. Digital is an improvement. My point is that people from the typing industry or print media are not the ones sleeping homeless on the streets.
I've played with a number of these meshes, and in reality they look OK at first glance, but the topology is a mess and in most cases a good 3D artist can do a much better job in not a lot of time. Still not bad for certain cases where you need a quick blockout of something.
I don't understand what's wrong with people who cheer for their bane!
As I see it for now, the AI-created 3D mesh has not much detail and no aesthetics; the quality is still very poor. I don't see it as a threat for now. Chill.
Yes, true. That is my point: a proper pipeline can use this mesh and, with retopo and texturing, make it usable. Some work still needs to be done :)
Retopo would help, and textures can be AI up-resed.
He might not be talking only about today, this exact moment. He's probably talking about now and the near future. Most people are not going to live only one more day and that's it. Just because it's low quality with no details doesn't mean it won't improve. As far as we can tell, it will only improve.
For "now" is the magic word here ! Give it a year and it will do a better job then most of us ! It was a fun ride folks !
Wait a month, that is the AI evolution rate
Yeah AI isn't going to replace you, the artist next to you using AI will tho.
NVIDIA has been trying to tell us this since the beginning; weird how people can't seem to fathom it.
Also, when did Artists stop making their own tools?
When did that become the norm?
Yes! Valid point.
Thank you for the funny video!! 🤣🤣
Only commission artists drawing other people's art/faces have been replaced. The horse and carriage is the worst analogy I have ever heard. Know what the only self-driving car in the world is? A horse. I can drive it drunk and it will always get me home; it never has an error or goes speeding down the road crashing into human beings. Meanwhile, Tesla... which relies entirely on sensors and familiarity. And guess what, horses reproduce and eat grass, and I can eat them when they're too old. Gas is never going out; not a single energy resource has gone extinct. AI right now, RIGHT NOW, pumps out errors that need correcting. I have used AI for multiple projects and have concluded: if it's called AI, I will be spending as much time ironing out errors as I would doing things from scratch. The best use case is for people who will accept errors. That means degradation of the final product. In other words, online only. When the internet came out, everyone said it would never take off. AI is out, and it's really large language models, and everyone says it's super great. The things that are AI and actually work, like calculators, are not called AI. Think about that. "AI" is trash. It's hilarious.
Interesting take, I think for now you have all valid points. However, arguing that things need time to create (be it with Photoshop or generative AI) is not an argument against it. Good quality outcomes take time regardless of the tool you use. Even taking a picture needs time, and all an "Artist" does is press a button on a camera!
But yes, in general you do not get the promised one-click masterpiece, at least not yet. That's also what I show in my video: the output needs a lot of work to be usable. But what if it gets better, let's say exponentially?
I hope AI is less distracting than your flickering key.
Heheh Sorry for that. I did not use my Green-Screen setup. Thanks for watching anyway :)
Why is this video 28 minutes long? You say the same thing 4 or 5 times over and over. You don't say anything different after the first picture; you just repeat and repeat, saying the same thing while it's processing a picture, as if you didn't just say it two minutes ago... then... you do it 3 more times. Like, what the fk.
Yes I agree it can be condensed. Next videos should be better. Thanks for your feedback!!