Stable Diffusion TripoSR Transform Image To 3D Object - Is it Good?

  • Published 8 Mar 2024
  • In this video, we'll be diving into the exciting world of 3D modeling with the new TripoSR model from Stability AI. This cutting-edge model, developed in partnership with Tripo AI, allows you to transform flat 2D images into full 360-degree 3D objects.
    Join me as we explore the capabilities of TripoSR and learn how to install and use the model in your own projects. We'll discuss the training process, hardware requirements, and where to download the weights. Plus, I'll walk you through the installation steps, making it easy for you to get started (a minimal local-run sketch follows the links below).
    Stability AI, best known for Stable Diffusion, developed this innovative model in collaboration with Tripo AI, a company specializing in 3D generation. With their web platform, you can effortlessly generate 3D objects online, or integrate the API into your own software.
    During the video, I'll showcase some stunning examples of the TripoSR model in action, including animations created using these 3D objects. We'll also take a look at the pricing options for online processing of 3D objects, giving you a comprehensive overview of what's available.
    If you're interested in exploring the world of 3D modeling and animation, this video is a must-watch. Join me as we delve into the possibilities of TripoSR and discover the incredible potential of transforming 2D images into lifelike 3D objects.
    If you like tutorials like this, you can support our work on Patreon:
    / aifuturetech
    Discord : / discord
    TripoSR News : stability.ai/news/triposr-3d-...
    Tripo AI : www.tripo3d.ai/
    Huggingface : huggingface.co/stabilityai/Tr...
    Github : github.com/flowtyone/ComfyUI-...
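
    Below is a minimal sketch of running TripoSR outside ComfyUI, adapted from the example script in the official repository. Treat it as an outline, not a verified recipe: function names and arguments may differ between versions, and "input.png" / "output.obj" are placeholder paths.

      # Minimal TripoSR image-to-3D sketch. Assumes the tsr package from the
      # official TripoSR repo is installed, plus torch, numpy, Pillow, rembg.
      import numpy as np
      import rembg
      import torch
      from PIL import Image
      from tsr.system import TSR
      from tsr.utils import remove_background, resize_foreground

      device = "cuda" if torch.cuda.is_available() else "cpu"

      # Weights come from the Hugging Face repo linked above.
      model = TSR.from_pretrained(
          "stabilityai/TripoSR",
          config_name="config.yaml",
          weight_name="model.ckpt",
      )
      model.to(device)

      # TripoSR expects a single object on a clean background: cut out the
      # foreground, then composite the RGBA cutout onto neutral gray.
      image = Image.open("input.png")
      image = remove_background(image, rembg.new_session())
      image = resize_foreground(image, 0.85)
      arr = np.array(image).astype(np.float32) / 255.0
      arr = arr[:, :, :3] * arr[:, :, 3:4] + (1 - arr[:, :, 3:4]) * 0.5
      image = Image.fromarray((arr * 255.0).astype(np.uint8))

      with torch.no_grad():
          scene_codes = model([image], device=device)

      # Marching-cubes resolution trades detail against VRAM and file size.
      meshes = model.extract_mesh(scene_codes, resolution=256)
      meshes[0].export("output.obj")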
  • Science & Technology

COMMENTS • 42

  • @DMikoGee · 3 months ago +4

    Whoa, how in the world is this 3D modeler so fast? I have tried other repos, but this is just unbelievably fast and more accurate.

  • @electrolab2624 · 3 months ago +1

    Thank you, it works! (ComfyUI) - It saves an .obj file into 'outputs'. - The mesh resolution will probably influence the texture quality:
    for instance, a 'geometry_resolution' of 512 works too (when I try 1024 I run out of memory; 720 still worked), but the .obj file size will be larger.
    It looks like the texture color is saved to the vertices - I must check sometime.
    Thanks so much for your videos - they are clear and help a lot. - Cheers!

    • @TheFutureThinker · 3 months ago +1

      Yes, the file is saved as *.obj. I think that is more compatible with other 3D animation software, which is very handy for users doing 3D editing.
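
      To verify the vertex-color observation above, the exported mesh can be inspected with the trimesh library. A small sketch, assuming the file landed in 'outputs' (the path is a placeholder):

        # Check whether color is stored per vertex rather than in a texture.
        import trimesh

        mesh = trimesh.load("outputs/mesh.obj")  # placeholder path
        print(len(mesh.vertices), "vertices,", len(mesh.faces), "faces")

        # 'vertex' means colors ride on the vertices themselves, which is
        # why raising geometry_resolution also sharpens the apparent texture
        # and grows the .obj file.
        print(mesh.visual.kind)
        if mesh.visual.kind == "vertex":
            print(mesh.visual.vertex_colors[:3])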

  • @kalakala4803 · 3 months ago

    Another new AI model! 🔥🔥🔥🔥🔥

  • @maheshmahindrakar1884 · 2 months ago

    awesome demo

  • @xcom9648 · 3 months ago +1

    Try feeding the image in with a batch image node to see if that makes it more accurate.

    • @TheFutureThinker · 3 months ago

      If it could take multiple angles of an object as input, then it could be improved.

  • @alawra · 2 months ago +2

    Hello, I followed all the steps correctly and managed to clone the repository into custom_nodes, and I installed all the dependencies. But when I open ComfyUI, the nodes I installed aren't available :( When I click "add node" there's no "Flowty TripoSR" option, even though the repo is in the correct folder. I have no idea what I did wrong; do you know how to fix this? (I'm new to all this, btw)
    Edit: After I tried to import node templates and nothing happened, I dragged the file "workflow_rembg.json" into ComfyUI and this message in red appeared:
    "When loading the graph, the following node types were not found:
    ImageRemoveBackground+
    TripoSRModelLoader
    RemBGSession+
    TripoSRSampler
    TripoSRViewer
    Nodes that have failed to load will show as red on the graph."
    Again, I don't know what I did wrong; the repo is in the correct folder :(
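
    A common cause of nodes showing up red like this is that the node pack's dependencies were installed into a different Python environment than the one ComfyUI actually runs. A quick check, run with ComfyUI's own interpreter (the package list is an assumption based on typical TripoSR requirements):

      # Any MISSING package below is a likely reason the nodes fail to load;
      # install it with the same Python that launches ComfyUI.
      for pkg in ("trimesh", "rembg", "omegaconf", "einops", "torchmcubes"):
          try:
              __import__(pkg)
              print(pkg, "OK")
          except ImportError as err:
              print(pkg, "MISSING:", err)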

  • @blacksage81 · 3 months ago

    Now we just need a 3D model upscaler. I'm impressed with some of the outputs I got from the website, but all in all, this level of AI-generated 3D model can't really be used out of the box for complex characters. If one is willing to work on the model and remesh it, maybe it can be used as a baking mesh.

    • @TheFutureThinker · 3 months ago

      Yes, so far I feel this is a prototype.
      You saw the human 3D model result...

    • @looseman · 3 months ago

      It's related to texture mapping.

  • @CamiloMonsalve · 2 months ago

    Thank you for the review. If I already have 3 or 4 pictures of a real physical object, how can I use TripoSR to model the object in 3D?

    • @TheFutureThinker · 2 months ago

      I think you can treat each picture as an individual image, then generate each one into an .obj using this AI.

    • @CamiloMonsalve · 2 months ago

      @TheFutureThinker Interesting start... How can these .obj files be combined into one highly accurate model?

    • @TheFutureThinker · 2 months ago +1

      That needs a 3D editor to fine-tune it. As you can see in this ComfyUI render, the back of the object especially has lost a lot of detail. Then render it again in a 3D editor (see the merge sketch after this thread).

    • @CamiloMonsalve · 2 months ago

      @TheFutureThinker Thank you for your suggestion. In my case, I would like to start with a photographic record of a real object, of which I have 3 photos (front, side, 3/4, or back view). From these three initial photos, I would like to guide ComfyUI to create a 3D model. What do you recommend?
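
      As mentioned above, merging per-view .obj exports needs alignment first; simply concatenating them leaves the views overlapping at arbitrary poses. A rough sketch with the trimesh library (file names are placeholders), registering one mesh onto another with ICP before merging:

        # Align two single-view meshes with ICP, then merge them.
        # Real captures usually also need manual scaling and cleanup.
        import trimesh
        from trimesh.registration import mesh_other

        front = trimesh.load("front.obj")  # placeholder per-view exports
        back = trimesh.load("back.obj")

        # Estimate the transform that best maps 'back' onto 'front'.
        matrix, cost = mesh_other(back, front, icp_first=10, icp_final=50)
        back.apply_transform(matrix)

        combined = trimesh.util.concatenate([front, back])
        combined.export("combined.obj")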

  • @hanynagy8969 · 3 months ago

    Please, can you make a tutorial on installing OOTDiffusion on Windows? There is a tutorial out there that isn't working for me, and I think it won't work for anyone.

    • @TheFutureThinker · 3 months ago

      Hi, do you mean this one? github.com/levihsu/OOTDiffusion
      Are you running it on ComfyUI?

  • @MaxSMoke777 · 3 months ago +1

    A handy way to make a mess!

  • @jord9261 · 2 months ago

    Thanks for the tutorial. You speak like Yoda does; it had me laughing XD

    • @TheFutureThinker · 2 months ago

      Yes, my master passed me a lightsaber and the Force is with me, so I can do ComfyUI with no problem 😉

  • @UnsolvedMystery51 · 3 months ago

    But it's still not like gaming 3D... I don't think one image can generate high quality.

    • @TheFutureThinker · 3 months ago +1

      Yes, judging by the output, it is. But give it some time; soon there will be a new model, or this version will be improved.

    • @UnsolvedMystery51 · 3 months ago

      @TheFutureThinker OK, hope so.

  • @foodseen7824 · 3 months ago

    I’ve tried it with a picture of a person through Pinokio… not much initial success…

    • @foodseen7824 · 3 months ago

      Might be that my machine isn’t powerful enough

    • @TheFutureThinker · 3 months ago

      Well, me too. I can't make a good 3D person character with this. It looks like it's only good for other kinds of objects.

    • @teealso · 3 months ago

      I have been successful at making blobs with this thing. What I really want to see is how MovieLLM will make its way to ComfyUI. I tried to get it working today, but the install is a real pain.

    • @foodseen7824 · 3 months ago

      All of the outputs seem to have smooth features too… maybe it’ll be able to handle more detail in further versions? It’s still cool though

    • @foodseen7824 · 3 months ago

      TY for the vid about it!

  • @lrnzccn5378 · 3 months ago

    If you decide to explain, please do it without assuming that everyone already knows parts of the process. If I were that good, I wouldn't need a YouTube tutorial... So either just inform us briefly, or please guide us step by step. Saying that we can simply "run this in the command" and similar vague expressions doesn't help. I would suggest a clearly illustrated step-by-step guide. Maybe my poor relationship with GitHub's interface, and being spoiled by traditional one-click installation packages, doesn't help here. But I believe a channel like yours makes much more sense if you also take care of older, spoiled people like me! Thanks!!

    • @TheFutureThinker · 3 months ago

      Thanks, I will consider it. But honestly, for ComfyUI users installing from the command prompt, it really is just copying the command line, pasting it into the CMD window, and hitting Enter 😅 (a scripted version of these steps appears after this thread).
      If something is hard for older people or beginners, I think it is better to start learning from the basics; learn to walk before you try to run.

    • @Drumaier · 3 months ago

      @TheFutureThinker Yes, but the people watching this video are not just ComfyUI users. I'm having problems making the copy-pasted commands work in the command line, and I'm not clueless about this stuff, so I agree that showing the steps in a more "idiot-proof" way could be more beneficial for everyone.

    • @TheFutureThinker · 3 months ago

      @Drumaier OK, I see; you guys are good, man. I will go into more detail on the setup and explanations in upcoming videos. :)
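
      For anyone who keeps fighting the CMD window, the same clone-and-install steps can be scripted. A sketch only: the paths are placeholders, and the repository name is the assumed full form of the truncated GitHub link in the description.

        # Scripted equivalent of the paste-into-CMD install steps.
        import subprocess
        from pathlib import Path

        custom_nodes = Path(r"C:\ComfyUI\custom_nodes")  # placeholder path
        repo = "https://github.com/flowtyone/ComfyUI-Flowty-TripoSR"

        subprocess.run(["git", "clone", repo], cwd=custom_nodes, check=True)
        # Use the pip of the same Python that launches ComfyUI.
        subprocess.run(
            ["pip", "install", "-r",
             str(custom_nodes / "ComfyUI-Flowty-TripoSR" / "requirements.txt")],
            check=True,
        )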

  • @DJHUNTERELDEBASTADOR · 3 months ago

    Not even close...

  • @looseman · 3 months ago

    Wrong way!
    This is useless and just for fun ONLY! It needs multiple images of the same object in order to create a usable 3D model, NOT a single image. First, it has to generate some images in different poses and angles. The next step is to train/learn from those images in order to generate a good 3D model. This is the same approach as photogrammetry or NeRF, but the problem is that SD is NOT able to generate the exact same object at different angles and poses, e.g., generating 15 images of a dog, one every 15 degrees.
    Additionally, it should be able to load other trained models with the base model at once, e.g., a trained human model or a trained animal model.

    • @TheFutureThinker · 3 months ago

      I agree. It should not generate a 3D object from only one image showing only one side. Let's say this AI model could take at least 4 directional images of one object and generate them into a 3D obj; then it should get a better result.
      That's why I tested a human in this video; the result speaks for itself.