AI images to meshes / Stable Diffusion & Blender Tutorial

  • Published 1 Oct 2024

COMMENTS • 254

  • @pygmalion8952
    @pygmalion8952 1 year ago +79

    what is the purpose of this though? it can be used for distant objects *maybe*, but there are easier ways to make those. for general-purpose assets, you really can't pass the quality standard of modern games with this tech. not to mention this is just the base color. and throw away aesthetic consistency between models too: ai either makes nearly identical images if you ask, or it just can not understand what you are trying to do at all. plus, if you want symbolism in your game, there are additional steps to fix this, which i think is far more cumbersome and boring than actually making the asset. i didn't even mention cinema, since these kinds of assets are pretty low quality even for games. (just to add, it is still ethically questionable to use these in a profit-driven project.) oh, one more thing: games usually require some procedurality in the textures of some of their assets. this can not provide that flexibility either.
    only thing that is beneficial is that depth map thing, i guess. that is kinda cool.

    • @digital-guts
      @digital-guts  1 year ago +142

      Yeah, of course, nobody here says that these models can be used in AAA games or cinema as-is, and I'm not some brainless "AI bro" claiming they will be. I've worked in gamedev for a while myself. But there are fields for 3D graphics other than games and cinema, for example abstract psychedelic video art or music videos, heavily stylized indie games, maybe some surreal party poster, etc. know what I'm sayin'.
      As for cinema and gamedev, I think it can be used in some cases as kitbash parts for concept art, and with proper knowledge of how to build prompts and usage of custom-made LoRAs and stuff like that, you can get really consistent results with ai generations.

    • @luminousdragon
      @luminousdragon 1 year ago +52

      This is a proof of concept, it's brand new. the process 100% can be sped up and streamlined, with ways to get better results as ai art improves.
      The description of this very video just says it's a proof of concept, and people were asking for details. This type of video is for professionals who want to explore different techniques, to build off of each other's work, to stay informed about new techniques, and it's just interesting.
      For instance, I make digital art, and one thing I have been experimenting with is making a 3d environment and characters as close as possible in style to some AI art I've already generated, without taking very much time or effort, then rendering it as a video, then overlaying AI on top of it for a more cohesive look.
      This process could be very useful for that for multiple reasons: First, if I'm using AI art to make the 3d models, they are going to mesh very well when I overlay the second set of AI art over the 3d render. Second, because the AI art is going to be overlaid on the 3D model, I don't really care if the 3d models don't look perfect; it's kinda irrelevant.
      Lastly, look at the game BattleBit, which has gone viral recently. Or look at Among Us. or Minecraft. Not every game is aiming for amazing photorealism.

    • @AB-wf8ek
      @AB-wf8ek 1 year ago +24

      I think it's valid to criticize the quality of the output, but I think you miss the point if you think this is trying to be a replacement for the current traditional methods.
      It's just an experimental process playing around with what's currently available.
      It's called a creative process for a reason. A true artist enjoys figuring out new and unique ways of combining tools and processes, and this video is just an exercise in that.
      If you can't see the purpose of it, then just remove "creative" from anything you do.

    • @TaylorColpitts
      @TaylorColpitts 1 year ago +6

      Concept art - really great for populating giant scenes with lots of gack and set dressing

    • @Mrkrillis
      @Mrkrillis 1 year ago +2

      Thank you for asking this question, as I wondered myself what this could be used for

  • @games528
    @games528 1 year ago +37

    You can't just plug the color data of a normal map texture into the Normal slot of the Principled BSDF; you need to put a Normal Map node in between.

    • @albertobalsalm7080
      @albertobalsalm7080 1 year ago +2

      you can actually

    • @games528
      @games528 1 year ago +9

      @@albertobalsalm7080 Yes, but that will lead to horrible results. You can also plug it straight into Roughness if you want.

    • @sashamartinsen
      @sashamartinsen 1 year ago

      thanks, i missed that part while recording

    • @AB-wf8ek
      @AB-wf8ek 1 year ago

      I don't use Blender, but my guess for why this is, is that the color needs to be interpreted as linear data, versus sRGB or whatever color profile is usually slapped on top of the image when rendering for your screen.

    • @EGP-Hub
      @EGP-Hub 1 year ago +1

      Also, it needs to be set to Non-Color
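
    Putting this thread together: a minimal Blender Python sketch of the hookup being described, assuming a fresh material and a normal map saved as "normal.png" (both names are illustrative).

        import bpy

        mat = bpy.data.materials.new("Mat")
        mat.use_nodes = True
        nodes, links = mat.node_tree.nodes, mat.node_tree.links

        # Load the normal map and mark it as raw data, not sRGB color
        tex = nodes.new("ShaderNodeTexImage")
        tex.image = bpy.data.images.load("//normal.png")
        tex.image.colorspace_settings.name = "Non-Color"

        # The Normal Map node converts the tangent-space colors into a
        # shading normal; plugging Color straight into Normal skips this
        # conversion, which is why the result looks wrong and weak.
        nmap = nodes.new("ShaderNodeNormalMap")
        bsdf = nodes["Principled BSDF"]
        links.new(tex.outputs["Color"], nmap.inputs["Color"])
        links.new(nmap.outputs["Normal"], bsdf.inputs["Normal"])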

  • @kingsleyadu9289
    @kingsleyadu9289 1 year ago +2

    u are crazy 😆😆🥰🤩😍❤ i love u bro, keep it up

  • @VincentNeemie
    @VincentNeemie 1 year ago +24

    I had this theory at the start of this year, when I noticed you could generate good displacement maps using ControlNets; good to see someone putting it into practice.

    • @AtrusDesign
      @AtrusDesign 1 year ago

      It's an old idea. I think many of us discover it sooner or later.

    • @SciFiEpicShow
      @SciFiEpicShow 6 months ago

      So, I've been diving deep into storytelling and creative videos lately. VideoGPT showed up, and it's like having this magical assistant that instantly enhances the quality of my content.

  • @mximxi1069
    @mximxi1069 1 year ago +17

    Soon AI will make the concept, then the mesh, do the coding, voice acting, game or cinematography. Decide the price, list it, do the admin and publishing.
    It'll also run all the warehouses and care for nan.
    Then it'll play the games and videos for you and only show you the parts it thinks you'll find entertaining.
    It'll also be writing your socials for you, you know, to save you the hassle.
    After that it'll give you suggestions on what to do with your spare time. Complain at you when you don't do it.
    It'll then tell you not to mate with someone due to genetic defects it's found in your partner.
    It'll then run the country and make the laws.
    Soon after it'll ration salt, sugars, vitamins.
    It'll reduce your usage, you know, for the future of humanity.
    How much are signal jammers again 😂😂😂

    • @tabby842
      @tabby842 1 year ago

      tech really is slowly dividing people. first it was the anti-vaxxers, we laughed at them; then it was the 5G tower arsons, we laughed at them; next it's going to be ai protestors

    • @LegendaryTunes
      @LegendaryTunes 1 year ago

      Then it'll train itself to play the game and review it afterwards

    • @BadBanana
      @BadBanana 1 year ago

      Nope, not for years yet

    • @arihaviv8510
      @arihaviv8510 1 year ago

      No, it won't. AI doesn't think or ask critical questions; it makes lots of statistical associations over a very large dataset. The future belongs to those who ask critical questions

    • @vendacious
      @vendacious 10 months ago

      Maybe... But in the future when this happens, I think that AI generated movies will be like AI art is now, in that people don't consider it of any value. Once we take humans out of the entire creative process (even the idea), I doubt people will regard it the same way as human-created content. At least, not at first. Maybe once AIs have personalities and are self-aware. I guess we'll see...

  • @LeKhang98
    @LeKhang98 1 year ago +8

    I think using Mirror is a nice idea, but it may not be applicable to all objects. How about using SD & a LoRA to create 2x2 or 3x3 grids of images of the same object from multiple different POVs, then connecting them together instead of using a mirror?

  • @zephilde
    @zephilde 1 year ago +3

    You "accommodate" yourself by sculpting something random from a not-so-accurate mesh; the mirrored thing doesn't look anything like the original image... Do you have a workflow to get a real mesh from something representative? (like a character or landscape)

    • @mercartax
      @mercartax 1 year ago

      The whole process is sub-any-standard. Kitbashing some weird crap together, that's all this will work for. Maybe in 2 or 3 years we will see something more generally usable. Good luck getting any meaningful model data from AI models these days; it's hard enough to prompt them into what you actually want, let alone transfer that into a working 3d environment.

  • @shaunbrown3806
    @shaunbrown3806 6 months ago +1

    @DIGITAL GUTS, I really like this workflow. I also wanted to know: can I use this same strategy for humanoid AI characters? you are the only person I have seen use this workflow. thanks in advance :) also subbed

    • @digital-guts
      @digital-guts  6 months ago

      yeah, since this video i've tried a couple of things and it's kinda ok for characters in certain cases. especially for weird aliens )

  • @dmingod999
    @dmingod999 1 year ago +9

    This can be a great process to use for a rough starter mesh that you can then refine

    • @pygmalion8952
      @pygmalion8952 1 year ago

      i wrote a long-ass comment on this kind of stuff here, but for this too: again, you can produce these maps from just normal renders by artists, and either way it is ethically questionable if you do not change it and add your own twist to it.

    • @dmingod999
      @dmingod999 1 year ago +1

      @@pygmalion8952 sure, you can do this from other artists' work, but it's restrictive because you can only use what already exists -- but if you're generating the images with AI you have much more freedom -- you can sketch your idea or use a whole bunch of other tools that are available to control the AI generation, then you can make the depth map and do this bit..

  • @TheRodT
    @TheRodT 1 year ago +1

    Music is mad annoying

  • @Shellshock361
    @Shellshock361 1 year ago +3

    I've seen multiple people show this. This isn't a real mesh 😂. It's what you would refer to as an optical illusion: a very warped mesh that looks interesting only from certain angles.

  • @Al-Musalmiin
    @Al-Musalmiin 9 months ago +1

    i wouldn't mind learning Blender and learning how to do this. can you do a tutorial on how to run ZoeDepth locally?
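
    Until a full tutorial appears, here is a minimal sketch of running ZoeDepth locally via the torch.hub entry points published in the isl-org/ZoeDepth repository (PyTorch and timm installed are assumed; file names are illustrative):

        import torch
        import numpy as np
        from PIL import Image

        # Download and load the ZoeD_N model from the official repo
        model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True)
        model = model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

        img = Image.open("input.png").convert("RGB")
        depth = model.infer_pil(img)  # float depth map, HxW numpy array

        # Normalize to 0..1 and save as a 16-bit PNG for displacement;
        # inverted so near surfaces are bright and displace outward
        d = (depth - depth.min()) / (depth.max() - depth.min())
        Image.fromarray(((1.0 - d) * 65535).astype(np.uint16)).save("depth.png")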

  • @timd9430
    @timd9430 1 year ago +2

    Such a jimmy-rig way to do things.
    Do any of these AI generators just offer an option to export or download the 3D mesh file with maps, lighting etc.? I.e. .3ds, .max, .dxf, .fbx, .obj, .stl etc.! It seems the AI generators are initially just composing highly elaborate 3d scenes and rendering flat image results anyway?
    Same for vector-based files? Can they just export native vector files such as .svg, .ai, .eps, .cdr, vector .pdf?
    AI is a career killer.

    • @sashamartinsen
      @sashamartinsen 1 year ago +4

      So which is it, a jimmy-rig way or a career killer? You decide. Neither, I think; of course it depends on your goals. Meshes like this can work only as quick kitbash parts for concepts, not as a final polished product anyway. Did kitbashing kill 3d careers, or photobashing kill matte painting in concept art? i don't think so.

    • @zephilde
      @zephilde 1 year ago +2

      No, AIs like Stable Diffusion do not work in 3D space or with vectors; they work on random pixels (noise) and apply denoising steps learnt from a huge image set with descriptions. Your prompt text guides the denoising steps to "hallucinate" something from the noise... The fact that a final image looks like a 3D render or vectors or photography or painting (etc.) is just pure coincidence! :)

    • @timd9430
      @timd9430 1 year ago

      @@zephilde Any video links on that exact process?
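
      Not a video, but the description above maps almost directly onto code. A minimal sketch with the diffusers library (model id, prompt, and step count are illustrative): the pipeline starts from random latent noise and runs a fixed number of prompt-guided denoising steps.

          import torch
          from diffusers import StableDiffusionPipeline

          # Load a Stable Diffusion pipeline in half precision on the GPU
          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          image = pipe(
              "ancient overgrown statue, front view",
              num_inference_steps=30,  # denoising steps from pure noise
          ).images[0]
          image.save("statue.png")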

  • @salvadormarley
    @salvadormarley 1 year ago +3

    How did you get the animated face? That seems completely different to what you showed us in this demo.

    • @EGP-Hub
      @EGP-Hub 1 year ago

      Looks like the MetaHuman facial animator, possibly

    • @digital-guts
      @digital-guts  1 year ago +3

      yes it is, and it's not the point of this video. there are tons of content about MetaHuman on youtube

    • @salvadormarley
      @salvadormarley 1 year ago

      @@digital-guts I've heard of metahuman but never tried it. I'll look into it. Thank you.

    • @ghklfghjfghjcvbnc
      @ghklfghjfghjcvbnc 11 months ago

      u are a lying clickbait @@digital-guts

  • @aleemmohammed7794
    @aleemmohammed7794 1 year ago +4

    Can you make a character model with this?

  • @referencetom1276
    @referencetom1276 1 year ago +4

    For BG objects like murals on walls and ornaments this can give a nice 2.5D feel. Maybe it can also speed up design, finding form from a first idea.

  • @nswayze2218
    @nswayze2218 11 months ago +2

    My jaw literally dropped. This is incredible! Thank you!

  • @jamesvictor2182
    @jamesvictor2182 2 months ago

    How can we generate 3d models from multiple depth maps of the same character from different angles? I have a ComfyUI workflow that produces identical characters from multiple angles, so I should be able to combine these to avoid things like mirroring and sculpting, right?

  • @PuppetMasterdaath144
    @PuppetMasterdaath144 1 year ago +8

    I just want to point out to you people that are dissing this: for a person like me, who had zero clue about any of this, being enticed into trying out something I can get actual creative results from is so exciting. I mean, I read a few of the technical comments and that's so far past my head; it really shows how this is a specialized viewpoint that's not generalized to more common people in terms of general knowledge. ok, weird-ass rant over

  • @Draughammer
    @Draughammer 11 months ago

    I wonder if this will ever get useful. Having it as one big ugly object like that is not very useful. Maybe for background objects, though. Or to overflow the miniature market with ever more extremely bad and ugly miniatures.

  • @AtrusDesign
    @AtrusDesign 1 year ago

    It's funny, but useless for decent geometry. It creates very bad quality models. Maybe there are some situations where it could be useful.

  • @pastuh
    @pastuh 1 year ago +1

    Nice, but I will wait for 360 3D AI models :X

  • @stevesloan6775
    @stevesloan6775 6 months ago

    Goodness me… how and why are slow eye movements in the female eyeball-brain so deeply, directly connected to the male brain? 🧠 😂❤

  • @MaxSMoke777
    @MaxSMoke777 8 months ago

    You too can turn a beautiful image into a wad of chewing gum!
    Really though, while AI does make some nice concept art, it's no substitute for real 3D modeling. Not to even mention optimization and model/rigging planning. Modern games lag far behind the performance they should have because of these lazy "sculpting" techniques that take artists away from the per-polygon work they should be focusing on. Polygon counts matter, folks! Just because an engine can do 1 million polygons per second is no excuse for burning 500k of that on a vase.

    • @digital-guts
      @digital-guts  8 months ago

      yeah, i totally agree (cause i work in gamedev myself), but there are more use cases for 3d besides games. read the small thread under the pinned first comment. don't be scared, nobody will take your sub-d modelling work

  • @stevesloan6775
    @stevesloan6775 6 months ago

    Changing to double-sided vertices is the way to remove and double the texture map data. 😂

  • @retroeshop1681
    @retroeshop1681 1 year ago +3

    Honestly I'm quite impressed; a really cool way to do a lot of kitbashing, really necessary nowadays. I guess now I have to learn how to make AI images hehe, cheers from Mexico!

  • @XirlioTLLXHR
    @XirlioTLLXHR 1 year ago +8

    This is good enough for some indie game companies honestly. Might really help some folks out there get some assets done faster.

  • @younesaitdabachi7968
    @younesaitdabachi7968 1 year ago

    God damn it, you look like that guy who helped Walter cook drugs in Breaking Bad. by the way, i like your tuto, keep it UP

  • @thesagerinnegan5898
    @thesagerinnegan5898 7 months ago +1

    what about meshes to ai images?

    • @digital-guts
      @digital-guts  7 months ago

      ua-cam.com/video/GSW3m79tsqU/v-deo.html

  • @BerkeAltay
    @BerkeAltay 11 months ago

    at this point this is not considered work. as a 3d modeller i encourage using ai, but not in this way.

  • @yklandares
    @yklandares 1 year ago +1

    it's the end of the world, VFX

  • @tahajafar206
    @tahajafar206 1 year ago +1

    What about using the CharTurner LoRA to create the front, back, left and right sides, so that merging all 4 sides gives a better and smoother object instead of correcting sides manually?
    It's just an idea, but I haven't seen anyone try it, so if you can, could you please give it a try and share a tutorial 3:36

    • @digital-guts
      @digital-guts  1 year ago

      i'll give it a try and take a look. i've done some tests with ai characters and they look ok-ish and weird; maybe i'll share the results later.

  • @jonathanbernardi4306
    @jonathanbernardi4306 11 months ago +1

    Very interesting nonetheless, thanks for your time man, this technique sure has its uses.

  • @MordioMusic
    @MordioMusic 9 months ago +1

    usually I don't find such good music with these tutorials, cheers mate

  • @WhatNRdidnext
    @WhatNRdidnext 1 year ago +1

    I love this! Plus (because of the horror-related prompts that I've been using), I'll probably give myself nightmares 😅
    Thank you for sharing ❤

  • @armandadvar6462
    @armandadvar6462 5 months ago

    I was waiting to see an animation like your intro video 😢

  • @neknaksdn9539
    @neknaksdn9539 1 month ago

    My pc would explode with those meshes 😭

  • @CharpuART
    @CharpuART 11 months ago

    Now you are literally working for the machine, for free! :)

  • @joseterran
    @joseterran 1 year ago +1

    nice one! got to try this! thanks for sharing

  • @reimagineuniverse
    @reimagineuniverse 5 months ago

    looks like cheating for talentless people who didn't put in the time and effort

    • @digital-guts
      @digital-guts  5 months ago

      yes it is. but your channel consists of even more low-effort “one click” ai stuff tho. lol

  • @tommythunder6578
    @tommythunder6578 10 months ago +1

    Thank you for this amazing tutorial!

  • @qwerty85620
    @qwerty85620 1 year ago +6

    it sux

  • @motionislive5621
    @motionislive5621 1 year ago

    The Mirror tool became a life changer LOL

  • @miinyoo
    @miinyoo 7 months ago

    That actually is a pretty decent little quick workflow. Pop that out to something like ZBrush and go to town refining.
    Is it really good enough on its own? For previz and posing with a quick rig, absolutely. It's pretty fast, tbh, and simple.

  • @TheNitroG1
    @TheNitroG1 10 months ago

    this really didn't end up getting you something that at all resembled the original image.

    • @digital-guts
      @digital-guts  10 months ago

      this is a breakdown of the technique, not of the whole image. the demo video is a composition of 3-4 meshes made with this method on the head of a MetaHuman model in Unreal Engine.

  • @PhilémonBaverel
    @PhilémonBaverel 8 months ago

    When I do this with a depth map in 16:9 format, the Displace modifier applies the map as a small 1:1 repeating pattern.. why?
    note: I made my plane 16:9 ratio and applied scale before adding the Displace modifier.
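
    A guess at the cause: the Displace modifier samples Local coordinates by default, which can tile a non-square image. Switching the modifier to UV coordinates (with the plane unwrapped) maps the 16:9 image exactly once. A minimal sketch, assuming the depth map is "depth.png" and the plane is already subdivided and unwrapped:

        import bpy

        obj = bpy.context.active_object  # the subdivided 16:9 plane

        # Wrap the image in a texture the modifier can sample
        tex = bpy.data.textures.new("DepthTex", type='IMAGE')
        tex.image = bpy.data.images.load("//depth.png")
        tex.image.colorspace_settings.name = "Non-Color"
        tex.extension = 'EXTEND'  # don't repeat past the image border

        mod = obj.modifiers.new("Displace", type='DISPLACE')
        mod.texture = tex
        mod.texture_coords = 'UV'  # map the image once over the unwrapped face
        mod.strength = 0.5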

  • @kingcrimson_2112
    @kingcrimson_2112 1 year ago +4

    Please ignore the salty comments. This is a game changer, especially for mobile platforms. jaw-dropping results and a pragmatic pipeline.

  • @isi_guro
    @isi_guro 8 months ago

    Ramen addiction

  • @flopa469
    @flopa469 11 months ago

    So it's just a blob with a texture XD

  • @matthewpublikum3114
    @matthewpublikum3114 11 months ago

    Great for kit bashing!

  • @ATLJB86
    @ATLJB86 1 year ago

    I haven't seen a single person use AI to texture a model using its individual UV maps, and I can't understand why. AI can dramatically speed up the texturing process, but I have not seen anybody take an ai-generated image and turn it into a 3D model either, and i can't understand why…

  • @zergidrom4572
    @zergidrom4572 1 year ago +1

    sheeesh

  • @timedriverable
    @timedriverable 10 months ago

    Sorry if this is a newbie question... but is this DreamStudio some component of SDXL?

  • @sebastianosewolf2367
    @sebastianosewolf2367 8 months ago

    yeah, and what software or website did you use in the first minutes?

    • @digital-guts
      @digital-guts  8 months ago +1

      this is the Automatic1111 web UI for Stable Diffusion

  • @oberdoofus
    @oberdoofus 1 year ago +1

    very interesting for concept generation - thanks for sharing! I'm assuming you can also upscale the various images in SD to maintain more 'closeup' detail...? Maybe with appropriate LoRAs...

  • @_casg
    @_casg 11 months ago

    Here’s a peppery comment

  • @SteveJones-uf9hs
    @SteveJones-uf9hs 1 year ago +37

    it won't be long before AI creates the mesh. The days of the 3d concept artist are nearly over

    • @juanpena3955
      @juanpena3955 1 year ago +22

      yeah, you probably know how the whole industry works and have all the experience, to be saying that.

    • @SteveJones-uf9hs
      @SteveJones-uf9hs 1 year ago +29

      @@juanpena3955 I'm a concept artist

    • @giovannimontagnana6262
      @giovannimontagnana6262 1 year ago +8

      You are right. Or actually AI will make their life easier

    • @juanpena3955
      @juanpena3955 1 year ago +3

      @@SteveJones-uf9hs so you actually think that AI will replace your work too, right? and first

    • @016Rafa
      @016Rafa 1 year ago +11

      Days of commercial artists are nearly over

  • @realkut6954
    @realkut6954 7 months ago

    Hello, thanks for the video. please, please give me a tutorial on tracking 3d armor with stable diffusion onto a video of a man, please, urgent. sorry for the bad english, i am french

    • @digital-guts
      @digital-guts  7 months ago

      ua-cam.com/video/bKO_nVGKgLA/v-deo.htmlsi=j7BOrRMU_8AeXrUe

    • @realkut6954
      @realkut6954 7 months ago

      @@digital-guts thanks my friend. sorry, i want 2d video, not 3d man, sorry.

    • @realkut6954
      @realkut6954 7 months ago

      Like the Wonder Studio software

    • @realkut6954
      @realkut6954 7 months ago

      ua-cam.com/video/frVLAJjkHf0/v-deo.htmlsi=2uVjuK6HK8WwhQWX

  • @joedanger4541
    @joedanger4541 1 year ago

    the R.U.R. is coming

  • @Arvolve
    @Arvolve 1 year ago +1

    Very cool, thanks for sharing the workflow!

  • @johanverm90
    @johanverm90 11 months ago

    Amazing, awesome. Thanks for sharing

  • @googlechel
    @googlechel 6 months ago

    Yo, how do you get Stable Diffusion and ControlNet running locally? is it local?

    • @digital-guts
      @digital-guts  6 months ago +1

      ua-cam.com/video/d1lPvI0T_go/v-deo.html check this link

    • @googlechel
      @googlechel 6 months ago

      @@digital-guts thanks

  • @somebodynothing8028
    @somebodynothing8028 1 year ago

    i'm using InvokeAI. how do i get ControlNet v1.1.224 to run with it, or where do i find ControlNet v1.1.224?

  • @jvdome
    @jvdome 1 year ago

    i did well until the part where i had to sculpt the stuff out; i couldn't come to a solution as easily as you did

  • @petarh.6998
    @petarh.6998 1 year ago

    How would one do this with a front-facing character? Or does this technique demand a profile view of them?

  • @raphaelprotti5536
    @raphaelprotti5536 11 months ago

    The next logical step is to remesh/retopo this and reproject the texture.
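
    For the remesh half of that step, a minimal Blender Python sketch using the voxel Remesh modifier (the voxel size is illustrative); reprojecting the texture would then be a separate bake from the original mesh onto the new mesh's UVs:

        import bpy

        obj = bpy.context.active_object  # the displaced mesh

        # Rebuild even, watertight topology over the warped surface
        mod = obj.modifiers.new("Remesh", type='REMESH')
        mod.mode = 'VOXEL'
        mod.voxel_size = 0.02  # smaller = denser, more faithful mesh
        bpy.ops.object.modifier_apply(modifier="Remesh")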

  • @zacandroll
    @zacandroll 1 year ago

    I'm baffled

  • @shiora4213
    @shiora4213 11 months ago

    thanks man

  • @psykology9299
    @psykology9299 1 year ago

    This works so much better than ZoeDepth's image-to-3D

  • @ArturSofin
    @ArturSofin 1 year ago +1

    Hi, very cool! Please tell me, was the facial animation done in Unreal using Live Link, or is it all Blender?

    • @digital-guts
      @digital-guts  1 year ago

      it's MetaHuman Animator inside Unreal, yes, but the recording itself is still done through Live Link; it's just interpreted with higher quality

  • @Shining4Dawn
    @Shining4Dawn 1 year ago

    There's something a bit misleading about the video. At the start, you show a piece of work you've made that has an animated realistic character in it, but the only thing you actually show how to make is the extra geometry on her head.
    I do think it's a cool shortcut for extra props and background objects, but the start of the video makes it seem like you're using AI generated images to make animation-ready models, which you clearly aren't.
    I do wonder if it's possible to have an AI generate orthographic views of a character to make a base mesh which will be manually re-meshed for animation later on. Then, the AI generated images would be used as a base for the model's textures.

    • @digital-guts
      @digital-guts  1 year ago +1

      This video was not meant to be a step-by-step tutorial on how to replicate the example from the beginning, only the most interesting part, which I was asked about. The realistic character is an Unreal Engine MetaHuman, and I think in 2023 everybody in this field knows what that is; there are thousands of tutorials on youtube about the topic. Making this, I'm just having fun playing with tech tools to try new ideas and sharing it. I'm not planning to do a full step-by-step beginner-friendly video explaining every button.
      Answering your second question: yes, it's possible (and with not-so-bad topology). I've seen such tools from my colleagues, but they're not public yet, as far as I know.

    • @kenalpha3
      @kenalpha3 1 year ago

      @@digital-guts "yes, it's possible (and with not-so-bad topology). I've seen such tools from my colleagues, but they're not public yet, as far as I know."
      Can you make a vid explaining how to fix the verts to be game-ready? I'm deving for UE4.27/5, making character armor parts. I'm not exactly sure how far I can push my vert count in UE since I don't have 10 characters in a level at once yet.
      But if you show your method [about how far I need to go] to reduce verts for games, that would be helpful. Thank you.

  • @nathanl2966
    @nathanl2966 1 year ago

    wow.

  • @bossnoob8363
    @bossnoob8363 1 year ago

    Credo

  • @Savigo.
    @Savigo. 1 year ago

    Wait, can you now just plug a normal map into the "Normal" socket without the extra "Normal Map" node? I have to check it.

    • @Savigo.
      @Savigo. 1 year ago

      Ok, you can, but it looks quite bad compared to a proper connection with the "Normal Map" node. It seems like the intensity is way lower without it, and you cannot control it without the Normal Map node.

  • @unrealhabitat
    @unrealhabitat 11 months ago

    Absolutely amazing! Thank you for the tutorial! :D

  • @dragonmares59110
    @dragonmares59110 1 year ago

    Woah, i think i will try to see if i can remake this tomorrow; would be a nice way to spend some time, thanks!

  • @Karasus3D
    @Karasus3D 1 year ago

    My question is: can i get a diffuse map and turn this into a printable model? i'd love to at least use it to make a base model and modify it from there, for masks and such

  • @Ollacigi
    @Ollacigi 1 year ago

    It still needs time, but it's a cool start

  • @touyaakira1866
    @touyaakira1866 1 year ago

    Please cover this topic more, with more examples. Thank you

  • @wizards-themagicalconcert5048
    @wizards-themagicalconcert5048 11 months ago

    Fantastic content and video, mate, very useful, subbed! Keep it up!

  • @bigfatcat8me
    @bigfatcat8me 9 months ago

    where is your hoodie from?

    • @digital-guts
      @digital-guts  9 months ago

      i don't remember; i think something like H&M or Bershka, nothing special

  • @wrillywonka1320
    @wrillywonka1320 1 year ago

    this is awesome! BUT you lost me at mirroring the image and then bisecting to get rid of the extra geometry. i am still a noob at blender and don't know how you did that. was it a shortcut key you used? at 3:35 in the video

    • @digital-guts
      @digital-guts  1 year ago +1

      oh, it's a sped-up part and there are quite a few hotkeys here, but it's very basic usage of sculpt mode in blender. there are many videos on youtube where this stuff is explained; try this one ua-cam.com/video/Cmi0KoFtc-4/v-deo.htmlsi=mKSHWz8SCE8evM6M

    • @wrillywonka1320
      @wrillywonka1320 1 year ago

      @@digital-guts thank you! And i have been using this, and most images work, but some images invert when i mirror them. Have you ever had this problem?

  • @sburgos9621
    @sburgos9621 1 year ago

    Seen this technique before, but at this stage it looks very limited. In terms of the mesh, without any textures on it, it looked unrepresentative of the object. I feel like adding the textures fools the eye into thinking it is more detailed than the mesh actually is.

    • @sashamartinsen
      @sashamartinsen 1 year ago +1

      and this is the main point of this approach: to trick the eye

    • @sburgos9621
      @sburgos9621 1 year ago

      @@sashamartinsen I do 3d printing so this technique wouldn't work for my application.

  • @Pragma020
    @Pragma020 1 year ago

    This is neat. cool technique.

  • @decemberboy6893
    @decemberboy6893 1 year ago

    nice video!! but there's too much compression on your voice

  • @SoulStoneSeeker
    @SoulStoneSeeker 1 year ago

    this has many possibilities...

  • @lightning4201
    @lightning4201 1 year ago

    Great video. Do you have a Cinema 4D tutorial on this?

  • @williammccormick3787
    @williammccormick3787 5 months ago

    Great tutorial thank you

  • @incomebuilder4032
    @incomebuilder4032 1 year ago

    Fooking genius you are..

  • @abdullahimuhammed6550
    @abdullahimuhammed6550 1 year ago

    what about the eye animation and smile tho? that's the most important part tbh

    • @giovannimontagnana6262
      @giovannimontagnana6262 1 year ago

      Most definitely the face mesh was a separate ready-made model. The assets were made with AI

  • @s.foudehi1419
    @s.foudehi1419 1 year ago

    thanks for the video, very insightful

  • @JamesClarkToxic
    @JamesClarkToxic 10 months ago

    The more people who experiment with new technology, the more cool ideas we come up with, and the better uses we figure out for the technology. This particular workflow may not be usable for anything meaningful, but maybe it inspires someone to try something different, and that person inspires someone else, and so on until really cool uses come out of this.

    • @digital-guts
      @digital-guts  10 months ago +1

      you get the point of this video. i'm just messing around with this tech and trying things. actually i'm now making a full game using only this and similar approaches to meshes. it won't be anything of industry-standard quality, of course, just a proof-of-concept experiment. having a lot of fun

    • @JamesClarkToxic
      @JamesClarkToxic 10 months ago

      @@digital-guts I've been experimenting with ways to create a character in Stable Diffusion and turn them into a 3D model for months. The first few attempts were awful, but without those, I wouldn't have the current workflow (which is getting really close). I also know that the technology is getting better every week, so all my experimenting should help me figure out how to do things once things get to that point.

  • @ofulgor
    @ofulgor 9 months ago

    Wow... Just wow.
    Nice trick.

  • @sandkang827
    @sandkang827 1 year ago

    goodbye my future career in 3d modeling :')

    • @fredb74
      @fredb74 11 months ago

      Don't give up! AI is just another powerful tool you'll have to learn, like Photoshop back in the day.

  • @tony92506
    @tony92506 1 year ago

    very cool concept

  • @NoEnd911
    @NoEnd911 1 year ago

    Jesse Pinkman 🎉😂

  • @AArtfat
    @AArtfat 1 year ago

    Simple and cool

  • @jaypetz
    @jaypetz 1 year ago

    This is really good, I like this workflow, thanks for sharing.

  • @Rodgerbig
    @Rodgerbig 1 year ago

    Amazing, bro! But... how did you get the 2nd (BW) image? My SD generates only one image

    • @digital-guts
      @digital-guts  1 year ago

      this is the ControlNet depth model; you can get it here github.com/Mikubill/sd-webui-controlnet or use ZoeDepth online from the link in the description

    • @Rodgerbig
      @Rodgerbig 1 year ago

      @@digital-guts thanks for the answer! yes, I have it installed, but it gives only one result, and it is different from what is needed.

    • @Rodgerbig
      @Rodgerbig 1 year ago

      @@digital-guts ZoeDepth actually works, but I'm trying to do this in SD

  • @ElHongoVerde
    @ElHongoVerde 1 year ago +2

    It's not bad at all (it's impressive, actually) and you gave me very good ideas. Although I suppose this wouldn't be very applicable to non-symmetrical images, right?