ComfyUI consistent characters using FLUX DEV

  • Published 29 Nov 2024

COMMENTS •

  • @maximinus1972
    @maximinus1972 27 days ago +4

    This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!

    • @goshniiAI
      @goshniiAI  27 days ago

      I am glad to hear that the step-by-step approach was clear and helpful for you. Your support is encouraging, and I appreciate your sub and the like. Thank you so much for your time and the amazing feedback.

  • @jamessenade3181
    @jamessenade3181 2 days ago

    thanks bro ... I love the way you detail all the process ... you are a rock star, thank you

    • @goshniiAI
      @goshniiAI  2 days ago

      You are very welcome, and thank you for your compliment.

  • @240dbprisms5
    @240dbprisms5 1 month ago +1

    omg bro, just what i need 🔥🔥 THANK YOU clear rhythm, working method

    • @goshniiAI
      @goshniiAI  1 month ago

      You are most welcome. I am glad to read your feedback. 💜

  • @kajukaipira
    @kajukaipira 2 months ago +2

    Amazing, concise, understandable. Congrats man, keep up the good work.

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you so much! I appreciate it.

  • @zoewilliams2010
    @zoewilliams2010 13 days ago

    Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING to do image-to-video, and this is EXACTLY what I need! Because I need to make consistent characters but I only have 1 input image of the character as reference. Man, I didn't know they had a character pose system for Flux yet. THANK YOU!!! :D This needs to be ranked higher in Google!

    • @goshniiAI
      @goshniiAI  13 days ago +1

      You are very welcome! I am glad it was helpful for your short horror film project, and I appreciate your feedback. It is always great to connect with local creators, especially since I am currently in South Africa. Happy creating!

  • @pizza_later
    @pizza_later 2 months ago +2

    So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you so much! I’m honoured to have earned your subscription and glad you found this helpful.

  •  1 month ago +1

    Just wanted to say, you are amazing!!

    • @goshniiAI
      @goshniiAI  1 month ago

      Hearing that means so much. Thank you for your support.

  • @sergeysaulit
    @sergeysaulit 2 months ago +1

    Thank you! It’s good that you just tell and show what and how to do. Otherwise you can spend your whole life learning ComfyUI)). And so, in the process, in practice, it is easier to learn.

    • @goshniiAI
      @goshniiAI  2 months ago

      I'm really glad to hear that the straightforward approach is helping you! Just diving in and practicing as you go makes it a lot easier. Thanks again for the feedback!

  • @yangli1437
    @yangli1437 4 days ago

    Thanks so much for your hard work, very useful videos.

    • @goshniiAI
      @goshniiAI  4 days ago

      You are very welcome! I appreciate your encouraging feedback. Thank you!

  • @devnull_
    @devnull_ 1 month ago

    Thanks, and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.

    • @goshniiAI
      @goshniiAI  1 month ago

      I am glad it was helpful! Thank you for the observation and feedback. It means a lot.

  • @Gimmesomemore2012
    @Gimmesomemore2012 29 days ago +1

    Thank you very much for this tutorial... at the right speed and with detailed explanation.

    • @goshniiAI
      @goshniiAI  29 days ago

      Thank you so much for the kind words!

  • @ielohim2423
    @ielohim2423 19 days ago

    This is amazing! Thank you so much. Subscribed!

  • @willmobar
    @willmobar 29 days ago +1

    Thank you, you are excellent!

    • @goshniiAI
      @goshniiAI  29 days ago

      That's very kind of you!

  • @cosymedia2257
    @cosymedia2257 6 days ago

    Thank you!

    • @goshniiAI
      @goshniiAI  6 days ago

      You are more than welcome.

  • @petttertube
    @petttertube 2 months ago +1

    Thank you very much for this priceless video. You say the parameter cfg is chosen to be 1 because we are not using the negative prompt. As far as I know, Flux doesn't use negative prompts, so I am a bit confused; could we just remove the negative prompt node from the workflow?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      You are welcome and entirely correct. However, the KSampler will still require a negative conditioning input, so the negative prompt node is linked for that.

  • @cleverfox4413
    @cleverfox4413 2 months ago

    Really good explanation, keep up the good work :)

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you for the motivation! I'm glad I could help.

  • @sudabadri7051
    @sudabadri7051 2 months ago +1

    Superb work mate

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thank you so much, Suda! Love

  • @wrillywonka1320
    @wrillywonka1320 26 days ago

    I can't lie, this was the best consistent character video for sure! Is this able to work with SD3.5?

    • @goshniiAI
      @goshniiAI  26 days ago +1

      Thank you for coming here, and I appreciate your feedback.
      Yes, it is possible! Just keep in mind that SD3.5 might need the right ControlNet models and slight adjustments to the ControlNet parameters to achieve the same consistency, since it has a few differences in model handling.
      If you can tweak those and add the right nodes, you should be able to get great, consistent characters!

    • @wrillywonka1320
      @wrillywonka1320 26 days ago

      @goshniiAI Well, since I'm super new to ComfyUI, I guess I'll just wait for someone to make a video about it. By the way, great video! I would use Flux, but my issue is that I heard Flux has very strict commercial use rules.

  • @ainaopeyemi339
    @ainaopeyemi339 2 months ago +1

    I love this, already subscribed

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you for being here. I appreciate your support.

  • @Shaolinfool_animation
    @Shaolinfool_animation 2 days ago

    You always make great content! I have a question. I got an image of a character in a front-view T-pose and I want to get different views of the character from one image. Is it possible to load that image and get different views of that character using an OpenPose character sheet? Thanks for all of your hard work!

    • @goshniiAI
      @goshniiAI  12 hours ago +1

      That is possible, but the process will likely involve a lot of trial and error. I recommend using the OpenPose character sheet as a guide to create the character views. Then use this to make a LoRA for the character. This approach will give you more control.
      Thank you for your encouraging feedback.

  • @JoeBurnett
    @JoeBurnett 2 months ago +3

    Great video as always! Thanks!

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you for your encouragement.

  • @calvinnguyen1451
    @calvinnguyen1451 1 month ago

    Dope stuff. You rock!

    • @goshniiAI
      @goshniiAI  1 month ago

      I appreciate that! Thank you!

  • @wrillywonka1320
    @wrillywonka1320 26 days ago

    Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (comfyui > custom_nodes > comfyui-manager) and you will find a config file. Open it in a text editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model, no problem.
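The manual edit the commenter describes can also be scripted. This is a minimal sketch, assuming the ComfyUI-Manager config file lives at `custom_nodes/ComfyUI-Manager/config.ini` and uses exactly the `bypass_ssl = False` spelling; verify both against your own install before running.

```python
from pathlib import Path

def enable_ssl_bypass(config_path: str) -> bool:
    """Flip 'bypass_ssl = False' to 'bypass_ssl = True' in ComfyUI-Manager's
    config file, as described in the comment above. Returns True if changed."""
    cfg = Path(config_path)
    text = cfg.read_text()
    if "bypass_ssl = False" in text:
        cfg.write_text(text.replace("bypass_ssl = False", "bypass_ssl = True"))
        return True
    return False  # already enabled, or the key is spelled differently

# Hypothetical default location; adjust to your ComfyUI install:
# enable_ssl_bypass("ComfyUI/custom_nodes/ComfyUI-Manager/config.ini")
```

After saving, restart ComfyUI as the commenter says, so the Manager re-reads the config.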

  • @Usermx0101
    @Usermx0101 2 months ago +1

    Great video. I wonder what system specs you use to run this on. I got an out-of-VRAM error with a 20GB card using GGUF flux-dev-Q5, so I guess I might be doing something wrong.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      I've got an RTX 3060 Nvidia card with 12GB. It's happened to me a few times. Just make sure to close all the apps that might be using your GPU. You could also try using an upscale of 2 instead of 4. And sometimes, saving the workflow and then restarting ComfyUI helps things run smoother.

  • @ImHewg
    @ImHewg 2 months ago

    How do you get the super cartoony prompts, like that cool robot? I keep generating 3D characters.
    Sweet workflow! Subbed!

    • @goshniiAI
      @goshniiAI  2 months ago

      Welcome on board! Here is the prompt for that.
      A Cyberpunk Mecha Kid, concept art, character sheet, in different poses and angles, including front view, side view, and back view, turnaround sheet, minimalist background, detailed face, portrait.

  • @devon9374
    @devon9374 1 month ago

    Great video!

    • @goshniiAI
      @goshniiAI  29 days ago

      I'm glad you enjoyed it!

  • @diaitigai9856
    @diaitigai9856 2 months ago

    Great content in your video! I really enjoyed it. One suggestion I have is to improve the echo in your voice using a tool called Audacity. It can help enhance the audio quality significantly. Feel free to contact me if you need any help with that. Keep up the good work!

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thanks a lot for the awesome suggestion and kind words! I am considering the idea of using Audacity; I've heard it's great, so I'll definitely give it a try. If I run into any issues, I might take you up on your offer to help! Thanks again for watching and giving me some really helpful input.

  • @kagawakisho4382
    @kagawakisho4382 2 months ago

    Thanks for the video. This is awesome. Do you use this to create LoRAs? Or what do you use the character sheets for?

    • @goshniiAI
      @goshniiAI  2 months ago

      I haven't specifically used this workflow to create LoRAs, BUT character sheets can definitely be a foundation for that. They help you capture a character in different poses and perspectives, making it easier to feed consistent images into training processes for LoRAs.
      Also, they are super useful for game development, animation, or just keeping a consistent look across different art projects.

  • @pixelist999
    @pixelist999 8 days ago

    Great tuts! Helped me install flux1 seamlessly - however I don't seem to have DWPreprocessor or ControlNet Apply in my drop-down lists? I get this message in Manager - 【ComfyUI's ControlNet Auxiliary Preprocessors】Conflicted Nodes (3)
    AnimalPosePreprocessor [ComfyUI-tbox]
    DWPreprocessor [ComfyUI-tbox]
    DensePosePreprocessor [ComfyUI-tbox]
    So I uninstalled ComfyUI-tbox and still no joy? Do you have any suggestions?

  • @muggyate
    @muggyate 2 months ago

    I find that if you add another generation step before, to tell the AI to generate a design sheet for a mannequin, you can skip the part where you have to have an image loaded into the ControlNet preprocessor.

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you for sharing that approach with everyone! Awesome tip!

  • @Ozstudiosio
    @Ozstudiosio 4 days ago

    Perfect, but what if I want to use an image instead of a prompt input?

  • @lefourbe5596
    @lefourbe5596 2 months ago +1

    I was versed in character sheet making for over a year. However... I have yet to succeed at decently making the single-picture LoRA character that would make the reference sheet of the original concept in one go.
    Your take is basically the Mick Mumpitz workflow with Flux. It's good as it is.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      I'm really glad you found this workflow helpful and shared your experience! Flux really kicks it up a notch, and when you combine it with a refined approach like Mick Mumpitz’s, it really gives it that extra edge.

  • @TheBearmoth
    @TheBearmoth 2 months ago

    Great video, very helpful! What kind of spec do you need for this flow?
    I'm able to run some Flux1D stuff, but ComfyUI keeps getting killed for taking too much memory with this workflow :(

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thank you! I'm glad you found the video helpful. If you're already running FLUX1D, ideally you'd want at least 12GB of VRAM for smoother runs. You can try lowering the resolution of the inputs or using quantized models to reduce memory usage.

    • @TheBearmoth
      @TheBearmoth 2 months ago

      @@goshniiAI Any system RAM requirements? That's given me grief in the past, before I upgraded it.

  • @V3ryH1gh
    @V3ryH1gh 27 days ago +1

    When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.

    • @goshniiAI
      @goshniiAI  26 days ago

      Double-check that your image resolution matches the AIO's setup; mismatches can sometimes be the cause. Also, tweaking the strength values for ControlNet can help the AUX processor interpret the image better. It took me a bit of experimenting with these settings too! I hope this helps.

    • @Retrocaus
      @Retrocaus 24 days ago

      @@goshniiAI I still get a blank image. Also, the strength comes after the preprocessor's save image, so I don't think it affects it?

  • @E.T.S.
    @E.T.S. 2 months ago

    Very helpful, thank you.

    • @goshniiAI
      @goshniiAI  2 months ago

      I appreciate your feedback.

  • @pumbchik5788
    @pumbchik5788 2 months ago +3

    For the pose reference, can we add our own pics, posing as we like? Will it work?

    • @goshniiAI
      @goshniiAI  2 months ago

      Yep!!! You can use any picture, and then you'll need ControlNet to extract your pose.

  • @edmartincombemorel
    @edmartincombemorel 2 months ago

    Great stuff, but there is definitely a missed opportunity to crop each pose and redo a pass of KSampler on it; you could even crop your ControlNet to fit the same pose.

    • @goshniiAI
      @goshniiAI  2 months ago

      You're absolutely right: cropping each pose and running it through KSampler again could really refine the details and give even more control over the final result. I'll definitely keep that in mind for future tutorials! I appreciate the insight.

  • @LaMagra-w4c
    @LaMagra-w4c 11 days ago

    Love your videos. I purchased the pack including the one in this video, but I'm having issues. I keep getting the following error: 'CheckpointLoaderSimple
    ERROR: Could not detect model type of: flux1-dev-fp8.safetensors'. Where would I download the correct model for this to work?

    • @goshniiAI
      @goshniiAI  11 days ago +1

      Thank you for supporting the channel. Make sure you're grabbing the specific FP8 version of the model and placing it in the models/checkpoints folder within your ComfyUI directory.
      Double-check that the file name hasn’t changed (e.g., flux1-dev-fp8.safetensors) and that it's saved in the right format. If you need further guidance, feel free to view this step-by-step video ua-cam.com/video/TWSFej_S_bY/v-deo.htmlsi=hWosspilbjYj3QWl

    • @LaMagra-w4c
      @LaMagra-w4c 11 days ago

      @@goshniiAI Thank you! It worked, but is it normally very slow when it hits the first KSampler? It takes forever to get through this point.

    • @goshniiAI
      @goshniiAI  11 days ago +1

      @@LaMagra-w4c Yes, FLUX Dev can be a bit sluggish when it hits the first KSampler. It's not just you!
      Here are a few tips to speed things up: use quantized models, lower sampling steps, and make sure your GPU and VRAM aren't getting held back by other stuff running in the background.

  • @bananacomputer9351
    @bananacomputer9351 2 months ago

    Wow nice

  • @m3dia_offline
    @m3dia_offline 2 months ago

    Are you going to follow up on this video with how to use this character sheet to put them in different scenes/videos?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thanks for the suggestion! I'll check it out since you mentioned it.

  • @AnthonyTori
    @AnthonyTori 2 months ago

    It would be nice if we could upload a 3D file like a glb so the software has every angle of the model. It would make consistent characters a lot easier.

    • @goshniiAI
      @goshniiAI  2 months ago

      .glb would advance the creation of consistent characters. That might just be a possibility in the future!

  • @lordmo3416
    @lordmo3416 1 month ago +1

    Would you be so kind as to give the workflow for using an existing image or character? Thanks

    • @goshniiAI
      @goshniiAI  1 month ago +1

      Yes, hopefully, the tutorial that follows will clarify and give that.

    • @lordmo3416
      @lordmo3416 1 month ago

      @@goshniiAI can't wait

  • @ttthr4582
    @ttthr4582 25 days ago

    How do I know which other models are trained for use with ControlNet? I basically want to create a 2D cartoon character turnaround sheet using your workflow.

    • @goshniiAI
      @goshniiAI  22 days ago

      Hello, and thank you for watching and engaging. ControlNet only conditions your prompt to take the specific pose you want. So to find models that work smoothly with ControlNet, you can explore Civitai. Sometimes the models include detailed tags indicating ControlNet compatibility. However, the majority of models are trained for ControlNet.
      For that 2D cartoon character turnaround, try searching models tagged with styles like “cartoon” or “illustration”.
      I hope these help.

  • @RoN43wwq
    @RoN43wwq 2 months ago

    great thanks

  • @pushingpandas6479
    @pushingpandas6479 2 months ago

    thank you!!!!

  • @hmmrm
    @hmmrm 2 months ago

    THANKS

  • @personaje27
    @personaje27 1 month ago

    Hi bro, thanks for the video. Please, which PC do you recommend for all of this? I am trying to get a laptop but I don't want to make mistakes, as I want it for traditional video editing and AI video/image generation.

    • @goshniiAI
      @goshniiAI  1 month ago +1

      Aim for at least an NVIDIA RTX 3060 or higher with 6GB or more VRAM. This will help with both rendering in video editing software and running AI generation workflows efficiently.
      Also, 32GB of RAM is ideal for smooth performance, especially when multitasking or running resource-heavy AI models.

  • @dmitryboldyrev7364
    @dmitryboldyrev7364 2 months ago +1

    How to create multiple consistent cartoon characters interacting with each other in different scenes?

    • @goshniiAI
      @goshniiAI  2 months ago

      Hopefully soon, in the next post

  • @JustinCiriello
    @JustinCiriello 2 months ago

    It all works except the Face Detailer. It just gets stuck in a loop when it gets to that step. Endless loop with no error. Refreshing and restarting did not help. Everything is fully updated.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Yes, that's correct; the Face Detailer continuously refines the face details until they are complete. Keep it running until it generates the final image. You got it right!

  • @秦奕-f9k
    @秦奕-f9k 2 months ago

    great ai master

    • @goshniiAI
      @goshniiAI  2 months ago

      Thank you, Sensei!

  • @Larimuss
    @Larimuss 1 month ago

    But how do we make different poses and profile photos for LoRAs etc.? Part 2 would be awesome 😂 This is a great workflow and video, thanks!

    • @goshniiAI
      @goshniiAI  1 month ago

      I'm glad you enjoyed the workflow and video! I appreciate your suggestion to create various poses and profile photos for LoRAs, and I will take it into consideration. True enough, Part 2 seems like a really good idea! :)

  • @AIChandu77
    @AIChandu77 2 months ago

    thanks

  • @greenlanternA123
    @greenlanternA123 2 months ago

    Your UI is very nice. I still have the old look; how do I update to get your UI?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Please see my video here; towards the end, I explained the settings: ua-cam.com/video/PPPQ1SANScM/v-deo.htmlsi=uMK8VUuxhCxyIerW

  • @AIRawFootages
    @AIRawFootages 3 hours ago

    But can I use images generated from Flux Dev commercially??

  • @Larimuss
    @Larimuss 1 month ago

    Nice, thanks. But what about when we want to use the character in a generation?

    • @goshniiAI
      @goshniiAI  1 month ago

      Yes, you can; here is a follow-up video that explains the process. ua-cam.com/video/OHl9J_Pga-E/v-deo.html

  • @Fret-Reps
    @Fret-Reps 1 month ago

    IDK if you can help me, but I've had problems with this AIO Preprocessor.
    AIO_Preprocessor
    'NoneType' object has no attribute 'get_provider. Please help

    • @goshniiAI
      @goshniiAI  1 month ago

      A missing or outdated dependency can cause this, so make sure to update ComfyUI.
      Otherwise, you can continue to use individual preprocessors for each ControlNet model; that will still work fine.

  • @ZergRadio
    @ZergRadio 2 months ago

    Wow, I really enjoyed this vid.
    I am an absolute beginner.
    I am confused. In the video you have your character in many poses and improved the details.
    How would you take just one of those poses from the character (say Octopus chef) and put it in a new environment?
    Do you have a video on that?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      I'm really glad you enjoyed the video! It's awesome that even as a beginner, you're already asking great questions. If you want to take one of those poses, like our "Octopus Chef," and put it into a new environment, you can easily combine FLUX and ControlNet to lock in the pose while changing the background.
      I haven't made a specific video on that yet, but it's a good idea for a future tutorial, and I'll definitely create a detailed walkthrough soon.

  • @phenix5609
    @phenix5609 1 month ago

    Any idea why I can't get it to work? Strangely, I get your workflow correctly from the link you provide and generate my image with the 3 views like you (before applying the ControlNet). Then I run the workflow again to apply the ControlNet pose (which shows like in your video with the reference image provided; I see the pose extracted correctly). But when I run the workflow trying to apply the ControlNet, instead of the 3-view picture, I don't get the panel view applying the previously generated character to the ControlNet pose, but a single centered character... I'm really not sure what went wrong, lol. So if you have any idea, thanks.

    • @goshniiAI
      @goshniiAI  1 month ago

      Thank you for diving into the workflow! Here are a few tips that might help:
      - Before you run the workflow again, just make sure the reference images for ControlNet are lined up right. Take a look at your positive prompt and think about adding multiple views if you haven’t already.
      - It’s a good idea to double-check the ControlNet settings, especially the resolution and how the preprocessor reads the pose data. Sometimes tweaking those can keep you from getting just a single-centred result.
      I hope this helps.

  • @poptasticanimation55
    @poptasticanimation55 12 days ago

    My AIO Aux Preprocessor is not working; it says it's not in the folder. What should I be looking for in that folder, and if it's not there, where can I get the preprocessor?

    • @goshniiAI
      @goshniiAI  12 days ago

      First, double-check that the ControlNet Auxiliary Preprocessors folder is present in your ComfyUI directory. [ custom_nodes/ControlNet ]
      If it’s missing, you can download the necessary files by using the Manager.
      Then make sure you update ComfyUI to the latest version.

  • @wrillywonka1320
    @wrillywonka1320 26 days ago

    Update on the ControlNetApplySD3 node: supposedly it has been renamed to ControlNet Apply VAE.

    • @goshniiAI
      @goshniiAI  23 days ago

      Thank you for making us aware. We appreciate you watching out for that.

  • @k.jatuphat9785
    @k.jatuphat9785 2 months ago

    How do I add LoRA to this workflow, please? I need LoRA for my character's face and ControlNet for my character's pose.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      To achieve the LoRA results, place the LoRA node between the Load Checkpoint and the prompt nodes. You can also follow this tutorial on how to use Flux with LoRA. ua-cam.com/video/HuDU4DlZid8/v-deo.htmlsi=-l4wISSzrH0i1wmp
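For anyone driving ComfyUI through its API-format workflow JSON instead of the graph UI, the same placement advice (a LoraLoader spliced between the checkpoint loader and everything that consumes its MODEL/CLIP outputs) can be sketched like this. The `LoraLoader` and input field names follow ComfyUI's API-format export, but treat the exact names as an assumption to verify against your own exported workflow:

```python
def insert_lora(workflow: dict, ckpt_id: str, lora_name: str) -> dict:
    """Rewire an API-format ComfyUI workflow so a LoraLoader sits between the
    checkpoint loader node (ckpt_id) and every node that consumed its
    MODEL (output 0) or CLIP (output 1)."""
    lora_id = str(max(int(i) for i in workflow) + 1)  # next free node id
    workflow[lora_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "model": [ckpt_id, 0],   # MODEL output of the checkpoint loader
            "clip": [ckpt_id, 1],    # CLIP output of the checkpoint loader
            "lora_name": lora_name,
            "strength_model": 1.0,
            "strength_clip": 1.0,
        },
    }
    # Point former consumers of the checkpoint outputs at the LoraLoader.
    # LoraLoader exposes MODEL/CLIP at the same output indices, so the
    # second element of each link can stay unchanged.
    for node_id, node in workflow.items():
        if node_id == lora_id:
            continue
        for key, val in node["inputs"].items():
            if isinstance(val, list) and val[0] == ckpt_id:
                node["inputs"][key] = [lora_id, val[1]]
    return workflow
```

In the graph UI this is exactly the drag-and-reconnect step the reply describes; the script just automates it for batch jobs.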

  • @aaagaming2023
    @aaagaming2023 2 months ago

    Is there an automated way in Comfy to split the character sheet into individual images to train LoRAs on the character?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Yes, you can get individual images by using the image crop node.
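As a rough alternative to the Image Crop node, a character sheet laid out on a regular grid can also be sliced outside ComfyUI. A minimal sketch with Pillow, assuming an evenly spaced grid of poses (real sheets often need hand-tuned crop boxes):

```python
from PIL import Image

def split_sheet(sheet_path: str, cols: int, rows: int) -> list:
    """Cut a character-sheet image into cols x rows equally sized tiles,
    returned left-to-right, top-to-bottom (e.g. one tile per pose)."""
    sheet = Image.open(sheet_path)
    tile_w, tile_h = sheet.width // cols, sheet.height // rows
    tiles = []
    for row in range(rows):
        for col in range(cols):
            box = (col * tile_w, row * tile_h,
                   (col + 1) * tile_w, (row + 1) * tile_h)
            tiles.append(sheet.crop(box))
    return tiles

# e.g. a 3x2 sheet -> six pose images ready for LoRA training:
# for i, tile in enumerate(split_sheet("character_sheet.png", 3, 2)):
#     tile.save(f"pose_{i}.png")
```

The tile images can then go straight into a LoRA training set, as discussed in the question above.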

  • @cray989
    @cray989 23 days ago

    I'm getting an error when I try to use the DWPreprocessor (and several others). The message says:
    # ComfyUI Error Report
    ## Error Details
    - **Node Type:** AIO_Preprocessor
    - **Exception Type:** huggingface_hub.utils._errors.LocalEntryNotFoundError
    - **Exception Message:** An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
    ## Stack Trace
    My internet connection is fine. Any advice?

    • @goshniiAI
      @goshniiAI  23 days ago

      Sorry to hear that; I would recommend updating any of your nodes as well as running an update for ComfyUI.

  • @AIRawFootages
    @AIRawFootages 13 days ago

    It shows "(IMPORT FAILED) ComfyUI's ControlNet Auxiliary Preprocessors" when I try to install ControlNet Auxiliary Preprocessors... anyone please help

    • @goshniiAI
      @goshniiAI  13 days ago

      Make sure you're running the latest version of ComfyUI. Sometimes, older versions don’t play well with newer add-ons.

  • @stevenls9781
    @stevenls9781 23 days ago

    Is there a way with this workflow to use an image of a person that would be part of the output character sheet?

    • @goshniiAI
      @goshniiAI  23 days ago

      Hello Steven, the answer is sadly no for this workflow. I have explained in the next tutorial how to achieve this with the IP Adapter, but it uses SDXL rather than FLUX due to the IP Adapter's consistency.
      To obtain an accurate input image, I recommend creating a character sheet for your character concept and then training a LoRA using your images.

    • @stevenls9781
      @stevenls9781 23 days ago

      @@goshniiAI oh ok, that works also. Doooo you happen to have a link to a LoRA training video :D

    • @goshniiAI
      @goshniiAI  23 days ago

      ​@@stevenls9781 Not just yet. For now, I do not have a video of LoRA training with FLUX, but I am considering making one to share the process.
      You can check out this reference video that might assist you ua-cam.com/video/Uls_jXy9RuU/v-deo.htmlsi=EJoLucxVyOFFQKjB

  • @demiurgen3407
    @demiurgen3407 2 months ago

    This might be a dumb question, but what do you do with a character sheet? You have a character in different poses, then what? Do you animate it? Do you use it for something else?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Not a dumb question at all! Character sheets are often used in animation, game development, and concept art to showcase a character in various poses or expressions, making it easier for artists or animators to reference and maintain consistency.
      It’s mostly a reference tool to visualize how the character moves and looks from different angles. If you’re looking to bring these poses to life, you can definitely use them as a foundation for animation or even export them into 3D modeling software.

    • @demiurgen3407
      @demiurgen3407 2 months ago +1

      @@goshniiAI Cool! Maybe you could do a video on that? How to move from a character sheet to a 3D model :)

  • @ScaleniumPersonaleAI
    @ScaleniumPersonaleAI 23 days ago

    Bro, this video is great but some nodes are missing... how should we fix this?

    • @goshniiAI
      @goshniiAI  23 days ago

      If you see missing nodes in your workflow, it means you have not yet installed the custom nodes. To install the missing nodes, go to Manager > Install Missing Nodes and then install the ones that appear.
      That will help to find the missing nodes and fix them.

  • @nickfai9301
    @nickfai9301 3 days ago

    How to use the image reference in animation?

    • @goshniiAI
      @goshniiAI  12 hours ago +1

      I am hoping to share a video process on that in future videos.

  • @CsokaErno
    @CsokaErno 1 month ago

    This "ControlNetApply SD3 and HunyuanDiT" node is nowhere to be found :/ I updated everything.

    • @goshniiAI
      @goshniiAI  1 month ago

      The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.

  • @skybluexox
    @skybluexox 1 month ago

    I can’t use the AIO Aux Preprocessor; how do I fix this? 😢

    • @goshniiAI
      @goshniiAI  1 month ago

      No need to worry. You can use separate preprocessors for each model, and everything will still work.

  • @AIandTech-dq4iy
    @AIandTech-dq4iy 2 months ago +1

    I can't find the ControlNetApply SD3 and HunyuanDIT nodes. Where can I install them?

    • @goshniiAI
      @goshniiAI  2 months ago

      One of the key nodes in ComfyUI is ControlNetApplySD3. Before it's made available, make sure ComfyUI is updated.

    • @goldkat94
      @goldkat94 2 months ago

      @@goshniiAI I can't find it either. Auxiliary Preprocessors is installed and "ComfyUI is already up to date with the latest version."

    • @bluemodize7718
      @bluemodize7718 2 months ago +1

      @@goshniiAI I already have Comfy and packages up to date and still can't find it

    • @Simjedi
      @Simjedi 2 months ago

      @@bluemodize7718 It has changed. It's been renamed to "Apply Controlnet with VAE"

    • @fedesalmaso
      @fedesalmaso 2 months ago

      @@bluemodize7718 same here

  • @mr.entezaee
    @mr.entezaee 2 months ago

    Does anyone know how to fix this problem?
    Failed to restore node: Ultimate SD Upscale
    Please remove and re-add it.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      It seems there might be a mismatch in the workflow. Try deleting the node and adding it back from scratch. If that doesn’t work, just make sure you have the latest version of the node installed.

    • @mr.entezaee
      @mr.entezaee 2 months ago

      @@goshniiAI Yes, that's it, but I don't know which node to delete. How do I know which node to delete?

  • @RxAIWithDrJen
    @RxAIWithDrJen 2 months ago

    Have no idea what I'm missing to get ControlNetApply SD3 and HunyuanDiT. It does not update and does not show in Manager... so can anyone shed light? New to SD and Comfy. Thanks

    • @goshniiAI
      @goshniiAI  2 months ago

      The "Apply SD3" node has been renamed to "Apply Controlnet With VAE" in the latest updates. The process to find it remains the same, but the node has been renamed.

    • @RxAIWithDrJen
      @RxAIWithDrJen 2 months ago

      @@goshniiAI Thanks! And thank you for an excellent video

    • @goshniiAI
      @goshniiAI  2 months ago

      @@RxAIWithDrJen You are most welcome. Thank you for being here

  • @bushwentto711
    @bushwentto711 2 months ago

    Cool, but now how can we use that to create a consistent character in a scene with Flux?

    • @goshniiAI
      @goshniiAI  2 months ago

      I am looking into it, and hopefully we will have a video guide on it soon.

    • @bushwentto711
      @bushwentto711 2 months ago

      @@goshniiAI Cheers mate, keep up the great content

  • @tmlander
    @tmlander Місяць тому

    why not share the json for comfy? I went to gumroad and downloaded your files but was surprised there is no json just an image of your set up!!!?????

    • @devnull_
      @devnull_ a month ago +1

      Are you sure the image doesn't have the Comfy workflow stored in it? Did you try dropping it into ComfyUI?

    • @goshniiAI
      @goshniiAI  a month ago +1

      Yes, you are right; the PNG image works the same as a JSON file. You only have to import it or drag and drop it into ComfyUI.

    • @tmlander
      @tmlander a month ago

      @@goshniiAI I saw that later... sorry, I thought Comfy only accepted JSON. Thanks for your work!

    • @goshniiAI
      @goshniiAI  a month ago

      @@tmlander You are most welcome; thank you for sharing an update.

  • @adult85a1
    @adult85a1 a month ago

    Sir! Which GPU are you using? And please suggest a cloud GPU service site!

    • @goshniiAI
      @goshniiAI  a month ago

      I'm using an NVIDIA RTX 3060 for my workflow. For cloud GPU services, I recommend trying RunPod or Vast.ai; both offer flexible pricing and options for FLUX and ControlNet if your local hardware isn't enough.

  • @josemasisvalverde8646
    @josemasisvalverde8646 14 days ago

    Can I use this for SDXL?

    • @goshniiAI
      @goshniiAI  13 days ago

      Yes, you can; just make sure to use the correct SDXL models for ControlNet, the Checkpoint Loader, and other SDXL-compatible nodes.

  • @ainaopeyemi339
    @ainaopeyemi339 2 months ago

    So I have a question: rather than prompting everything in a single box, can we have a different workflow for each pose? For example, a sitting-pose workflow, a standing-pose workflow, a jumping-pose workflow, and generate them individually rather than generating them all in one box.
    Also, is there a way to make sure the character you are prompting remains the same over time? For example, the octopus man you prompted: say I want to use him for a children's story book, and I don't want to prompt all the poses at once. I can prompt him sitting today, standing tomorrow, and eating next week, and the character remains the same throughout?
    Thank you

    • @Muz889
      @Muz889 2 months ago

      What he showed in the video is called a character sheet. You can then use this character sheet as a reference image to tell Flux what a character looks like, and prompt any pose or action you want for this character specifically. What you should research now is how to use character sheets with Flux.

    • @goshniiAI
      @goshniiAI  2 months ago

      Thanks for explaining and providing the extra information.

  • @stevenls9781
    @stevenls9781 28 days ago

    Can we download that workflow? Maybe I missed that in the vid.

    • @goshniiAI
      @goshniiAI  28 days ago +1

      Yes, you can use the link in the description.

    • @stevenls9781
      @stevenls9781 28 days ago

      @@goshniiAI Oh man... if only I used my eyes. Thanks for the reply.

    • @stevenls9781
      @stevenls9781 27 days ago

      Ah, I was looking for a JSON file or something; it's a PNG to use as a reference and copy into Comfy

    • @goshniiAI
      @goshniiAI  27 days ago +1

      @@stevenls9781 True! A PNG or JSON file can be used in the same way.
      The benefit of using a PNG workflow is that you can see a preview of the node structure or layout. You only need to drag the PNG file into ComfyUI to get to the workflow.

    • @stevenls9781
      @stevenls9781 27 days ago

      @@goshniiAI Ah, gotcha. I was just looking at it as an image preview and thought, cool, I can recreate it from that. After doing it manually, I dragged the PNG into Comfy and it loaded... hahaha, well, good practice following the image :D
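
As the thread above notes, a ComfyUI export PNG doubles as the workflow file: the node graph is saved as JSON in a PNG `tEXt` metadata chunk keyed `workflow`, which is why dragging the image into ComfyUI restores the graph. The stdlib-only sketch below (an editor's illustration, not part of the video) shows how that embedded JSON can be read back out:

```python
import json
import struct

def extract_workflow(png_path):
    """Return the workflow dict embedded in a ComfyUI PNG, or None.

    A PNG file is an 8-byte signature followed by chunks of the form
    [4-byte length][4-byte type][data][4-byte CRC]; ComfyUI stores the
    node graph as JSON in a tEXt chunk whose keyword is "workflow".
    """
    with open(png_path, "rb") as f:
        data = f.read()
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = body.partition(b"\x00")
            if keyword == b"workflow":
                return json.loads(text.decode("latin-1"))
        pos += 12 + length  # skip length, type, data, and CRC fields
    return None
```

If this returns a dict, the image is a valid workflow file; if it returns None, the PNG was probably re-saved by an editor or image host that stripped the metadata, which is the usual reason a dragged image fails to load.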

  • @Huguillon
    @Huguillon 2 months ago

    How do you get that new interface? I updated everything and I still have the old interface

    • @Huguillon
      @Huguillon 2 months ago

      Never mind, I found it

    • @goshniiAI
      @goshniiAI  2 months ago

      Awesome! I'm glad you found it.

    • @Huguillon
      @Huguillon 2 months ago

      @@goshniiAI By the way, Amazing video, Thank you

    • @goshniiAI
      @goshniiAI  2 months ago

      @@Huguillon I appreciate it. You are welcome

  • @RagonTheHitman
    @RagonTheHitman 2 months ago +1

    I can't use "DWPose" as a preprocessor; I get some strange errors. It could have something to do with the onnxruntime-gpu / CUDA version. Someone wrote: "The error message mentioned above usually means DWPose, a Deep Learning model, and more specifically, a ControlNet preprocessor for OpenPose within ComfyUI's ControlNet Auxiliary Preprocessors, doesn't support the CUDA version installed on your machine." I tried for 4 hours to fix it; ChatGPT couldn't help, and neither could anyone on the internet... :(

    • @JustinCiriello
      @JustinCiriello 2 months ago +3

      I can't either. Try using OpenposePreprocessor instead.

    • @RagonTheHitman
      @RagonTheHitman 2 months ago +2

      @@JustinCiriello Yes, this is working :)

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Thank you for providing the additional information.

    • @brandoncurrypitcher1945
      @brandoncurrypitcher1945 a month ago +1

      @@JustinCiriello Thanks, I had the same issue.

  • @sanbait
    @sanbait a month ago

    What is your ComfyUI panel in the browser?

    • @goshniiAI
      @goshniiAI  a month ago

      Hello there, I have explained that towards the end of this video: ua-cam.com/video/PPPQ1SANScM/v-deo.htmlsi=_KhvMhp30g_h2rxx
      I hope this helps.

  • @fungus98
    @fungus98 2 months ago

    So it appears that the Apply SD3 node has been renamed to Apply With VAE?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      It is still SD3, as I checked.

    • @fungus98
      @fungus98 2 months ago

      @@goshniiAI I still can't get it to come up on mine, but "Apply" and "Apply With VAE" look like the exact same node. At least, I can't see a difference

    • @goshniiAI
      @goshniiAI  2 months ago +2

      Thank you for pointing that out; it looks like the "Apply SD3" node has been renamed to "Apply ControlNet With VAE" in the latest updates.

    • @goshniiAI
      @goshniiAI  2 months ago +1

      @@fungus98 Yeah, you are right, and thank you for sharing your observation.

  • @felipecesarlourenco8955
    @felipecesarlourenco8955 a month ago

    How do I add a simple LoRA?

    • @goshniiAI
      @goshniiAI  a month ago

      Hello there, you can view my guide on adding a LoRA in my previous video for FLUX: ua-cam.com/video/HuDU4DlZid8/v-deo.htmlsi=FzSSqoe6OV_56l55

  • @sanbait
    @sanbait a month ago

    But what about non-human characters?
    Animals?

    • @goshniiAI
      @goshniiAI  a month ago

      For animals, you'll need a ControlNet animal-pose model, but I'm not sure one is currently available for Flux.

    • @sanbait
      @sanbait a month ago

      @@goshniiAI How can I make a custom skeleton?
      I have a game character, like a Pokémon

  • @botlifegamer7026
    @botlifegamer7026 2 months ago

    There is no option for ControlNetApply SD3.

    • @goshniiAI
      @goshniiAI  2 months ago

      ControlNetApply SD3 is a core node in ComfyUI. Make sure Comfy is updated so it becomes available.

    • @goshniiAI
      @goshniiAI  2 months ago

      Please do the same by updating ComfyUI.

    • @botlifegamer7026
      @botlifegamer7026 2 months ago

      @@goshniiAI It's not there even after updates

    • @goshniiAI
      @goshniiAI  2 months ago

      @HelloMeMeMeow Yeah the workflow is now available.

    • @botlifegamer7026
      @botlifegamer7026 2 months ago

      @@goshniiAI Your workflow has a ControlNetApply with VAE node, not the SD3 node you have in yours. Or did you rename it?

  • @victorestomo729
    @victorestomo729 2 months ago

    Can I add a Load LoRA node?

    • @goshniiAI
      @goshniiAI  2 months ago

      Yeah, that can be done. I explained how to do it here: ua-cam.com/video/HuDU4DlZid8/v-deo.htmlsi=gC-go2q4ylLSm6Or

  • @wrillywonka1320
    @wrillywonka1320 2 months ago

    Can this be done in Forge UI?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Yeah, hopefully I'll make a tutorial video for that.

    • @wrillywonka1320
      @wrillywonka1320 2 months ago

      @@goshniiAI That'd be awesome! I need that badly

  • @amirhossein1108
    @amirhossein1108 28 days ago

    Is this free?

    • @goshniiAI
      @goshniiAI  28 days ago

      Yes, you are welcome to use the link in the description.

  • @Thefishos
    @Thefishos 2 months ago

    Very nice work! Thanks a lot, man. I know it takes a lot of time to make videos like this, but is there any chance you could make a video with a workflow like this one, but with Flux of course:
    ua-cam.com/video/849xBkgpF3E/v-deo.htmlsi=GZwbPr4nuI8dvvyn
    That would be amazing!!!
    🙏

    • @goshniiAI
      @goshniiAI  26 days ago +1

      Hi there, I appreciate your suggestion and the reference link. I will consider that.

  • @bhuvanaib.9731
    @bhuvanaib.9731 12 days ago

    Hi, it's stuck on the Load Upscale Model node. I believe I don't have "4x-UltraSharp.pth". How do I get that, please?

    • @goshniiAI
      @goshniiAI  12 days ago

      The upscale models can be downloaded through the Manager, or you can watch the video linked here to guide you: ua-cam.com/video/PPPQ1SANScM/v-deo.htmlsi=M-fMMvE6-kEzr5u8
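
Besides the Manager, the checkpoint can also be fetched by hand, since upscaler models simply live in ComfyUI's `models/upscale_models` folder. The helper below is an editor's sketch, not from the video; the example URL is a placeholder, so substitute the real download link you obtained for 4x-UltraSharp.pth:

```python
import pathlib
import urllib.request

def download_upscaler(url, comfy_root="ComfyUI"):
    """Fetch an upscaler checkpoint into ComfyUI's upscale_models folder.

    The destination filename is taken from the last path segment of the
    URL, e.g. ".../4x-UltraSharp.pth" lands at
    <comfy_root>/models/upscale_models/4x-UltraSharp.pth.
    """
    dest = (pathlib.Path(comfy_root) / "models" / "upscale_models"
            / url.rsplit("/", 1)[-1])
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    return dest

# Example (placeholder URL -- replace with the real link):
# download_upscaler("https://example.com/path/4x-UltraSharp.pth")
```

After the download, restart ComfyUI (or refresh the node) so the Load Upscale Model dropdown picks up the new file.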

  • @hasstv9393
    @hasstv9393 2 months ago

    Can anyone tell me the use case of these character images?

    • @goshniiAI
      @goshniiAI  2 months ago

      Awesome question! Just picture game development, animation, or storyboarding. When you have consistent images from different angles, your character looks the same from any perspective. This makes it easier to animate, storyboard, or even 3D print. It's also super helpful for storybooks or visualizing characters in dynamic scenes. I hope that gives some inspiration!

    • @hasstv9393
      @hasstv9393 2 months ago

      @@goshniiAI Is it possible to make 3D models with AI from these images?

    • @goshniiAI
      @goshniiAI  2 months ago

      @@hasstv9393 Absolutely! There are good AI tools for converting 2D concepts to 3D.
      If you're looking for AI-powered options, you can use 3D AI Studio, Meshy, Rodin, Tripo 3D, or Genie by Luma Labs to produce 3D models directly from images, while platforms like Ready Player Me allow you to build 3D avatars from an image input.

  • @Ozstudiosio
    @Ozstudiosio 6 days ago

    Perfect workflow! Could you send me your contact? We need to speak about some business work

    • @goshniiAI
      @goshniiAI  6 days ago

      Thank you! Please send an email to this address: mylifeisgrander@protonmail.com.