Neuron
SD 3.5 large & turbo in ComfyUI AI, usage and installation
In this video I will show you how to install and use the new Stable Diffusion 3.5 in ComfyUI.
The blog post at stability.ai and the page from the ComfyUI GitHub:
stability.ai/news/introducing-stable-diffusion-3-5
comfyanonymous.github.io/ComfyUI_examples/sd3/
Get the models and put them in the models/checkpoints folder of your ComfyUI installation:
huggingface.co/stabilityai/stable-diffusion-3.5-large/tree/main
huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/tree/main
huggingface.co/Comfy-Org/stable-diffusion-3.5-fp8/blob/main/sd3.5_large_fp8_scaled.safetensors
Get the CLIP models:
huggingface.co/Comfy-Org/stable-diffusion-3.5-fp8/blob/main/text_encoders/clip_l.safetensors
huggingface.co/Comfy-Org/stable-diffusion-3.5-fp8/blob/main/text_encoders/clip_g.safetensors
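As a quick sanity check after downloading, the expected layout can be verified with a short Python sketch. The file names below are taken from the links above, and the assumption that the CLIP encoders go in models/clip follows the usual ComfyUI convention; adjust both if your installation differs:

```python
from pathlib import Path

# Expected layout inside a ComfyUI installation (assumed, adjust to yours).
EXPECTED = {
    "models/checkpoints": ["sd3.5_large.safetensors",
                           "sd3.5_large_turbo.safetensors",
                           "sd3.5_large_fp8_scaled.safetensors"],
    "models/clip": ["clip_l.safetensors", "clip_g.safetensors"],
}

def missing_files(comfy_root):
    """List the expected model files that are not in place yet."""
    root = Path(comfy_root)
    return [f"{folder}/{name}"
            for folder, names in EXPECTED.items()
            for name in names
            if not (root / folder / name).is_file()]
```

Running `missing_files` on your ComfyUI folder prints exactly which downloads are still missing before you start the workflow.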
The prompts:
A close-up portrait of a seasoned female journalist in her late 50s. She has short salt-and-pepper hair, keen hazel eyes behind rectangular glasses, and subtle laugh lines. Her expression is one of intense focus as she interviews someone off-camera. Soft, natural lighting from a nearby window illuminates her face, highlighting her determined demeanor. She's wearing a crisp white blouse and a navy blazer.
A lively street performer in her early 30s captivates a small crowd in a bustling city square. She has vibrant teal hair styled in a messy updo, bright green eyes, and a contagious smile. Her face is adorned with intricate, shimmering face paint in swirling patterns. She's wearing a colorful, patchwork dress and is mid-motion, juggling three flaming torches. The background is slightly blurred, showing the impressed onlookers and the warm glow of street lamps at dusk.
polaroid photo, night photo, photo of 24 y.o beautiful woman, pale skin, bokeh, motion blur
A charismatic speaker is captured mid-speech. He has short, tousled brown hair that's slightly messy on top. He has a round circle face, clean shaven, adorned with rounded rectangular-framed glasses with dark rims, is animated as he gestures with his left hand. He is holding a black microphone in his right hand, speaking passionately. The man is wearing a light grey sweater over a white t-shirt. He's also wearing a simple black lanyard hanging around his neck. The lanyard badge has the text "Anakin AI". Behind him, there is a blurred background with a white banner containing logos and text (including Anakin AI), a professional conference setting.
Get this workflow on Patreon with the free membership:
www.patreon.com/posts/new-sd-3-5-in-ai-114623989?Link&
Please comment below if you have questions or want to tell me your suggestions for future videos.
Buy me a coffee:
buymeacoffee.com/neuron_ai
Connect with me:
_neuron_ai
_neuron_ai
www.patreon.com/neuron_ai/
#comfyui #stablediffusion #tutorial #animatediff #automatic1111 #aiart #ai #lama #howto
Views: 637

Videos

Create mockups for graphic design presentations in ComfyUI AI with FLUX 1, load prompt from textfile
1.8K views · 1 day ago
In this video I will show you how to use ComfyUI to create graphic design mockups which you can use to present designs to customers or on your webpage. We will also implement functionality to load prompts from a text file. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video. You will also find the needed models the...
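The prompt-from-text-file part of the workflow boils down to reading one prompt per line. In the video this is handled by a custom node; the plain-Python sketch below only illustrates the file format that approach assumes:

```python
def load_prompts(path):
    """Return one prompt per non-empty line of a UTF-8 text file."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```

Blank lines are skipped, so the text file can be padded for readability without producing empty prompts.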
FLUX negative prompt, how to, in ComfyUI AI
2.4K views · 1 month ago
In this video I will show you how to use a negative prompt with FLUX in ComfyUI. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video. You will also find the needed models there: ua-cam.com/video/JiFxw_CToFM/v-deo.html Get this workflow on Patreon with the base membership: www.patreon.com/posts/flux-negative-to-112...
Simple FLUX inpainting in ComfyUI AI
1.5K views · 1 month ago
In this video I will show you how to build a simple inpaint workflow with FLUX. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video: ua-cam.com/video/JiFxw_CToFM/v-deo.html Get the needed custom node: github.com/kijai/ComfyUI-KJNodes Get this workflow on Patreon with the free membership: www.patreon.com/posts/simp...
Vary subtle & vary strong functionality in ComfyUI AI, like in Midjourney
2.4K views · 1 month ago
In this video I will walk you through a special workflow to recreate some Midjourney functionality I missed in ComfyUI. I will set up a workflow for subtle and strong variation. You will get full control over the amount of variation in your generated images. Please comment below if you have questions or want to tell me your suggestions for future videos. Get the needed custom node package: githu...
Deforum animation from start image with ComfyUI AI
1.4K views · 2 months ago
In this video I will walk you through a Deforum workflow to create an animation from a starting image inside ComfyUI. Please comment below if you have questions or want to tell me your suggestions for future videos. The Deforum videos on my channel: Deforum base workflow: ua-cam.com/video/zuAJExW_IPc/v-deo.html Deforum cadence interpolation: ua-cam.com/video/fQQfMAQHc_E/v-deo.html Deforum IPAd...
FLUX V1 GGUF model in ComfyUI AI, for 8GB VRAM / GPU Ram / small VRAM
2.4K views · 2 months ago
In this video I will show you how to use the FLUX GGUF model version in ComfyUI. This version is optimized for small GPU VRAM and smaller GPUs with 8 GB. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video: ua-cam.com/video/JiFxw_CToFM/v-deo.html Update your ComfyUI to the newest version. Update or install the GGUF...
FLUX V1 with IPAdapter in ComfyUI AI, DEV, SCHNELL, FP8
1.2K views · 2 months ago
In this video I will show you how to use the Flux Dev and Schnell models inside of ComfyUI in combination with IPAdapter. Installation: If you haven't used FLUX so far, check out my video on how to use and install FLUX in ComfyUI before you watch this video: ua-cam.com/video/JiFxw_CToFM/v-deo.html Update your ComfyUI to the newest version. Update or install the XLabs ComfyUI custom nodes: github.c...
FLUX NF4 with ControlNet in ComfyUI AI, For smaller GPUs, low VRAM
3.8K views · 2 months ago
In this video I will show you how to use the Flux NF4 model inside of ComfyUI and combine it with a ControlNet. Installation: Update your ComfyUI to the newest version. Download the Flux Dev NF4 model from the following link and put it in your ComfyUI/models/checkpoints folder: huggingface.co/lllyasviel/flux1-dev-bnb-nf4/blob/main/flux1-dev-bnb-nf4.safetensors OR download the Flux Schnell NF4 model...
FLUX FLUX FLUX - DEV & SCHNELL model with LORA in ComfyUI AI
1.3K views · 2 months ago
In this video I will show you how to use the Flux Dev & Schnell models inside of ComfyUI. Installation: Update your ComfyUI to the newest version. Download the Flux Dev model from the following link and put it in your ComfyUI/models/unet folder: huggingface.co/black-forest-labs/FLUX.1-dev/tree/main civitai.com/models/617609/flux1-dev Download the Flux Schnell model from the following link and put in...
Easy and fast text to 3D & image to 3D with TripoSR in ComfyUI AI
6K views · 2 months ago
In this video I will show you how you can easily and quickly create 3D models from a text prompt. We will use TripoSR in ComfyUI. Please comment below if you have questions or want to tell me your suggestions for future videos. Get this and many other workflows on Patreon with the Base membership: www.patreon.com/posts/easy-and-fast-to-109770591?Link& Get the custom nodes: github.com/flowtyone/ComfyUI...
Create abstract animated video ControlNets in Blender 3D, for ComfyUI AI, Automatic1111, etc.
2.4K views · 2 months ago
In this video I will show you how to create animated, abstract, texture-based ControlNets for use with AnimateDiff, Deforum, or inside of Automatic1111. Please comment below if you have questions or want to tell me your suggestions for future videos. Get this workflow for free on Patreon with the free membership: www.patreon.com/posts/create-abstract-109640900?Link& Get Blender 3D: www.blend...
Insane morbid morphing animation with AnimateDiff, IpAdapter, ControlNet, in ComfyUI AI
726 views · 2 months ago
In this video I will show you a way to create morbid, uncanny and strange morphing animations with IPAdapter Plus, AnimateDiff and ControlNet in ComfyUI. We will also use upscaling and interpolation. Please comment below if you have questions or want to tell me your suggestions for future videos. Get this and many other workflows on Patreon with the Base membership: www.patreon.com/posts/insane-mor...
Exchange objects by keyword in ComfyUI AI, SAM, Grounding Dino, Differential Diffusion
621 views · 2 months ago
In this video I will show you a way to automatically detect objects by keyword and replace them with inpainting. We will use Grounding DINO together with SAM, the Segment Anything Model. Please comment below if you have questions or want to tell me your suggestions for future videos. Get this and many other workflows on Patreon with the Base membership: www.patreon.com/posts/automatic-object-109181034?Link& Segme...
OpenSource AuraFlow V 0.1 & AuraSR upscaler model introduction and usage in ComfyUI AI
558 views · 3 months ago
In this video I will give you a quick introduction to the new AuraFlow V 0.1 model and the AuraSR upscaler. Please comment below if you have questions or want to tell me your suggestions for future videos. AuraFlow blog post: blog.fal.ai/auraflow/ Get the AuraFlow V 0.1 model: huggingface.co/fal/AuraFlow/tree/main Get the custom nodes: github.com/GreenLandisaLie/AuraSR-ComfyUI Get the AuraSR mode...
Shakker AI a CivitAI alternative for SD models, ComfyUI, A1111, new models, AI generator
892 views · 3 months ago
IPAdapter Plus styletransfer with Deforum in ComfyUI AI
1.3K views · 3 months ago
Cadence interpolation for Deforum in ComfyUI AI, smooth animation, consistent and coherent
1.5K views · 4 months ago
Real Deforum for ComfyUI AI, infinite psychedelic zoom animation madness
8K views · 5 months ago
Easy light transfer from image to image with ICLight in ComfyUI AI, Gaffer
711 views · 5 months ago
Autodetect and remove unwanted objects in ComfyUI AI, impact, lama, yolo8, Ultralytics, SEGM
2.6K views · 5 months ago
Animate IPadapter V2 / Plus with AnimateDiff, IMG2VID
2.8K views · 5 months ago
Remove objects and details from photos with AI in ComfyUI, Big Lama
1.7K views · 6 months ago
StableDiffusion 3 API in ComfyUI, installation and usage
2.8K views · 6 months ago
IP-Adapter V2 / Plus for ComfyUI AI step by step installation guide
3.8K views · 6 months ago
HowTo better inpaint with Differential Diffusion in ComfyUI AI, awesome results, precise control
4.9K views · 6 months ago
Interpolate infinite number of frames with small VRAM usage in ComfyUI AI, RIFE, FILM
4K views · 6 months ago
Convert single image to 3D with TripoSR in ComfyUI AI, comparison to CRM
7K views · 6 months ago
SV3D IMG to 3D and multi-view synthesis with Stable Video 3D in ComfyUI AI
4.5K views · 7 months ago
Convert single image to 3D model with AI in ComfyUI with CRM, for GPU & CPU
13K views · 7 months ago

COMMENTS

  • @zNaYuz · 14 minutes ago

    I was looking around for a way to get rid of duplicated frames and found this. I have to say, you got the idea I need, but implemented it in an unnecessarily complicated way. First of all, you don't need a complicated diagram to load 2 adjacent frames: just set the increment for them separately, starting with 0 and 1, with the max value at (last frame - 1) and (last frame). That will do the trick. Secondly, to get rid of the duplicated frame, just split the result batches and throw away the last frame; after the whole process is done, manually add the last frame back. That is a lot easier and faster. Too many steps cost more computing power, which is a big deal when we only have a low-end machine.
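One way to read the commenter's dedup suggestion as plain Python (a minimal illustration, not a ComfyUI node; it assumes each batch overlaps its successor by exactly one frame at the boundary):

```python
def merge_overlapping_batches(batches):
    """Merge batches that overlap by exactly one frame at each boundary:
    drop the duplicated last frame of every batch except the final one,
    then append the final batch whole."""
    frames = []
    for batch in batches[:-1]:
        frames.extend(batch[:-1])   # throw away the duplicated last frame
    if batches:
        frames.extend(batches[-1])  # the last frame is added back at the end
    return frames
```

With batches `[1, 2, 3]`, `[3, 4, 5]`, `[5, 6]` this yields `1..6` with no duplicated boundary frames.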

  • @LucaSerafiniLukeZerfini · 10 hours ago

    Hi, super useful. I have this error:
    # ComfyUI Error Report
    ## Error Details
    - **Node Type:** CRMPoseSampler
    - **Exception Type:** NotImplementedError
    - **Exception Message:** No operator found for `memory_efficient_attention_forward` with inputs:
      query : shape=(1, 1024, 1, 512) (torch.float32)
      key : shape=(1, 1024, 1, 512) (torch.float32)
      value : shape=(1, 1024, 1, 512) (torch.float32)
      attn_bias : <class 'NoneType'>
      p : 0.0
    `decoderF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 128; xFormers wasn't build with CUDA support; attn_bias type is <class 'NoneType'>; operator wasn't built - see `python -m xformers.info` for more info
    `flshattF@0.0.0` is not supported because: max(query.shape[-1] != value.shape[-1]) > 256; xFormers wasn't build with CUDA support; dtype=torch.float32 (supported: {torch.float16, torch.bfloat16}); operator wasn't built - see `python -m xformers.info` for more info
    `cutlassF` is not supported because: xFormers wasn't build with CUDA support; operator wasn't built - see `python -m xformers.info` for more info
    `smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32; xFormers wasn't build with CUDA support; operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 512
    ## Stack Trace ```

    • @neuron_ai · 1 hour ago

      Did you have errors when installing the custom nodes?

  • @playnoob6961 · 1 day ago

    Hi, I cloned KJNodes into the custom_nodes folder, did pip install -r requirements.txt and restarted SwarmUI, but I still cannot get that Grow Mask With Blur option. Please help.

  • @geoffphillips5293 · 11 days ago

    This has been very useful in the past couple of months. I realised that if you have a video as input, or one made on the fly (like CogVideo5), you can just feed it into the interpolator and VideoCombine the output, and that just works without trouble. But I was making a flow that did something to the images and then wanted to interpolate that live, and couldn't work it out. So of course I can make it save images out and then go through another stage like the above to make the interpolation. What happens is I get loads of tiny videos which are just each batch run of a single frame; what I wanted was to interpolate the image sequence and then save the video at the end, without creating the intermediate images.

    • @neuron_ai · 10 days ago

      This is a step-by-step approach. Unfortunately it does not work with video output; you have to combine the frames into a video afterwards.

    • @geoffphillips5293 · 10 days ago

      @@neuron_ai Thanks for the reply. After I wrote that, I tried something with "Meta Batch". Combined with LoadVideo, to which it connects, it had to be set to the right number of input frames in the video, and bingo, it worked!

    • @neuron_ai · 10 days ago

      @@geoffphillips5293 Oh, nice one. I didn't know this node. I will give it a try as well.

  • @friederknabe6991 · 12 days ago

    "Value not in list: model: 'triposr.ckpt' not in (list of length 67) Output will be ignored" - what am I doing wrong?

    • @neuron_ai · 3 days ago

      Hey, sorry for the late response. Unfortunately TripoSR is quite tricky. This error sounds like your model is not in the right place. Did you check this?

  • @devnull_ · 16 days ago

    Thanks! Actually this one seems to be working a lot better than Florence 2 for certain things.

  • @CuddleUTube · 18 days ago

    The created pictures are getting more and more black, no matter the prompt - no idea what that is and why. It's basically only black with cat eyes at this point.

    • @neuron_ai · 17 days ago

      You can try to increase the noise strength.

  • @mhmdoch · 20 days ago

    nice video, easy workflow. thx a lot

  • @takimdigital3421 · 23 days ago

    Can ControlNet work with the Schnell model?

  • @llirikk85 · 24 days ago

    Thnx!

  • @modbit64 · 24 days ago

    thank you!

  • @costatattooz840 · 27 days ago

    Is 12 GB VRAM enough? I have an RTX 3060, and I'm using ComfyUI on Arch Linux.

    • @neuron_ai · 19 days ago

      This should work. Check out this page: github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981

  • @Lovepeace595 · 29 days ago

    Hi guys, I need some help too. I get the issue below - how can I fix it?
    # ComfyUI Error Report
    ## Error Details
    - **Node Type:** TripoSRSampler
    - **Exception Type:** TypeError
    - **Exception Message:** Cannot handle this data type: (1, 1, 5), |u1
    ## Stack Trace ```
    File "D:\Flux1\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
      output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\Flux1\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
      return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\Flux1\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
      process_inputs(input_dict, i)
    File "D:\Flux1\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
      results.append(getattr(obj, func)(**inputs))
    File "D:\Flux1\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Flowty-TripoSR\__init__.py", line 92, in sample
      image = Image.fromarray(np.clip(255. * image, 0, 255).astype(np.uint8))
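The failing line in that traceback hands a five-channel array to Image.fromarray, which PIL cannot convert. A possible workaround (an assumption sketched here, not an official fix for the node) is to drop the extra channels first, assuming a float image in the 0..1 range as the node's own conversion line expects:

```python
import numpy as np

def to_rgb_uint8(image):
    """Drop alpha/extra channels and convert a float image (values 0..1)
    to the (H, W, 3) uint8 layout that PIL's Image.fromarray can handle."""
    arr = np.asarray(image, dtype=np.float64)
    if arr.ndim == 3 and arr.shape[-1] > 3:
        arr = arr[..., :3]  # keep only R, G, B
    return np.clip(255.0 * arr, 0, 255).astype(np.uint8)
```

The resulting array can then be passed to `Image.fromarray` safely; whether the dropped channels matter depends on what the sampler actually put in them.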

  • @JoeEngelmann · 1 month ago

    Great tutorial! Do you happen to know of a way to use this concept/method to continue a previously generated clip output into another one? I'm trying to create a much longer animation out of many shorter ones.

    • @neuron_ai · 1 month ago

      Like connecting several sequences?

    • @uavresources222 · 1 month ago

      Yeah, there are a ton of online services that seem to do it, like LTX Studio, but they basically just do a transition - I can do that in Adobe. Your method seems far less likely to have issues, and as long as I understand the nodes you use properly, it seems to have less chance of hallucinating while making the transition from one clip to another.

  • @ulamss5 · 1 month ago

    8:35 So that's where the "Auto Queue" is that people only briefly mention and then disappear off the internet forever!

    • @neuron_ai · 1 month ago

      I don't understand what you mean. Can you explain?

  • @eltalismandelafe7531 · 1 month ago

    Hi. Can this be used to create a normal infinite zoom between different images without shaking or flickering?

    • @neuron_ai · 1 month ago

      It might be possible, but it is not meant for this.

  • @vasilybodnar168 · 1 month ago

    Thank you. But in my case (my setup is based on a GGUF model/clip), adding a negative prompt makes the generation time rise 3x.

    • @neuron_ai · 1 month ago

      That's a lot. There is a way where the negative prompt only gets applied to the early steps; maybe try that one. I might do a video on this as well.

  • @marweiUT · 1 month ago

    Great, thank you a lot !

  • @azzot-azzot · 1 month ago

    How did you use Flux (ControlNet) in the Stable Diffusion loader?

    • @neuron_ai · 1 month ago

      Is this not working for you?

    • @DingoAteMeBaby · 1 month ago

      @@neuron_ai Not working for me either. It says model type not detected.

  • @Ton_DayTrader · 1 month ago

    How do I change the background using my own images?

    • @neuron_ai · 1 month ago

      Simply load it with a Load Image node.

  • @hoagiemc · 1 month ago

    Great video! Thanks a lot! Finally I can completely leave A1111.

    • @neuron_ai · 1 month ago

      Yes, Deforum was a missing piece for me as well :)

    • @hoagiemc · 1 month ago

      @@neuron_ai I still cannot do everything I was able to do in A1111, but there is a learning curve. Right now I am trying to figure out how I can save the output as individual image files instead of saving it with the video node. It is not quite clear to me why I cannot use the IMAGE output of the "Cadence Interpolation" node to perform some upscaling, or convert it back to latent and do some latent upscale, etc. I always get errors about wrong parameters on those nodes - even a simple Image Preview does not work when I connect it to the IMAGE output of the Cadence node... 🤯

    • @neuron_ai · 1 month ago

      @@hoagiemc I discovered this as well; this might be a bug. We should file a bug report on GitHub or on the Deforum Patreon page.

  • @rd-cv4vm · 1 month ago

    You can add a simple ControlNet behind it; it will make things better.

  • @TheGreatResist · 1 month ago

    Yes! Very useful!!!

  • @AInfectados · 1 month ago

    Search for: FLUX.1-dev-Controlnet-Inpainting-Alpha

  • @InaKilometrosX1TUBO · 1 month ago

    Thx so much !!

  • @Royerbin · 1 month ago

    good ty

  • @noodlesunreal · 1 month ago

    Your videos are great! Maybe another Deforum one covering ControlNet (video) and hybrid video? You rock!

    • @neuron_ai · 1 month ago

      Great idea. Will do this soon!

  • @linyushen-1026 · 1 month ago

    very useful!!thanks!!♥

  • @Geffers58 · 1 month ago

    This is great, and almost works. I found the final Save Image box is broken, but I reverted to the simple Save Image. Also - not your fault of course, my idiocy - I spent ages trying to get the pattern matching working, thinking that if you have image_00055.png etc., you would have a start seed of 55, and puzzling over how you'd specify 5 digits. Then I realised it merely takes the files, presumably in sort order, and so seed 0 for that reason.
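The indexing behaviour described here can be mirrored in a few lines of Python. `nth_frame` is a hypothetical helper for illustration, not a node from the workflow; it assumes the loader simply takes files in sort order, so index 0 is the first file even if its name is image_00055.png:

```python
from pathlib import Path

def nth_frame(folder, index):
    """Return the file at the given position in sorted order, mirroring a
    loader that ignores the number embedded in the file name."""
    return sorted(Path(folder).glob("*.png"))[index]
```

This is why a folder starting at image_00055.png still needs a "seed" of 0, not 55.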

    • @neuron_ai · 1 month ago

      Hello, good to know that the advanced Save Image is broken. So many nodes are broken since the latest ComfyUI updates. Yes, zero is file 1. :)

  • @TheSmASHYPants · 1 month ago

    Couldn't get this to work, please supply the workflow

  • @neuron_ai · 1 month ago

    Hey guys, it seems that the TripoSR addon is broken at the moment. I am quite sure it has to do with the recent ComfyUI updates. You can try the CRM addon instead, but the installation is quite difficult. As soon as I have updates on this I will let you know.

    • @placebo_yue · 1 month ago

      Dear god, no. The standalone TripoSR broke, and now the ComfyUI one? I'll look up CRM, whatever that is.

    • @placebo_yue · 1 month ago

      CRM is also broken

    • @CrownCityMisfit · 27 days ago

      Any news on this? Fix on the horizon?

    • @neuron_ai · 25 days ago

      @@CrownCityMisfit Sadly nothing new :(

    • @HanSolocambo · 10 days ago

      I used it yesterday. Works perfectly fine. Can't get CRM to work though.

  • @giovannigiorgio1536 · 1 month ago

    I am getting a constant "loading scene" in the viewer and it seems nothing is happening. Please help, anybody 🤖

    • @bastianwibranek6063 · 1 month ago

      I get the same issue. Did you find a solution?

    • @deandresnago2796 · 29 days ago

      I get the same thing, but the output is saved. You can open an OBJ viewer online and drop it in to see it.

  • @giovannigiorgio1536 · 1 month ago

    Does anybody know how to solve the problem where the TripoSR viewer says "loading scene" and does nothing? Thanks.

    • @neuron_ai · 1 month ago

      I just tested the file with the current ComfyUI and it seems that something broke during the ComfyUI updates - maybe the new interface is not compatible with the TripoSR viewer. I will search for a solution.

    • @giovannigiorgio1536 · 1 month ago

      @@neuron_ai Do you have the same issue as me, with this constant "loading scene" text?

    • @neuron_ai · 1 month ago

      @@giovannigiorgio1536 Yes. With the latest ComfyUI I have the same; with the old ComfyUI I didn't have it. Unfortunately, I can not test all workflows for compatibility with new ComfyUI versions.

    • @giovannigiorgio1536 · 1 month ago

      @@neuron_ai Thank you very much for the info! Hopefully someone will find a solution soon.

    • @CrownCityMisfit · 1 month ago

      @@neuron_ai Thanks for the tutorial. Got it all working except for the viewer. I see that the OBJ is created in my output folder, but there is no texture to apply to the model - I guess this is part of the problem as well. Will keep an eye on this space for a solution, thank you!

  • @Vashthareaper · 1 month ago

    Uses 100% CPU for each frame generation, yikes.

    • @neuron_ai · 1 month ago

      AI is heavy! Did it crash?

    • @Vashthareaper · 1 month ago

      @@neuron_ai No, it doesn't crash, but I've never had 100% CPU usage with ComfyUI, even when using Flux. CPU temps spike to 100 °C every time it's on the "blend conditioning" node :(

    • @neuron_ai · 1 month ago

      @@Vashthareaper Maybe ask about this on the Deforum Discord.

    • @Vashthareaper · 1 month ago

      @@neuron_ai Did you check your CPU temps?

    • @neuron_ai · 1 month ago

      @@Vashthareaper Not yet. I can check it.

  • @Rimbo28 · 1 month ago

    Excellent idea, my friend! But it's not working for me. I replicated your workflow with the TripoSR model, but the results are so bad, man... very poor definition. I tried to modify the parameters, but nothing changed. What can I do?

    • @neuron_ai · 1 month ago

      Hey, unfortunately the state of AI 3D generation is not as awesome as in other AI areas. Depending on the kind of image or prompt you start with, results vary greatly. For some imagery it is just not working great.

  • @possiblynotrohit · 1 month ago

    With lowvram, can my 1660 Super run it? 😢

    • @neuron_ai · 1 month ago

      It could be not enough, I am sorry 😞

  • @AliRahmoun · 1 month ago

    would this work with flux?

    • @neuron_ai · 1 month ago

      Unfortunately not at the moment. I tried to run it with Flux in all different model and sampler combinations, but had no success. Some of the nodes used do not support Flux yet. When this changes, or I find another way, I will make a video on it.

    • @AliRahmoun · 1 month ago

      @@neuron_ai thanks for the reply and for sharing the knowledge! Subscribed and i'll be looking forward to it!

    • @neuron_ai · 1 month ago

      @@AliRahmoun Thank you 😊

  • @lixiagan · 1 month ago

    Hello - can the initial video and the video generated by this workflow only be combined in other editing software? Compositing the two videos inside the workflow can't be done yet?

    • @neuron_ai · 1 month ago

      I don't know a simple way to do this inside ComfyUI.

  • @nguyenngochoan5801 · 1 month ago

    How do I import IPAdapter Plus into ComfyUI? =((

    • @nguyenngochoan5801 · 1 month ago

      I can't find IPAdapter Advanced in ComfyUI.

    • @neuron_ai · 1 month ago

      You can install it via the Manager.

  • @spiritform111 · 1 month ago

    short and simple... thanks!

  • @mneyanmels · 1 month ago

    It's not working for me (( it's removing only one image.

    • @neuron_ai · 1 month ago

      Did you activate the batch processing in the manager panel?

  • @eveekiviblog7361 · 1 month ago

    Is it better than NF4? Have you tried?

    • @neuron_ai · 1 month ago

      I would say Dev and Pro are the best so far, regarding the results.

  • @dima-semi-ko · 1 month ago

    Thank you for your tutorial! I did it!!! Looks like you understand the process under the hood very well!

  • @ZanetwiceTV · 2 months ago

    Thank you! Thank you!

  • @Renzsu · 2 months ago

    I did a fresh ComfyUI portable download, but the Canny Edge node is nowhere to be found - also not via the links in your description. Any idea where to get it?

    • @neuron_ai · 1 month ago

      I am not at my PC right now, so I can not check. I might have forgotten some custom nodes. Try installing github.com/Fannovel16/comfyui_controlnet_aux

  • @upscalednostalgiaofficial · 2 months ago

    Nice. Do you have a workflow for videos?

    • @neuron_ai · 2 months ago

      You mean removing the background for all frames in a video? Not yet, but I will do one.

  • @LuisEnriqueReyesPerez-yt4su · 2 months ago

    Thanks!! Excellent work.

  • @WapooVideo · 2 months ago

    How do I put those generated images together into a video?

    • @neuron_ai · 2 months ago

      The video output node saves them to your output folder as a video.

  • @INVICTUSSOLIS · 2 months ago

    Is there ControlNet and IPAdapter for GGUF?

    • @neuron_ai · 2 months ago

      I think not at the moment. LoRAs should work with GGUF.

    • @neuron_ai · 2 months ago

      I just realized that the GGUF model does work with the XLabs sampler, so you should also be able to use ControlNets. IPAdapter might work as well - give it a try...

    • @INVICTUSSOLIS · 1 month ago

      @@neuron_ai For some reason it doesn't work for me. And I'm using a 32GB M1.

    • @neuron_ai · 1 month ago

      @INVICTUSSOLIS This is strange, but I tested one of the models on my 4090 and it didn't work either. Which model did you try?

    • @INVICTUSSOLIS · 1 month ago

      @@neuron_ai I tried both the Canny and Depth ones released by XLabs.

  • @antoinesaal2372 · 2 months ago

    Error: No module named 'controlnet_aux' (ControlNet). (I'm using Flux1-dev-Q4_K_S + ComfyUI + ControlNet.) There is an error at the "Canny Edge" node: "Error occurred when executing CannyEdgePreprocessor: No module named 'controlnet_aux'". I re-installed the ControlNet auxiliary nodes manually several times and updated everything. It didn't change anything.

    • @neuron_ai · 2 months ago

      ControlNet is not working with GGUF right now.