How to Prompt FLUX. The BEST ways for prompting FLUX.1 SCHNELL and DEV including T5 and CLIP.

  • Published 27 Nov 2024

COMMENTS • 58

  • @NextTechandAI
    @NextTechandAI  2 months ago +2

    How do you write prompts for FLUX? In natural language, tokens, or...?

    • @RASIhz
      @RASIhz 2 months ago

      Wow, this video is amazing, I didn't know these terms before

    • @NextTechandAI
      @NextTechandAI  2 months ago

      @RASIhz Thanks a lot, I'm happy that you find the video useful.

  • @digitalspacestudio3956
    @digitalspacestudio3956 2 months ago +3

    Wow! This is real magic! Thank you for explaining everything so easily!

    • @NextTechandAI
      @NextTechandAI  2 months ago +2

      Thank you very much for your feedback! I'm happy you find the information useful!

  • @rodopil1161
    @rodopil1161 15 days ago

    So, so USEFUL and essential! Thank you very much :)

    • @NextTechandAI
      @NextTechandAI  15 days ago

      Thank you for your feedback, I'm glad the video was useful for you 😀

  • @jayross661
    @jayross661 2 months ago

    Great video and loved the explanations and walkthrough. Thank you!

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Thank you very much for your motivating feedback!

  • @Marcus_Ramour
    @Marcus_Ramour 2 months ago

    Great video, and I'm really finding your Flux tutorials/explanations very useful. I find the way I was prompting in SDXL is working well in Flux too: natural language, starting with the type of image & style, then a description of the subject, then pose and location. Flux gets very close, which then allows fine tuning, whereas with SDXL I have to use a lot of ControlNet/IPAdapter alongside the prompts to get what I really want.
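
The prompt ordering this comment describes (image type & style, then subject, then pose and location) can be sketched as a small helper. This is a minimal illustration; the function and field names are my own assumptions, not part of any Flux or ComfyUI API:

```python
# Toy sketch of the natural-language prompt ordering described above:
# image type & style first, then subject, then pose and location.
# build_prompt and its parameter names are illustrative assumptions,
# not part of any Flux or ComfyUI API.

def build_prompt(image_type, style, subject, pose, location):
    """Join the structured parts into one natural-language sentence."""
    return (f"A {image_type} in {style} style of {subject}, "
            f"{pose}, {location}.")

prompt = build_prompt(
    image_type="photograph",
    style="cinematic",
    subject="an old lighthouse keeper",
    pose="leaning on the railing",
    location="on a storm-battered cliff at dusk",
)
print(prompt)
# -> A photograph in cinematic style of an old lighthouse keeper,
#    leaning on the railing, on a storm-battered cliff at dusk.
```

The point is the ordering, not the template: Flux's T5 encoder reads the whole sentence, so putting style and image type first and detail later mirrors how the commenter structures prompts by hand.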

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Thank you very much for your detailed feedback. Indeed, Flux and SDXL are not far from each other, and I'm not surprised that your approach works well.

  • @evolv_85
    @evolv_85 2 months ago

    This is great, thanks. It has saved me some time playing around with the prompts and settings. I started to move away from brackets and toward natural language prompts with SDXL to make things more straightforward, and got great results as long as I got the settings right. As soon as I set up Flux, I went straight to natural language and got awesome results straight away, particularly with the Schnell model. I am not seeing a great difference between the standard CLIP encoder and the Flux one.

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      Thank you very much for your detailed feedback. When generating with the same seed there is a noticeable difference between the standard and Flux text encoders, but you are right, the difference is not very big. Happy to read that you are using the SCHNELL model, too.

    • @evolv_85
      @evolv_85 2 months ago +1

      @@NextTechandAI Hi, no problem. It's great to share these things because it moves so fast. Today I've already found the FLUX NF4 version. It's half the size, twice as fast, and results are good so far; not amazing, but good enough.

  • @lowrider6419
    @lowrider6419 2 months ago

    My current wallpaper is: Three anthropomorphic hares in red, blue and green clothes are pulling a wooden cart with large wooden wheels with a huge single carrot, much larger than them. The action takes place in an autumn field with dried grass and small colourful meadow flowers growing along a dirt road. In the background, a dense forest can be seen in the distance.

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Great idea, thanks for sharing. Did a quick generation and both with Dev and Schnell it looks like a photo.

  • @Hilfe
    @Hilfe 1 month ago

    Crazy accent 😀👍🏼

  • @RodrigoAGJ
    @RodrigoAGJ 16 days ago

    I’m really eager to try out this interesting workflow! Where can I find it?

    • @NextTechandAI
      @NextTechandAI  16 days ago

      I'm glad the video is useful. Which workflow do you mean?

  • @Beauty.and.FashionPhotographer
    @Beauty.and.FashionPhotographer 2 months ago +1

    Suggestion for a cool video: not many talk about the "ProMax model" diffusion_pytorch_model_promax.safetensors

    • @NextTechandAI
      @NextTechandAI  2 months ago

      Thanks for the hint, but there are already videos about ProMax. Anyhow, I'll put it on my list.

  • @kukipett
    @kukipett 2 months ago

    I have also made a lot of tests with Flux and prompts, and I've noticed that Flux is not suited for art but more for hyperrealistic, photo-like images.
    There is a way to make it follow your prompts more closely: I saw a guy who was passing the model through a DynamicThresholdingFull node, and then you can use a negative prompt and a CFGGuider to force-inject a CFG that is normally set to 1 for Flux.
    And it works: I can add negative prompts and get a far more accurate image. I was surprised to see that my prompts were much better followed.
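
The CFG trick in this comment rests on how classifier-free guidance combines two model passes. A toy sketch with plain floats (real pipelines apply this to tensors of model predictions) shows why Flux's default CFG of 1 makes a negative prompt a no-op, and why raising it costs a second pass per step:

```python
# Toy sketch of classifier-free guidance (CFG). Real samplers apply
# this to tensors; scalar floats are enough to show the arithmetic.

def cfg_combine(uncond, cond, scale):
    """Blend the unconditional (negative-prompt) prediction with the
    conditional one. At scale 1.0 the result is exactly cond, which
    is why Flux's default CFG of 1 ignores the negative prompt."""
    return uncond + scale * (cond - uncond)

# At scale 1.0 the negative-prompt pass has no influence:
assert cfg_combine(0.25, 0.75, 1.0) == 0.75
# At scale > 1 both passes matter, so every step needs two model
# evaluations -- which is roughly why generation time doubles.
boosted = cfg_combine(0.25, 0.75, 3.0)  # 0.25 + 3 * 0.5 = 1.75
```

This matches the timing reported further down the thread: the second (negative) pass roughly doubles the per-step cost.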

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      Thanks for the detailed description of your workflow. As mentioned in the video, I think SCHNELL is suitable for art; maybe you have focused on DEV? Nevertheless, I tried out a workflow for negative prompts for this video. Unfortunately it works with DEV only, it makes the generation process very slow, and it has proven to be unreliable for me. Won't your workflow be slowed down by negative prompts?

    • @kukipett
      @kukipett 2 months ago

      @@NextTechandAI Well, I have only worked with DEV for now. As for speed, I've just made a test with the same settings on the normal generator and the special one.
      The normal takes 54 sec and the special 1 min 44 sec. I have to say that I have two LoRAs loaded and a 3080 Ti 12 GB GPU.
      I use the fp8 DEV and the t5xxl fp16.

    • @NextTechandAI
      @NextTechandAI  2 months ago

      @kukipett Then our experiences with DEV coincide. I've read that a negative prompt takes about twice as long because of the second pass; that would also fit. Thanks for sharing your numbers.

    • @evolv_85
      @evolv_85 2 months ago

      I'm using schnell and get amazing artwork. It's generating anything I tell it to so far.

  • @ShakouTheWolf
    @ShakouTheWolf 2 months ago

    Hello, Flux seems to be a model for realism, correct? But how much fantasy stuff are we able to render with it? For example, I can get DALL·E 3 to render cartoony inflated tigers as seen in Tom and Jerry. Can Flux do this too?

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      With SCHNELL it's not a problem to render fantasy stuff, see the dragons in my vid. Although you can create very realistic images with DEV, fantasy images are possible, too. It just does not follow your prompts as closely, but you can use lots of LoRAs to create certain styles.

    • @ShakouTheWolf
      @ShakouTheWolf 2 months ago

      Interesting! I would have to give it a try, but I'm not sure if it can exceed the quality I expect, since I have been using DALL·E 3 for that. I could show you examples through DMs or something @NextTechandAI

  • @oldfeiwang
    @oldfeiwang 1 month ago

    How do you weight the prompt in FLUX, like SD1.5's (word:weight)? It doesn't seem to work that way.

    • @NextTechandAI
      @NextTechandAI  1 month ago

      You have to use natural language and describe important items in more detail.
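
Since `(word:1.3)`-style weights are just literal text to Flux's encoders, old SD1.5/SDXL prompts are best cleaned before reuse. A small illustrative helper (the regex and function name are my own, not part of any library) strips that syntax so the emphasis can be rewritten as natural-language detail instead:

```python
import re

# SD1.5/SDXL "(phrase:weight)" syntax is not interpreted by the Flux
# text encoders; parentheses and numbers just become literal tokens.
# This illustrative helper strips that syntax when porting old prompts.
WEIGHT_RE = re.compile(r"\(([^():]+):[0-9.]+\)")

def strip_sd_weights(prompt: str) -> str:
    """Replace every "(phrase:weight)" group with just the phrase."""
    return WEIGHT_RE.sub(r"\1", prompt)

old = "portrait of a knight, (golden armor:1.4), (sunset:1.2)"
print(strip_sd_weights(old))
# -> portrait of a knight, golden armor, sunset
```

After stripping, the Flux-style fix is what the reply above says: describe the formerly weighted element in more detail ("ornate golden plate armor glowing in the light of a low sunset") rather than numerically boosting it.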

  • @as-ng5ln
    @as-ng5ln 18 days ago

    DEV has its own VAE

    • @NextTechandAI
      @NextTechandAI  18 days ago

      What do you mean? There is one VAE for Flux, but some checkpoints have it included directly.

    • @as-ng5ln
      @as-ng5ln 18 days ago

      @NextTechandAI DEV has a special VAE that can be downloaded on Hugging Face, maybe that is why the images turned out so poorly

    • @NextTechandAI
      @NextTechandAI  18 days ago

      @@as-ng5ln No, there is one VAE for Flux. This has absolutely nothing to do with the fact that Schnell follows prompts better than DEV. Try it yourself and generate the same image with both VAE files. By the way, you can try this with SD3.5 Large and Turbo, too.

    • @as-ng5ln
      @as-ng5ln 18 days ago

      @@NextTechandAI I'm telling you... I have the two files "ae.safetensors" and "flux1DevVAE_safetensors.safetensors". ae comes from schnell, while the other one is from the dev directory

    • @NextTechandAI
      @NextTechandAI  18 days ago

      @as-ng5ln Yes, and they have the same effect on Flux image generation. As I said, try it yourself.

  • @CryptoPRO-fo5wi
    @CryptoPRO-fo5wi 2 months ago

    You can do simple inpainting with image-to-image in ComfyUI. But how are you going to use Flux NF4?

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      Right, but how is this related to prompting, the topic of the video? BTW, I'm using Flux GGUF, not NF4 (ua-cam.com/video/B-Sx_XCAqzk/v-deo.html).

    • @CryptoPRO-fo5wi
      @CryptoPRO-fo5wi 2 months ago

      @@NextTechandAI Flux NF4 works fine with 8GB VRAM, but when I try to run Flux Q4, it fails. It seems like Q4 requires more VRAM.

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      @CryptoPRO-fo5wi Interesting; in the comments of my GGUF vid there is some positive feedback regarding 8GB VRAM cards and less. In theory NF4 is optimized for speed and GGUF is optimized for size. With 8GB you should easily run Q2_K and Q3_K_S. If this works you could try Q4_K_S, which has higher quality. Anyhow, you should use the latest updates for GGUF; there have been several optimizations.

    • @CryptoPRO-fo5wi
      @CryptoPRO-fo5wi 2 months ago +1

      Thanks, I'll try Q2_K and Q3_K_S first, then see if Q4_K_S works. I'll also make sure to update GGUF to the latest version for those optimizations.

  • @Asyouwere
    @Asyouwere 1 month ago

    Nice video. Suggestion: lower the music; the excessive ducking is distracting.

    • @NextTechandAI
      @NextTechandAI  1 month ago

      Thanks for your feedback and suggestion.

  • @rijnhartman8549
    @rijnhartman8549 1 month ago

    You should create a custom GPT in ChatGPT with this in the backend

  • @AInfectados
    @AInfectados 2 months ago

    Link to your workflow please, and can you add a node for LoRAs?

    • @NextTechandAI
      @NextTechandAI  2 months ago

      I've used the standard workflow you can find in Comfy's examples. If you don't know them, you'll find a link to my FLUX installation video in the description. If you want to try the mentioned GGUF models, the link to my GGUF video is in the description, too.

  • @AInfectados
    @AInfectados 2 months ago

    How do I get the CLIP ENCODER FLUX node?

    • @NextTechandAI
      @NextTechandAI  2 months ago

      In the description you'll find a link to my video about installing FLUX; it has what you need.

  • @CanaldoLipeSt
    @CanaldoLipeSt 2 months ago

    I'm new to AI, and I'm having difficulties with Flux when it comes to creating people with facial expressions, for example: sadness, anger, joy, etc.
    I also have difficulty simulating action/movement, such as: jumping, running, sitting, lying down, etc.
    Flux doesn't seem to be very friendly with camera movement; it hasn't been easy to get certain camera angles using the prompt.
    Anyone else having this difficulty?

    • @NextTechandAI
      @NextTechandAI  2 months ago +1

      Are you using DEV or SCHNELL? SCHNELL reacts better to prompts, as described in my video. Facial expressions are not always perfect, but sadness, anger, and joy look different. Camera settings are indeed difficult. SCHNELL reacts to, e.g., expansive focus and narrow focus, but I haven't found a reliable way to determine the camera height.

    • @CanaldoLipeSt
      @CanaldoLipeSt 2 months ago

      @@NextTechandAI Thanks for answering. I use Dev but I also have Schnell; I didn't know about the difference between them. I'm going to do some tests with the other version and see if I have better results! Thanks!

    • @NextTechandAI
      @NextTechandAI  2 місяці тому

      @CanaldoLipeSt I'm happy if the tip was helpful. Thanks for your feedback.