AINxtGen
FLUX Fine Tuning with LoRA | Unleash FLUX's Potential
Discover how to unleash the hidden potential of FLUX through fine-tuning with LoRA (Low-Rank Adaptation). Join us as we explore how LoRA can enhance FLUX's capabilities and how to apply the technique to your own projects. Get ready to take your AI image generation skills to the next level!
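For readers new to the technique, the core idea of LoRA can be sketched in a few lines of numpy: instead of updating a full weight matrix W during fine-tuning, training learns a small low-rank pair (B, A) whose product is added to W. The shapes and rank below are invented purely for illustration; real FLUX layers are far larger:

```python
import numpy as np

# Frozen base weight of the pretrained model (illustrative shapes).
d_out, d_in, rank = 64, 64, 4
W = np.random.randn(d_out, d_in)

# LoRA trains only two small matrices: B (d_out x r) and A (r x d_in).
A = np.random.randn(rank, d_in)
B = np.zeros((d_out, rank))  # B starts at zero, so training starts from the base model

alpha = 1.0  # LoRA strength applied when merging

# Effective weight at inference: base plus the low-rank update.
W_adapted = W + alpha * (B @ A)

# Trainable parameters shrink from d_out*d_in to rank*(d_out + d_in).
full_params = d_out * d_in           # 4096
lora_params = rank * (d_out + d_in)  # 512
```

At rank 4 the update has 512 trainable parameters versus 4096 for the full matrix, which is why LoRA fine-tuning fits on consumer GPUs.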
Prompts:
ainxtgen.notion.site/UA-cam-Note-c59efa43404f4338ad3158712733512e
LoRA:
huggingface.co/AINxtGen/ScarlettJohansson_LoRA_FLUX
Resources used in this video:
1. fal.ai
2. replicate.com/lucataco/ai-toolkit/train
3. github.com/ostris/ai-toolkit/blob/main/config/examples/train_lora_flux_24gb.yaml
4. github.com/kohya-ss/sd-scripts
5. github.com/Nerogar/OneTrainer
6. github.com/comfyanonymous/ComfyUI_examples/tree/master/flux
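For local training with ai-toolkit (resource 3 above), the example config looks roughly like the trimmed fragment below. This is a sketch from memory, not the authoritative file: key names and defaults may differ between versions, so check the linked train_lora_flux_24gb.yaml before using it.

```yaml
job: extension
config:
  name: my_flux_lora
  process:
    - type: sd_trainer
      network:
        type: lora
        linear: 16          # LoRA rank
        linear_alpha: 16
      datasets:
        - folder_path: /path/to/images   # images plus matching .txt captions
          caption_ext: txt
          resolution: [512, 768, 1024]
      train:
        steps: 2000
        lr: 1e-4
        batch_size: 1
        gradient_checkpointing: true
      model:
        name_or_path: black-forest-labs/FLUX.1-dev
        is_flux: true
        quantize: true      # needed to fit training in 24 GB VRAM
```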
#flux #stablediffusion #comfyui #FLUXFineTuning #aiimagegeneration #machinelearning #deeplearning #aitutorialforbeginners
Views: 24,504

Videos

ComfyUI: FLUX + ControlNet + IPAdapter (SDXL Version) Integration
Views: 6K • 5 months ago
Learn how to leverage Flux with ControlNet and IPAdapter (SDXL) in ComfyUI. This up-to-date tutorial covers integration tricks. Workflow: openart.ai/workflows/penguin_miserly_20/comfyui-flux-controlnet-ipadapter-sdxl-version-integration/F0cTkrDNS5QbJw3kNCqz Resources used in this video: ComfyUI examples: github.com/comfyanonymous/ComfyUI_examples/tree/master/flux huggingface.co/xinsir/controln...
AI NEWS: Top 10 Innovations Revolutionizing the Tech Industry | 2024 #1
Views: 66 • 5 months ago
Discover the cutting-edge AI innovations shaping our future! From OpenAI's SearchGPT to META's Llama-3.1, we explore 10 groundbreaking developments in AI technology. Witness real-time CGI creation, autonomous drones, and hyper-realistic image generation. Don't miss out on these game-changing advancements from industry leaders like Midjourney, StabilityAI, and more. Stay ahead in the AI revoluti...
FLUX 1 Schnell / Dev Local Install Guide / ComfyUI and trying online services
Views: 3.5K • 5 months ago
Watch now to see Flux in action and join the AI revolution! Don't forget to like, subscribe, and share your Flux creations in the comments below. #Link to try online: fal.ai/models/fal-ai/flux/dev replicate.com/collections/flux #Link to comfyUI Flux: github.com/comfyanonymous/ComfyUI_examples/tree/master/flux #Link to download Model: CLIP: huggingface.co/comfyanonymous/flux_text_encoders/tree/m...

COMMENTS

  • @India.stories · 2 months ago

    Thanks so much! With your tutorial I was finally able to understand style transfer using IPAdapter.

  • @KCi-x2u · 2 months ago

    Can this be used in the process of creating character turnarounds and sheets that look exactly like what was intended?

  • @pedrohenriquespl1038 · 2 months ago

    Hey buddy, how are you doing? This is by far the best video I've seen so far about LoRA training! Thanks a lot!! When you say that if you were going to retrain this LoRA you'd need to prepare better quality data, what do you mean by that? More pictures? Better pictures? Different settings when training? Thanks bro 👊

  • @Kirmm · 2 months ago

    I'm stuck on the Anyline Preprocessor. Just can't import it; it says (IMPORT FAILED). Tried the "Try Fix" button, tried uninstalling and reinstalling, but no go. Any tips?
    EDIT: Okay, fixed it with the AIO Aux Preprocessor. But now I get an error with ClipVision. I downloaded the IP-Adapter models into an IPAdapter folder but still no go:
    Error occurred when executing IPAdapterUnifiedLoader: ClipVision model not found.
      File "C:\COMFY-UI\ComfyUI\execution.py", line 152, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "C:\COMFY-UI\ComfyUI\execution.py", line 82, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "C:\COMFY-UI\ComfyUI\execution.py", line 75, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "C:\COMFY-UI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus-main\IPAdapterPlus.py", line 559, in load_models
        raise Exception("ClipVision model not found.")

  • @fabioroncal8763 · 3 months ago

    It just doesn't work... I can't get the Krita and checkpoint models. It would be more helpful if you put links to all the necessary assets.

  • @BrowneNorton-h5u · 3 months ago

    Thomas Ronald Williams Brenda Lopez Susan

  • @Ittiz · 3 months ago

    You want better results? Hand-write the captions for each training image in the same way you like to write your own prompts!

    • @AINxtGen8 · 3 months ago

      I agree, writing captions manually will usually yield better results.
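For context, trainers such as ai-toolkit and kohya-ss (linked in the description) conventionally read each image's caption from a .txt file with the same base name, so hand-writing captions is just a matter of creating those files. A minimal sketch — the folder, file names, captions, and the trigger word "mytoken" are all invented for the example:

```python
from pathlib import Path
import tempfile

# Hand-written captions, phrased the way you'd phrase your own prompts.
captions = {
    "photo_01.jpg": "mytoken woman, close-up portrait, soft natural light",
    "photo_02.jpg": "mytoken woman, full body, city street at night",
}

dataset = Path(tempfile.mkdtemp())  # stand-in for your real dataset folder
for image_name, caption in captions.items():
    (dataset / image_name).touch()  # placeholder for the real image file
    # Common trainer convention: photo_01.jpg -> photo_01.txt
    (dataset / image_name).with_suffix(".txt").write_text(caption)

print(sorted(p.name for p in dataset.iterdir()))
# -> ['photo_01.jpg', 'photo_01.txt', 'photo_02.jpg', 'photo_02.txt']
```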

  • @hasstv9393 · 3 months ago

    Replicate is best because it costs $2.

  • @frizzfrizz3550 · 3 months ago

    Great video! I want to contact you for a chat or a call. How can I do that?

  • @sankyuubigan · 4 months ago

    When do you think models without censorship will appear, with all the celebrities already trained in? I mean communities where these models are published, of course only for introductory viewing, because NSFW content can't be made since it's very bad from a moral point of view.

  • @Reddkomet · 4 months ago

    Can you make a tutorial for creating style LoRAs?

    • @AINxtGen8 · 3 months ago

      Yes, I am planning to make a video about style LoRA training.

  • @omegablast2002 · 4 months ago

    To reply to the title: literally no one said it was hard, it's just extremely, painfully long.

  • @quangminhnguyen7834 · 4 months ago

    Can I use the trained LoRA to generate images on any free website that has Flux?

  • @steve-g3j6b · 4 months ago

    Would love a follow-up video once you've learned the best way to use those sliders on the fal website.

  • @fahimabdulaziz4255 · 4 months ago

    Can I train a LoRA for a consistent streetwear t-shirt design style?

    • @AINxtGen8 · 4 months ago

      Certainly, you can train a LoRA for a consistent streetwear t-shirt design style. Training for a specific style is generally more challenging than training for a character, but it's definitely achievable. Here are some tips to help you succeed:
      - Data preparation: gather a larger dataset of high-quality images (at least 50 good-quality images). There's no need to crop them, thanks to the bucketing technique that fal also uses.
      - Training steps: I recommend increasing the number of training steps to at least 2000. This gives the model more time to learn the nuances of the style.
      - Learning rate: start with a learning rate of 0.0002. You can adjust this later if needed.
      - Checkpoints: use the new fal feature called "Experimental Multi Checkpoints Count" and set it to save 4 checkpoints during training. This is crucial because it lets you test different stages of the model after training and choose the one that produces the best results.
      Remember, training for a style requires more attention to detail and experimentation. Don't be discouraged if your first attempt isn't perfect; it often takes some fine-tuning to get the desired results.

    • @fahimabdulaziz4255 · 4 months ago

      @@AINxtGen8 thank you so much, Ma Sha Allah

  • @shirleywang9584 · 4 months ago

    Hi, I'm Tess from Digiarty Software. Interested in a collab?

  • @TheColonelJJ · 4 months ago

    Thank you for adding how much VRAM you have!!! That was helpful! I also have 12 GB.

  • @eveekiviblog7361 · 4 months ago

    You can also try the CPDS ControlNet and add a LoRA for Flux. You can use Florence-2 and Ollama LLMs to remove the style from the first text bar. BTW, I really expect to see a comparison of the GGUF (let's say Q8) and NF4 Flux models using Union!

  • @Rachelcenter1 · 4 months ago

    5:55 But why do you have a positive/negative prompt in the middle of the workflow and then another prompt box on the far left-hand side of the workflow?

    • @AINxtGen8 · 4 months ago

      Thank you for asking. The prompt on the left is for creating a realistic, black and white style photo, while the prompt on the right side of the workflow is for instructing how to create an illustration-style image. I could reuse the prompt on the left side and connect it directly, but this would cause conflicting results between the instruction prompt and the IPAdapter, which follows the illustration style. You can use the same prompt, but in such a case, you should remove the specific description of the photo style and only describe the object itself. This approach will help avoid conflicts and ensure consistency in the final output.

  • @Rachelcenter1 · 4 months ago

    When I open the workflow it says the "Set Node", "Get Node" and "AnyLine Preprocessor" nodes are missing, so I go to "install missing" and it prompts me to install "anyline". The program restarts, I hit refresh in the browser, and it's still missing.

    • @AINxtGen8 · 4 months ago

      Alternatively, you can use the "AIO Aux Preprocessor" node from the "ControlNet Auxiliary Preprocessors" extension (it has similar functionality). imgur.com/qBdKLcH github.com/Fannovel16/comfyui_controlnet_aux

  • @ShakkerAI · 4 months ago

    Cooperation, may I have your email address?

  • @mehmetalirende · 4 months ago

    What about combining 2 LoRAs in 1 picture, for couples?

    • @aknownj · 4 months ago

      A whole romantic getaway to any fictional destination of your imagination

    • @AINxtGen8 · 4 months ago

      Yes, you can: use the LoRA Stack node in ComfyUI. Refer to this workflow: openart.ai/workflows/macaque_keen_26/flux-with-multi-lora-loader-workflow/DfB4A8yL27WCwgEGi3YA or try running it on Replicate: replicate.com/lucataco/flux-dev-multi-lora

    • @ronnydaca · 4 months ago

      @@AINxtGen8 Is it possible with Forge?

    • @AINxtGen8 · 4 months ago

      @@ronnydaca In Forge you can also load multiple LoRAs and adjust the weight of each, but I haven't actually tested the results of Flux LoRAs on Forge. imgur.com/HYCFTrq

  • @chrisgg · 4 months ago

    I think using a celebrity creates good results out of the box, without training a model?

    • @AINxtGen8 · 4 months ago

      As I mentioned in this part of the video: 00:00:20 I chose Scarlett Johansson for testing purposes. The reason for this choice is that when I used her name as a keyword, Flux generated images that didn't resemble Johansson. This suggests that her name was likely removed from Flux's training data. I selected Scarlett Johansson for this test because she is a well-known celebrity, which makes it easier to compare the results before and after training.

  • @부정선거4.15 · 4 months ago

    Hi thanks. Where could I get the images I need to use?

    • @AINxtGen8 · 4 months ago

      Hi! Thank you for your question. Depending on what type of LoRA you want to train - whether it's for a character, object, or style - one of the most commonly used image sources is Google (filtered for large images): images.google.com/advanced_image_search Alternatively, you can also use AI image generators to create a dataset for training. One example of this approach uses ComfyUI; you can refer to this workflow: openart.ai/workflows/serval_quirky_69/one-click-dataset/QoOqXTelqSjMwZ0fvxQ9

  • @paulfranco9673 · 4 months ago

    How did you get it to generate the thumbnail? I'm trying to use Flux to generate multiple views of characters but I'm struggling to do so. If you could give me some guidance, please!

    • @AINxtGen8 · 4 months ago

      The prompt will generally be like below, with the keyword here being "character design sheet". Below is the prompt that I used ChatGPT to create (I input a similar sample image and then asked ChatGPT to generate this prompt): " Character design sheet for Scarlett Johansson as Black Widow in modern 2D animation style. Horizontal layout. Left side: full body front and side views in signature black catsuit with front zipper. Right side: two close-up face views (3/4 and profile) showing detailed features. Add third full body view in dynamic fighting pose. Short wavy red hair, large green eyes with highlights, bold red lips. Exaggerated body proportions for visual appeal. Clean, sharp lines with minimal shading. Flat colors with subtle highlights. Include varied facial expressions: neutral, smiling, serious. Add rear view and close-ups of iconic accessories (e.g. wrist gauntlets, belt). White background with soft shadows. Professional, polished illustration style reminiscent of high-end animated series. "

  • @sebastianpodesta · 4 months ago

    Hi, if I want to make a LoRA to give people baby faces or Asian faces, should I make a LoRA with many different Asian or baby faces? What would make a good dataset?

    • @AINxtGen8 · 4 months ago

      Hi, as I understand it, you want to create a cute, kawaii baby-face style. If you're just creating a general image in this style, Flux can do it; try some of the prompts below to see. If you want this style for a specific face, you'll need to create a LoRA for that face, then combine it with style keywords like those below. Another method that doesn't require a LoRA is IPAdapter Face, but it only works well on SDXL versions. Currently FLUX doesn't have a well-functioning IPAdapter; Xlabs has just released an IPAdapter model for FLUX, but it's not very good.
      Reference prompts:
      - "Asian with baby face, cute chibi style, big eyes"
      - "Kawaii Asian portrait, childlike expression"
      - "Cartoon Asian character, baby face, adorable"
      - "Chibi Asian, oversized head, tiny body, playful smile"
      - "Cute Asian portrait, youthful features, cartoon-like eyes"
      Images created from the prompts: imgur.com/a/SQP9Ln5

  • @zorayanuthar9289 · 4 months ago

    Great guide but poor choices relating to models... Cameltoe come-on 😂

  • @sirishkumar-m5z · 4 months ago

    Machine Learning: SmythOS’s pre-configured support for machine learning frameworks accelerates model development and deployment, streamlining the machine learning lifecycle.

  • @sirishkumar-m5z · 4 months ago

    SmythOS’s modularity allows for easy customization and extension of AI capabilities, ensuring it meets specific project needs. This adaptability is key for addressing diverse AI research and application requirements.

  • @Huang-uj9rt · 4 months ago

    As a beginner I must also say that your videos are really very friendly, thank you very much. Because of my professional needs and the high learning threshold of Flux, I'd been using mimicpc to run Flux before; it can load the workflow directly, I just have to download the Flux model, and it handles the details wonderfully. But after watching your video and running Flux on mimicpc again, I finally had a different experience. I feel like I'm starting to get the hang of it.

  • @AINxtGen8 · 4 months ago

    Fal.ai: fal.ai/models/fal-ai/flux-lora-general-training You can also train LoRAs on Civitai and replicate.com: civitai.com/models/train replicate.com/ostris/flux-dev-lora-trainer/train If your computer has a powerful GPU, you can train locally; script for training on a local machine: github.com/ostris/ai-toolkit/tree/main

    • @부정선거4.15 · 4 months ago

      @@AINxtGen8 thanks bro

    • @steve-g3j6b · 4 months ago

      What if I want my generations to be 16:9? Should I use that size of pictures to train, or is 1:1 best?

    • @AINxtGen8 · 4 months ago

      @@steve-g3j6b Hello, thank you for your question. In fact, you don't need to crop your images to a specific size, because I recently learned that fal.ai also uses the ai-toolkit script from Ostris for training LoRA. This script supports a technique called "bucketing", which automatically groups images of similar aspect ratios together during training, so you no longer need to crop your images to a specific size manually.
      Bucketing lets the model train efficiently on images of various sizes and aspect ratios. It works by grouping similar-sized images into "buckets" and processing them together, which helps maintain image quality and reduces the need for excessive resizing or cropping. This is particularly useful for datasets with images of different dimensions, as it preserves the original aspect ratios while still allowing efficient batch processing during training.
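The bucketing described above can be sketched as: assign each image to the candidate resolution whose aspect ratio is closest to its own, so images are batched by bucket instead of being cropped to one size. The bucket list and image sizes below are illustrative, not the ones ai-toolkit actually uses:

```python
# Candidate training resolutions (width, height), all near one megapixel.
BUCKETS = [(1024, 1024), (832, 1216), (1216, 832), (768, 1344), (1344, 768)]

def nearest_bucket(width, height):
    """Pick the bucket whose aspect ratio is closest to the image's own."""
    ratio = width / height
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ratio))

# Example dataset with mixed aspect ratios; no cropping required.
images = {
    "portrait.jpg": (768, 1152),
    "square.jpg": (900, 900),
    "wide.jpg": (1920, 1080),
}
assignment = {name: nearest_bucket(w, h) for name, (w, h) in images.items()}
print(assignment)
```

During training, all images assigned to the same bucket are resized to that bucket's resolution and batched together, which is why 16:9 and 1:1 sources can coexist in one dataset.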

    • @steve-g3j6b · 4 months ago

      @@AINxtGen8 I would imagine it will make much better backgrounds too (assuming the AI will also learn some of the BG)

    • @steve-g3j6b · 4 months ago

      @@AINxtGen8 would be a cool vid to have a comprehensive look at this workflow.

  • @rtberbary0101 · 4 months ago

    For some reason it keeps failing for me. It doesn't start the training even though I changed nothing; I only uploaded my photos and a trigger word, same as you did. Anyone else having this issue?

    • @AINxtGen8 · 4 months ago

      Have you tried clicking the "see log" button in the left-hand window after clicking the "start" button? Does the log show anything?

    • @rtberbary0101 · 4 months ago

      @@AINxtGen8 I figured it out! Apparently there is a limit on photos: you can add a maximum of 99 images for the training. Anything beyond that results in an error.

  • @ahtoshkaa · 4 months ago

    Great guide, thank you!

  • @artificial_director · 4 months ago

    Thanks for the video! I wonder if one could use it to replace fashion shoots. I would 1) train on a certain character/person/model (photorealistic of course), 2) then train, let's say, a skirt or fashion piece, maybe on a couple of images of the piece, and 3) then somehow combine them. How would you do this? Would you also use ControlNet for this?

    • @AINxtGen8 · 4 months ago

      Yes, you can. Here's a simplified approach:
      1. Train a LoRA for Flux to create your specific character/model. Use ControlNet Pose to control the model's posing accurately.
      2. Use ComfyUI's CatVTON node to dress the AI-generated model in different outfits.
      This method combines character-specific LoRA models with virtual try-on technology. You can refer to the node below: github.com/chflame163/ComfyUI_CatVTON_Wrapper openart.ai/workflows/HaxcrNaVvjae9pdkut64

    • @artificial_director · 4 months ago

      @@AINxtGen8 thanks a lot!

  • @kronosiii9379 · 4 months ago

    Thank you for the tutorial. The Anyline Preprocessor by TheMisto does not load, even after updating all of ComfyUI and a manual installation to pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Anyline

    • @AINxtGen8 · 4 months ago

      After installation, try restarting Pinokio to see if it resolves the issue. Since I don't use Pinokio, I'm not sure about the exact cause. Alternatively, you can use the "AIO Aux Preprocessor" node from the "ControlNet Auxiliary Preprocessors" extension (it has similar functionality). imgur.com/qBdKLcH

    • @kronosiii9379 · 4 months ago

      @@AINxtGen8 Thank you for the quick reply! I'll verify this soon!

  • @sdprompts · 4 months ago

    AI images 👍 AI voice 👎

    • @AINxtGen8 · 4 months ago

      Thanks for your feedback! I totally get it about the AI voice. My English isn't good, and when I tried recording myself, it sounded pretty rough. I worried viewers might struggle to understand me. While AI voices can't match a fluent speaker's emotion, I think it's better for tutorials than my voice right now. I'm always trying to improve, though! Any suggestions on making the videos better? I'm all ears!

  • @anagnorisis2024 · 4 months ago

    There's an issue with TheMisto Anyline, so the workflow seems broken.

  • @filterophilicxx5914 · 4 months ago

    Which LLM did you use to convert an image to a prompt? Is it open source? Please guide me.

    • @AINxtGen8 · 4 months ago

      I use monica.im (a multi-AI integration service, like Poe). For open-source LLMs, you can refer to the following nodes, which all have similar functions: github.com/gokayfem/ComfyUI_VLM_nodes github.com/kijai/ComfyUI-Florence2 github.com/pythongossss/ComfyUI-WD14-Tagger github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger

    • @SebAnt · 4 months ago

      Excellent tutorial!!

  • @debdutbhadurishorts · 4 months ago

    Can I use multiple people's LoRAs in the same pic? For example, LoRAs of Scarlett and Donald Trump dancing together. And if yes, then how?

    • @AINxtGen8 · 4 months ago

      Yes, you can train separate LoRAs and then load them together. If you're using ComfyUI, there's a node called "LoRA Loader Stack" in the rgthree extension (which can be installed via Comfy Manager). Use that node to load multiple LoRAs, and adjust the strength of each LoRA to get good results. imgur.com/a/GldHkqE
      I understand that Donald Trump was just an example, but if you want to quickly test whether Flux has been trained on a specific keyword, there's a recently launched website called fastflux.ai that can do this. The site uses the Flux Schnell model and generates images at very high speed. imgur.com/PWOiPMM imgur.com/gubtT0v
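Numerically, stacking LoRAs just sums each adapter's low-rank update into the same base weight, scaled by its own strength, which is why per-LoRA weight sliders exist and why two character LoRAs can be balanced against each other. A toy numpy sketch; the shapes and strengths are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 32, 4
W = rng.normal(size=(d, d))  # frozen base weight

# Two independently trained LoRAs (e.g. two different characters),
# each a pair (B, A) of low-rank factors.
lora_a = (rng.normal(size=(d, r)), rng.normal(size=(r, d)))
lora_b = (rng.normal(size=(d, r)), rng.normal(size=(r, d)))

def apply_loras(W, loras, strengths):
    """Add each LoRA's low-rank update B @ A, scaled by its strength slider."""
    out = W.copy()
    for (B, A), s in zip(loras, strengths):
        out = out + s * (B @ A)
    return out

# Strengths play the role of the per-LoRA weight sliders in the stack node.
W_both = apply_loras(W, [lora_a, lora_b], strengths=[0.8, 0.7])
```

Lowering one strength toward zero fades that LoRA's influence out while leaving the other intact, which mirrors what adjusting the sliders does in the UI.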

    • @agnosticatheist4093 · 4 months ago

      You mean lora lora lora lora.....?

  • @hellfire3278 · 4 months ago

    Can I train a LoRA model to control the measurements of a mannequin? The idea is to use trigger words for the waist, chest, and hip measurements, for example: (chest: 94cm; waist: 72cm; hips: 98cm). However, I'm unsure if all of these can be incorporated into a single LoRA model, as it might become complicated. In short, do you know how the trigger words interact with the training dataset?

    • @AINxtGen8 · 4 months ago

      Thank you for your interesting question about controlling mannequin measurements with AI. While training a LoRA model for this purpose is creative, it would be complex, and it might be hard to achieve the desired results. I haven't seen anyone create a LoRA specifically for controlling measurements (possibly because of that difficulty). Training such a model to accurately control multiple body measurements simultaneously (chest, waist, hips) would require an extensive and precisely labeled dataset, which would be difficult to create and maintain.
      Instead, I suggest using ControlNet, a simpler and potentially more effective approach. ControlNet allows detailed control during image generation, using sketches or guide images to control the mannequin's shape and measurements. This method offers several advantages:
      - Precise control: create a basic sketch with the desired measurements.
      - Flexibility: easily adjust the body shape by modifying the input sketch.
      - Consistency: generate multiple images with the same measurements.
      - Intuitive workflow: drawing or modifying a sketch is often easier than fine-tuning complex prompts.
      ControlNet can provide more accurate and consistent results for controlling mannequin measurements than the LoRA approach.

  • @silas-dd5ll · 4 months ago

    JUST USE KOHYASS, FREE AND BETTER

  • @HimanshuChaudhari-y9t · 4 months ago

    Krita diffusion tutorial

  • @ee89199 · 4 months ago

    Thank you! Can I use this to train my dog?

    • @AINxtGen8 · 4 months ago

      Yes, of course you can.

    • @artificial_director · 4 months ago

      @@AINxtGen8 I think ee89199 is trying to be funny 🤔

  • @RKKrish-p1b · 4 months ago

    Awesome tutorial, thanks...

  • @VKS551 · 5 months ago

    I followed the guide, which starts at 5:55, step by step. Unfortunately it doesn't work. 1. The link to "flux examples" is no longer there in the latest version of Comfy Manager. 2. I downloaded everything from the description and moved it to the correct folders; the checkpoint is still not recognized.

    • @AINxtGen8 · 5 months ago

      Thank you for pointing out the problem with the link to Flux on the ComfyUI page; I've updated it in the description. About the checkpoint: try changing the file extension, for example flux1-schnell.sft -> flux1-schnell.safetensors, and then refresh ComfyUI.
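The rename suggested above can also be scripted for a whole folder. A small sketch; the temporary directory here is a stand-in for wherever your ComfyUI models/checkpoints folder lives:

```python
from pathlib import Path
import tempfile

models_dir = Path(tempfile.mkdtemp())        # stand-in for ComfyUI/models/checkpoints
(models_dir / "flux1-schnell.sft").touch()   # the file as downloaded

# Give every .sft file the .safetensors extension ComfyUI expects.
for ckpt in models_dir.glob("*.sft"):
    ckpt.rename(ckpt.with_suffix(".safetensors"))

print([p.name for p in models_dir.iterdir()])  # -> ['flux1-schnell.safetensors']
```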