How to make an AI Instagram Model Girl on ComfyUI (AI Consistent Character)

  • Published Jan 6, 2025

COMMENTS •

  • @Aiconomist
    @Aiconomist  4 months ago

    📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course

  • @BobDoyleMedia
    @BobDoyleMedia 1 year ago +20

    This was excellent. Thank you for showing people there are other and better options than the Midjourney face swap method which of course limits your flexibility tremendously. I'm actually in the process of doing this exact thing and was building my ideal workflow, but what you did with the outfit and masking is really fantastic. Very helpful.

  • @thedawncourt
    @thedawncourt 9 months ago +2

    IPAdapterApply node fails every time I try to use the workflow. I'm a noob help please :(

  • @r2Facts
    @r2Facts 9 months ago

    This was absolutely AMAZING! I've watched countless videos, spent a good amount, but nothing as detailed and as great as this. Definitely subscribing to this channel

  • @Thomas_Leo
    @Thomas_Leo 10 months ago +1

    Great video. I find using the prompt, "staring directly into the camera" works well for portrait shots or using "portrait". I'm also glad you're using low quality prompts instead of high quality and cinematic photography. Most average phone users don't have access to high quality cameras. 😁

  • @DYYGITAL
    @DYYGITAL 1 year ago +9

    where do you get all the images for clothing and poses from?

  • @shinkotsu6559
    @shinkotsu6559 1 year ago +5

    Load CLIP Vision model: what model do I load? Where do I find this model.safetensors?

  • @rishabjain6076
    @rishabjain6076 1 year ago +1

    Perfect video, the only consistent and detailed video I was looking for. This is a gem. Thank you so much

  • @terrorcuda1832
    @terrorcuda1832 1 year ago +8

    Fantastic video. Simple, straightforward and well explained.

  • @AnjarMoslem
    @AnjarMoslem 7 months ago +1

    Where should I put the IPAdapter Plus models? I put them in "custom_modules->ComfyUI_IPAdapter_plus\models" but it didn't detect the model?

  • @AmeliaIsabella_x
    @AmeliaIsabella_x 8 months ago +1

    How can you increase the accuracy of retaining the face? The face, although the difference is quite subtle, was noticeably different in each generation, which a follower would notice, as I did just in this video. Thanks!

  • @mysticmango-fl3ej
    @mysticmango-fl3ej 10 months ago

    That's actually really incredible, the psychology and systems that go into making that much money

  • @ejro3063
    @ejro3063 1 year ago +102

    There's nothing comfy about ComfyUI

    • @xviovx
      @xviovx 1 year ago +1

      This 🤣🤣

    • @EpochEmerge
      @EpochEmerge 1 year ago

      @@xviovx then you should do it manually on A1111 to get an idea of why it's called comfy

    • @AdrianArgentina-nd7rg
      @AdrianArgentina-nd7rg 1 year ago +1

      Agree😂

    • @FudduSawal
      @FudduSawal 1 year ago +2

      The flexibility it gives us over other tools is justified

    • @otherrings2887
      @otherrings2887 1 year ago +1

      😂😂😂

  • @undefinedpanda-t3n
    @undefinedpanda-t3n 1 year ago +8

    Hey! Thank you for the video. Can you advise on one problem? I used your workflow, but I get this error: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]). I don't understand where it comes from

    • @gerardoperez6787
      @gerardoperez6787 1 year ago

      Could you solve it? I have the same issue and don't understand how to proceed

    • @tiggy4591
      @tiggy4591 1 year ago +10

      So for that I had to get the correct clip_vision model.
      I don't know that it will let me post a link in the comments, so here is how to find it:
      Go to the "All Useful Links & Workflow Visit:" link in his description.
      Go to "IPAdapter plus Models HuggingFace Link".
      Go to the main directory, then the "Models" folder, then the "image_encoder" folder. Download "model.safetensors".
      From there you have to put the model you downloaded in your ComfyUI clip_vision model folder.
      That same place also has alternative IPAdapter models.

    • @sunnyandharia907
      @sunnyandharia907 1 year ago

      @@tiggy4591 thank you very much, it worked like a charm

    • @tiggy4591
      @tiggy4591 1 year ago

      @@sunnyandharia907 Awesome, I struggled with it a bit last night. I'm glad it helped.

    • @xReLoaDKryPoKz
      @xReLoaDKryPoKz 1 year ago

      @@tiggy4591 i love you! you are the GOAT
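
The size mismatches reported in this thread come from pairing an IPAdapter checkpoint with the wrong CLIP Vision encoder. As a rough guide (an assumption based on the standard OpenCLIP encoder widths, not something stated in the video), the second dimension printed in the error identifies the encoder the IPAdapter expects:

```python
# Hypothetical helper: map the hidden size from the "size mismatch for
# proj_in.weight" error to the CLIP Vision encoder the IPAdapter expects.
# The widths below are the usual OpenCLIP ones; verify against your models.
ENCODER_BY_WIDTH = {
    1024: "CLIP ViT-L/14",     # e.g. SD 2.x style image encoder
    1280: "CLIP ViT-H/14",     # used by most SD1.5 IPAdapter models
    1664: "CLIP ViT-bigG/14",  # used by SDXL IPAdapter models
}

def expected_encoder(checkpoint_shape):
    """checkpoint_shape is the torch.Size printed first in the error."""
    _, width = checkpoint_shape
    return ENCODER_BY_WIDTH.get(width, "unknown encoder")

# "[768, 1280] from checkpoint, current model is [768, 1024]" means the
# IPAdapter checkpoint wants a ViT-H encoder, but a 1024-wide one was loaded:
print(expected_encoder((768, 1280)))  # CLIP ViT-H/14
```

Read this way, the fix above is exactly right: download the ViT-H "model.safetensors" image encoder and place it in the ComfyUI clip_vision folder.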

  • @ChristianKozyge
    @ChristianKozyge 1 year ago +4

    What's your CLIP Vision model?

  • @petEdit-h9l
    @petEdit-h9l 10 months ago +1

    What if I just wanted to change the pose alone, no change of clothes or anything else? How do I go about that, pls?

  • @DeeprajChanda-vt7kc
    @DeeprajChanda-vt7kc 8 months ago +2

    The IPAdapter Apply node is not found; I can't figure out how to fix this. Any solutions?

    • @gammingtoch259
      @gammingtoch259 8 months ago

      I have the same problem; I tried using others but they don't work

  • @SerenitySounds0101
    @SerenitySounds0101 6 months ago

    Anyone know why my KSampler step would be extremely slow? I followed these directions to a T, twice, and both times the process stalled at KSampler and took 10-15 mins for just 2-4 images. Help!

  • @Aiconomist
    @Aiconomist  8 months ago +4

    Hey everyone! 😊
    I'm planning to create a comprehensive course on creating a virtual influencer from scratch and growing an Instagram account. It'll be a long and detailed course, so I'm thinking of making it a paid course, but at a reasonable price. What do you think? Would you be interested in something like that?

  • @sirjcbcbsh
    @sirjcbcbsh 3 months ago

    I have followed every step but the result is not as good as yours… it turns out I am getting distorted faces. I have switched to other checkpoint models (SD1.5), still the same 😢😢

  • @spiritform111
    @spiritform111 10 months ago

    great tutorial... very easy to follow. thank you!

  • @elleelle6351
    @elleelle6351 1 year ago +3

    I have a question: if we have a clothing or jewelry brand deal, how can we make the model wear that product?

  • @dqschannel
    @dqschannel 4 months ago

    I just installed it using Pinokio, but how do you start the creation of the image?

  • @Rodinrodario
    @Rodinrodario 1 year ago +3

    Where did u get the IPAdapter, how did u config and install it, which CLIP Vision did u use, and where did u get it? Where did u get the OpenPose? I tried to find it all by myself, but the end result is that my faceswapping looks like dogshit. Can u help?

    • @ramondiaz5796
      @ramondiaz5796 10 months ago

      I was the same when I saw the video, frustrated because it doesn't explain many details. But I was able to make it work on my own: I took the time to look for everything he mentions separately and watch videos, and I was able to get what was missing.

    • @EnesHaleYağmur
      @EnesHaleYağmur 26 days ago

      @@ramondiaz5796 how?

  • @DarioToledo
    @DarioToledo 1 year ago +5

    I wish I'd watched your video sooner. Well, if only you'd posted it earlier 😂😂 Just today I came to a similar solution to achieve this. And now I still have a question: what if you want to switch to a full-body figure, or want a profile or rear view of your model? Do the masks and the images going into the IPAdapters remain unchanged? Or do you have to switch the IPAdapter's image and mask accordingly?

  • @otaviokina22
    @otaviokina22 8 months ago

    My model is coming out with two heads all the time; do you know how I can solve it? I've tried several negative prompts but it doesn't help.

  • @beastemp627
    @beastemp627 8 months ago +1

    I can't find the IPAdapter Apply nodes, what should I do?

    • @Aiconomist
      @Aiconomist  8 months ago +3

      I'm updating this workflow because a lot has changed since then. IPAdapter version 2 is even more advanced now. Be sure to check out my latest videos for all the updates.

    • @gammingtoch259
      @gammingtoch259 8 months ago

      @@Aiconomist Please update this. I tried using other nodes to adapt something similar to your .json file, but nothing works for me :(. An error appears: "copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])".
      Please help us and update the .json file on the web, thank u very much

  • @ewzxyhh6180
    @ewzxyhh6180 11 months ago +2

    It worked, thanks. Where do I find the openposes to download?

  • @caluxkonde
    @caluxkonde 7 months ago

    How do I keep up to 3 or more characters consistent, with only style changes via the prompt?

  • @AnjarMoslem
    @AnjarMoslem 7 months ago

    I get this error while using your workflow from Gumroad: "ClipVision model not found". Help me please

  • @SH-lh9ow
    @SH-lh9ow 6 months ago +1

    Thanks for this video! Amazing! No matter what I try, I don't get the option of applying the Apply IPAdapter node. What am I missing? Would be thankful for any help!

  • @Crysteps
    @Crysteps 1 year ago +6

    Hey, thanks for the video, but where do you download all the ControlNets from, and how do you install them into ComfyUI? Also, where did you get the CLIP Vision model from?

  • @conxrl
    @conxrl 1 year ago

    Can't get my character's eyes to look normal :( any tips?

  • @satishpillaigamedev
    @satishpillaigamedev 6 months ago

    Hi, I'm having an issue with my generation: before upscaling, the face looks messed up. Any suggestions?

  • @arujjwalnegi1597
    @arujjwalnegi1597 4 months ago

    Where do I need to put the model for IPAdapter? It is not detecting the downloaded model anywhere.

  • @Anynak69
    @Anynak69 1 year ago +1

    Cool, but what about a different perspective view of the face? It seems like the face always keeps the same perspective regardless of OpenPose settings. Is there any way to fix this?

  • @pixelriegel
    @pixelriegel 1 year ago +1

    Had an error with IPAdapter Apply, anyone have an idea why it could be happening?
    Error occurred when executing IPAdapterApply:
    Error(s) in loading state_dict for Resampler:
    size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).

    • @RagesmithFTW
      @RagesmithFTW 11 months ago

      did you ever figure out the issue? I have the same thing happening to me

    • @pixelriegel
      @pixelriegel 11 months ago

      @@RagesmithFTW Yeah, I think it was that there are two different CLIP Vision models for the IPAdapter, and depending on the model you have to use one or the other.

    • @RagesmithFTW
      @RagesmithFTW 11 months ago

      @@pixelriegel alright, thanks, I'll try to see if I can figure it out

    • @Kifor98
      @Kifor98 11 months ago

      Any ideas how to fix it? @@RagesmithFTW

  • @maxdeniel
    @maxdeniel 7 months ago +1

    Hi friend, a couple of questions here:
    1) How do I get the KSampler with the image preview? I just have the normal one with no image preview.
    2) I searched for the "Ultimate SD Upscaler" but did not find it; is it something I have to install? If so, where can I download it from?
    3) I did find the Image Upscale Loader node, but did not find the 4x_foolhardy.pth option. Is that something I have to download from somewhere else? Which folder do I drop it in, so it appears on the node as an option next time?
    Generally speaking, there are some tools and features you use in your video that we don't know where they came from. Another option is to buy your workflow, which is not a problem because you are doing an amazing job and we can support you that way; however, if I purchase the workflow, I will have the same problem because of the missing tools, and in the end the workflow won't work as expected.
    I will do some research on how to get those tools and then come back to this video. Thanks bro!

    • @om1kkk
      @om1kkk 3 months ago

      same
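
On questions (2) and (3) above: "Ultimate SD Upscale" is a custom node installed through ComfyUI-Manager, while ESRGAN-style .pth upscalers are plain files. A minimal sketch of where they go, assuming the standard ComfyUI folder layout (the path is an assumption; check your own install):

```python
from pathlib import Path

# ESRGAN-style upscalers (like 4x_foolhardy) belong in models/upscale_models;
# only files in that folder show up in the "Load Upscale Model" node's dropdown.
def upscaler_visible(comfy_root, model_file):
    path = Path(comfy_root) / "models" / "upscale_models" / model_file
    return path.is_file()
```

So dropping the .pth file into `ComfyUI/models/upscale_models` and restarting ComfyUI should make it appear as an option on the node.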

  • @diego_Mcfly2023
    @diego_Mcfly2023 1 month ago +1

    A YouTube account called Future Frontier just copied your tutorial, only changing the voice. Good tutorial by the way!

  • @vodun87
    @vodun87 4 months ago

    The workflow does not work any more. Error occurred when executing IPAdapterUnifiedLoader:
    ClipVision model not found.
    And don't say I did something wrong, I used the .json

  • @amrshbaitah
    @amrshbaitah 6 months ago

    Someone help me please, I didn't find the Load IPAdapter in my nodes and models

  • @DarioToledo
    @DarioToledo 1 year ago +2

    And another question: what's the purpose of setting the denoise value to 0.75 in the KSampler at 7:45 with an empty latent?

    • @JustFeral
      @JustFeral 1 year ago

      There is none, because an empty latent image is just pure noise. He likely meant to convert an image into latent space or something.

    • @ehsanrt
      @ehsanrt 1 year ago

      Partially right... however, I think the latent isn't totally empty: there is OpenPose, and each generation after the first gets seeds and clips from the last one. Less denoise = fewer changes...

    • @DarioToledo
      @DarioToledo 1 year ago

      @@JustFeral indeed, this must be related to the IPAdapter in some way, or I can't see the point, as he's starting from an empty latent. What's the point of partially denoising the noise?
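
For context on this denoise question: in KSampler-style samplers, a denoise value below 1.0 typically means running only the tail end of the noise schedule. A hedged sketch of the usual arithmetic (this is the common convention, not necessarily exactly what the video's workflow does):

```python
def effective_steps(total_steps, denoise):
    """With denoise < 1.0 the sampler skips the earliest (noisiest) part of
    the schedule and runs only the last total_steps * denoise steps."""
    return int(total_steps * denoise)

print(effective_steps(20, 0.75))  # 15 of 20 steps actually run
print(effective_steps(20, 1.0))   # 20: full generation from pure noise
```

Which supports the point above: starting from a truly empty latent, denoise 0.75 just means less total denoising; the setting only pays off when the latent already carries structure (e.g. an encoded image).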

  • @Mauriziotarricone
    @Mauriziotarricone 1 year ago

    I have an issue where Load CLIP Vision doesn't load the safetensors file

  • @brandonzidzik
    @brandonzidzik 1 month ago

    anybody got a link to the workflow?

  • @BerdanTopyürek
    @BerdanTopyürek 1 year ago +2

    The "model.safetensors" file at 5:07 is too large for the file system to install. What can I do?

    • @rpharbaugh
      @rpharbaugh 11 months ago

      I've been trying to find this

    • @EnesHaleYağmur
      @EnesHaleYağmur 26 days ago

      I'm looking for an answer to this too; I actually couldn't find the file online. Do you have a link? Were you able to solve that problem?

  • @Lenovicc
    @Lenovicc 8 months ago

    Where can I download the model for CLIP Vision?

  • @AnjarMoslem
    @AnjarMoslem 7 months ago

    thanks for making this video, I just bought your workflow

  • @VaibhavShewale
    @VaibhavShewale 1 year ago +1

    So what are the minimum system requirements?

  • @FudduSawal
    @FudduSawal 1 year ago +1

    How do you resolve bad hands efficiently? Is there any neat trick besides negative embeddings?

    • @Aiconomist
      @Aiconomist  1 year ago

      I fix bad hands in pictures using Photoshop and inpainting.

  • @L3X369
    @L3X369 4 months ago

    Out of context, but what AI voice do you use? Can it be done locally?

  • @NicholasLaDieu
    @NicholasLaDieu 11 months ago

    this is wild! Thanks.

  • @downoldtime5280
    @downoldtime5280 9 months ago +1

    Where do you get the ControlNet poses? @aiconomist

    • @Aiconomist
      @Aiconomist  9 months ago +1

      You can use openposes.com

  • @Rodinrodario
    @Rodinrodario 1 year ago

    Can u help me? Why is the face sometimes ugly and sometimes perfect, depending on which pose I take?

  • @eros_loveProducer
    @eros_loveProducer 11 months ago

    How do you get to see the KSampler and upscaler progress??
    I'm searching for it and I can't find it.
    PLEASE!

  • @zr_xirconio__3577
    @zr_xirconio__3577 1 year ago +6

    Hey, nice tutorial, really well explained in detail. I am getting an error when running KSampler and I was wondering if you could help me with that:
    "It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify."
    One thing that is different between my workflow and yours is that in the CLIP Vision loader I am using "clip_vision_vit_h.safetensors" instead of "model.safetensors", because I couldn't find the file you are using on the web. Any chance you could post a link to that file or help me resolve this error?
    Thanks in advance

  • @parthsit
    @parthsit 8 months ago

    "InsightFace must be provided for FaceID models." Anyone getting this error?

  • @RichardMJr
    @RichardMJr 1 year ago +3

    Hey, thanks for the video! I cannot find the Ultimate SD Upscaler. Has it been removed? If so, is there something else you would suggest in its place?

    • @MrGenius2
      @MrGenius2 5 months ago

      @RichardMJr you need to install it from the Manager; you can do it manually or do it like he does at the start

  • @crow-mag4827
    @crow-mag4827 1 year ago +1

    Excellent video!

  • @prashanthravichandhran5688
    @prashanthravichandhran5688 1 year ago +1

    How do I add my brand's custom clothing?

  • @fabioespositovlog
    @fabioespositovlog 1 year ago +2

    Great video, one question: why, if I change the initial checkpoint, whether keeping or removing the LoRA and connecting the model directly to the IPAdapter, does the whole process stop working because of a different size of the matrices?

  • @davidsik2402
    @davidsik2402 10 months ago

    Hey, I have a question: how can I do all of this if I already have a model generated in Stable Diffusion?

  • @matyourin
    @matyourin 11 months ago

    Hm... I finally got all the needed models and nodes and it still does not work as shown... I get an error message "Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])." And then it goes on with execution.py line 85, line 78, ... but I think the cause is something to do with the resolution of the images I put in? Is that somehow relevant? Like, do all input images have to be the same resolution?

    • @christophsch7839
      @christophsch7839 11 months ago

      Read the comment from @alexalex9511, someone posted an answer

  • @lilillllii246
    @lilillllii246 11 months ago

    When I use the same photo of clothes, almost identical characters appear, but when the image of the clothes changes, a completely different character appears. What should I fix?

  • @wilmot5484
    @wilmot5484 1 year ago +2

    Error occurred when executing KSampler:
    mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
    Can someone help me with that?

    • @stephan935
      @stephan935 1 year ago

      I think for the initial face generation SDXL is used, but then later when you use IP and ControlNet, you should use a 1.5 based model instead. Probably you get the error because you are mixing SDXL and 1.5 models.

    • @wilmot5484
      @wilmot5484 1 year ago

      It was a problem regarding mixing SDXL with SD..., my fault. By the way, I still can't make the final image with clothes similar to the face one... Do you have any advice? @@stephan935

    • @wilmot5484
      @wilmot5484 1 year ago

      @@stephan935 fixed, thank you 🙏

    • @kentverge
      @kentverge 11 months ago

      @@stephan935 I have a bunch of models. How can I tell which are 1.5?

    • @stephan935
      @stephan935 11 months ago

      @@kentverge I honestly can't tell you. Sometimes it's in the name. I mostly know it based on the information on civitai. There you find the model it's based on (e.g. SDXL or 1.5).
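
There is also a programmatic way to tell SDXL checkpoints from SD1.5 ones, sketched here under the assumption that the checkpoints are .safetensors files with the usual key layout (SDXL ships a second text encoder under `conditioner.embedders.1`; SD 1.x/2.x keeps a single one under `cond_stage_model`) — treat the prefixes as an assumption and verify against your own files:

```python
def guess_base_from_keys(keys):
    """Classify a checkpoint by its state-dict key prefixes."""
    if any(k.startswith("conditioner.embedders.1") for k in keys):
        return "SDXL"  # second (OpenCLIP bigG) text encoder present
    if any(k.startswith("cond_stage_model.") for k in keys):
        return "SD 1.x / 2.x"  # single text encoder
    return "unknown"

def guess_base_model(path):
    # Lazy import so the helper above works without the package installed.
    from safetensors import safe_open  # pip install safetensors
    with safe_open(path, framework="pt", device="cpu") as f:
        return guess_base_from_keys(list(f.keys()))
```

Distinguishing SD 1.x from 2.x would additionally need the text-encoder width (768 vs 1024), but for the SDXL-vs-1.5 mixups in this thread the key check is enough.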

  • @jonnym4670
    @jonnym4670 1 year ago

    Any idea what kind of graphics card you can pull this off with???
    I have an RX 6700 XT and an old laptop with a GTX 1050, so it's clear I will need to upgrade, but what is the minimal card, so I don't have to spend a lot?

  • @BunniesAI
    @BunniesAI 1 year ago +4

    I was/am finding this extremely useful. However, am I the only person who can't seem to find any reference to this CLIP Vision file? Pls help 🙏🏻

    • @melonso18
      @melonso18 1 year ago

      I was looking for 2 hours yesterday, no chance 😢

    • @knownaschaz
      @knownaschaz 1 year ago

      Manager > Install Models > CLIPVision model (IP-Adapter). If you followed this guide exactly, you want the SD1.5 version

  • @TalZac
    @TalZac 1 year ago +1

    New to this, why not combine with faceswap?

  • @Blakerblass
    @Blakerblass 1 year ago +3

    Excellent video, brother, quite interesting.

  • @EvanKrom
    @EvanKrom 11 months ago

    Cool video!
    My KSampler got stuck (loading 0% over 25 steps) after the IP Adapter. Don't know why, but maybe it's related to the AMD graphics card I am using?
    Generation of the image and upscaling were working...

  • @Stellaa_donnaa
    @Stellaa_donnaa 11 months ago

    If I get a MacBook M3 Pro, can I easily install ComfyUI and work?

  • @Benny-or7fl
    @Benny-or7fl 1 year ago

    Amazing! Any recommendations on how to optimize it for SDXL? For a reason I can't explain, if I update all the models I'm getting worse results with SDXL compared to 1.5… 🤔

  • @mikesalomon2695
    @mikesalomon2695 1 year ago

    Wow, it seems very difficult but great at the same time. I will try tomorrow

  • @JpresValknut
    @JpresValknut 1 year ago

    How can I use multiple prompts at the same time? Say, put 5 into the queue and then move on to the next one? The same thing that, in the Stable Diffusion default UI, can be done by using the text file or textbox at the very bottom?

  • @dqschannel
    @dqschannel 4 months ago

    After 3 installs I deleted it, as it kept saying the directory didn't exist, which was B.S.

  • @AnjarMoslem
    @AnjarMoslem 7 months ago

    Where do I download the OpenPose model?

  • @bordignonjunior
    @bordignonjunior 11 months ago +1

    The video is great, but you did not provide the links to download the models.
    I have downloaded your workflow, tried to install all the models the same as you have, and always get an error.
    I'm sure the problem is on my side, but you could take more time to explain the details.

  • @ehteshamdanish000
    @ehteshamdanish000 11 months ago

    So I tried, and everything works. But the next time I open ComfyUI the character face changes. How do I fix that? Can you make a video on this?

  • @lilillllii246
    @lilillllii246 11 months ago

    I'm always thankful. Rather than first creating a female model with a prompt, is it possible to import a photo of a female model?

  • @rezasaremi4090
    @rezasaremi4090 1 year ago

    How can I make a fashion model?! I mean the same person with different outfits that I choose. Thanks

  • @sohiyel
    @sohiyel 1 year ago +1

    Are they really consistent? I don't see Jennifer Lawrence in any of the final generated samples!

  • @stasatanasov4263
    @stasatanasov4263 7 months ago

    I am trying to find a course on creating a virtual influencer, so if you can tell me when you make it, that would be great!

  • @chiptaylor1124
    @chiptaylor1124 1 year ago

    Did anyone happen to figure out what CLIP Vision model was used?⁉

  • @TLabsLLC-AI-Development
    @TLabsLLC-AI-Development 1 year ago

    This is great stuff.

  • @Specialfx999
    @Specialfx999 1 year ago +2

    Amazing content. Thanks for sharing.
    Is it possible to place her in a specific environment by providing a reference image of the environment?

  • @ehteshamdanish000
    @ehteshamdanish000 11 months ago

    You have shared everything, thank you for that. If you could also share the OpenPose image and the character dress image, it would be appreciated

  • @szachgr43
    @szachgr43 1 year ago

    Does that work on MacBook machines?

  • @MenGrowingTOWin
    @MenGrowingTOWin 11 months ago

    I still have no idea how you do the shortcut to search for nodes.

  • @wilderlg
    @wilderlg 5 months ago

    thank you bro!

  • @johnjd9640
    @johnjd9640 11 months ago

    Wow, this is nice. I wish there was an easier way to do this, or maybe this is just too complicated for me :(

  • @DesignDesigns
    @DesignDesigns 11 months ago

    Awesome......

  • @ryutaroosafune8756
    @ryutaroosafune8756 1 year ago +1

    Thanks for the great tutorial and the sample JSON files! I was able to do almost the same thing using the sample JSON, but for some reason the face part is broken and doesn't show a beautiful face like in your tutorial. For Automatic1111, ADetailer can be used to beautifully redraw face images as well, but currently ADetailer is not available for ComfyUI. Is there something I should do?

    • @toonleap
      @toonleap 1 year ago +2

      There is a plugin called Face Detailer, but of course it needs more nodes and connections, making the workflow more complicated.

  • @khalpen3856
    @khalpen3856 11 months ago

    Made it to 4:36... Where do I get the blank-face model clothes image from?

    • @nopixeltime
      @nopixeltime 10 months ago

      Did you find out?

    • @khalpen3856
      @khalpen3856 10 months ago

      @@nopixeltime nope, did you?

  • @brandonyork9924
    @brandonyork9924 1 year ago

    Why won't it show the preview image on my screen?

  • @Halil-fi3pq
    @Halil-fi3pq 11 months ago

    How do I download the CLIP Vision model?

  • @CostinVladimir
    @CostinVladimir 1 year ago

    I am going to ignore that you took MKBHD's voice, and thank you for the tutorial :P

  • @MunbMe
    @MunbMe 1 year ago +3

    But the clothes are not 100% the same, right?

    • @Aiconomist
      @Aiconomist  1 year ago +1

      You're correct, the clothes aren't always 100% the same. However, by adjusting the Ksampler denoise strength and IP adapter weight value, I can usually achieve about 80 to 90% similarity in the clothing details. also it depends on the type of clothes.

  • @MarcJordan-zn5wn
    @MarcJordan-zn5wn 1 year ago

    Can you run this in Chrome? 😅

  • @ewzxyhh6180
    @ewzxyhh6180 1 year ago

    I can't make IPAdapter work, please make a video on downloading it

  • @techvishnuyt
    @techvishnuyt 9 months ago

    Please do one for Automatic1111

  • @tengdongmei
    @tengdongmei 1 year ago

    Very good, but the generated face is not very much like the reference, and the clothes can't be transferred exactly the same; they come out a little similar but still very different. What is a good way to fix this?