Negative Embeddings - ULTRA QUALITY Trick for A1111

  • Published 25 Aug 2024
  • Negative embeddings can help a lot to improve your image quality. Here is how to use them in A1111. I also show you my unsharp-mask trick to get much better results when upscaling.
    #### Links from the Video ####
    huggingface.co...
    huggingface.co...
    huggingface.co...
    huggingface.co...
    Support my Channel:
    / @oliviosarikas
    Subscribe to my Newsletter for FREE: oliviotutorial...
    How to get started with Midjourney: • Midjourney AI - FIRST ...
    Midjourney Settings explained: • Midjourney Settings Ex...
    Best Midjourney Resources: • 😍 Midjourney BEST Reso...
    Make better Midjourney Prompts: • Make BETTER Prompts - ...
    My Affinity Photo Creative Packs: gumroad.com/sa...
    My Patreon Page: / sarikas
    All my Social Media Accounts: linktr.ee/oliv...
  • Howto & Style

COMMENTS • 121

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +18

    #### Links from the Video ####
    huggingface.co/yesyeahvh/bad-hands-5/tree/main
    huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main
    huggingface.co/nick-x-hacker/bad-artist/tree/main
    huggingface.co/datasets/Nerfgun3/bad_prompt/tree/main

    • @havemoney
      @havemoney 1 year ago +2

      Thanks as always for the URLs :D

    • @Mandraw2012
      @Mandraw2012 1 year ago +3

      Hey there @OlivioSarikas, I wanted to know: is that an extension you use to get stuff from your clipboard to your img2img canvas at 4:20?

    • @medmen04
      @medmen04 1 year ago +2

      @@Mandraw2012 That's an Opera GX thing.

    • @precursor4263
      @precursor4263 1 year ago +1

      Are there any embeddings for bad eyes? I know there's the face restoration option, but that usually makes the images photorealistic, and sometimes it doesn't work very well for artsy stuff. I don't want to be inpainting eyes, considering I'm working with batch img2img.

    • @LouisGedo
      @LouisGedo 1 year ago +1

      👋

  • @fenrir20678
    @fenrir20678 1 year ago +95

    Quick little tip: instead of copying and pasting or memorizing the names of the negative embeddings, just click the "Show/hide extra networks" button in the middle under the Generate button. There you can see all of your embeddings. Just click once in the negative prompt and then simply select which negative embedding you would like to use.

    • @polystormstudio
      @polystormstudio 1 year ago +1

      Thanks for the tip!

    • @S4SA93
      @S4SA93 1 year ago +1

      That's nice, but it does not add the pointy brackets. So I wonder, does it need the brackets if it is not adding them itself?

    • @nickkatsivelos6613
      @nickkatsivelos6613 1 year ago

      @@S4SA93 I think it is all taken care of. Here is the output when I did a run:
      "Textual inversion embeddings loaded(4): bad-artist-anime, bad-ar..." - no brackets, just a comma between each - and I had other negative prompt text in there with it.

    • @S4SA93
      @S4SA93 1 year ago

      @@nickkatsivelos6613 Yeah, it seems to work without the brackets, but I am wondering why he adds them then.

    • @SantoValentino
      @SantoValentino 1 year ago

      What fork are you running? That's not in auto1111... I see it in the vladmandic fork.

  • @benjamininkorea7016
    @benjamininkorea7016 1 year ago +22

    Very nice Photoshop process. I realized that working artistically with Photoshop can save a lot of trouble; for example, just brush out an extra finger instead of inpainting 20 times and hoping. But the sharpening trick is a real game changer!

  • @AI_EmeraldApple
    @AI_EmeraldApple 1 year ago +13

    There are other embeddings, like ng_deepnegative_v1_75t, bad-image-v2-39000, bad-picture-chill-75v, verybadimagenegative_v1.3, and Unspeakable-Horrors-64v, that work with many models too!

  • @Vitaliy_zl
    @Vitaliy_zl 1 year ago +8

    You can also use an edge-detection filter in Photoshop, invert the resulting image (Ctrl+I), and use that image as a mask on the sharpened image to avoid the over-sharpening artifacts shown in this video.
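
    For anyone who wants to script that Photoshop recipe, here is a minimal Pillow sketch (file names and filter values are placeholders, and FIND_EDGES is only a rough stand-in for Photoshop's edge-detection filter):

        # Sharpen everywhere except along hard edges, where halos tend to form.
        from PIL import Image, ImageFilter, ImageOps

        original = Image.open("upscaled.png").convert("RGB")

        # Unsharp-mask the whole image first (values are illustrative).
        sharpened = original.filter(
            ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

        # Edge-detect, then invert (the Ctrl+I step) so edges end up black = protected.
        edges = original.convert("L").filter(ImageFilter.FIND_EDGES)
        mask = ImageOps.invert(edges)

        # Where the mask is white, take the sharpened pixels; along the black
        # edges, keep the original pixels.
        result = Image.composite(sharpened, original, mask)
        result.save("sharpened_masked.png")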

  • @AltoidDealer
    @AltoidDealer 1 year ago +7

    Heya, I used your cocktail (minus the anime one) and it's great! However, I also tested adding the popular "easynegative" embed to see what would happen... after comparing dozens of outputs with/without it, I determined that at 0.5 weight it improved images even further. Note that I was testing on realistic images and omitted the anime negative embed you showed.

  • @nio804
    @nio804 1 year ago +10

    One of my favourite tricks is to use LoRAs with negative weights. You can get some fun effects with the right LoRA (a syntax sketch follows after this thread).

    • @moon47usaco
      @moon47usaco 1 year ago +1

      That's an excellent idea, I will try that soon... =]
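
      For reference, LoRA weights in A1111 prompts go inside the <lora:name:weight> tag, so the negative-weight trick above looks like this ("some-style" is a placeholder file name):

          # Hedged illustration of a negatively weighted LoRA tag in an A1111 prompt;
          # a negative multiplier pushes the output *away* from what the LoRA learned.
          prompt = "portrait of a knight, <lora:some-style:-1.0>"
          print(prompt)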

  • @Ureroll
    @Ureroll 1 year ago +4

    Nice tip. It actually makes sense that a sharper image would produce finer details when re-upscaling. For the opposite reason, I would be careful with upscaling after those blurring touch-ups in the editor and would leave it as a last step. In my experience, any manual blurring or smearing has a high chance of being interpreted as part of the background, unless a higher denoise is set, but that mangles everything at that point. Go back and forth long enough and the color-shifting monster will get you: the colors slowly shift, a really dark blue drifts toward purple, and the blacks go up in gamma. I have not found a real solution for that issue. I tried the option in the settings and the cutoff plugin; nothing has really worked so far. It would be so cool to just paint something in manually or smear off an extra finger in Photoshop, send it to img2img for a beauty pass, go back to Photoshop, work some more... but the colors move around too fast for that workflow. Is there a ControlNet just for the tones and hue? That would be massive!

  • @rproctor83
    @rproctor83 1 year ago +12

    Be careful with embeddings: they are normally trained on specific models, and when those models are updated but the embeddings are not, you will get a bit of distortion. As the models progress while the embedding stays the same, that distortion becomes more and more prevalent. To further complicate things, embeddings also affect your other networks, like LoRA and LyCORIS; if those were trained on some other model, the results can be drastically altered in a negative way. Not to mention things like Clip Skip and CFG - they also greatly alter the results of the embeddings.

  • @coda514
    @coda514 1 year ago +1

    Great info as always. Also, you have a really nice looking virtual home. 😉

  • @justspartak
    @justspartak 1 year ago

    Delightful result! 👍 After sharpening, the skin appears better and there is more detail throughout the image.

  • @eugeniusro
    @eugeniusro 1 year ago +1

    In Stable Diffusion it is very helpful to use negative prompts. Interacting with the AI, I was amazed at how similar it is to human thinking; come to think of it, we were programmed the same way, including with negative prompts such as the Ten Commandments from the Bible. 😀

  • @michail_777
    @michail_777 1 year ago +2

    I noticed that GFPGAN visibility and CodeFormer help a lot when generating any persona. In the end, it all depends on the models. Thanks for the link to the text hints.

  • @Hakaan911
    @Hakaan911 1 year ago +6

    Embeds use the same syntax as a normal prompt, not the LoRA syntax.

  • @nalisten
    @nalisten 1 year ago

    Thank you Olivio for being so Consistent 🙏🏽🙏🏽👑💪🏾

  • @TheElement2k7
    @TheElement2k7 1 year ago

    Thanks for the tips, something I will check out 😊

  • @optimoos
    @optimoos 1 year ago

    Uber-cool info as always. Highly appreciated, Olivio!

  • @12MANY
    @12MANY 1 year ago

    Thanks a lot Olivio.

  • @AIAddict-88
    @AIAddict-88 1 year ago

    Thanks so much, I learn so much from your videos! :)

  • @Rjacket
    @Rjacket 1 year ago +1

    Something I thought was strange when testing out this process of negative prompts: if you have TI embeds like "<embed>", having a comma in between each negative drastically changes the output, i.e. "<embed-a>, <embed-b>" as opposed to "<embed-a><embed-b>". Have you ever dealt with this? Do you know why it is happening? Also, changing the position of a negative was affecting the output: using only "<...>" around each negative TI with no commas in between, but changing the order of, say, 5 negative TIs.
    I would really like to see a video on this type of testing. What is the rhyme and reason?

  • @globalnucleartrue
    @globalnucleartrue 1 year ago +7

    How is it better than SD Upscale? SD Upscale seems simpler and faster.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      SD Upscale just upscales the image. img2img renders a new image with a lot of detail that the original didn't have.

    • @kuromiLayfe
      @kuromiLayfe 1 year ago +1

      @@OlivioSarikas SD Upscale also applies the negative prompt in a few img2img passes to fix things that would otherwise make the bigger image uglier instead of more enhanced.
      Negative embeddings are just regular embeddings, but trained on the worst results instead of the best quality.

  • @arielm9847
    @arielm9847 1 year ago +5

    I appreciate the video, but I feel like something is missing after 4:40.
    After sharpening the upscaled image and bringing it back into img2img, what did you do with it? Did you upscale again at an even higher resolution (2048x3072) for more details? Did you run Generate at the same resolution, just hoping more details would be added? Or are you just suggesting this workflow before going into inpainting to tweak specific areas?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      No, I rendered it with the same settings again, but with the sharpened input image (a rough script of this round trip follows below).

    • @arielm9847
      @arielm9847 1 year ago +2

      @@OlivioSarikas Gotcha. Thank you and thanks for all your videos. They are very helpful.
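
      A hedged sketch of that round trip (upscale, sharpen, img2img again), scripted against the A1111 API. It assumes the web UI was started with the --api flag; file names, prompts, and parameter values are all illustrative:

          import base64, io
          import requests
          from PIL import Image, ImageFilter

          URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

          # 1. Sharpen the previously upscaled render before feeding it back in.
          img = Image.open("upscaled.png").convert("RGB")
          sharp = img.filter(ImageFilter.UnsharpMask(radius=1, percent=100, threshold=0))

          buf = io.BytesIO()
          sharp.save(buf, format="PNG")
          b64 = base64.b64encode(buf.getvalue()).decode()

          # 2. Re-render with the same settings, but with the sharpened input image.
          payload = {
              "init_images": [b64],
              "prompt": "your original prompt",
              "negative_prompt": "bad-hands-5, bad-artist",  # plain embedding names work
              "denoising_strength": 0.3,  # low, so the composition is preserved
              "steps": 30,
          }
          out_b64 = requests.post(URL, json=payload).json()["images"][0]
          Image.open(io.BytesIO(base64.b64decode(out_b64))).save("rerendered.png")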

  • @snatvb
    @snatvb 1 year ago

    You can use Ctrl+C -> Ctrl+V to copy-paste into A1111 from anywhere :)

  • @JDRos
    @JDRos 11 months ago

    Aren't the brackets and weight only for LoRA and LoCon?

  • @HAJJ101
    @HAJJ101 1 year ago +1

    Thanks for making this tutorial! I've been trying to figure out how to train and get this idea working. So it's basically just training on images you don't want and putting that training in a negative embedding? These people usually train on class images that generate messed-up faces, like "person", "woman", etc.? Then use a different class for the negative training after?

  • @CaptainFutureman
    @CaptainFutureman 1 year ago +3

    Very nice, but I would recommend trying a different sharpening method than unsharp masking. I haven't tried it yet, but I would bet a high-pass filter would not give you the artifacts along the rim of the cloak.
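
    If you want to try the high-pass route outside Photoshop, here is a minimal Pillow sketch (the radius is illustrative, and ImageChops.overlay is a rough stand-in for a high-pass layer set to the Overlay blend mode):

        from PIL import Image, ImageChops, ImageFilter

        img = Image.open("upscaled.png").convert("RGB")

        # High-pass = original minus a blurred copy, centered on mid-gray (128).
        blurred = img.filter(ImageFilter.GaussianBlur(radius=3))
        highpass = ImageChops.subtract(img, blurred, scale=1.0, offset=128)

        # Blend the detail layer back over the original image.
        sharpened = ImageChops.overlay(img, highpass)
        sharpened.save("highpass_sharpened.png")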

  • @Nottiex
    @Nottiex 1 year ago +4

    Sorry if this was asked already, but what is the plugin (or whatever) that enables choosing the VAE / clip skip at the top of the main page in the UI?

    • @treblor
      @treblor 1 year ago +3

      It's in the AUTOMATIC1111 settings: Settings / User Interface / Quicksettings list. Change it to: sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers

    • @Nottiex
      @Nottiex 1 year ago +1

      @@treblor Oh, thank you very much!

  • @EmilioNorrmann
    @EmilioNorrmann 1 year ago +5

    Are the <> brackets mandatory in the negative prompt?

    • @wkdpaul
      @wkdpaul 1 year ago +9

      Not for embeddings; those brackets are for LoRA. Using just the name of the embedding works fine.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      Really? I didn't know that. Thank you

    • @PizzaTimeGamingChannel
      @PizzaTimeGamingChannel 1 year ago +3

      @@OlivioSarikas Also, you can use standard parentheses for those negative embeddings, i.e. (bad-artist:0.8). You don't even need to put "by bad-artist" or anything; just the negative embed is fine. :)

  • @hplovecraftmacncheese
    @hplovecraftmacncheese 1 year ago +1

    When I add the negative embeddings from the extra networks button, it doesn't use the angle brackets, but for LoRA it does. Do you need the angle brackets for the negative embeddings?

  • @dinogators8323
    @dinogators8323 3 months ago

    thx

  • @xzypergods9867
    @xzypergods9867 3 months ago

    Whenever I use negative embeddings, this error always shows up:
    "RuntimeError: expected scalar type Half but found Float"

  • @terrence369
    @terrence369 1 year ago +2

    Why do AI images of human characters come out with two heads and more fingers than there should be? And sometimes those fingers look like an alien creature's tentacles/hands. Is the neural technology built upon aliens embedded into a human interface?

  • @Simsonlover222
    @Simsonlover222 1 year ago

    You are a hero, I love you.

  • @Charkel
    @Charkel 1 year ago +1

    Why don't I have an embeddings folder? :(

  • @cobraeconomics4881
    @cobraeconomics4881 1 year ago +2

    How does your upscale method compare to Topaz Gigapixel?

  • @manipayami294
    @manipayami294 18 days ago

    Why don't I have a Restore Faces button?

  • @Shingo_AI_Art
    @Shingo_AI_Art 1 year ago +1

    I always have these 4; most of the time they give amazing results. However, is there a reason behind the use of pointy brackets instead of parentheses? 🤔

    • @AltoidDealer
      @AltoidDealer 1 year ago +1

      I was wondering the same, so I simply tested both ways. I got consistently better outputs with the pointy brackets as shown in the vid.

  • @blizado3675
    @blizado3675 1 year ago

    Useful, but for img2img upscaling I first need more VRAM. With Extras I can go to an insane resolution, but maybe that works here too? 🤔 Need to test that. And I need to test that negative prompt stuff more.

  • @sneirox
    @sneirox 1 year ago

    I fell in love with her.

  • @Arty-vy6zs
    @Arty-vy6zs 1 year ago

    Another one that is used a lot is EasyNegative.

  • @Rasukix
    @Rasukix 1 year ago

    Is it not better to just use Hires fix from the get-go?

  • @ocoro174
    @ocoro174 1 year ago

    Yeah, but all these models seem to be focused on faces and people. How do you get Midjourney-like doodles/cartoons/food, etc.?

  • @koguister
    @koguister 1 year ago

    The embeddings folder does not exist. Should I create one, or did I install something wrong?

  • @darcasvisual
    @darcasvisual 1 year ago

    Hello colleague, how do you keep the characteristics of the character's face and just change the clothes, among other things?

  • @skyevent8356
    @skyevent8356 1 year ago

    With anime girls I always get weird eyes, no matter what I write in the negative prompt.

  • @metanulski
    @metanulski 1 year ago

    I don't see any improvement in the negative embeddings example. The two-negative-embeddings image had 7 fingers, and the all-negatives one has some extra leaves, but that's it.

  • @MarcioSilva-vf5wk
    @MarcioSilva-vf5wk 1 year ago

    So, it's basically a high-pass filter with an overlay.

  • @S4SA93
    @S4SA93 1 year ago

    Unsharp Mask with 1 / 1 / 0 does nothing to my picture in Photoshop. What am I missing?

  • @BlackJade_OFM
    @BlackJade_OFM 1 year ago

    So how do you actually know what negatives are in a negative embedding? Is there a way to see what negatives are actually used?
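
    For what it's worth, there is no readable word list inside: a textual-inversion embedding stores learned token vectors, not negative prompts. You can still peek at what a downloaded file contains; a hedged sketch (the file name is a placeholder, and the "string_to_param" key layout is common for classic .pt embeddings but varies by trainer):

        import torch

        # Note: PyTorch 2.6+ may require weights_only=False to unpickle older files.
        data = torch.load("bad-hands-5.pt", map_location="cpu")
        print(list(data.keys()))

        # Each entry maps a trigger token to N learned vectors in CLIP's embedding space.
        for token, tensor in data.get("string_to_param", {}).items():
            print(token, tuple(tensor.shape))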

  • @shadowdemonaer
    @shadowdemonaer 1 year ago +1

    Alright, but how would one go about training their own negative embeddings?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +4

      Basically like a normal embedding, but with the stuff you don't want to have

    • @shadowdemonaer
      @shadowdemonaer 8 months ago +1

      For things like EasyNegative, you can just type that in and improve your images right away. So are they only tagging their training images with EasyNegative? Are they tagging everything as usual?
      Usually when someone trains something, like a character, if they don't want the hair style to change, they only tag the things in the image they want changed. Like, if the eyes change color, they'd tag the eye color, but they wouldn't tag the hair.
      So, for a basic example, if you wanted to make a neg embed so that eyes with too many highlights never happen again, you would only tag the eyes, right? Or is this incorrect? That's all that holds me back.
      @@OlivioSarikas

  • @sophytes1430
    @sophytes1430 1 year ago

    Why < >, the greater-than and less-than signs?

  • @hishamzireeni8932
    @hishamzireeni8932 1 year ago

    @Olivio, how can you take an actual photograph and render it with AI for whatever prompt while maintaining the face? I.e., creating an avatar or rendering an image of your face in many different styles. How could that be done?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Check my video on Lora Training: ua-cam.com/video/9MT1n97ITaE/v-deo.html

  • @norko7422
    @norko7422 1 year ago

    My images look bad when I go above 512 in 1.5-based models. What's the issue?

    • @norko7422
      @norko7422 1 year ago

      Same problem in 2.1 models up to 768...

  • @Vitaliy_zl
    @Vitaliy_zl 1 year ago

    Do all Stable Diffusion users have a habit of counting fingers in ANY image, or is it just me?

    • @blizado3675
      @blizado3675 1 year ago

      The less work you have to create an image, the more you tend to be a perfectionist. :D

  • @AlexSmith-qw5qg
    @AlexSmith-qw5qg 1 year ago

    Should I download these embeddings from Hugging Face (bad-artist, etc.), or do they also work if I just use them in negative prompts without downloading?

    • @Jordan-my5gq
      @Jordan-my5gq 1 year ago

      You need to download the embeddings, because when you type their names in the negative prompt they are replaced by their values. You do not know their values, so you must download them.
      (Sorry if my English is bad, I am learning. Hope you understand my comment ^^)

  • @bryan98pa
    @bryan98pa 1 year ago

    Nice video, but maybe you need to add more steps to gain more detail.

  • @babamaheshvrrajrajeshvre9963

    I want to learn a lot about photography. I have a phone; I have no other device, no laptop or computer. So how can I use AI tools? Free ones.

  • @TheRealBlackNet
    @TheRealBlackNet 1 year ago

    I have an RTX 3080 Ti and can't go bigger than 1024 without getting a CUDA out-of-memory error. What card do people use to go up to 1500? I help myself with Ultimate Upscaler, but most times I see the checkerboard. Is there a trick?

    • @Tigermania
      @Tigermania 1 year ago +3

      Try changing the line in your webui-user.bat to this: set COMMANDLINE_ARGS=--precision full --no-half --medvram

    • @treblor
      @treblor 1 year ago +2

      You can also try: set COMMANDLINE_ARGS=--medvram --upcast-sampling

    • @snoweh1
      @snoweh1 1 year ago +1

      I have a 3080 10 GB and I can go higher than 1024.

    • @TheRealBlackNet
      @TheRealBlackNet 1 year ago

      ​@@treblor thanks!

  • @peace.n.blessings5579
    @peace.n.blessings5579 1 year ago

    What are the system requirements for running Stable Diffusion?

    • @Max-sq4li
      @Max-sq4li 1 year ago

      At minimum an RTX 3060 12 GB or above.
      More VRAM = more stable, and more features to work with.

    • @TrentSterling
      @TrentSterling 1 year ago +2

      I run it locally on a 1060 6 GB. It's slow, but in theory any card with 4 GB of VRAM can do it. So the minimum is lower than that, haha.

    • @AIAddict-88
      @AIAddict-88 1 year ago

      I could run it locally on a GTX 980, but I recently upgraded to a 3060 Ti, which is much faster. The 980 worked, though!

    • @dlep9221
      @dlep9221 1 year ago +1

      I'm using A1111 with an RTX 2080S (8 GB); it runs very well (with NVIDIA CUDA & the --xformers option).

    • @mr_frank9016
      @mr_frank9016 1 year ago

      Successfully using it on a GTX 1650 4 GB card. It can generate up to 1024 px, but slowly (1 to 3 minutes per image). "Extras" upscaling takes around the same time, but img2img upscaling to 8K can take an hour with all the steps involved.

  • @user-gu9vf3cc4u
    @user-gu9vf3cc4u 1 year ago

    How do we use it in the negative prompt? Should we use it like <embedding-name>?

  • @support8804
    @support8804 1 year ago +2

    What is A1111? How do I install it?

    • @Steamrick
      @Steamrick 1 year ago

      AUTOMATIC1111. Look at his older videos or google it.

    • @havemoney
      @havemoney 1 year ago

      automatic1111 >>> go google it

    • @Tigermania
      @Tigermania 1 year ago +5

      Search for "how to install automatic1111 stable diffusion".

    • @Max-sq4li
      @Max-sq4li 1 year ago +1

      It's AI software that generates images from text.

    • @Jordan-my5gq
      @Jordan-my5gq 1 year ago

      ​@@Max-sq4li
      Stable Diffusion is an AI.
      A1111 is an interface to interact with Stable Diffusion.

  • @Akami-hz8xz
    @Akami-hz8xz 1 year ago

    You made a mistake including Photoshop, which is irrelevant.

  • @isycoolro
    @isycoolro 1 year ago

    Hello Olivio! Can I have a one-on-one consultation with you? Do you have an email where I can contact you? Thanks.

  • @MarkDemarest
    @MarkDemarest 1 year ago

    FIRST 🎉

  • @NiteshSaini1
    @NiteshSaini1 1 year ago +1

    Instead of AI, I see it more as programming work, which doesn't improve the user's artistic skills, yet can help them become a programmer.
    Manual work will always be the true art. AI will be a disaster for mankind, created and improved by mankind.

    • @13RedCorpse
      @13RedCorpse 1 year ago +1

      Time will tell.

    • @hectord.7107
      @hectord.7107 1 year ago +2

      You don't seem to know much about art, then. Creating art is not just using a pen or a pencil; it's the entire process, including the idea, the composition, and the execution. Many people are just copying and pasting prompts to get a nice picture, but the ones doing great things are using AI as one more tool, combined with Photoshop and other tools, and some insane art will be created in the near future that would never have been possible by human hand alone.

    • @DarkStoorM_
      @DarkStoorM_ 1 year ago +1

      @@hectord.7107 This is what no one understands. People jump from video to video, bashing everyone in the comments for using AI whenever a new convenient tool is released. Funnily enough, I even found someone commenting on 3Blue1Brown's recent video that he will stop watching 3B1B because he used AI images (the video contains images *transformed* by another artist aided by Midjourney).
      People don't seem to realize that it's not just about _typing words into boxes_ and spamming pretty images over the internet, making artists mad. This argument is getting really annoying and is already obsolete. People already create *insane* images, completely *transforming* the base txt2img result, which immediately throws the copyright argument straight into the trash can. Thanks to the Inpainting tool in Stable Diffusion, we can make amazing high-resolution transformations from a simple Photoshop sketch, still putting *massive* amounts of tedious, manual work into the result image, creating it piece by piece, utilizing creativity to the max, while keeping the sketched composition, which is *your work*. Using artists' names is practically useless nowadays, because it has very little impact on this process, just like any random word in the prompt.
      People: rather than starting nonsense and useless dramas all over the internet, use this to your advantage and stop being a baby :)

  • @yoteslaya7296
    @yoteslaya7296 1 year ago

    Thanks for the info, but I'm not paying for Photoshop.

    • @blizado3675
      @blizado3675 1 year ago

      Like he said, any image software that has sharpening features will work. There are also free open-source alternatives.

    • @yoteslaya7296
      @yoteslaya7296 1 year ago

      @@blizado3675 Which ones?

  • @AniCho-go-Obzorov-Net
    @AniCho-go-Obzorov-Net 1 year ago

    These feel like crutches; why such contortions with the upscale? =="

  • @clumsy_en
    @clumsy_en 1 year ago

    nick-x-hacker/bad-artist is a little off; a very sus nick choice. On Hugging Face it shows "no pickles detected", but you can never be 100% sure.