Stable Diffusion in Code (AI Image Generation) - Computerphile

  • Published 19 Oct 2022
  • Mike continues his look at AI Image Generation with Stable Diffusion
    Mike's code: colab.research.google.com/dri...
    Jonathan: johnowhitaker/sta...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

COMMENTS • 442

  • @paulspaws1521 · 1 year ago · +368

    I'm sorry, but "unlock your face with your phone" just cracked me up..

    • @deadfr0g · 1 year ago · +38

      This is inadvertently an excellent poetic description of someone using the selfie camera to apply makeup.

    • @zwenkwiel816 · 1 year ago · +12

      Unlock your phace with your fone

    • @afog · 1 year ago · +4

      I think he was referring to using the Energizer Power Max P18K whilst in bed... :)

    • @davidm2.johnston684 · 1 year ago · +2

      Hahahaha didn't even notice!

    • @absalomdraconis · 1 year ago · +4

      I am reminded of an odd commercial from a few years ago: "apply directly to the forehead".

  • @DampeS8N · 1 year ago · +280

    I've been using Stable Diffusion to _deCGI_ images. Take a screenshot from a game, run it through SD with a low noise rate, give it a detailed description of everything in the picture and it produces pretty solid photo recreations of the images. Also, often, it gets possessed by Eldritch gods and spews out monsters.

    • @zwenkwiel816 · 1 year ago · +21

      So win-win, right?

    • @MattRose30000 · 1 year ago · +5

      now do it in real time with DLSS and you've got something huge

    • @DampeS8N · 1 year ago · +17

      @@MattRose30000 This is a long way off. It isn't just that it currently takes my 3090 Ti about 5 minutes to do one frame at 1024x1024, but also that it can't be playing a game at the same time, and also-also it would be very disorienting, because each frame will be a _different_ photo that isn't consistent from frame to frame. But probably the worst part is that _you need to write a text prompt that reflects what is in the scene for each frame somehow._

    • @FayezButts · 1 year ago

      @@DampeS8N that’s great. Have you messed around with reusing seeds across different frames? I imagine if you get an output you like you’d want to reuse that seed

    • @dibbidydoo4318 · 1 year ago · +1

      @@DampeS8N making text to video is the easy part, making video to text is the hard part.
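
    The "deCGI" workflow described at the top of this thread is plain img2img with a low noise strength. A minimal sketch using the Hugging Face diffusers pipeline rather than the video's notebook; the model ID, parameter names, and values here are assumptions and have changed across diffusers versions:
    ```
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load an img2img pipeline (model ID is an assumption; any SD checkpoint works)
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A game screenshot, resized to the model's native resolution
    screenshot = Image.open("game_screenshot.png").convert("RGB").resize((512, 512))

    # Low strength = little noise added, so the composition survives while the
    # prompt pulls the textures toward photorealism (the "low noise rate" above)
    result = pipe(
        prompt="a photograph of a forest clearing, detailed foliage, overcast light",
        image=screenshot,
        strength=0.3,
        guidance_scale=7.5,
    ).images[0]
    result.save("decgi_output.png")
    ```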

  • @bustedd66 · 1 year ago · +72

    this guy makes sense. I want more of him teaching SD and how it works.

  • @christopherg2347 · 1 year ago · +79

    "Simple, you just chip away all the stone that doesn't look like David."

    • @housellama · 1 year ago · +12

      "I saw the angel in the marble and carved until I set him free" - Michelangelo

  • @BernardJollans · 1 year ago · +42

    If anyone is stuck with the code: the "i" should be a "t" in this line in the loop:
    ```
    latents = scheduler.step(noise_pred, i, latents)["prev_sample"]
    ```

    • @alenmathew8115 · 10 months ago · +1

      Did you get the code working? For me it's showing "unsupported operand type(s) for /: 'DecoderOutput' and 'int'" on line 59.

    • @Phobos221B · 10 months ago · +8

      @@alenmathew8115 In the last few lines, change
      image = (image / 2 + 0.5).clamp(0, 1)
      to
      image = (image.sample / 2 + 0.5).clamp(0, 1)

    • @peepdawg8995 · 10 months ago

      man this helped me. thanks bro :)

    • @mayurpatil9871 · 7 months ago

      Thanks man because of you I solved this error

    • @romainflorentz5771 · 6 months ago

      Also, in the Image Loop section, this needs to be moved inside the for loop:
      ```
      # Prep Scheduler
      scheduler.set_timesteps(num_inference_steps)
      ```
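
    Putting this thread's fixes together, the corrected sampling loop might look like the sketch below. It reuses the notebook's variable names (unet, scheduler, text_embeddings, guidance_scale); treat it as illustrative rather than a drop-in cell, since the diffusers API keeps changing:
    ```
    # Prep Scheduler (re-run before each new generation)
    scheduler.set_timesteps(num_inference_steps)

    for i, t in enumerate(scheduler.timesteps):
        # Duplicate latents: one copy for the unconditional pass, one for the prompt
        latent_model_input = torch.cat([latents] * 2)
        sigma = scheduler.sigmas[i]
        latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)

        # Predict the noise residual
        with torch.no_grad():
            noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"]

        # Classifier-free guidance
        noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
        noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

        # The fix from this thread: pass the timestep t, not the loop index i
        latents = scheduler.step(noise_pred, t, latents)["prev_sample"]
    ```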

  • @morphman86 · 1 year ago · +200

    Mike asked himself what the use case for mixing two prompts is.
    I used this only yesterday, to produce a photorealistic painting of an owlbear from DnD...
    So it has practical uses!

    • @MushookieMan · 1 year ago · +41

      Maybe google is planning to create new, even more impossible captchas. "Select all the cat-dogs in the picture"

    • @dembro27 · 1 year ago · +5

      Does it hoot or roar??

    • @IceMetalPunk · 1 year ago · +3

      @@dembro27 It hoots and growls, in fact, here at Aguefort's Adventuring Academy!

    • @euchale · 1 year ago

      It's how I make my fish people for tabletop, too. Tons of applications for DnD.

    • @morphman86 · 1 year ago

      @@euchale You get half-decent tieflings if you ask for a quarter human, a half lizard and the last quarter goat.

  • @IceMetalPunk · 1 year ago · +124

    The very concept of embeddings is amazing to me. It's literally "organize concepts themselves into points in space, where similar things are closer together, in many many dimensions; now you can do arithmetic on *the meanings of words, phrases, and sentences.* " Want to add the meaning of "horse" and the meaning of "male"? Well, just add these vectors together and the resulting coordinates will point right at "stallion"!
    They amaze me so much that, when I watched Everything, Everywhere, All At Once for the first time, I completely geeked out when I realized their description of the organization of the multiverse is effectively a well-embedded latent space 😅

    • @floydmaseda · 1 year ago · +15

      @@mrteco4236 It literally is and is done all the time.

    • @IceMetalPunk · 1 year ago · +15

      @@mrteco4236 It's... common, in fact. There's a whole video on this channel about embeddings. And it's how CLIP fundamentally works...

    • @TheColorman · 1 year ago · +1

      This is super fascinating, especially as someone studying Data Science just learning about vector spaces and their many uses!

    • @alexanderkirilov7820 · 1 year ago

      @@mrteco4236 lol

    • @Emperorhirohito19272 · 1 year ago

      @@mrteco4236 that is literally what it does bro
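
    A toy sketch of the arithmetic described in this thread, with made-up 3-D vectors purely for illustration; real text encoders such as CLIP embed into hundreds of dimensions:
    ```
    import numpy as np

    # Made-up "embeddings" (illustrative only; axes roughly: horse-ness,
    # frog-ness, maleness)
    emb = {
        "horse":    np.array([0.9, 0.1, 0.0]),
        "male":     np.array([0.0, 0.0, 1.0]),
        "stallion": np.array([0.9, 0.1, 1.0]),
        "frog":     np.array([0.1, 0.9, 0.0]),
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means the vectors point the same way
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Adding "horse" + "male" lands closest to "stallion"
    query = emb["horse"] + emb["male"]
    nearest = max(("stallion", "frog"), key=lambda w: cosine(query, emb[w]))
    print(nearest)  # -> stallion
    ```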

  • @YSPACElabs · 1 year ago · +33

    I've been playing with Stable Diffusion (specifically the "InvokeAI" fork because I don't have 10gb VRAM), and I've found out that spamming the end with keywords like "realistic, 4k, trending on artstation, 8k, photorealistic, hyperrealistic" has more effect on how good the output image is than I thought.

    • @ShankarSivarajan · 1 year ago · +11

      You should try negative prompts.

    • @nicoliedolpot7213 · 1 year ago · +4

      to add, try emphasis "((x))" for specific objects.
      Edit: you can also use x(y), y being the weight value for that tag.

  • @paultapping9510 · 1 year ago · +108

    "there are questions of ethics, there are questions on how it's trained. Let's leave those for another time"
    well, if that doesn't just sum up the tech industry.

    • @monad_tcp · 1 year ago · +7

      What ethics? It's just a tool, and it's highly dependent on human input.

    • @paultapping9510 · 1 year ago

      @Luiz remember the AI chatbot that became incurably racist because it was trained on data scraped from 4chan amongst other places? That sort of thing.

    • @purplewine7362 · 1 year ago · +6

      that sums up every industry. you think people didn't copy art before ai? it's just a tool

    • @paultapping9510 · 1 year ago · +6

      @@purplewine7362 lol, not even close to the point I was making. Never mind.

    • @purplewine7362 · 1 year ago · +1

      @@paultapping9510 you weren't trying to make any point, otherwise you would have clarified. You were just trying to sound smart.
      Also, liking your own comments is pathetic.

  • @byteborg · 1 year ago · +5

    I love how you simplify and explain the heap of complexity that is in generative models like this. You gave me the impulse to play around with it, in spite of it being pretty complicated code due to the depth of the abstraction. It's a lot of fun to fantasize about something and have the model come up with a visual representation.

  • @jeffwads · 1 year ago · +123

    SD is just outstanding. It can mimic the other projects and the 1.4/1.5 models will be public domain. You can't beat that.

    • @zwenkwiel816 · 1 year ago · +9

      Lol just add "dall-e 2" to your prompts XD

    • @paryska991 · 1 year ago · +9

      The 1.5 model just went public today, I think.

    • @StefanReich · 1 year ago · +1

      @@paryska991 Ye

    • @dgo4490 · 1 year ago · +5

      You can beat that with human creativity that doesn't require billions of calculations per second to brute force a synthetic result.

    • @zwenkwiel816 · 1 year ago · +10

      @@dgo4490 doesn't it though?

  • @Yupppi · 1 year ago · +4

    I really liked the Stable Diffusion that came with the webui that you could install on your own computer, to avoid quotas or subscription costs; it provides an easy-to-use UI, with the inpaint feature built in. Shoutouts to the people who turn the rough code into applications for regular people to use.

  • @lucamatteobarbieri2493 · 1 year ago · +3

    I like how your channel has adapted to the advent of the machine learning boom we are experiencing

  • @simplesimon4561 · 1 year ago · +117

    I would like to see a version of the code where it shows the result of each step, so you can see the noise getting reduced with each iteration

    • @JalexRosa · 1 year ago · +6

      me too!!

    • @gianluca.g · 1 year ago · +11

      I think I'm going to do it. I'm downloading the source code to save a PNG for each step.

    • @AlphaNovaOfficial · 1 year ago · +7

      Not necessarily what you're after, but if you "interrupt" a run, you can see what its current progress was. Depending on your steps and how early you catch it, I've seen some very interesting early "noisy" images that were themselves inspiration for other images!

    • @ReneArmenta19 · 1 year ago · +2

      There is already a script for that

    • @m0nkeyb0i666 · 1 year ago · +13

      If you run Automatic1111 there's a setting for that; it uses slightly more VRAM, but it's great to watch it work.
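
    For anyone who wants to build what this thread asks for, a sketch of the change inside the video's sampling loop, assuming the notebook's latents_to_pil() helper. It is a fragment rather than a standalone cell, and decoding every step is slow, so saving every few steps is a reasonable compromise:
    ```
    for i, t in enumerate(scheduler.timesteps):
        # ... noise prediction, guidance, and scheduler.step() as in the notebook ...
        latents = scheduler.step(noise_pred, t, latents)["prev_sample"]

        # New: decode and save intermediate latents so the denoising is visible
        if i % 5 == 0:
            latents_to_pil(latents)[0].save(f"step_{i:03d}.png")
    ```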

  • @3dlabs99 · 1 year ago · +6

    We need an entire "Frogs on stilts" channel.

  • @jenka1980 · 1 year ago · +1

    Love Mike's explanations; somehow he manages to explain such complicated stuff in such a simple and understandable way.
    It would be interesting to know Mike's opinion on Midjourney, as it seems like the winner for now among the picture-creation AIs.

  • @serta5727 · 1 year ago · +5

    So amazing ❤ I love Stable Diffusion.
    I've been playing around with it the last few weeks.

  • @angeleeh · 1 year ago · +1

    Mike is a legend, truly great videos with him

  • @aorusaki · 1 year ago

    This video finally explained the code to me in a simple way! Now I'm less confused!!! Amazing extra documentation from you guys.

  • @_inetuser · 1 year ago

    this is so interesting and has so many unexplored use cases

  • @DeKubus · 1 year ago

    Immediately recognized the book on Dr. Pound's desk - Prof. Paar was one of my teachers when I studied IT sec. Nice to see it outside of Germany too!

  • @thomasnicolet9561 · 1 year ago · +16

    The current version of the reference notebook is already deprecated due to Hugging Face's API changes :)
    You try to operate on "image", which is now a DecoderOutput class:
    image = (image / 2 + 0.5).clamp(0, 1)
    It is fixed by unpacking its tensor attribute with its sample method:
    image = (image.sample / 2 + 0.5).clamp(0, 1)

    • @Dancedfsk8 · 1 year ago · +2

      The rest of the notebook is hard to fix; I tried, but in vain. I think I'll wait for Mike's update.

    • @victorwesterlund4826 · 1 year ago · +3

      Same goes for pil_to_latent():
      AutoencoderKL.encode() returns an AutoencoderKLOutput class:
      return 0.18215 * latent.mode()
      The desired DiagonalGaussianDistribution class is now a property ("latent_dist") of this new class:
      return 0.18215 * latent.latent_dist.mode()

    • @Dancedfsk8 · 1 year ago · +2

      In img2img, I just extracted the code of add_noise and used an int instead of a FloatTensor.
      Change the add_noise function to the following; also notice the for loop now loops 51 times.
      Not sure if this is correct, but at least it works.
      # View a noised version
      noise = torch.randn_like(encoded)  # Random noise
      for i in tqdm(range(51)):
          scheduler.sigmas = scheduler.sigmas.to(device=encoded.device, dtype=encoded.dtype)
          scheduler.timesteps = scheduler.timesteps.to(encoded.device)
          sigma = scheduler.sigmas[i].flatten()
          while len(sigma.shape) < len(encoded.shape):
              sigma = sigma.unsqueeze(-1)
          noisy_samples = encoded + noise * sigma
          img = latents_to_pil(noisy_samples)[0]

    • @aaron6807 · 1 year ago · +1

      @@victorwesterlund4826 What is the 0.18215 for? I keep seeing it in the code but I can't find an explanation for what it does or how it's derived.
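
    Collecting this thread's fixes in one place, the two helpers might look like the sketch below; it assumes the notebook's vae, device, and torchvision-as-tfms setup, and the second helper's name is made up here. On the magic constant: 0.18215 is the scaling factor Stable Diffusion applies to VAE latents so they have roughly unit variance before the UNet sees them, and it is divided back out before decoding:
    ```
    import torch
    import torchvision.transforms as tfms

    def pil_to_latent(input_im):
        # Encode a PIL image (mapped to [-1, 1]) into scaled latents
        with torch.no_grad():
            latent = vae.encode(tfms.ToTensor()(input_im).unsqueeze(0).to(device) * 2 - 1)
        # encode() now returns an AutoencoderKLOutput; the distribution is in .latent_dist
        return 0.18215 * latent.latent_dist.mode()

    def latents_to_tensor(latents):
        # Undo the scaling, then decode; decode() now returns a DecoderOutput
        latents = (1 / 0.18215) * latents
        with torch.no_grad():
            image = vae.decode(latents).sample
        return (image / 2 + 0.5).clamp(0, 1)
    ```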

  • @TaranovskiAlex · 1 year ago · +3

    Awesome explanation, thank you!

  • @gz6963 · 1 year ago

    Great video and very educational.
    I'd love to hear you guys talk about textual inversion.

  • @semidemiurge · 1 year ago

    This was so helpful in understanding this new tech. Thank you.

  • @vanderkarl3927 · 1 year ago · +6

    Seeing that GPT-2 vid reminded me: we haven't had Robert Miles on in a fair while. Is he just too busy?

  • @peterw1534 · 1 year ago

    Wow this is actually pretty amazing. Fascinating stuff

  • @serta5727 · 1 year ago · +13

    Mike's explanations are the best ❤

  • @jytou · 2 months ago

    Excellent explanations, as always! Thanks!

  • @gaptastic · 1 year ago

    this video just put me on a wonderful path, thank you!

  • @CyberMuzHR · 1 year ago · +14

    Great video! Can anyone recommend any other videos that explain the text encoding and the whole CLIP process used to guide the image generation based on the input prompt?

  • @theemathas · 1 year ago · +40

    I doubt DALL-E 2 is the “biggest” image generator. Stable Diffusion is probably bigger. In my circle, the biggest one is NovelAI, which is a Stable Diffusion variant specialized in anime-style images. Notably, its training data is probably the best image dataset out there in terms of detailed labels.
    It’s already been causing a lot of drama in the community. One notable case involved someone feeding a WIP drawing to img2img, posting it, and claiming it as their own drawing. When the actual artist posted their finished image, this person then proceeded to accuse the artist of copying “their” art.

    • @dibbidydoo4318 · 1 year ago

      Imagen by Google and NUWA-infinity by Microsoft are probably superior.

    • @felixjohnson3874 · 1 year ago · +4

      Would your "circle" happen to fit after rule 33 and before rule 35?

    • @nicoliedolpot7213 · 1 year ago · +4

      The danbooru property labeling format, to be exact. Training is rather easy as the images in the booru databases are human-labeled.

  • @user-xv3yr5cm7f · 3 months ago

    Great video. Today SORA was launched, and your videos help to understand what's going on in the background. Many thanks!

  • @briancunning423 · 1 year ago · +2

    Great explanation.

  • @pmo1972 · 1 year ago

    Excellent tutorial. Thank you.

  • @Tymon0000 · 1 year ago · +3

    I generated thousands of images with Stable Diffusion. It's really fun and inspiring.

  • @YeloPartyHat · 1 year ago

    Good timing with the NovelAI leaks

  • @HerleifJarle · 1 year ago · +1

    Thanks for the explanations of how AIs are being trained. I can see a slight hint of a neural network here. I think the advantage now is that companies like BlueWillow are utilizing Discord to quickly gain testers, free of charge even.

  • @levii2748 · 1 year ago

    I was waiting for this 🙏🙏🙏

  • @dakotaknutson · 1 year ago · +14

    For anyone trying to get the notebook to work and getting the error "TypeError: unsupported operand type(s) for /: 'DecoderOutput' and 'int'": change "image = (image / 2 + 0.5).clamp(0, 1)" to "image = (image.sample / 2 + 0.5).clamp(0, 1)". As noted at the top of the notebook, it seems the Hugging Face API has changed.

    • @hipposhark · 1 year ago

      wow thank you very much
      can confirm that this indeed solves it👍

    • @koh8614 · 1 year ago

      In my case it outputs a Hugging Face Tokens page warning? It says that I need a token? Is it free?

    • @hipposhark · 1 year ago

      @@koh8614 Yes, it is free. You need to create an account on the Hugging Face website and generate a token from your profile.

    • @JavadZahiri · 1 year ago

      Thank you

  • @FusionDeveloper · 1 year ago

    Thanks for this video.
    So the Steps setting is actually the noise level.

  • @yuxiang3147 · 7 months ago

    Great video. However, could you explain what this line "latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)" does?
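
    A brief, hedged answer: with the K-LMS scheduler used in the notebook, the noisy latents at noise level sigma have variance of roughly sigma² + 1, so dividing by sqrt(sigma² + 1) normalizes the UNet's input to unit variance. Later diffusers versions wrap this in a scheduler method, so the line can likely be replaced with:
    ```
    # Equivalent input scaling via the scheduler API (newer diffusers versions)
    latent_model_input = scheduler.scale_model_input(latent_model_input, t)
    ```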

  • @PunmasterSTP · 1 year ago

    Stable Diffusion in code? More like “Super great explanation that’s solid gold!” 👍

  • @johnnyw525 · 1 year ago · +1

    I didn't realise that this is basically the next evolution of the "AI upscaling" technology that has been used in videogame mods: take an image and then add detail until it looks like what it thinks it's supposed to look like. It's still mind-bending how it results in what it does, but AI upscaling wasn't so scary, so I suppose this feels a bit less scary now.

  • @mylittleparody2277 · 1 year ago

    Thank you for this video, it's really interesting!

  • @Thinknotix · 7 months ago

    Is there a way to use 2 image prompts instead of 2 text prompts to get a 50/50 blend?

  • @slimjimbigfoot589 · 1 year ago

    Amazing, so Stable Diffusion helps unclutter all those extra pixels during the process of facial recognition.

  • @heurve · 1 year ago · +2

    On line 56, the image is coming from the sample property of the DecoderOutput; change to:
    55: with torch.no_grad():
    56:     image = vae.decode(latents).sample

  • @Mutual_Information · 1 year ago · +12

    Anyone else surprised that diffusion models are the clear winners for image generation? And GANs have almost completely fallen from favor? I haven’t seen them in any recent SOTA work..

    • @timmyt1293 · 1 year ago · +6

      Mmm, isn't it still kind of a GAN? Stable Diffusion uses a transformer block not just for the diffusion but for identifying what the actual image is from the diffusion output too. So isn't that technically a GAN? Generate images from the diffusion model, then try to categorize them through an adversarial transformer network?

    • @erikp7378 · 1 year ago · +9

      @@timmyt1293 Actually there is no adversarial training in diffusion models in general (in particular for the Stable Diffusion model). The condition processing is used only for guidance (classifier-free guidance in this case), and from a theoretical perspective diffusion models are closer to hierarchical variational autoencoders, where the encoders are fixed diffusion steps and the decoders are denoising steps with the trained noise-estimation model.

    • @JadeNeoma · 1 year ago

      @@erikp7378 I wonder if you could implement stable diffusion inside a GAN. So have the generator define the parameters for the stable diffusion based on an input, and then give that to the classifier, mixed in with non-AI-generated images.

    • @dibbidydoo4318 · 1 year ago

      @@JadeNeoma I don't know how that would work.

    • @erikp7378 · 1 year ago

      @@JadeNeoma It depends on which parameters you have in mind, but the main point is that the operations must remain differentiable in order to optimize the model. And in the case of hyperparameter inference it is not trivial in many cases (e.g. the number of steps).

  • @amventures1 · 8 months ago

    Can we add annotations along with the image in an image2image model? The annotations to tell us which part of the image needs to be regenerated. Like I want to change the background with the annotations to that background so it gives exactly the same person with a different background? Something like Photoshop Generative AI
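
    What this comment describes is inpainting: a mask marks the region to regenerate while the rest is preserved. A minimal sketch with the diffusers inpainting pipeline; the model ID and file names here are assumptions:
    ```
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("person.png").convert("RGB").resize((512, 512))
    # White pixels in the mask are regenerated; black pixels are kept
    mask = Image.open("background_mask.png").convert("L").resize((512, 512))

    result = pipe(
        prompt="the same scene, on a beach at sunset",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("new_background.png")
    ```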

  • @cyndicorinne · 11 months ago

    12:34 beautiful cityscapes 🏙️

  • @MadMan123654 · 1 year ago · +2

    I would do just about anything for more Mike content!

  • @ozorg · 1 year ago

    Great one again!

  • @miltiadiskoutsokeras9189 · 1 year ago · +2

    I don't know if this is more amazing or more frightening. Brilliant stuff.

    • @andybaldman · 1 year ago · +1

      If you aren’t frightened, you aren’t paying attention.

    • @purplewine7362 · 1 year ago · +3

      @@andybaldman if you're frightened, you're a luddite

    • @andybaldman · 1 year ago

      @@purplewine7362 Or you've worked in the tech field long enough to know how dangerous this is, and how it will be used against people eventually. As happens with all tech.

  • @6DAMMK9 · 1 year ago · +1

    Thank you for the SCIENTIFIC video!
    Things got out of control after the "novelaileak", so it is very important to keep some information as grounded as it can be.
    I'm quite sad about the sub-culture, but I still have hope that the artists / researchers will snap out of the chaos.

  • @aiartbx · 1 year ago · +1

    Hi Mike. This is by far the most technically clear explanation of SD that I have seen, so thank you for this! Now, as you would be aware by now, the art community is up in arms against this tech, and I would love to hear your opinion based on the factual knowledge you have.
    The main issue that keeps coming up is that SD tech is art theft because it steals copyrighted artwork, and then companies profit from the images. Another point artists are making is that SD is just a mish-mash collage of original art, so nothing generated by AI is brand new.
    Would you agree or disagree with these points, and why, strictly based on your technical knowledge?

  • @peekpen · 1 year ago

    I'll copy your transcript, feed it to OpenAI's playground, and ask it to re-interpret your address on images for my own audio interpolation in music. Brilliant.

  • @jaymalby · 1 year ago · +11

    Well, xkcd did pick the number 4 by die roll. Seems a random enough seed to me.

    • @reinei1 · 1 year ago · +2

      I had to scroll far too much to see this mentioned, but yes I agree 4 seemed quite a good random seed there...

  • @bezmi · 1 year ago · +3

    Great video. I would love to see a video about the recent controversy with GitHub copilot and GPL licenses.

  • @lolerskates876 · 1 year ago

    Thank you for trying to fix the code after the API update broke it

  • @t.michaeltracy2046 · 1 year ago · +4

    Great video, really informative. I was hoping to try out your Google Colab code, although it seems broken at the moment. Are there any updates regarding this announcement regarding the known bugs? "Note: There might be a handful of bugs at the moment. The developers of this stable diffusion implementation keep changing the api. Everyone should know not to make breaking api changes so regularly! I'll do a pass over the code and fix bugs as soon as I can. Am away this week :) thanks to Michael d for bringing this to my attention."

  • @martinoandreascarpolini5128 · 1 year ago · +3

    [notebook error] Hello, thanks for the fantastic video. I noticed that as of today the notebook does not run, since there are some errors. I do not know why; probably some library changed a bit. The first error is at line 50 of the cell with the first inference loop: instead of 'i' there should be 't'. The second error appears at line 59: now, to access the image's tensor, you have to write 'image["sample"]' instead of just 'image'.

  • @vorlon478 · 1 year ago · +2

    13:47 reminds me of the wave function collapse algorithm.

  • @ukranaut · 1 year ago

    Fascinating.

  • @GKinWor · 1 year ago · +1

    thanks for the video

  • @ben_clifford · 1 year ago

    3:07 earned my like. I need to go see that now. 😂

  • @blenderpanzi · 6 months ago

    If you mention another video, please also link it in the description!

  • @acobster · 1 year ago

    > There are questions about ethics. There are questions about how these were trained. Maybe we deal with them another time.
    I really hope there is a discussion of this at some point. As a discipline that skews very white/male and enjoys relatively posh working conditions, it's very easy to insulate ourselves from the very real problems of the world. And because computers are so powerful it's also simple to automate oppression of many kinds, helping it continue to run smoothly. I think we have a responsibility to talk about these issues and I would love to see this channel model that in a constructive way.

  • @Monsterpala · 7 months ago

    "I have no idea what to use this for. There are websites where people produce cool stuff." ... Rule 34, sir.

  • @heurve · 1 year ago · +3

    On line 50, i should be changed to t (as we need the FloatTensor):
    50: latents = scheduler.step(noise_pred, t, latents)["prev_sample"]

  • @ZT1ST · 1 year ago

    So I know you briefly mentioned the ethics of using these in the previous video (usually around the trained images, as I understand it). Does Stable Diffusion allow you to supply not just that original image, like the rabbit image you provided there, but the *entire* training set, for a local training process based *only* on images you've provided/made/created/got permission to train on?

    • @Nerdule · 1 year ago · +4

      The trouble is that in order to specify "only learn from these specific images and no others", you'd need to retrain the entire network from zero, which costs six hundred thousand dollars' worth of graphics-card time.

  • @brym9159 · 1 year ago

    Mike said link to code in description!

  • @RaydenLGX · 1 year ago

    So it is basically a morphing, blending and upscaling algorithm over compressed/encoded data?

  • @the_proffesional1713 · 9 months ago

    SD is banned on Colab, right?
    But some people cracked or bypassed it, and it allows you to launch SD on Colab again, which is interesting. They probably changed something in the SD code to make it invisible as an unknown process.

  • @nocturne6320 · 1 year ago · +4

    Could you do a video about the different samplers? (eg. DDIM, Euler, Euler a, etc.) That part of the process is still a mystery for me

    • @havz0r · 1 year ago · +1

      DDIM, Euler, LMS, Heun and DPM all produce identical results. The ones with "a" at the end (Euler a, DPM2 a) are ancestral samplers and produce different results.

    • @nocturne6320 · 1 year ago

      @@havz0r I meant how they work under the hood. They've already explained how the network generates images from noise, but not how the different samplers work.

  • @andrewdunbar828 · 1 year ago

    Now Deep Dream Generator has just added a text to image diffusion generator too, and it's actually pretty decent.

  • @LinfordMellony · 1 year ago

    Mind giving a quick review of BlueWillow and which software it utilizes? I think you guys could break down the whole infrastructure, which would actually be very informative.

    • @bluesailormercury · 1 year ago · +1

      Somebody asked that in a Discord AMA a couple of days ago. They're not telling, but it's very likely Stable Diffusion using a finetuned custom model, or several. So it should be the same infrastructure.

  • @felixmerz6229 · 1 year ago

    Only a matter of time until someone adapts this to 3D models. I mean, there are millions of 3D models on the internet in the form of assets for all kinds of engines and frameworks, all with a description attached, too.

  • @nkronert · 1 year ago · +1

    This is literally the first episode of Computerphile ever that I didn't understand anything of what was explained. And judging from the comments I'm the only one. Looks like I totally missed the boat on this topic.

    • @dibbidydoo4318 · 1 year ago

      what was confusing?

    • @nkronert · 1 year ago · +2

      @@dibbidydoo4318 it wasn't actually confusing because there wasn't anything to confuse. I had literally never heard of these developments before.

    • @zwe1l1nkehaende · 1 year ago

      @@nkronert This is the follow-up video on the topic; check out the first one, where the whole thing is explained.

    • @nkronert · 1 year ago · +1

      @@zwe1l1nkehaende thanks. I already found it. But I still don't really get it 😊
      Doing some "best fit" on noise until a photorealistic image comes out still sounds like magic to me.

  • @alikaperdue · 10 months ago

    @14:47 - idea: hand-draw your animation sequence. Give the first image and text to the AI and get the result. Then hand it the resulting image, your next hand-drawn frame, and the text to generate the 2nd frame. Continue the process so that each new frame is a combination of the last and what you want it to look like. In this way the "flicker" might be reduced.
    But I haven't seen what you're talking about. I may be off.

  • @peterchindove7146 · 1 year ago

    Pretty cool!

  • @carlmalia29 · 1 year ago

    Love this tool, but I'm having an error when trying to noise an image to run the AI over a guide image. The add_noise def returns "AttributeError: 'int' object has no attribute 'to'". It comes after the call line below; any help would be amazing:
    latents = scheduler.add_noise(encoded, noise, start_timestep)

  • @CrystalblueMage · 1 year ago · +1

    If you can make images by removing noise from random noise, can you make P solutions from NP solutions the same way, by training on known P solutions that have had "noise" added to make them NP?

  • @uneek35 · 1 year ago · +1

    Would love to see a test to see how it works when it's trained with a limited dataset.

  • @reecelawson2403 · 1 year ago

    Hi, could you guys make a video on what kernels are, please?

  • @ShankarSivarajan · 1 year ago · +5

    Another cool thing you can do is _negative prompts,_ that you can put in place of the "unconditioned" embedding.

    • @Onihikage · 1 year ago · +2

      Yep, negative prompts are great for things like getting hands right. It turns out Stable Diffusion, at least the 1.4 model everyone's been using so far, has trouble identifying where a hand or finger is supposed to stop, so you often get hands with too many fingers or fingers coming out of fingers as it keeps trying to "complete" a partial finger. Including a negative prompt for "hands" or "too many fingers" tends to produce much better results.

    • @ShankarSivarajan · 1 year ago · +2

      @@Onihikage Yes, that is precisely what I use it for too. I expect we got that advice from the same place.
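
    In the notebook's classifier-free guidance setup, the "unconditioned" embedding is just the encoding of an empty string; a negative prompt replaces that empty string, so guidance pushes away from it. A sketch assuming the notebook's tokenizer, text_encoder, and embedding names:
    ```
    negative_prompt = "blurry, deformed hands, too many fingers"

    # Encode the negative prompt where the empty string "" used to go
    uncond_input = tokenizer(
        [negative_prompt],
        padding="max_length",
        max_length=tokenizer.model_max_length,
        return_tensors="pt",
    )
    with torch.no_grad():
        uncond_embeddings = text_encoder(uncond_input.input_ids.to(device))[0]

    # Concatenate as before; guidance then steers away from the negative prompt:
    # noise_pred = noise_pred_neg + guidance_scale * (noise_pred_text - noise_pred_neg)
    text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
    ```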

  • @JadeNeoma · 1 year ago · +13

    Let's be honest, Computerphile: a YouTube channel with 2.2 million subscribers can afford to pay for a transcription service.

  • @Jianju69 · 1 year ago · +3

    A hybrid frog/snake is properly called a *SNOG*, obviously.

  • @timeimp · 1 year ago · +1

    "They are essentially the same, but quite different."
    Ah yes, the ol' computer science maxim of "same, but different"

  • @skwisgaarskwigelf331 · 1 year ago · +2

    I was literally generating stable diffusion memes right now.

  • @toohardtowatch · 1 year ago · +3

    What surprises me is how primitive a lot of these techniques seem to be under the hood, and how much further it can obviously be taken. These techniques are still in their infancy.
    For instance, there seem to be a lot of potential image-generating procedures that might converge faster than random high-frequency noise. What if there could be stages with simulated random brush strokes, or generating geometric shapes, or input to 3D modelling software? If the tools that humans use to create digital art could be algorithmically leveraged by an AI, it might be even more effective.
    Also, if you could spatially embed the tags in the source image in a way that could be coupled to the segmentation, maybe it could be used as a tool to 'compose' an image: a blob of one color is tagged as a dog, a blob of another is tagged as a bench, and the AI interprets it with those spatially defined weights to start.

  • @Indrikmyneur · 11 months ago

    Well done, I just don't understand how the guiding works. What if I instruct it to create a complex image that certainly wasn't in any training data, with many complex relations about what should be where? How can it be constructed as a whole, instead of creating and merging the parts it may have encountered?

  • @cyndicorinne · 11 months ago

    I love this

  • @grayaj23 · 1 year ago

    "What amount of frog DO you want in this image?"
    I WANT ALL THE FROG.

  • @Lodinn · 1 year ago · +1

    7:20 My man Mike knows that when you use a proper random function, the result would be 4. Guaranteed to be random!

  • @meguellatiyounes8659 · 1 year ago

    What algorithms are used in package managers?

  • @jaymayhoi · 1 year ago

    so cool!

  • @methodof3 · 1 year ago

    Cartoons and anime are going to be so amazing in 5 to 10 years

    • @theemathas · 1 year ago

      Anime-style drawings are already a thing and are causing a lot of drama.

    • @bltzcstrnx · 1 year ago

      ​@@theemathas well, at least you can have unique wallpapers and profile pictures.

  • @ZedaZ80 · 1 year ago · +1

    7:18 is clearly a reference to xkcd 221

  • @tjpld · 1 year ago

    Waiting for the ability to create computer game maps from a prompt. I think that would actually be easier than what Dalle etc. are doing.