You made this right when I needed it.
yussssss
Beginner Questions. On your screen, are you running stable diffusion locally? Do you have a tutorial on how to set that up?
ua-cam.com/video/VXEyhM3Djqg/v-deo.html
Want to know the best settings? Don't use a ControlNet model at all: set the model to None, but still enable the ControlNet unit and upload the QR. Then use a better SD checkpoint like ReV Animated or a realistic one and see the results. The images turn out so good that you won't be able to tell there's a QR code in them until you scan it. But if for some reason you want the image to be a perfect blend of QR and picture, you can control that with the sampling steps. (Also, A1111 has the QR tab in Extensions; you can just enable it. Don't go to third-party websites.)
Could you elaborate? The start of your comment said not to use a ControlNet model "at all," but later you said "...though you have to enable the ControlNet and upload the QR." So what exactly do you mean, i.e., what are your settings? Could you also provide some recommendations for SD models for QR art? I would greatly appreciate it.
Super helpful, Russell, thank you! Would img2img work without a prompt? So basically get a generation "inspired" by the reference image only?
I just tried it and the result in my opinion is less than desirable.
@@RussellKlimas Yes, same with my results. I was wondering if there was a technique to improve it.
@@calmmarketing Why do you not want to use a prompt with img2img? You could always CLIP-interrogate the image you are using and then use that to help.
@@RussellKlimas Yes, I'm trying that. I guess it's the best way. The problem is that when CLIP predicts the wrong subject, I get a very weird and different image.
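Not something from the video, but a minimal sketch of that CLIP-interrogate idea, assuming the standalone clip-interrogator package (in A1111 it's just the "Interrogate CLIP" button on the img2img tab); the image path and CLIP model name are placeholders:

```python
from PIL import Image
from clip_interrogator import Config, Interrogator  # pip install clip-interrogator

# Placeholder path: point this at your reference image.
image = Image.open("reference.jpg").convert("RGB")

# ViT-L-14/openai is the CLIP variant commonly paired with SD 1.5-era checkpoints.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

prompt = ci.interrogate(image)
print(prompt)  # edit this and use it as the img2img prompt instead of leaving it blank
```

If it guesses the wrong subject, it's usually easier to hand-edit the generated prompt than to run img2img with no prompt at all.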
Which checkpoint are you working on?
Also, what about the models you mentioned in the description? Where should I put them?
If running locally you put them in your models folder.
When I open my Automatic1111 there is no Extensions option, any idea why?
Is there a way to install the models in ThinkDiffusion?
Not as far as I know, until they do it themselves.
@@RussellKlimas Thanks for the reply
Can you help me fix this error? OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory, try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
Time taken: 0.45s. Torch active/reserved: 3473/3544 MiB, Sys VRAM: 4096/4096 MiB (100.0%)
Sounds like you don't have enough VRAM. You'll either need to close programs that take up a lot of it or upgrade your VRAM. Sometimes restarting your computer helps too.
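Not from the video, but besides closing programs, two things sometimes help on a 4 GB card: the max_split_size_mb hint the error message itself suggests, and checking how much VRAM is actually free before generating. (In AUTOMATIC1111 you can also add --medvram or --lowvram to COMMANDLINE_ARGS in webui-user.bat.) A minimal Python sketch, assuming a local PyTorch install:

```python
import os

# The allocator hint from the error message; must be set before CUDA is initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

if torch.cuda.is_available():
    free_b, total_b = torch.cuda.mem_get_info()  # free/total bytes on the current GPU
    print(f"Free VRAM:  {free_b / 1024**2:.0f} MiB")
    print(f"Total VRAM: {total_b / 1024**2:.0f} MiB")
else:
    print("No CUDA device visible to PyTorch.")
```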
Can we just put in an image and a QR code without using a prompt?
As much as I wished it worked that way it doesn't, not in my tests anyway.
@@RussellKlimas Thanks.
I also have a problem with this. We want to create a QR code with our brand's image, so I don't know what I should put into the prompt. I mean, I just want the QR code generated into or onto the brand image I provide. Is that even possible?
@@welkey90 Yeah, that would be perfect. I don't think it's possible (yet).
But it's AI.
It will happen.
Why is it called img2img if it's going to completely ignore the image I put in?
I am getting results that have no QR code in them. ControlNet is up to date and I am only using the QR code ControlNet model.
It's like this model does nothing.
Make sure you are putting a QR code that you've already made into the ControlNet unit. Try doing a restart of the program too. I've had a similar issue before.
@@RussellKlimas Yes, I am using a premade QR code. I have tried with and without invert. If I use something like tile or brightness, I get a QR code, but when I use that qr_code model, my result is unchanged even if I set the control weight to 2. I can't get it to fire off with txt2img or img2img.
@@kannakrew Yeah, I just did a restart and it worked for me, since I ran into something similar.
@@RussellKlimas Here's the problem I was having: my QR code was only 290x290; when I bumped it up to 500x500, it worked like a charm.
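Related to the resolution fix above: a minimal sketch (not from the video) for generating a larger, high-error-correction QR code with the Python qrcode package before dropping it into ControlNet; the URL, box size, and filename are placeholders.

```python
import qrcode
from qrcode.constants import ERROR_CORRECT_H

# High error correction (H) leaves more room for the art to eat into the code
# while it still scans; a large box_size keeps the output well above ~500 px.
qr = qrcode.QRCode(
    version=None,                 # let the library pick the smallest version that fits
    error_correction=ERROR_CORRECT_H,
    box_size=20,                  # pixels per module
    border=4,                     # quiet-zone width in modules
)
qr.add_data("https://example.com")   # placeholder URL
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("qr_for_controlnet.png")    # upload this file in the ControlNet unit
```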
lol why are you hiding the SD version?
I'm hiding the model name because it's a personal model of a friend, to keep their anonymity.
Ok i feel old I don't understand anything 😅
I followed everything and I got this: RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
If you go to 3:51 in the video you can see that I explain that it depends on whether the checkpoint you are using is 1.5 or 2.0, and which ControlNet model to use with it. That 768 vs. 1024 shape mismatch in the error usually means the ControlNet model doesn't match your checkpoint version.
Is there a way to install the models in ThinkDiffusion?
Potentially, if you have a higher plan, yes.