The key to good AI art is to always specify that you want tits in the prompt, since that will give it much more reference material to work with.
LMAOOOOOOOOOOO
_porn! porn everywhere!_
Specify you want it to be 34 years old. Look up rule 34 for details.
Name checks out…
most out of pocket comment here lol
Not sure if anyone has noticed, but one of his open tabs is an Etsy listing for a "custom monitor head", so I think the man himself is gonna show up lore-accurate
@@zorkman777 😉
Noticed that too. Honestly, if he's trying to make a cosplay head, I'd suggest minibitt's video to him
1:01
Man really said "no horny" to himself, on his female version fanart.
Straight up refusing the whole genre of selfcest.
I think that was directed to the fans
Show up to a panel
All the content creators sitting at a table
Everyone goes down the line
Says a recognizable line/intro/whatever
Gets to one guy with a notable lack of TV on his head
Dead silence
He backs away from the mic
Screams F*CK
Crowd bursts into applause
Someone throws baby onstage
Elected president on the spot
Immediately assassinated
Survives
Hacksmith builds IRL TV helmet for him
Really underrated comment
I hope my recognizable line will be chair related.
Ngl. I like the way you think 😂😂😂
I'd like the head bit
Cool story bro
If you have a kid can you please name him Code Pellet? Cheers
Code BB
Kid Bullet
CP
@@XplosivDS this is funny to me as a Swede
At first, it would need to be named Bone Bullet
Evan: We all use Windows here, we're all friends...
Mac and Linux users:
Same, I was just wondering how he got that idea.
BSD users: am I a joke to you
i use arch btw
Linux users are not friends
Yeah, I'm on linux right now.
CB out here really trying to avoid the "Evan as a chick" stuff.
Come on, mate, you know you'd hit that, don't deny it.
"go fuck yourself"
"don't mind if i do"
Isn't that selfcest?
@@baimhakaniyeah
What about the 3d walking part 1 video?
Honestly same
8:46 how did he not notice the shirt has "CBT" written on it lmao
it's technically a correct acronym though
What is Minecraft?
@@NoGoatsNoGlory. it's a small indie game, you should try it, it's really fun (the graphics are pretty bad though)
Broke: cognitive behavioral therapy
Woke: cock and ball torture
AI art is more complicated to set up than it is to use (there's a minimal code sketch after this list):
- First, the model: you'd usually start with Stable Diffusion 1.4 or 1.5, but depending on what you want you might need something more specific; for realistic images I like CyberRealistic.
- Then you need a nice interface; working in a shell is just torture. I use EasyDiffusion, but there's a dozen of them, so take the one you prefer.
- If you see a pattern you don't like in the generated images, just use the negative prompt to remove those features.
- If you want even more specific features, you might want to train your own LoRA; that part you'll have to research yourself.
- Start with a general idea, then modify the parts you're not happy with using inpainting. That way you don't need super specific prompts, or for the stars to align in a once-in-a-million-years pattern, to get a good final result.
- Don't be scared of editing images manually. You can use MS Paint to modify a result you're half happy with, then use the edited image as the input for the next step. That's how I usually do it: quick and dirty, but functionally the same final result.
- Don't get frustrated, just get more RAM. Yeah, this tech uses infinite mountains of RAM; if you want good, detailed, high-quality images, you need some. I was almost forced to go from 16 to 32 GB, and that's for 1024x1024 images.
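For anyone curious what those bullet points look like without a UI, here's a minimal sketch using Hugging Face's diffusers library; the model name and prompts are just illustrative placeholders, and the UIs mentioned above wrap essentially these same steps:

```python
# Minimal txt2img sketch with diffusers; model and prompts are placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.5 as suggested above; fp16 roughly halves GPU memory use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a man with a CRT television for a head, green code on screen, realistic",
    negative_prompt="blurry, deformed, extra limbs, text",  # remove unwanted patterns
).images[0]
image.save("result.png")
```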
I've run out of VRAM before I ran out of RAM, personally. Though I do only have 6 GB…
Could you imagine the chaotic energies of Code Bullet and I Did A Thing teaming up to make some sort of abomination?! Don't know how that'd work, but I'd be scared.
NO
YES
ai-controlled beyblade
I am not the one who made the AI art, but I assume he made it with Stable Diffusion and added images of your character to the model so it knows Code Bullet. Then he just types in the rest of the prompt, like "Code Bullet in a room with old monitors, realistic". Corridor Crew has a cool video on something like this, where they turned themselves into an anime.
Not necessarily, some good models might have some fan-art of him directly in their training.
As for the prompt, it's not really ideal prompt formatting: it's good for DALL-E, not really for Stable Diffusion. A Stable Diffusion prompt would be more like this:
(code_bullet:1.3), (old_monitors:1.2), realistic
It's not an easy subject, but at least you tried giving him an answer.
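Side note for anyone copying that into plain diffusers code: the `(word:1.3)` weighting syntax is parsed by UIs like AUTOMATIC1111, not by the library itself. Here's a hedged sketch of the same idea using the compel helper library, which writes weights as `(text)1.3` instead:

```python
# Sketch: prompt weighting in raw diffusers via the compel helper library.
# compel writes weights as (text)1.3 rather than A1111's (text:1.3).
import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# Emphasize the subject slightly more than the background props.
prompt_embeds = compel("(code bullet)1.3 in a room with (old monitors)1.2, realistic")
image = pipe(prompt_embeds=prompt_embeds).images[0]
image.save("weighted.png")
```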
@@serzaknightcore5208 The very limited amount of fan-art will probably not do much; it's very likely the person created a LoRA (low-rank adaptation), which is like OP described. It takes only 10-20 images of the character you want to recreate and produces fairly good stuff. Plus it can still be combined with styles afterwards. Also, the fact that it clearly likes black with green characters on it tells us that it learned those are heavily associated with CB, and its abundant use of them really hints at a LoRA. It has also learned that CB wears hoodies, but just prompting base Stable Diffusion would not give nearly the same hoodies as depicted here.
@@SmallLanguageModel Yeah, I just saw that when I tried to generate it
Yeah, he trained a LoRA; it's also written in a post: "I made a Code bullet lora"
@@gaggix7095 Yup, I just saw; I didn't get to that point before I wrote my comment ;)
people: **Tired of Waiting**
Code Bullet: **Doesn't give a...**
One way to get an AI to generate the image you'd like is to start out with an image similar enough to what you want: for example, draw the exact image you want, tell the AI to use that as a start image with a high weight, and make the prompt essentially just a description of the start image.
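In Stable Diffusion terms that's the img2img workflow. A rough sketch with diffusers (paths and prompt are placeholders); the `strength` parameter is effectively the inverse of the start image's weight:

```python
# img2img sketch: a rough drawing becomes the start image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

start = Image.open("rough_drawing.png").convert("RGB").resize((512, 512))
image = pipe(
    prompt="a man with a CRT monitor for a head, green code on the screen",
    image=start,
    strength=0.4,  # low strength = stay close to the start image (high weight)
).images[0]
image.save("refined.png")
```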
7:29 aka. “Code Bullet hasn't opted in to a privacy nightmare”. There's no shame in not upgrading, CB.
For CB and the AI-literate: Raev apparently posted their Lora over on CivitAI. The screenshots they posted of the Lora in action do include the prompts used if you click on them. Love the description for the Lora too. XD
0:39 I heard "it cuts kids in half" and for a moment I was like wtf before I realized
If you want quick, good results with AI art, use image-to-image instead of just a text prompt. You can quickly and crudely edit your character into the right pose, add some basic doodles in Paint for additional stuff you want in the image, and then just describe to the AI what you were trying to draw. Mess around with some sliders for better results, I guess.
This should work with free online Stable Diffusion thingies too, as long as they have img2img support.
You gotta use textual embedding (or Hugging Face "diffusers") to get good AI art of your avatar. You've already got like hundreds of source drawings, and really it should do well with only 5-10 of your model poses. Basically, it just teaches the AI what the avatar looks like. Look up Hugging Face diffusers, or Stable Diffusion embeddings; both do similar things, I believe.
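For reference, loading a trained textual-inversion embedding in diffusers is a one-liner. The file name and placeholder token below are hypothetical; you'd train the embedding on those 5-10 poses first:

```python
# Sketch: using a textual-inversion embedding so the model knows the avatar.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical embedding trained on the avatar drawings.
pipe.load_textual_inversion("cb-avatar.bin", token="<code-bullet>")

image = pipe("<code-bullet> sitting at a desk, digital art").images[0]
image.save("avatar.png")
```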
One of the AI image posts mentioned that they trained a "LoRA".
A LoRA is a sort of refinement model you add on top of an existing model. They usually fit in megabytes, instead of gigabytes for a full model. I'm guessing they trained the LoRA on either images from you, or fanart, or probably both.
CivitAI is the place to search for custom-trained models, LoRAs, or other techniques for refining/changing a model.
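Applying a downloaded LoRA in diffusers looks roughly like this; the file name is hypothetical, and CivitAI LoRAs ship as small .safetensors files:

```python
# Sketch: a megabyte-scale LoRA patching a gigabyte-scale base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("code_bullet_lora.safetensors")  # hypothetical file

image = pipe("code bullet in a room with old monitors, realistic").images[0]
image.save("lora_result.png")
```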
I love the tabs
"TV helmet - Etsy"
"Custom TV Head/Monitor..." (Also Etsy)
Try saying "TV for a head that has Matrix code on the screen with a black bullet in the center", as well as saying "faceless". The other option is using reference images to help guide the AI.
"this shits awesome it like cuts cans in half and stuff"
surely I'm not the only one who heard "kids" instead of "cans"
How did it take YouTube this long to suggest your second channel? I love your content!
Just in case you're still wondering how to generate the Code Bullet art: on Midjourney, you can upload the current Code Bullet image and ask in the prompt to use it as a reference to generate a new image, and you can also tell it in the prompt how you want the new image to look.
3:17 It looks like one minigame from the game "Watch Dogs", where people in suits with red ties and cameras for heads hunted you around town in the darkness. If they saw you for too long, their head would turn from green to yellow to red, then they'd run at you and kill you. It was horrifying.
8:35 "as one trained in the force, you know true coincidences are rare" -Kreia
mhmmmm definitely a Coincidence
His chrome tab reads "Tv Helmet - Etsy" lol
For the AI art, it's probably made with Stable Diffusion.
For the model, I can't tell, so I'll just name the ones I find best: Waifu Diffusion and Anything V5. The model heavily influences the quality. Anything V5 has much better quality, but tends not to follow your prompt. Waifu Diffusion is trained on Danbooru images, so there is almost everything.
As for the prompt, maybe it's a good prompt for DALL-E, but not for Stable Diffusion. Here is how I would do it:
(code_bullet:1.4), (cathodic television:1.2), green characters, (black hoodie), ((1 man))
Some explanation: brackets tell the AI how much it should prioritize something. You can stack brackets to emphasize even more (that's what I did with "1 man"). You can also add ":(some number)" if you don't want to write a thousand brackets. (Note that one bracket multiplies the weight by 1.1, and each extra bracket multiplies it by 1.1 again, so two brackets is about 1.21, three about 1.33, etc.)
You can also add styles to the prompt, which are essentially generic prompt snippets that upgrade the quality. You can merge several from the internet; that's what I do. Generally it can't make the image worse, but it may make the AI follow your prompt less.
You'll note I didn't specify the bullet in the middle of the screen, because it's really, really hard to pin that down precisely without the AI freaking out. If it has enough Code Bullet in its training data, it will know what to do. If it doesn't, your best bet is to make a quick drawing and use img2img. (AIs like Stable Diffusion and DALL-E are just glorified denoisers. In txt2img, it generates random noise and tries to denoise it following your prompt. In img2img, it adds noise to your image and then denoises it, which gives a similar result depending on the denoising strength: 1 is completely random noise, 0 is no noise.)
I have thought about making a video summarizing all the knowledge I've gained in a year. It's not hard to generate an image, but getting a good image is where it starts getting complicated. I can't summarize all my knowledge in a single YouTube comment, but I can tell you the most important things. If I wanted, I could talk about resolution, CFG, choosing non-mainstream AIs, model merging, seeds, and much more (the sketch below shows where a few of those knobs live).
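To make the knobs above concrete, here's a hedged sketch of where each one lives in diffusers; UIs expose the same settings under similar names, and the values are examples only:

```python
# Sketch: resolution, steps, CFG, batch size, and seed as actual parameters.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe(
    prompt="code_bullet, cathodic television, green characters, black hoodie, 1 man",
    height=960, width=544,            # portrait; dimensions must be multiples of 8
    num_inference_steps=80,           # the "steps" knob
    guidance_scale=7.5,               # CFG: how strictly to follow the prompt
    num_images_per_prompt=3,          # small batch to pick the best from
    generator=torch.Generator("cuda").manual_seed(42),  # fixed seed = reproducible
).images
for i, img in enumerate(images):
    img.save(f"candidate_{i}.png")
```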
Holy shit, pal, this is possibly one of the most informative comments I have ever seen. I mean, legit, I think this single comment gave me more than three hours of watched videos on this topic.
@@VasiliyOgniov Yeah, the thing is most videos actually say more or less the same things, sometimes without really knowing why it matters. I've actually used it for a long time, always experimenting, trying to see what upgrades the quality, looking up things when I didn't know what they did. I'm at a point where setting up parameters only takes me seconds. If I want a portrait, I put height at 960, width at 540, steps at 80-100 (lower doesn't give good results, and higher the difference is barely noticeable), batch usually at 3 (it lets you choose which image is best while not overloading your brain with too much choice), and then I write the prompts, in a way that for me is almost like a third language (yeah, I'm French, writing prompts in English, in a way that doesn't make sense in traditional English). Don't take my parameters as universal, though. Sometimes you may want another resolution (I'm on a 1660 Ti, so I can't generate 1080p images), more or fewer steps depending on your time, or other models depending on what you need.
Also, if you don't generate at full resolution, use Real-ESRGAN. It's an AI that upscales images, and when the image is 2x smaller than what you want, it gives near-perfect results. I think it's integrated into Stable Diffusion, but maybe not with the model (I never used it that way, I always used the original). If you are using the original, you can find a .bat online to automate the whole process so you don't have to go through cmd.
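Real-ESRGAN itself is usually run through its own scripts, so rather than guess at its Python API, here's the same generate-small-then-upscale idea sketched with the diffusion-based x4 upscaler that ships with diffusers (not the Real-ESRGAN tool the comment names):

```python
# Sketch: upscale a small render 4x. Not Real-ESRGAN, but the same workflow.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

up = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("small_render.png").convert("RGB")  # e.g. 256x256
big = up(prompt="a man with a CRT television head", image=low_res).images[0]
big.save("upscaled.png")
```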
I made a Code Bullet LoRA trained on video stills and fan art, and on some of them I used ControlNet to fix the Code Bullet icon.
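For the curious: ControlNet conditions the generation on a control image (edges, pose, depth, and so on), which is how you can pin down something as precise as the icon. A hedged sketch with diffusers; the file names are placeholders:

```python
# Sketch: ControlNet guiding generation with a canny-edge control image.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

edges = load_image("bullet_icon_edges.png")  # precomputed edge map (placeholder)
image = pipe("code bullet, tv head, green screen", image=edges).images[0]
image.save("controlled.png")
```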
I’ve had that mod for so long, and I love the scrunkly face he has XD, it’s beautiful
6:58 the stupidest thing i made just became the best thing i made
For AI generation: using Midjourney, you have the option to use reference images for the AI to generate something new off of.
"This subreddit is you guys showing how much more talented you are"
*proceeds to show the subreddit being full of AI Art*
I agree, but this is a programmer's channel so what can you expect, they're definitely 100% on board with art theft lmao
@@freetousebyjtc I would agree but since I'm a programmer myself who also loves to do artsy shit and is 100% against AI being trained on stolen art, I won't.
The one speed clip may do the same thing but fast,
but that’s no fun!
We’re here for the times where you remind us how much of a headache it is to get programs to do intermediate processes properly!
Wait, really, you will be at Open Sauce?
He will. Hoping he will have a CRT monitor mask.
@@adelalatawi3363 yeah. I never want to see his face. It would ruin my life.
Wait hes gonna be in open sus???
Thumbnail photo: Mommy bullet
Me: Cool
You can often give it a reference image, but I don't know more haha
"Let's not be elitist about the version of Windows, we're all using Windows, we're all friends here..."
*runs uname -s*
Linux
Interesting...
I got an idea for a video.
Make two AIs, one with one AI creation algorithm, and one with another, and pit them against each other in a chess battle.
Another week, another CB day off. Love it.🔥
Midjourney has an add picture feature that allows you to redraw images. That is probably how he did it.
I absolutely knew that was a bullet on the monitor. I never thought it was a battery at all.
7:42 "we're all using windows"
Me Linux user: ahuh
There are dozens of us
The sudden Code Bullet in my Backyard Scientist video blew my mind. Never expected it.
You can upload pictures to some AI tools, and they will use them as a base to make more images based on the prompt you enter. That is likely how they are making those.
He's using Stable Diffusion and a custom-trained LoRA model to generate the Code Bullet images.
7:42 Code Bullet: Thank you!!
10 seconds later: oh...
Can’t wait to see a guy with a giant TV on his head at open source
Cheers to those still not on Windows 10!
Upgrading is nice and all, but it's always annoying learning a new system, and... 10 is apparently pretty much a scam.
Video idea: Use AI to respond to hate comments for you. That way you get to laugh at that, instead of feeling down from reading the comment itself. :)
One trick I use for image generation is to use an AI to generate the prompt for the image generator. If the AI knows what you mean, it can generate a prompt that is way better and longer than you could write yourself.
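A toy sketch of that idea using the transformers library; GPT-2 is just a small stand-in here, and any stronger chat model you have access to would write far better prompts:

```python
# Sketch: let a language model expand a short idea into a detailed image prompt.
from transformers import pipeline

expander = pipeline("text-generation", model="gpt2")  # small stand-in model

short_idea = "a man with a CRT TV for a head, hacker room"
seed_text = f"A detailed, vivid image-generation prompt for '{short_idea}':"
result = expander(seed_text, max_new_tokens=60, do_sample=True)[0]["generated_text"]
print(result)  # paste the good parts into your image generator
```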
Alright, so the reason the clips channel isn't doing as well: 1) You have a separate channel for Shorts rather than using your other two higher-subscribed channels. Since the Instagram and TikTok accounts are your only channels on those platforms, the algorithm favors them. 2) This is more of a conspiracy thing, but I believe YouTube might use data from the other short-video platforms to push down those who cross-post, as opposed to those who create unique YouTube Shorts, to drive more people to YouTube for new content they can't get on TikTok etc.
Another thing you may want to do is more interaction on your short videos (liking, favoriting replies, and replying) and part 2/3 etc., as the algorithm likes to feed the second part of a Short to a person.
When i saw the thumbnail my brain did neuron activation
I'm thinking the AI art was done with Stable Diffusion, and what probably happened is they made a photobashed image as a base and asked the model to reimagine it in a different style.
Evan, you knew what you were doing with that thumbnail
"No-one likes the clips channel!! But Tiktok and Instagram are doing so much better!"
It's almost like different userbases gravitated towards different platforms because of different content format or something. :)
If I could set youtube to ignore shorts entirely and permanently, I would.
On Android, YouTube Vanced will let you hide them completely. For browsers, I use AdGuard, but most other ad blockers probably have a rule available, too.
I'm an Apple user :(. I wish there was an option to turn off Shorts built into YouTube.
I think the secret is using the Midjourney AI; that stuff is just so good.
I saw the thumbnail and immediately wondered when he turned down that path.
A lot of people use YouTube for longer-form content, whereas TikTok and Instagram are designed for short-form content.
Replace "form content" with "attention span" :)
The theme for Opera looks like it came straight from the early 2000s.
William Osman and CB should do a collab video of William making a TV with a hole in the bottom for CB to wear to Open Sauce, and post it as one of their weekly videos.
The way to get good images with an AI is to train your own AI image tool on specific pictures. Let's say you wanted your own character: you would put in a collection of Code Bullet images and tell it that this is "Code Bullet". It will then be more specific in what it makes.
I had your video in full screen, and your time at the bottom scared the shit out of me because I thought "friiick, I am late!!"
>we're all using windows
*sweats in linux* Yes of course, friends, we're all friends
9:15 code Bullet just being there is good enough for me
"We're just gonna scroll past that"
- the man who put 'that' in his thumbnail
Code Bullet, man.. STABLE DIFFUSION FOR AI ART GO BRRR
I didn't even know about Code Bullet Day Off before this dinner
0:37 *laughing* "that shit's awesome, it, like, cuts kids in half and stuff"
It's cool to see people not complaining about AI art for once. Granted, it's because they're looking at it from a programmer's perspective and not an artist's, so it's not "this is stealing jobs from actual artists", it's "holy shit, this was made by a computer? That's fucking sick".
CB will be in a corner, writing an AI to watch corn for him, so he doesn't have to.
"We're all using windows, is all good"
Me watching on my Fedora Linux: 👀
2:36 Maybe with Stable Diffusion or Midjourney?
AUTOMATIC1111's UI is the best for AI art, Mr. Bullet
7:39 > We're all using Windows, we're all friends here
I'd like to interject for a moment. I use Linux and everyone should as well.
Fun fact! At least one of the lasers backyard scientist uses comes from the company I work at!
It would be cool if you had a machine learning program running live during the whole event.
T.V ti- Control yourself, focus on the code!
Left brain: TV boobs
Having made a few AI-generated images myself (my profile pic is AI-generated, except that I turned the face into darkness, because the AI didn't want to cooperate on that part): you have to be specific. Like what type of TV you want, what you want to have on the TV, clothes, colors of the clothes, what kind of scenery, and optionally maybe some effects like darkness, mist, lights, etc., and maybe the perspective you want: front view, back view, wide angle, etc. Then generate images multiple times until it produces something you like.
Sometimes the AI just doesn't know the meanings. Like, it doesn't know what Hogwarts is. Or sometimes it gets confused and completely ignores a prompt, like "short hair" and "hair slicked back": the AI art website I use just refuses to make slicked-back hair if there's short hair in the prompt; it just gives short hair. And even then, it messes things up. Like, as you saw: instead of a TV AS a head, it put a head IN the TV.
So, just generate multiple images. After a few images, you can give more prompts, and more specific and more defined descriptions.
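The "generate multiple times" part is easy to automate. A small sketch that rerolls one detailed prompt across several seeds, so you can pick the image where the TV is actually the head (prompt and model are placeholders):

```python
# Sketch: reroll the same detailed prompt over several seeds, keep them all.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("a man with a 1970s CRT television as a head, green static on the screen, "
          "black hoodie, foggy room, front view, dramatic lighting")

for seed in range(8):
    gen = torch.Generator("cuda").manual_seed(seed)
    pipe(prompt, generator=gen).images[0].save(f"take_{seed}.png")
```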
Different types of people on different platforms. I for certain hate shorts and clips and am probably not alone. I want the full phat codesplaining. :)
Ha ha ha nice job confirming the bets with Will xD
5:04 You know what you can do: try registering a trademark, then you enforce it and they'll have to cede the channels or something.
Midjourney allows you to add images as references.
Man, you should absolutely have a TV-shaped LED mask made for upend sore, that would be amazing.
0:31 "that shits awesome! It like cuts kids i half and stuff!"
weird thing to say
7:39 "We are all using windows" Me: Linux Moment
"just gonna scroll past that" and then he makes it the thumbnail.
For AI imagery, he mentioned using ControlNet, which is a Stable Diffusion extension.
3:10 image to image. either using your avatar or pasting a tv on top of some other model and then letting the AI clean it up. That would be my guess, at least.
Here is how I used to get the AI to generate it: "Make a pic of a shredded guy with big muscles in full HD, but his head is a classic 1970s TV with Matrix static, and in the center there is a cutout where the Matrix stops, in the shape of a bullet. (The background must be a foggy room with a bunch of hacker-lookalike setups with green code on the monitors.) The whole scene should be a 4K 3D render."
"we're all using windows" nuh uh I use linuck
6:21 this is for my friends.
This is the secret code bullet video
To be honest I never noticed the bullet on the screen I always had thought it was a battery
Part 2 consistently getting more views than Part 1. Nice
Prompt engineering and using reference images as part of the input are part of it.
AI is smart, but not magic, and sometimes you need to explain what you want in a way the AI can understand.
You know, Bullet, your Reddit wouldn't need to do the whole coding thing if you were doing it XD
HOW HAVE I ONLY NOW DISCOVERED YOUR SECOND CHANNEL
7:29 its his mother code gun
Code bullet do be having bigger booba than Luca Kaneshiro
of course you scroll past the clone that has jiggle melon