So this actually made the fears I had for artists go away. Yes, the AI does well at extending and removing from existing images, but it seems the AI has trouble creating things depending on the context of the base image as well as the description of the prompt.
This is something that will most likely improve in the future. I don't see it as an issue now, but I can see how this will become a problem for artists down the line.
If this is the definitive AI tool for Photoshop, I'm OK with it. But they will probably improve it to the point that it gets super realistic and harder to notice that it's Photoshop, and years of learning Photoshop and design become obsolete.
The fill area is 1024x1024, so the smaller the selection, the higher the effective resolution. Also, one word is probably not going to give super results. Case in point: "stars" gives artistic stars, star symbols, the star shape, but "realistic twilight sky" might give a better result of the first few stars peeking through the dimming sky.
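The resolution point above can be put into numbers. A quick sketch, assuming the 1024 px per-side cap mentioned in these comments (an assumption taken from the thread, not from Adobe's docs):

```python
# Effective upscaling when a capped generation is stretched to fill a
# selection. GEN_CAP is an assumption taken from the comments above.
GEN_CAP = 1024

def upscale_factor(sel_w: int, sel_h: int) -> float:
    """How much the generated pixels get stretched to cover the selection."""
    return max(1.0, max(sel_w, sel_h) / GEN_CAP)

# A selection at or under the cap is filled natively:
assert upscale_factor(1024, 1024) == 1.0
# A 4096x2048 selection stretches every generated pixel 4x,
# which is why small selections look crisper than big ones.
assert upscale_factor(4096, 2048) == 4.0
```

So halving the selection size roughly doubles the pixel density of the result, which matches what people are seeing with the side-by-side extensions.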
I read on Adobe's page that while Generative Fill is in beta, it can't be used commercially. I'm not sure if they meant for creating commercial assets or for any commercial product, as it wasn't all that clear to me, and I have zero interest in using it myself, but I thought I'd pass that along.
I feel like with large open areas like fields and sky, it does well with no guidance, but for other things you need to be more specific. Take the futuristic city, for example. It was showing only one or two buildings, really crisp, because it didn't have any reference for what was happening. I think it was trying to generate as if you are IN the city walking through it, hence why everything is just a few buildings and looks closer to the camera. If instead you put something like "futuristic city in the distance," I feel like you'd get much better results. Basically, you need to give context and be more specific; otherwise it will just generate something based on what it already knows or thinks it should be doing.
To be fair, the placement of the selection and the image provided should give it enough context to get at least the distance right. That's the power of these tools: their ability to "understand" the content of an image rather than systematically going over it like Content-Aware Fill.
I agree with Benny saying that it's just a tool. I'm working on a piece right now and I need a submarine image to add in, but I cannot for the life of me find anything that works. I didn't know about this feature until watching this, so now I want to try it after seeing the dinosaurs. I saw the T-Rex as a template. I don't need a perfect image to insert and be done with; I need a well-shaped, well-angled template that I can then add to with lighting, coloring, texture and more when I can't find what I need from other sources. It's a tool. I hate seeing people abuse it, but Benny has the right idea, and I want to see if I can get what I need out of it so I can finish my piece. If not, I'll just keep searching! Hopefully others will see it that way too. It's just a tool!
Underwater generations also work better if you turn down the opacity of your selection BEFORE you generate an image. It helps blend it better with the water.
I'd love to see one more episode, but with more detailed, longer, and more complicated prompts, to see if Photoshop AI can be more useful and specific in its work. Please consider that; maybe some of your viewers could help by giving you such prompts :)
There are some things I think it will be really useful for, like the extend ability. I've sometimes found model images I think could be really good, but the photographer had cropped off some of the hair or such. Or making a landscape or city scene slightly wider, instead of doing a lot of cloning.
On top of the many other comments, I've noticed many uses of AI-generated photos come from very specific prompts. So rather than just typing the thing you want, you describe how you want it to look. For the starry sky, for example, I would be specific and say the Milky Way, or a starry night sky with galaxies, etc.
I do not have mixed feelings about AI. I love it. It'll never replace actual human art, but it will give artists extra tools. I could see it being super helpful in developing quick mock-ups or drafts to have a good reference, and then the artist can go in, change things they don't like, and sharpen up the image as a whole by making it feel more cohesive. It's an amazing tool. It's especially going to be useful for replacing objects and photoshopping stuff OUT of an image, more than it is for photoshopping things in. It'll be useful for wallpapers and quickly making stuff like that.
AI "art" still needs a lot of hand-holding. The pictures people post have usually been massaged for hours, often by taking the results and fixing them up in Photoshop. I think the best use for this tech is brainstorming.
Each generation is limited to 1Kx1K, so if you're extending an image, doing each side as a separate generation will give you much higher-resolution results.
Same with the faces; they looked horrendous because they were scaled up from 1024x1024.
Can't wait for papers down the line where resolution won't matter much beyond 512x512.
@@hipjoeroflmto4764 what a time to be alive !
@@Likou_ reference to two minute papers? :)
One note on the extension part: currently it only supports 1024 pixels, so anything above that will be stretched. If you generate the missing parts in smaller 1024x1024 bits, they will look crisp af.
Yeah, I hope it will be much higher res once it comes out of beta.
That's what I was coming to say lol
It's kinda odd that this limitation exists, similar to Midjourney and other visual AI tools. I mean, Photoshop already has its own AI-based upscaler, so you'd think it wouldn't be a problem for them to combine the generative AI with the existing upscaler technique. Maybe they're trying to perfect it in a different manner instead of using old systems that could be outdated.
I just find it odd they all hit this 1024-pixel limitation, as if they all share the same base code structure and development.
Hey Benny. I really hope, if possible, you're able to make your own version of the Across the Spider-Verse poster. And if he has made a video on it, please comment the link below 👇
@@AlphaYellow It's easy enough to make an action to extend and tile the selection if you are trying to extend an image, but generating large items is a bit of a pain.
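The tile-and-extend idea above boils down to coordinate math. Here's a hypothetical sketch for planning the chunks (not a real Photoshop action; the 1024 limit is the one the comments mention, and in practice you'd drive each selection by hand or via Photoshop scripting):

```python
# Sketch of the chunking workaround described above: split a large
# canvas into tiles no bigger than 1024x1024, so each Generative Fill
# pass runs at full native resolution. This only plans tile coordinates;
# the actual fill happens in Photoshop.
TILE = 1024  # assumed per-generation size limit, per the comments

def tiles(width: int, height: int, tile: int = TILE):
    """Yield (left, top, right, bottom) boxes covering the canvas."""
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            yield (left, top, min(left + tile, width), min(top + tile, height))

boxes = list(tiles(3000, 2000))
assert len(boxes) == 6                        # 3 columns x 2 rows
assert boxes[0] == (0, 0, 1024, 1024)         # first full-size chunk
assert boxes[-1] == (2048, 1024, 3000, 2000)  # edge tiles are clamped
```

Edge tiles come out smaller than 1024 on a side rather than spilling past the canvas, which is exactly what you want when filling chunk by chunk.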
One of the most useful cases for this tool will be to generate an object and use the way it interacts with light as a reference to put your own object in, since it seems like the AI is pretty good at that so far.
Soo true. I just thought about this
I agree!
After watching this video, it is confirmed that Digital Artwork creators are safe now ....
Yeah for the next 5-10 years maybe
For now, maybe. Remember this is in Beta, and the beginning of all this crazy AI stuff.
This is Beta, it will improve at an exponential rate. Look where Midjourney was a year ago and where it is now.
You didn't see the evolution of Midjourney over the last 6 months, then.
Benny is an amazing artist, but he is a beginner prompter, which makes a huge difference, and Firefly is way behind Midjourney, so this is no reference. I hate AI, and for me it should be banned from almost every area. This AI stuff will replace many artists in the next months, and probably all of them in 2-3 years.
I think the point isn't to create a final picture but rather a good base image.
It's not a failure, it's a start.
Yes, and AI is still so much in its early years. It has just been adopted in real products by all these big companies this year, which means there's a lot to happen. Of course, AI attaining self-awareness might be a longshot, but the smaller things it does in so many products are amazing and bound to get better.
Dude, ChatGPT happened only a year ago... and that was the start of this whole AI boom. Now, after only 1 year of AI hype, we have a tool like this to work with. Imagine 2 or 3 more years...
Longer prompting would probably help.
This can be useful in concept art, so the artist can draw over and refine certain things how he likes and can get new ideas through it.
@@IEatFatClitatm There are no known cases of artists losing their jobs because of AI. It will happen, but it hasn't yet.
A note on AI is that it will probably work better if you put more detail into the prompts so it knows what you want and the style you want. For example, when you asked for "stars" it gave you stars, and when you asked for a "dinosaur" it gave you one, but you need to tell it the style you're going for.
Yeah, but not too accurate.
Benny would have to merge the new part of the sky into the original for it to work properly. The way he did it, the AI was not considering the whole image or even the original image; it was just a prompt for stars with no content to base it off of.
It was like he was using a white canvas to generate, capiche?
@@leonardosouza2153 Wow that's actually smart lol, I'll keep that in mind for the next videos :)
@@leonardosouza2153 Does it not? Because when he just gave a blank prompt it would extend the image in accordance with the original work
@@AytherAlt Yes. But in that case he would need to prompt: "Fill the image with a sky full of stars."
Exactly right, AI right now is about prompt engineering. The better the prompt, the better the outcome.
I fully lost it at 6:13😂
Someone fed ai too much memes😂😂😂😂
5:44 I like that aspect of not being able to recognize anything, it's kind of a vibe
Oh Benny, you had to give some detailed prompts, like "a hyper-realistic dinosaur with 2 arms matching the background fog, illumination, ultra-realistic render." The same goes for everything else to get closer to perfection.
But it said that in that Photoshop tool you don't need to add specific words like "ultra hd," "4k," "octane render," etc... I agree, though, that you can add a more detailed prompt for the subject 😉
6:13 got me off guard what the heck 😂😂
Protip I discovered: you can vary the opacity of your selection for Generative Fill, i.e. if your selection has 1% opacity, then Generative Fill will only enhance the selected area. So you can work on the whole canvas without caring about the 1Kx1K limitation, and then enhance it chunk by chunk. A higher percentage would filter your image accordingly.
It's all about the prompt input: when you, for example, put "stars" and nothing else, it won't understand that it needs to blend them with the background. But if you get very detailed in your description, it will understand what to do much more easily. Interesting video nevertheless! Thanks Benny! Also, hands and feet have always been a struggle for AI. It's getting better, but since this is a new feature for Photoshop, it will be very limited, I guess.
12:25: Vietnamese Superman is something I never knew we needed
Benny, I am happy you are finally starting to see the AI as a tool. You are an amazing artist and should never be afraid of technology. I am old, so I know the feeling. I've followed you for a long time, and you should take advantage of the tools you have at your disposal; they won't go away. We artists will always make the results it generates better. You have the knowledge to take it to the next level; most people don't. Using the light it generates as a guideline for your placed objects is already a great thing to have.
It would be amazing to see you taking it to the next level, using your skills on top of it. Maybe in a future video? Thanks for all the great content.
It would be cool to see you using the Photoshop generative AI and editing some of the results. Like the futuristic city: you can add your touch to finish the composition, but the base is the AI. 😊
Benny, this is clearly a tool that can be used for great inspiration and great results. Please don't dismiss it; it would be cool to see you integrate it more into your process.
I was waiting for you to select an entire blank project and try to make a photo from absolutely nothing; that would be kinda cool. But I appreciate your work, man. Keep it up!
I think they use the Adobe Stock library to train the AI, which explains the weird Supermen. Limited as it might be, at least it seems they respect copyright laws 🎉🤗
It would also explain why content such as the dinosaur, robot, and futuristic city looked unrealistic--there aren't many good references from stock images for the AI to use.
I was waiting for your reaction to this new AI tool and you showed very well its advantages, disadvantages, limitations and uses🙌
In some parts of the clip I just burst out laughing out loud and I really love how you combine your humor with your photoshop skills!😂
Keep going!
I’d love to see you use this to create a composite using only Firefly! Generate different things and then edit Photoshop’s generation and make it better.
We need some type of Benny vs AI Challenge!!
As a designer, I'm scared of all this AI stuff. It's not necessarily the AI but more humans themselves. I've seen many messages about how people are happy that there won't be a need for designers, and also some greedy people are considering replacing designers with AI because it doesn't cost a thing and can do in a few minutes what takes designers much longer...
Also, I've noticed that on Instagram real artists are now completely overshadowed by AI "artists," and real artists' work is barely seen now.
I feel u
I get where you're coming from. I'm also scared, as a designer, that AI is taking the credit over us designers who put a lot of effort and time into creating our pieces.
You can paint in your quick mask with a grayish color, press Q, and run Generative Fill, and it will generate an image, but less intense.
Don't mistake it for opacity; it will be opaque, but the generated image will blend more with the original image.
Perfect for the test with the sharks and the stars in the sky.
This is a great demonstration of how much your prompt inputs matter, single word prompts can give some crazy responses and refining it can gradually bring a result closer to that of your original intentions
photoshop generative fill is funny as hell, played with it for the past week
ABSOLUTELY DIED AT 6:14 😂😂
What you're supposed to do, Benny, is use Stable Diffusion-style prompting. You have to tell it the style and quality and stuff. Remember, it's still a machine and can't quite always pick up the inference. Here are some examples: (best quality), (photorealistic), and the lighting features, (subsurface scattering, realistic lighting). Hope that helps. Most AI also has a negative prompt, so you can tell it specifically what to avoid.
AI adding a third limb is always really funny to me
At 6:04 😂😂😂
Benny is so funny😂😂😂
His reactions😅 and the editing of the video is great.
I love this channel❤
that part killed me XD
One interesting thing about the Photoshop AI is that it's the most ethical one we have right now. It was trained on Adobe Stock, not on pictures stolen from the internet, which means that Adobe actually owns these pictures and the people who took them got paid, even though they never agreed for them to be used in AI training.
Wow, that's super interesting! The widespread stealing of artists' work really bothers me, so I hope other companies follow suit in the future (with opt-in contracts or something for artists' work).
I don't know if anyone else has mentioned this, but more detailed prompts work wonders. You could definitely form literally anything you want with some extra detail.
I like that when you generate something on water, it makes the reflection of the object on the water. It's so cool.
13:25 The AI flicking you off is hilarious! 😂
As someone who's been using Photoshop for decades and AI generative art for almost a year, I have similar feelings. It's another tool!
I see people spending hours and hours trying to get hands right with AI generators; I fix those myself in Photoshop in minutes. I see people spending days learning latent coupling and multi-region control to be able to generate a man with black hair on the left and a woman with red hair on the right; I photo-bash this in Photoshop in minutes for a good starting point. I see people struggling with VAE files because the output colours are washed out; I fix that in Photoshop in minutes with a Hue/Sat layer.
Pretty much any particular problem people are having with generative art, I can fix in Photoshop in a fraction of the time it takes them.
Then there's getting the light right: I paint it in in Photoshop. Fog, rim light, shadows, smoke, god rays, haze, etc. All of this can be done much quicker in Photoshop.
Those of us who know Photoshop can get superior results so much quicker than the rest, because we use all of this as a collection of tools, not a one-click solution to everything.
Benny doesn't need Generative Fill; Generative Fill needs Benny.
I haven't used it, but I have heard that the more detail you provide, the better, because it gives the AI more data to work with.
One of the best Photoshop tutorial comedies ever 🤣🤣🤣 You should try making more like this.
Thank you for showing all the failure cases. All the other videos make it seem like it's perfect... and I was getting some mediocre stuff, and sometimes downright bad, with only a few good ones. I thought it was me. Glad to see you also got some terrible results. No doubt it will get better, but right now it's still a bit of a gimmick for me.
When I have used AI, it usually helps to write that it should be super realistic and, for example, that the image is of high quality. The more description the AI gets, the better it will be.
Keep your guard up everyone! This is Ai’s first step to taking over the world 😂
Great video as always Benny 💕
That kind of task is for quantum computing.
You're laughing now, just wait.
I like how this video proves prompt engineering is a skill. One-word prompts don't get good results. It's not the skill of an artist, but a skill nonetheless.
Bro, what a video. I literally laughed through like half of it LOL
Good😂
Great video, Benny, but I would love to see you try it again after learning about the AI prompts and stuff! Let's see how far your imagination can get you.
Great video, Benny. I have some mixed feelings about anything AI-related, but less from how scary it might be and more about whether or not it's ethical. Idk, anyway, this was fun to watch 👍
Video idea: create a wide poster using only AI. Choose an adventure-plus-fantasy theme and generate everything with Generative Fill, even humans. You can only adjust shadows and highlights with minor tweaks.
This was a hilarious and entertaining one! I love your work Benny, keep it up.
Awesome upload, exploring these new PS options in a comical way is a great idea for new content. Keep it coming!
9:02 Very simple, shouldn't be too hard. Those stars were the funniest.
I got an ad for photoshop’s ai tool mid-video 💀
Yeah... we're gonna need an AI Realistified series now.
Love your vids! Keep up the good work :)
I was cracking up the whole time. Loved this!
Benny Productions' workflow is about to change because of this.
I've been doing composite art and 3D rendering since the mid/late '90s (besides my "real" job as a web and graphic designer). This whole AI stuff is fancy, but at the moment it definitely lacks even basic quality for more or less professional use at sizes larger than a mobile phone screen. I'll wait a few more years and improvements before considering it. For now, I'll stick to good old manual compositing: a tad more time-consuming, but well worth the effort. BTW, I discovered your channel not long ago, and I must say, I absolutely LOVE your work. Kudos!
Great video man! BTW, didn't realize you were sunburnt until you pointed it out 😂
In other videos, you almost always see only good or even perfect results. But my tests looked more like what you got as well. I thought it was me and I was using the software wrong.
Even tho it didn't always do a great job, you gotta remember it's still in beta, and unlike DALL-E or Midjourney it's trained on the Adobe Stock library, so everything you generate will abide by copyright laws. I've been trying it out for the last week and it's actually insane; using it in conjunction with DALL-E especially can produce some really amazing results 😁
Also, he was doing bigger than 1024x1024! And we gotta remember, coming from our current ways, we would have to purchase those images in order to use them, here we're getting instant infinite options for 'free', and even if it looks bad at first, we can tweak them to fit perfectly like how we would do with an image we paid for online.
It's trained on Adobe Stock? That's brilliant, finally a legally stable AI software!
I've been working as a professional retoucher since 2004. This tool is amazing; however, you can't rely on it in its entirety. It helps in constructing small areas of an image within a constructed composite, but it's not a tool you can use at this moment for a full-on composite, as many people are hyping it. It's pulling its generated images from Adobe Firefly, which has been trained on Adobe Stock, so it's gonna give you all these funny results. For example, the dinosaurs you generated: you could use them as a starting point and then build from there with intricate detail work in Photoshop, using the traditional methods digital artists use. I noticed that using the Rectangular Marquee tool on huge landscape areas seems to work best, as there are probably loads of these types of images in the data sets from Adobe Stock.
I have been waiting for this video. Thanks Benny!
forgot how much i enjoy your videos KEEP IT UP!
You're sooooooooooo good at photoshopping ❤ Keep up the great content
This is what beta does, imagine what version 2.0 in 2 or 3 years is gonna be... scary
been waiting for you to try out this feature in photoshop. this is great
So this actually made the fears that I had for artists go away. Yes, the AI does well at extending and removing from existing images, but it seems the AI has trouble creating things depending on the context of the base image as well as the description of the prompt.
This is something that will most likely improve in the future. I don't see it as an issue now, but I can see how this will become a problem for artists down the line
My role model 🫵🔥
😂😂😂Hilarious tryouts!
The cutlery part and humans having a picnic
If this is the definitive AI tool for Photoshop, I'm OK with it. But they'll probably improve it to the point that it gets super realistic and harder to notice that it's photoshopped, and years of learning Photoshop and design become obsolete.
I dare Benny to make a photoshop piece where he can only use references the AI gives him!
the more description you give it the better the result will be
I've been waiting this for a while ❤❤😂😂😂😂😂😂😂
The shark just gave me the laugh flash of my life.
Bro… I spent 10 years since I was a teen perfecting my craft in Photoshop………. And now people can just do it in a few clicks……. I wanna cry
Enjoyable content. It’s nice just to watch you being you
The fill area is 1024x1024, so the smaller the selection, the more resolution. Also, one word is probably not going to give super results. Case in point: "stars" gives artistic stars, star symbols, the star shape, but "realistic twilight sky" might give a better result of the first few stars peeking through the dimming sky.
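A rough sketch of the resolution math behind this tip, assuming (as described in the comments, not an official spec) that each Generative Fill pass renders at most 1024x1024 px and is stretched to cover the selection:

```python
import math

CAP = 1024  # assumed per-pass generation limit, per the comments above

def effective_detail(sel_w, sel_h, cap=CAP):
    """Fraction of native detail a single fill pass delivers for a selection."""
    return min(1.0, cap / max(sel_w, sel_h))

def tiles_needed(sel_w, sel_h, cap=CAP):
    """How many cap-sized passes would cover the selection at full resolution."""
    return math.ceil(sel_w / cap) * math.ceil(sel_h / cap)

# A 2048x1024 sky extension done in one pass is stretched to ~50% detail,
# but splitting it into two 1024x1024 passes keeps each tile crisp.
print(effective_detail(2048, 1024))  # 0.5
print(tiles_needed(2048, 1024))      # 2
```

This is why extending each side of an image as a separate, smaller generation looks sharper than filling one big selection.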
6:16 bro avoided all lose hell to not say a racist joke😹😹😹🤣
I have been having so much fun with the new Generative Fill!
I feel like using this similarly to how content aware is used is the perfect way to get the best results with it
Nobody:
Absolutely no one at all:
Benny: It’s blurry
Maybe flip main cam in post so it's not too distracting when cutting from desktop cam setup to full cam?
haha so good man, very funny.
I read on Adobe's page that while Generative Fill is in beta, it can't be used commercially. Not sure if they meant creating commercial assets or any commercial product, as it wasn't all that clear to me, and I have zero interest in using it myself, but I thought I'd pass that along.
6:12 this made my day lmao
As a tip, use more descriptive words like realistic, shiny, foggy, big, 8k, cinematic and so on.
I feel like with large open areas like fields and sky, it does well with no guidance, but for other things you need to be more specific.
Take the futuristic city for example. It was showing only one or two buildings and really crisp because it didn't have any reference of what was happening. I think it was trying to generate as if you are IN the city walking through it, hence why everything is just a few buildings and looks closer to the camera. If instead you put something like "futuristic city in the distance," I feel like you'd get much better results. Basically, you need to give context and be more specific, otherwise it will just generate something based on what it already knows or thinks it should be doing.
To be fair, the placement of the selection and the image provided should give it enough context to get at least the distance right. That’s the power of these tools. Their ability to “understand” the content of an image rather than systematically going over it like content aware fill.
Thank you, Benny, for the reality check. I need it.
I think the biggest thing I have noticed is people expect AI to speak "human." For example, with the stars, it did exactly what you told it to.
Exactly haha! It does get you the funniest results though🤣
I agree with Benny saying that it's just a tool. I'm working on a piece right now and I need a submarine image to add in, but I cannot for the life of me find anything that works. I didn't know about this feature until watching this, so now I want to try after seeing the dinosaurs. I saw the T-Rex as a template. I don't need a perfect image to insert and be done with; I need a well-shaped, well-angled template that I can then add to with lighting, coloring, texture and more, only when I can't find what I need from other sources. It's a tool. I hate seeing people abuse it, but Benny has the right idea, and I want to see if I can get what I need out of it so I can finish my piece. If not, I'll just keep searching! Hopefully others will see it that way too. It's just a tool!
I'm surprised Universal Studios hasn't got you for a poster lol
Underwater generations also work better if you turn down the opacity on your selection BEFORE you generate an image. It helps to blend it better with the water.
I laughed hard so many times in this video! Benny’s reactions and the edits were so funny 😭
I'd love to see one more episode, but with more detailed, longer and complicated prompts to see if Photoshop AI can be more useful and specific in its work. Consider that please, maybe some of your viewers could help with giving you such prompts :)
hi benny, your content looks amazing keep up the good work brother. 🙂
6:50 I'm wondering if it works better if you merge your layers?
Love your vids! We need some type of PS vs Stylar, PS vs Midjourney!
There are some things I think it will be really useful for, like the extend ability. I've sometimes found model images I think could be really good, but the photographer had cropped off some of the hair or such. Or making a landscape or city scene slightly wider, instead of doing a lot of cloning
On top of the many other comments, I've noticed many uses of AI-generated photos come from very specific prompts. So rather than just typing the thing you want, you describe how you want it to look. The starry sky, for example: I would be specific and say the Milky Way, or a starry night sky with galaxies, etc.
The sharks got me 😂😂😂. And also the stars
I do not have mixed feelings about AI. I love it. It'll never replace actual human art, but it will give artists extra tools. I could see it being super helpful in developing quick mock-ups or drafts to have a good reference, and then the artist can go in, change things they don't like, and sharpen up the image as a whole by making it feel more cohesive. It's an amazing tool. It's especially going to be useful for replacing objects and photoshopping stuff OUT of an image more than it is photoshopping things in. It'll be useful for wallpapers and quickly making stuff like that
Until brands don't want to pay artists as much because... well, "AI does half of your work." Then it becomes a problem
AI "art" still needs a lot of hand-holding. The pictures people post have usually been massaged for hours, often taking the results and fixing them up with photoshop. I think the best use for this tech is brainstorming.