I love how when you add "matte painting" it bails on its understanding of Art Nouveau. This is in keeping with my understanding of matte paintings and set designers.
Haha! I think it's funny how it shows you the formula that us matte painters have been using for years. "Really, if we make the foreground elements dark and the far-away ones bright, that's the formula for a matte painting? Is that really all I do?" :)
Thank you for the meaningful breakdown of experiments. This was definitely a question I had about Midjourney as I did my first tests. This research helps out a lot!
Thanks Kevin. Ya, I've found a general lack of experiments of this type, so I decided to do them myself and report on the results. Always more to do though :)
Hi Neil, thanks for your work on the descriptive table. As I am now also "working" with Midjourney, I have had similar experiences. Sometimes you think that the AI clings to a shape. Often I recognized an artist's style, although none was entered in the prompt. After the first "wow" you quickly come to the limitations. The program could also be called a perfect photo basher. It finds it difficult to work out new concepts, which on the other hand is logical. But still a fantastic program and idea generator.
Glad you liked the video. Ya, after the initial shock, you realize where the limitations are, although a lot of those limitations I think revolve around the dataset it was trained on. Newer software and/or better datasets may improve the results. We'll see how this technology evolves. Looking forward to one day getting access to DALL-E 2 so I can run it through a similar battery of tests.
From my understanding, Midjourney weights the value of each term used, so yes, more words can confuse your image. It also means that Midjourney may choose to ignore particular terms on a prompt-to-prompt basis. That being said, I think you've found some helpful info. "Photorealistic", for instance, might affect your image in the same way as "film still" or "octane render", depending on the prompt, your use history, etc.
Ya, there are certainly more complex tests to be done. Like, someone told me the term "realistic" does in fact do a lot if combined with the keyword "line drawing", but not if used alone. But this video isn't meant to be definitive, I'd need far more time for that; hopefully it's outlined a process so you can experiment on your own and see what each keyword does.
One change with the keyword "macro" that you might have noticed, but did not note, was the addition of an almost insect-like look to the objects, especially the robots. Even the stuffed animals look more bug-like.
You know, I hadn't noticed that, but you are totally right, it does add like 15% more bug to the designs. That's both creepy and fascinating, I guess it had a lot of insect macrophotography in its training. Thanks for pointing it out!
Very helpful! I've slowly grown out of using most of the "least" useful ones as I've messed with the AI more. Looking forward to more!
Thanks, glad you liked it!
This is great stuff Neil! I've been wondering about and experimenting with some of the same things. Thanks for the detailed side by side comparisons, you've just saved me many GPU hours 😃
No problem Jason. Ya, many people are focusing on the more experimental side of the program, I wanted to run it through some very specific tests so I'm not wasting my time with unnecessary verbs and adjectives. Glad I could speed along your process :)
Someone posted on Midjourney that the AI is really training us to see the world differently. That stuck in my head.
Wow thank you! I love your patience!
Glad you enjoyed it Annie!
been doing these experiments as well, really insightful, thank you
Thanks, glad you've found them helpful!
I loved this video, it is really valuable, and I hope for many more like it.
As I had a large focus on study design in my university education, I can see a lot of problems with this methodology, so while I don't necessarily think the explanatory value of this experiment supports your conclusions, I am still really happy you did it.
Thanks, and subbed :)
It is certainly not a perfectly scientific test, but it was the best I could do with the limited access to the algorithms that I have :) Hopefully one day we'll have more access to the guts and I can go further, but in the meantime, glad you found it interesting!
Good information - I find positioning the view the most frustrating, especially as there are dedicated keywords in the MJ manual that seem to do nothing.
Please excuse the side note, but it is fascinating how 'strong depth of field' is now used where we used to say 'shallow depth of field', and 'good depth of field' has now become 'no depth of field'. It seems back to front to old-school photographers.
Haha! I know, I came at it from the 3D rendering side, where DOF is commonly viewed as a blur effect, so strong DOF means a lot of blur and no depth of field means no blur. But photographers have it correct, it's just a hard habit for me to break :D
More videos like this please. Thank you.
Thanks! Will keep it in mind.
I use the Pokemon test. Add a little Pikachu and then Mewtwo to any prompt to kinda see where it is in the cloud.
3:11 It likely IS in there. Just the seed you are at and the prompt weighting failed to find it because of the LIMITS of the data in the set. That prompt will give you more 3/4 heads across different seeds than saying "front view" would.
This is interesting... unfortunately, since the images are all different seeds, the subtle differences are much harder to spot. I'd lock down the seed when comparing these things so you can get a more 1-to-1 comparison, using "--seed 1" etc. I've noticed some prompts resonate and amplify better in certain combinations, so it's possible different subjects or prompt combinations might show a stronger effect than you see in simpler combinations or in these particular examples. Keep in mind also, the simpler your prompt, the more MJ will fill in with its own "flavor", which may also affect (or mute) your results. Anyway, this is all fascinating.
Thanks Ben. Ya, one of the reasons I did multiple examples of each prompt was to get some sort of "average" image; I didn't want to do only a single image per test prompt and call it done, since a single image could be misleading and random. But good idea on the seed value, next time I'll experiment with locking that down and see what the results are. And yes, multiple-word combinations I'm sure can have an effect, it's just hard to test because that involves either changing more than one variable at a time, or an exponentially larger number of examples. I've had a lot of success doing more complex prompts and removing a keyword I was interested in testing, instead of adding a single keyword; maybe doing more of that would be a good test. Anyways, this was meant to get people looking more critically at prompts, and hopefully that succeeded, and people can try out their own tests. Thanks for all the feedback!
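For anyone who wants to try this at home, a minimal single-variable test might look something like this (the subject, keywords, and seed value here are placeholders, not the exact prompts from the video):

/imagine prompt: a robot head, front view --seed 1
/imagine prompt: a robot head, front view, macro --seed 1
/imagine prompt: a robot head, front view, detailed --seed 1

With the seed pinned, any difference between grids should come from the one keyword you changed rather than from the random starting noise.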
@@ArtOfSoulburn Cool thanks for the engagement. On a whim, I tried running some --sameseed tests of my own after watching your video, and found it similarly hard to really spot the nuance, but my subject matter might have been part of the problem. Anyway, good food for thought.
@@chromafresh Glad I inspired you to do some further tests. There is an element of black magic here :)
--sameseed seems to work best, since it'll do all 4 with the same seed and not just the first image in the top left of the grid. Clicking any of the buttons on an image will change the seed, but Midjourney uses a proper img2img on their generated images, so it's basically like using the same seed.
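So, for example (the seed value is arbitrary and the subject is made up):

/imagine prompt: a stuffed animal, macro --sameseed 1

pins the starting noise for all four grid images at once, while --seed on its own only really locks down that first image.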
This video is an incredible resource that answers so many questions I've had about what the heck all those keywords are actually doing. Any chance you could share your "mega images" on your site?
Glad you enjoyed it. Yes, visit here: www.neilblevins.com/art_lessons/midjourney_ai_prompts_test/midjourney_ai_prompts_test.htm and here: www.neilblevins.com/art_lessons/midjourney_vs_dalle/midjourney_vs_dalle.htm to get the mega images for my two recent tutorials, links at the bottom of the page after the video.
@@ArtOfSoulburn Those links aren't working for me... is your site down?
@@BrandonZell they just worked for me. Maybe retry?
@@ArtOfSoulburn Retry was a no-go, but I gave it a try in Safari and got the page to load. (was using Vivaldi, which is Chromium).
@@BrandonZell wow, I’d never even heard of Vivaldi. My site has been tested in Firefox, edge, chrome and safari, but good to know Vivaldi doesn’t like it.
I realize none of us has all the time in the world to devote to every rabbit hole, but it would be interesting to see the results if you ran these exact tests a dozen or more times. There are other aspects of Midjourney's processes that I think would probably affect the experiment; two examples off the bat are its tendency to learn from your requests and then tailor responses, and the evolving censorship algorithms. Thanks for sharing your findings! Peace
That's interesting; do they say explicitly that depending on which images you pick it will change the types of results it provides you?
8:21 - It looks like "detailed" makes it zoom in a little. To ..show.. the details?
Good experiment, thanks.
Maybe, but if it is affecting the results, it's so minor I feel it's not worth it. At least in the examples I've done; I refuse to categorically say it doesn't happen, since someone will find an instance where it does do something major :) Glad you liked the experiments!
Great video, thanks for this.
No problem Graeme!
Neil, just want to say your Maxscripts have been a massive time-saver for me over the years, so thanks.
After my first foray into, and bewilderment at, AI art generators, I now feel that they're clip-art on steroids for the masses. I'm sure it will improve, but it feels a bit like the infinite monkey theorem at the minute.
Thanks, glad you've enjoyed my scripts! And yes, my biggest worry is that since anyone can now "create" unlimited art, we're just going to be bombarded forever now. Even if the art it produces isn't terribly good, how can you find the good stuff when the internet is filled with so much noise?
Very helpful. Did you try 4k, or 8k?
Thanks, and no, didn't try those, but would certainly be worth a test!
This is one of the best secret-revealing MJ tutorials... BTW, is the mega chart picture available for download for closer study?
Thanks, glad you like it. And yes, visit this page and click the links underneath the video: www.neilblevins.com/art_lessons/midjourney_ai_prompts_test/midjourney_ai_prompts_test.htm
great info! thanks : )
I feel that the approach of making AI that is controlled by simple text prompts isn't the right way to go. I hope developers will someday change directions. I think these image generators need distinct parameters, the way current node systems like Blender's work, so you could define a percentage of style X, a percentage of style Y, set the position of the main object, define how the background should look, or even feed in different images as parameters for different aspects of what you want: one image for style, one for composition/placement, and so on.
Agreed, and I think it will eventually happen. Programs like Artbreeder are already kind of doing this, where you can mix different amounts of features like "snow" on a landscape. So when it comes to developing this software, that sort of stuff would be ideal, assuming there's a big enough market for it.
Completely agree. It would also be great if images were generated on multiple layers, making it easy to edit individual elements or to mix and match different images.
Very useful. Thanks
You should try testing each modifier with a set seed (--seed); that way all pictures will be identical and only the modifier will make a change. It's hard to compare with random seeds, as the bot will assume things not included xD (it already picks a style, a lighting mode, etc. when rendering, you just don't know which)
Yup, so I wasn't aware of the seed function when I first did these tests, but I agree that would be the way to go. I did 4 images per keyword in order to try and get a sense of what each keyword is doing, but the seed would make it a little bit more repeatable.
@@ArtOfSoulburn Yeah, it's not very advertised... I know of it from the documentation xD
Definitely helpful, appreciate it!
No problem!
Damn, MJ is way underpowered; Stable Diffusion understands all the words that didn't work here.
Are you using the +, since you show + and a comma?
No, just using a comma. The + is showing that I'm adding the extra words to the prompt I showed before, for the purposes of the video, but the actual prompt used in Midjourney just has a comma.
@@ArtOfSoulburn Cool. Some are using commas, and then there is also the :: separator I see. Good to know. Controlled tests like this are very useful, thanks.
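(For anyone else reading: my understanding is that :: splits the prompt into parts Midjourney weighs separately, and you can attach numeric weights. A made-up example:

/imagine prompt: a robot head::2 watercolor::1

should weigh the robot head concept twice as heavily as the watercolor concept.)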
great content, tyvm
Thank you for your work!
(Also, I'm going to argue that "trending on artstation" does change colors and composition. Did you use that prompt, or "on artstation" like your last examples show, which would probably be different?)
I used "Trending on artstation", its just too long for my graphic so I shortened it :) If you have some tests where adding "trending on artstation" made a big difference, would love to see them. There may be some situations where it works, if so, would love to know so I can do further tests. And glad you enjoyed the video.
That will help a lot. I appreciate it :3
Glad you liked it!
Watercolor works well too... big changes.
Could you cover whether it is possible to steer the point of view of the render with photographic terms, like 14mm lens, deep focus, shallow depth of field, bokeh? I'd like to be able to have it zoom in or out, or even have a low angle, or turn the focus 90 degrees, or reverse it. It will be possible some day. Thanks!
Midjourney doesn't work as well with that sort of terminology, but I have a video out next week on DALL-E 2, which is much better with those sets of terms.
putting "Style" after the artists name makes a difference, somehow I am getting that, could you also try?
So you're finding that writing "In The Style Of <artist name>" and "<artist name> Style" produces very different results?
@@ArtOfSoulburn Yes, I mean not a dramatic difference but a substantial one nonetheless. Unfortunately I've now run out of credits to give it a try, but I have observed this in the prompts of another Midjourney user. I didn't get a chance to try it myself; I figured it was easy enough if you wanted to give it a try yourself. I will also try it when I get a chance...
Great and informative video, by the way, thank you...
@@MMMM-sv1lk Glad you like the video. So I tried the test, and am not seeing the same difference you saw in the other user. Or am I doing it wrong? neilblevins.com/temp/test.jpg
@@ArtOfSoulburn Well, the examples you shared do seem close; that being said, I still find the style prompt's results a little more on point. Could be my subjective bias...
The reasoning behind why I thought this might matter was Google... I assumed that the datasets Midjourney used must originate from Google's API, as it is the biggest catalogue of text-sorted images on Earth, and there, when you search for the artist's name you mainly get photos of the artist's face, whereas when you type "style" after the artist's name you get a bunch of their paintings...
And I believe that's where my memory failed me. I think the problem I noticed in that user's prompt was when he inputted the artist's name alone vs with "style"... The huge difference happens between those two inputs. Like, you end up getting the artist's face in the composition vs his/her style... That was the problem I noticed.
Your input "in the style of <artist name>" still includes the word "style" and therefore seems to work fine. My bad. I appreciate your effort. Hope it wasn't too much trouble.
@@MMMM-sv1lk No bother, always interested in figuring out new stuff :)
Hey--I'm curious. How do you feel as an artist knowing that AI can steal your style?
I am not a fan. But this technology isn’t going away, so I’m trying to find a way to incorporate it into my work, use it instead of it using me :)
I might be wrong, but I believe "detailed" made a difference for me when I wanted it to make an ink drawing.
The same applies to "concept art". I think it makes drawn characters look better.
@@isatche Interesting, I'll look into that. Ya, I suspect some of those word modifiers may only work in combination with other word modifiers and don't work by themselves. I'll try these two out with line drawings and see what I get.
You can try those "least useful" prompts in Stable Diffusion and the outcome will differ.
AI is so hot right now. I notice your videos about AI are yielding high views; I would have never found your art otherwise. I really enjoyed your "AI The Road Ahead"... I used to be a t-shirt designer. Then Adobe went subscription, the availability of "good enough" art increased exponentially, and it usurped more than 20% of the market. Most artists in the field needed to convert to conventions or other avenues to market and generate income to supplement the decrease in online t-shirt market yield. I never converted, and I went down for the count.
I sometimes see "art" accounts of AI-generated art, the owners speaking in terms like "I created"... and I think... did you though? However... I imagine some people will become famous for their AI-created art. Like I said, it's so hot right now.
Thanks for the story; so many people don't understand that all it takes is a 20% dip in business to make a career unviable. I saw an interview with Dave McKean recently about AI art where he discusses the "I created" thing. There are going to be kids who have visuals and ideas in their heads; they'll prompt a piece of AI artwork and feel they somehow created it. And then they'll never work at becoming good at being an actual artist, and all those ideas that are locked in their head will never see the light of day. And that is really sad. They'll convince themselves they've created when they haven't. I really have no clue where this is all going; I'm enjoying using AI to give me photobash elements, but the larger implications of AI are very much frightening.
Great. Make a new vid with more arguments so we can skip out on the placebo words.
There needs to be more discussion on the ethics (or lack thereof) in mining the work of living artists for data to generate derivative images with AI. This is not inspiration or influence... this is copying and extrapolation. These artists did not give permission for their art to be (ab)used in this way. I suspect in the future AI like this is going to be limited to drawing from only public domain or specifically licensed data sets because there is no way artists are going to tolerate having their work stolen and replaced with transparently derivative AI copies passed off as 'original' work.
Completely agree, and I'm starting to think there may be a video in my future about this exact subject. In my opinion, artists shouldn't have to opt out of these datasets, they should have to opt in. It's the wild west right now, but I hope that a few legal challenges happen soon so the courts can properly define what is and what isn't ok when it comes to these datasets.
Geez.
It's a beta.
Yup, looking forward to see how it evolves.
It's interesting. In the macro photo, Midjourney turned your "stuffed animal" into a macro of some type of flower-insect, since most photographers use macro for tiny things in nature.
Another YouTuber did a prompt of "a man in space", and Midjourney created a man in space, but with the face of Elon Musk xDDDD
I think we should think about context before creating the prompts.
Ya, it's bad to think in terms of magic when I'm doing a video that's a scientific exploration of why the AI does what it does, but there is an element of randomness and literal thinking to these AIs which makes them tough to anticipate, especially if you don't know the dataset they were trained on. Like, I did a prompt the other day that included the word "desaturated" and another artist's name, and I got colorful results; then, when I removed the artist's name, I got greyscale images. So obviously those two words are not weighted the same, the artist's work won out over the term "desaturated". Trying to figure out why it does what it does, especially in more complex prompts, is tough, but hopefully my video, even if it doesn't have all the answers, at least gets people asking some questions.
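One follow-up test I may try is the :: weighting mentioned elsewhere in this thread, something like (the artist name is a placeholder, and whether this works is exactly the thing to test):

/imagine prompt: a robot head, desaturated::2 in the style of <artist name>::1

to see whether an explicit weight lets "desaturated" win out over the artist's palette.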
@@ArtOfSoulburn You did a good job. There is a lot that we don't know about this AI, all we can do for now is try to guess.
Why are so many of these AI demonstrations focused on cartoon animation subjects? I find it really boring. I would be interested in seeing how AI creates natural looking human beings and how convincing they could look.
Weird, dark, strange, distorted. Who made this software? A team of schizophrenics?