The Planet of Terror
- Published Jun 26, 2024
- Welcome to Xinferis TV, your gateway to television from another reality.
🔮 If you enjoyed this video, don't forget to give it a thumbs up, share your thoughts in the comments, and hit that subscribe button to join our community.
👉 Know someone who would appreciate my music? Feel free to share this video with them.
🎶 Listen to my music on:
🎧 Spotify: open.spotify.com/artist/1hQWO...
🍎 Apple Music: / xinferis
🔗 Explore more of my content and stay up-to-date with my latest projects via my Linktree: linktr.ee/xinferis
🎹 Curious about my creative process? I create the music using the following software tools:
Logic Pro
Native Instruments Komplete 14 CE
Arturia V Collection X
Arturia FX Collection 4
Arturia Pigments 4
u-he Repro-5
u-he Repro 1
u-he Diva
u-he Zebra Legacy
Ozone 11 Advanced
IK Multimedia Total Studio 3.5 MAX
🎛️ And the audio hardware below:
Native Instruments Kontrol S61 MK3
Akai MPK-49
Arturia KeyStep
Arturia BeatStep Pro
Behringer DeepMind 12
Behringer Model D
Behringer Neutron
Arturia MiniBrute
Korg Minilogue
MOTU 828es
Yamaha HS8 Studio Monitors
Yamaha HS8S Studio Subwoofer
🎨 I use the following creative tools to make the art:
Midjourney
Draw Things (which uses Stable Diffusion)
Pixelmator Pro
Apple Photos
🎬 And these editing tools to create the videos:
Runway
Apple Final Cut Pro
Apple Compressor
Apple Motion
Thank you for visiting the worlds of Xinferis TV! Your continued support drives me to keep creating more videos. 🚀🌌
🚫 Copyright Notice: All of the music and art on this channel are created by me. The content of this video is not royalty-free, and I reserve all rights to the video, music, and art. Any unauthorized use or reproduction of my content is strictly prohibited.
#synth #soundtrack #aiart
Pretty wild looking place.
Thank you!
Really awesome stuff. You outdid yourself with this one. I wasn't expecting animation.
Thank you for the kind words. It was a lot more work than expected, and I literally got it done just in time to get it out for Friday. :)
The space girls are so alluring and the space zombies are so gruesome. I'm also seeing some animation. Very cool.
Thank you! I am glad you liked it!
👌👌👌👏👏👏👏👏👏👏👏
Thank you for watching this one!
@XinferisTV Waiting on the next one.
🤯‼️⚡💯🔥☠️🔥💯⚡‼️🤯
Thank you for watching this one!
This is pretty damn good.
Thank you!
🤖 Sweet, love the animation. Even the smallest movements, panning and mist bring a touch of reality, and the beautiful women make it much less terrifying. It's getting better 💯
Thank you. I was aiming to keep the AI video artifacts to a minimum, which mostly requires subtle movement. But it also fits the style of my videos well.
Good job, I am loving this AI stuff.
Glad you enjoy it!
This won't last long, mate. Soon we will be submerged in tons of this AI nonsense.
These videos are all the same.
@danielerusso5054 I agree that there will likely be a ton of low effort and low quality AI videos, especially once AI music works better.
I am primarily a musician, but over the past couple of years I have gained a lot of experience making AI art. I think there will be artists like me who learn how to really take advantage of AI tools to create videos that match their artistic visions and do something different.
If I didn't believe what I create was something different and good, I wouldn't make my videos. It takes a lot of work, and I don't get enough views to give me any incentive other than sharing my music and art with others.
I think there will continue to be people who create great things they care about, regardless of the tools they use to get there. There will also be people who just crank out garbage. But that's the same with any kind of video on YouTube. There are plenty of super-low-effort videos with talking heads and no editing, for example.
Your work is outstanding! And your choice of subtle animation was on point, because the problem with many AI creators is that they add too much animation, and the deformation makes it look weird.
@alexhosen Yeah my goal was to have motion and animation that fits the style of my videos, but try to reduce the number of artifacts as much as possible. It really bothers me when AI videos have weird face deformations and eyes doing weird things for example. My animations are definitely not artifact free, but I tried to keep the artifacts to a minimum and isolated mostly to things that aren't the primary elements of the scene. Similar to when an artist draws or animates a scene by hand, secondary elements may have less detail or coarser animations. It still took a LOT of generations per video to get something where the primary elements worked well, and the artifacts on the secondary elements weren't distracting. :)
@XinferisTV The level of artifacts is absolutely within the range of tolerance and subtlety IMO, and personally, if it adds too much work, I think you should stay with the still images because they are already great! You should be art directing a retro sci-fi show.
Also, I would love to see your take on retro fantasy, something like a Conanesque world. Sorry if my English is not right. Keep it up!
@alexhosen I definitely found it interesting doing the animation, but it was a lot more work. And I don't think it was necessarily better, just different. What I mean is, with just images I can achieve a much higher level of fidelity and clarity. I also have a lot more control, and can easily modify anything that looks off by hand. Plus, although I really like Runway, it's hard to justify $95 a month when I am already technically losing money making these videos. :) It was definitely worth paying for a month to experiment with it, though.
@alexhosen Just to clarify, I don't try to make low-effort videos. I just meant the additional effort to do video didn't necessarily make this better than my other videos, just different, in my opinion. It sounds insane, but I actually generated 5,000 images to get the few images I used to create this video. I am pretty picky, though. :)
@XinferisTV The amount of effort you put into your videos is beyond question, don't worry. Hope the channel keeps growing, because you deserve it!
Superb! I'm so glad you tried Runway to animate your amazing work. I can see you set the movement slider to minimal and slow pan camera work. It all works beautifully.
Actually, I didn't set the movement slider to the minimum. Although you can extend videos to 8 seconds, the default is 4 seconds, and in general, 8-second generations are more likely to lose coherence. So what I did was create 4-second videos, then retime them at half speed and use Optical Flow in Final Cut to smooth out the frames.
Doing this gave the videos a similar feel to my other videos, but obviously with animation and motion. Depending on the image I was using, sometimes I didn't change any settings in Runway, other times I provided a prompt, and other times I used the camera features to create what I was looking for.
I used Gen-2 for this video, but I am really looking forward to trying out Gen-3. Hopefully they activate it in my account soon. :)
@XinferisTV Ah, right. I put the slider on the 1 setting to get next to no movement and thought that's what you'd done. The slow-motion idea works well, though! I'm not sure that Gen-3 is going to have image to video when it releases; that's what I'm hearing, anyway. Maybe further down the line it will. I hope so.
@Stackpooled99 Yes, it's possible the initial release won't have image to video, but they will definitely add it eventually. I will definitely play around with it a bit to see if I can get a decent style from it using text to video, though. I think there is a good chance text to video will work better, with better quality. The reason I think that is you can tell that when Gen-2 creates animations, it's creating its own interpretation of your image as an animation. Since I created the initial images in Midjourney, Runway doesn't "think" about the concepts in the same way. Some of the decoherence I see appears to be the model reinterpreting the art. When I tried text to video, its interpretation seemed more consistent over the animation.
I could be wrong, but that has been my experience so far. Though I've only generated a little over 1,400 videos so far.
@XinferisTV I agree that it will probably be better at text to video, and I plan on trying some of my older prompts yet again, but modified for the LLM. Gen-2 has changed so much over the ten months I've been using it, both T2V and I2V. I regularly find a whole new level of sophistication in the imagery after an update.
Lots to look forward to with Gen-3!!😀
Wow blown away. Congrats to the creator. Oh wait, um never mind.
Thank you!
What kind of AI is this? It's almost like taking a glimpse into another world or realm; it looks so real 🤔 more real than CGI.
You hit the nail on the head about why I use AI to make this art. You can't make art that looks like this without AI.
I use Midjourney to create the images for most of my videos. In this particular video I created the images with Midjourney and then used Runway to animate the images.
@XinferisTV looks amazing 👏
Is there a way you could do a tutorial on how you use the AI Art tools? Like what is an example of the prompt you use? Of all the AI Art I've seen, you seem to be one of the best at this.
I have thought about creating tutorial videos, but making the art and music for these videos already takes up a lot of my time. If you look at the comments on my other videos, I do give tips, tricks, and mini tutorials.
🪐🪐🪐
As always, thank you for watching!