Love going on YouTube and having new songs from badass bands pop up
I have a headache from listening. FUCKING HELL (I'm ok, I just fell in the shower while listening to this thang!!!)
Mm, I like this one )
I want to know how you make these.
I make the backgrounds in an AI tool called Deforum and do the editing in Premiere Pro/After Effects.
Here's my process:
1. Finding fitting Stable Diffusion prompts for the music video.
I ask ChatGPT to give me fitting prompts based on the lyrics I feed it. I test the outputs and make changes based on the results from img2img generations with Stable Diffusion. I also try out different models and LoRAs until I find outputs I like.
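Roughly, the prompts end up as a frame-to-prompt mapping that Deforum transitions between as it renders. A small sketch of what that looks like (frame numbers and prompts here are made up for the example, not from a real video):

```python
# Illustrative Deforum prompt schedule: frame numbers map to prompts,
# and the animation morphs from one prompt toward the next.
# These prompts and frame numbers are placeholders.
prompts = {
    "0":   "dark stormy ocean, dramatic lighting, cinematic, highly detailed",
    "150": "lightning striking a lone lighthouse, moody, volumetric fog",
    "300": "calm sea at dawn, soft golden light, wide shot",
}
```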
2. Getting music reactive keyframes for the animation.
I feed the song to this audio keyframe tool: www.chigozie.co.uk/audio-keyframe-generator/ and give it suitable functions for the values I want. I personally use 25 frames per second, something like “0.60 - x^6” for the strength schedule and “1 + x*10” for Translation Z in 3D mode. I often play around with these settings and also give the 3D rotations their own functions so they react to the music too.
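If you'd rather script this step than use the website, here's a rough Python sketch of the same idea, assuming librosa for the audio analysis and a normalized per-frame loudness as the x in the functions above (the tool's internals may differ):

```python
# Minimal sketch of audio-reactive keyframe generation, assuming the song
# is a local file. "song.mp3" is a hypothetical filename.
import numpy as np
import librosa

FPS = 25  # frames per second, matching the animation

y, sr = librosa.load("song.mp3", sr=None, mono=True)
hop = int(sr / FPS)  # one analysis hop per video frame

# Per-frame loudness, normalized to 0..1 -- this plays the role of "x".
rms = librosa.feature.rms(y=y, hop_length=hop)[0]
x = (rms - rms.min()) / (rms.max() - rms.min() + 1e-9)

strength = 0.60 - x**6        # "0.60 - x^6" for the strength schedule
translation_z = 1 + x * 10    # "1 + x*10" for Translation Z

def to_schedule(values):
    """Format values as a Deforum keyframe string: '0:(0.60), 1:(0.58), ...'"""
    return ", ".join(f"{i}:({v:.2f})" for i, v in enumerate(values))

print(to_schedule(strength))
print(to_schedule(translation_z))
```

The printed strings follow Deforum's frame:(value) schedule format, so they should paste straight into the schedule fields.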
3. Setting up a render in Deforum.
Now I enter the keyframes and prompts into Deforum to start generating an animation. I usually do this 4 times with slightly tweaked settings and prompts to get different results to work with later.
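Put together, the relevant part of a Deforum setup looks roughly like this (field names follow the A1111 Deforum extension's settings file and may vary between versions; the values are just examples):

```python
# Rough sketch of the Deforum settings that matter for this workflow.
# Keys are from the A1111 Deforum extension; values are placeholders.
settings = {
    "animation_mode": "3D",
    "fps": 25,
    "max_frames": 3000,  # song length in seconds * 25 fps
    "strength_schedule": "0:(0.60), 1:(0.58), 2:(0.35)",  # from the audio tool
    "translation_z":     "0:(1.0), 1:(1.2), 2:(7.5)",     # from the audio tool
    "rotation_3d_x":     "0:(0)",  # give these their own audio functions too
    "rotation_3d_y":     "0:(0)",
    "rotation_3d_z":     "0:(0)",
}
```

Each of the four renders then just swaps in slightly different prompts and schedule values.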
4. Editing in Premiere Pro.
To get a more dynamic video, I cut between the different animations in a multicam sequence, picking the best parts and cutting to the music.
5. Finishing up in After Effects.
In After Effects I manually add the lyrics, timing and splitting them up as I see fit. I also add a bunch of effects like extra 3D camera movement, shakes, overlays, particles, blur and chromatic aberration. Most of these are set up to react to the music through generated audio keyframes.