Thanks!
One of those videos which has to be studied rather than watched…thanks man..
I took a 6-month break, came back, and this is where we are. I'm STOKED, it's so much easier than before, thank you!!!!
I finally got some good results from this and it only took me 50leven tries 😆😆😆 I had to watch this video and rewind and pause, and I think I learned something today. Thank you Robin Fernandes, very cool tool here, and I will try to make something cool with it. Coffee incoming!
Time to consume all of my free time learning how to make music videos with this
That's what I've been doing
I don't know whether to get super excited or just start crying. Awesome result and superb content.
Thanks for the tut
that's a great idea to separate into stems for clarity and isolation of instruments
Excellent tutorial. Just what i needed after committing to learning audio synced Deforum today.
Bro, you deserve not just a coffee but a machine that makes coffees for your entire life. Thanks bro, really appreciate it
The amount of skills going on here is amazing, thank you for such a great piece of software and tutorials.
This is completely off the charts amazing! Now, if I can just figure out how to do it. Sadly all three of your tutorials assume a much higher familiarity with Deforum, Parseq and trigonometry than I possess. But I will work on it. Thanks for an amazing tool that I hope to learn.
As a musician/programmer, this is absolutely the tool for me. Thanks so much for making and sharing this!
Some of what you are demonstrating here is quite advanced, such as using parameters and functions, and it's not at all obvious (apart from reading the manual, I suppose). It would be super helpful to have IDE-style inline tooltips for this; a little hand-holding would go a long way to get users up to speed. Is it also possible to read pitch/timings from MIDI files? Hosting the webapp frontend locally is a necessity for me, how do I do that?
Hi, thanks for the kind feedback! 100% agree on the tooltips, that's something I'd like to add. I have a ticket open on GitHub for syntax highlighting & autocomplete.
You currently can't load data from midi files but it's a great idea that is frequently requested, so I hope to get to that some time soon too (don't hesitate to raise a feature request on GitHub).
To host Parseq locally, check the developer instructions in the readme (basically `npm install --force`, then `npm start`).
This is SOOOO useful. YOU ARE GREAT! thank you (also Renoise user here ;)
So totally cool...words defy me. I am an 'outsider' to coding and am amazed by the control of the 'chaos' of generation you can get with some of these functions. Thanks for the window into what to work on to achieve these artistic results...peace.
This is fucking amazing. I mean, this video should win the Video Music Award this year. The tech is phenomenal!
This program is going to save me so much time. I've been syncing by hand.
Amazing tutorial man!!
Parseq is a very powerful and creative tool.
Thank you so much🙏
This tutorial is as awesome as your last ones - so useful and eye opening. Keep up your great work, thank you so much!
Wow... this is so bloody awesome! Thanks so much for making the effort to build this tool and for sharing it with us. I recently started trying to learn Python (first programming language I've ever attempted to learn) and this has just made me want to learn it faster. Can't wait to apply this to some of my own songs. 🙏
Wow, it's here! I have been waiting for this for a while
Great stuff. Two basic features that should be added are the "final_keyframe" context variable that can be used in equations, and the ability to manage CFG just as one would Strength.
Hi! As discussed, CFG scale is already available as "scale". last_frame is coming soon but can be achieved already with a unique label on the last frame and info_match_next().
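For example, here's a rough sketch, assuming you put a unique label such as "end" (a hypothetical name) in the info field of the last keyframe so that info_match_next("end") resolves to the last frame number from anywhere in the animation. That gives you a 0-to-1 progress value you can reuse in other equations:
*f/info_match_next("end")*
Scale it to whatever range you need, e.g. *(f/info_match_next("end"))*360* for one full rotation across the whole clip.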
@@rewbs Yes, sorry I had posted this before your response on discord. I thought it would be a lot harder to reach you haha
Awesome tool
thank you, I love the way you do your tutorials
Wow this is exceptional work!
Fantastic tutorial! Beautifully done!
Thanks for your work! I was waiting for that :D
awesome, you are a talent, my channel is using parseq, the scripts are awesome and accurate, thank you!
Awesome vid, thank you!
Amazing tutorial man!!
Thank you so much
can't wait for your next tutorials
Thank you so much for another video tutorial. I would like to ask if you have plans to create a "window" in Parseq in which there would be a "visualization of motion". There is such a thing in Houdini and, I believe, in After Effects. You have the "Visualised field value flow" diagrams. I'm not very good with English, but I'll try to explain. By "motion visualization" I mean that the creator can see the movement of the camera in relation to the object. The "Visualised field value flow" can also be understood, but it is harder to imagine in your head what movement will eventually occur throughout the video. Thank you for bringing up cadence. I was wondering how to handle this, since the video consists of scenes, and in some places you need a smooth picture and in others a fast one. I assumed the creators of Deforum would create a cadence schedule: from frame 0 to 100, cadence 1; from 100 to 200, cadence 5; and so on. Perhaps even with an automatic reduction from 5 to 1 with each frame. And, as it turns out, there is already a solution. I just need to understand the math of Parseq and learn the syntax of the formulas.
Thank you again for your work.
you sir are simply perfect, thanks for this.
I thought this was for the synthesizer😅 but stayed watching because it looked interesting
Remarkable. My head hurts now.
Tried something, and it came out like magic, although it wasn't perfect and I had to battle through a few errors. However, I wish everything was a lot easier, but I guess I'll have to spend more time on these videos to gain any mastery.
One request I would like to make: positive and negative prompts that apply to every frame (common prompts)
what was the other sampler you showed at the end?
Euler at 80 steps. Full video with details in description here: ua-cam.com/video/LzWXxCbTaOU/v-deo.html
Bruh this is so dope lmao i have no brain capacity to duplicate that without studying lmao
I made something and I want to send it to you 👀 'cause you wanted to see the results
It is a bit wanky but not bad for a 1st try
thank you so much
Thank you so much. I've really, really been waiting for this :)
Any way to use 5 ready-made images to make it?
Your work is truly a masterpiece. I was wondering if it would be possible for me to obtain your audio files for the tutorial. It would mean the world to me to start from scratch and learn directly from you.
I managed to extract the audio from your video. Here is the video that I made thanks to your tutorial.
ua-cam.com/video/Qxh9sf0YGQk/v-deo.html
thank you!
Great work! I will probably finish the audio track one day and share it, but until then yes feel free to extract it.
@@rewbs thank you !! 💓
I'm now thinking that the best way to accomplish what I want is to have the seed increase by one on each snare beat. Can this be done and what is the correct formula for that?
yesss, thanks for this!
thanks for sharing 🙏
Hi, thanks for this great tool and helpful tutorials.
I created a video based on audio tracks and there is a problem with the image "overheating". When one picture does not change in the frame for more than a second, it begins to turn yellow, as if it is overheating. What can be done about it? (Sorry for my bad English, I use a translator.)
Help please with deep-frying
There's so much to learn. It's so hard
I can't make out the name of that audio tool you're using . . . maybe put a link to it in the description of the vid?
It's renoise: www.renoise.com/
How do I delete an event? I click on the checkbox, but another event appears, and this one fails
hey man! thanks for the knowledge 😀
thank you for this amazing tutorial!! I am curious how to introduce LoRAs or trained models (of MY face, for example)
Okay, I'm starting to wrap my head around this. While watching this, a persistent question kept popping into my head: can you still do this using a video input? You see, what I am currently doing is making a music video. I have a video of the artist singing, I use Deforum to change her clothes and the background, but I mask her face. I think it would be amazing for Parseq to automate the strength and camera moves based on the sound file, but I still need to use the video to keep everything on track. Does this make sense? Any thoughts?
Hi, I need help syncing audio with video in Deforum. When I copy the link and generate the video in Deforum, the video is OK but there is no audio. Please advise me.
🔥🔥
Can you upload a couple of pre-configured settings so we can just change the prompts and make our own cool videos? It's too complicated trying to figure out the numbers for each setting. I've watched your videos 100 times and I'm still lost when it comes to this. When I just copy your numbers from the video, my video comes out great, but when I try to make it longer, 300 or 600, I have to manually add the values in again, which is where I'm lost. Camera left, right, up, down; 3D left, right, up, down: these are the parts I need explained, simplified for dummies
I have started work on a library of examples and I'm hoping the community (you :) ) can help build it up: github.com/rewbs/sd-parseq/discussions/categories/parseq-example-library
May I ask a question, please? Is it possible to loop my keyframe animation? (For example, I have 3 keyframes making a custom sinusoid and I want to loop this through the whole anim.)
Hi, there's no built-in way to loop at the moment. You have a few options: you could replicate the keyframes so that the same patterns repeat, you could adapt your formulas so they produce repeating patterns, or you could modify the generated JSON to repeat the rendered frames (you'd need to write a simple script to increase the frame numbers). Making this easier is a good idea, thanks for the feedback.
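For the formula route, here's a minimal sketch (assuming the sin oscillator accepts the same p/a named parameters as the pulse function used elsewhere in this thread): a strength value that repeats every 2 beats, oscillating around 0.6 with an amplitude of 0.2:
*0.6 + sin(p=2b, a=0.2)*
Because the period is expressed in beats, the pattern loops for the whole animation without needing extra keyframes.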
@@rewbs Thank you Robin, I will try to replicate it with formulas. The next things I have in mind as requests are support for ControlNet parameters and the possibility of easier writing of numbers (.7 instead of 0.7). Thank you for the great work!
@@tomaskrejzek9122 controlnet schedule support is nearly ready, stay tuned!
Hohoo ;)
So cool.
Thank you for a great tutorial. What would be the formula to add a zoom out for the z translation on some keyframes? I tried to manipulate the same formula without success
MVP
What computer are you using? I run Logic X 10.6.3 at a buffer size of 32 on a late-2012 Mac Mini with 16 GB RAM and an SSD drive. Do you think this will run? I have OpenCore installed for Ventura.
Hi! I develop on an M1 macbook and I run Stable Diffusion / Deforum on a Linux box with a 24GB 3090 GPU. My guess is your 2012 Mac is probably not going to be a sufficiently beefy system to run Stable Diffusion at reasonable speeds. I'd suggest first playing around with one of the hosted options like Rundiffusion, and then decide whether you want to invest in a system that can run this stuff locally.
Hi! I watched the tutorial about 10 times and an error started to appear that I couldn't resolve. It appears to come from Parseq.
Error: 'Expecting value: line 1 column 1 (char 0)'. Before reporting, please check your schedules/ init values.
Could someone try to help me please?
Could you share your Parseq document and your deforum settings file?
you are the best!
Hi, could you please make a video on applying a real song in Deforum?
Somehow when I try to replicate your settings I'm losing quality at every step. Maybe it's the depth warping, as I can't replicate your settings... Or not using a VAE? BTW, is there any way to donate?
Hmm not sure why that would be. I have uploaded some of the full Deforum settings files if you want to explore.
- Here's the full settings file from the underwater video at cadence 1, Euler 20 steps, 512x512: gist.github.com/rewbs/1c766a44d7893fd4934bf29537224ab1 .
- Here's the snake/mushroom one: gist.github.com/rewbs/51313b9238a4385f730f975e96d02af8
- Here's the cat/koala/fox/bear one: gist.github.com/rewbs/2645616d86750768de35a651e933ae8f
The rendered `parseq_manifest` URL may not be reliable given that Parseq updates the content of those URLs in place and I may have made changes after those renders. However, in each of those settings files is a `fetched_parseq_manifest_summary` field which you can copy and import into Parseq - that should be the same as the Parseq settings I've already shared in the tutorial descriptions.
Re donations, if you really want there's a "buy me a coffee" link at the top of the parseq UI. Thanks!! :)
Thank U 🙏
Such a helpful tool, thank you for explaining. I'm coming across an issue with not being able to zoom in to see and edit event detections. If you or someone could explain how, thanks.
To zoom, the easiest way is to use the viewport between the graph and the reference audio section. You can drag the edges of the bar to change the zoom, and drag the whole purple bar to scroll. Once zoomed in, you can also just click and drag on the audio waveform to scroll.
@@rewbs oh I see now thank you!
How do you improve the quality of your videos? At least to HD
Have you tried the upscaling feature in the Deforum extension?
@@rewbs I tried, but it didn't work. I need a hint or a link to help. In the video lessons you talked about video processing, but not in DEFORUM
I've already learned.
thank you for this
Thank you for an awesome tutorial. Can you give me the music to try it with different prompts?
Hi, thanks for the feedback. Yes I'll make my music available some time soon. For now, I'm sure there's plenty of other music you can play with. :)
@@rewbs Thank you for everything, you helped me understand how it works 🙏🏻
Freaking magnificent!
Oh my god. This is such a time saver 🤣🤣🤣🤣. You should sell this product for at least $10/month
Is there a way to trigger a sudden color change and disable color coherence for specific sections? I'm trying to create an animation that flips between an angel in the sky and a demon in hell. If I enable color coherence I end up with a cyan-pink demon, but if I turn it off the colors get washed out after the initial generation up until the switch to the other object. Great video and great work!
Not that I know of. This has frustrated me too, and it seems it would make sense to be able to control the intensity of the colour correction over the course of the video. Definitely something I'll look into eventually if no one gets to it before me. :)
That being said, there is already a colour correction schedule as part of the Guided Images settings. I haven't played with it yet but I think it controls how much the colors are skewed towards the current guiding image. Might be worth experimenting with it. You can find it in the extension under Keyframes/Guided Images/Guided Images Schedules/Color correction factor. It is not yet controllable with Parseq (pending this PR: github.com/deforum-art/sd-webui-deforum/pull/807 ).
How do you separate the bass drum and snare drum in the drum track? Or should they be exported separately from the software? Thx
The simplest approach with the best result is to render them separately, which is what I did.
thank you!
Your tutorial is really great and I use Parseq for testing a lot, but for a few days now z_translation seems to be converted by Deforum into y_scale, which is really strange
I've not seen that happen before. If you find a reproducible scenario please comment with more details!
@@rewbs there was an update of Deforum and it actually works now, thx for your answer
I am able to follow the tutorial, but only exactly. I really don't understand the "programming" bits at all. Is there any resource that anyone can point me to so that I can start trying to learn how to create my own... strings? I feel like I'm close to having access to an amazing tool but I'm totally lost with how you come up with the actual magic of the tool.
Hi! You could take a look at the Parseq example library: github.com/rewbs/sd-parseq/discussions/categories/parseq-example-library . I'm also working on a documentation page that will more clearly show examples of all the important functions available in the Parseq expression language.
happy day!
Ninja+Jedi=Robin
hey! doing this now and at the very beginning, when I try to introduce the function, it gives an error: Error parsing interpolation for strength at frame 0 (if (f==info_match_last(“Snare”)) 0.25 else 0.8): Error: invalid syntax at line 1 col 24: 1 if (f==info_match_last(“Snare”)) 0.25 else 0.8 ^ Unexpected input (lexer error). Instead, I was expecting to see one of the foll... [See Javascript console for full error message]. I don't know Java or any language really, so I'm so confused how to fix these issues. Do you have Discord or somewhere where I could ask questions? Thank you for Parseq, it's an incredible piece of software
Hi! It might be to do with your quotes. Those double quotes look slanted unicode quotes, so most likely they were replaced by some editor. Replace them with normal double-quotes and you should be fine.
And yes, I hang out on the Deforum discord which is linked from the Parseq UI (see the chat button at the top).
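For reference, here is the expression from the error above rewritten with plain ASCII double quotes, which should parse:
*if (f==info_match_last("Snare")) 0.25 else 0.8*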
very nice
Does anyone know how to do this? Let’s say I want S-pulse(p=4b, a=0.10, pw=1) and I also want (p=16b, a=0.20, pw=1) how do I write this? Basically I want every 4beats to drop 0.10 and every 16beats to drop 0.20! Thank you in advance 🙏🏽
Hi! One way is simply to add your two pulse waves together, e.g.: S-(pulse(p=4b, pw=1, a=0.1)+pulse(p=16b, pw=1, a=0.1)). On every 16th beat the two pulses coincide, so the total drop there is 0.1+0.1=0.2, while the remaining 4-beat pulses drop just 0.1.
@@rewbs You're the man 🙏
Does anyone here know if there is something like "batch size" in Deforum or Parseq? I want to render multiple animations at once with different seeds; my 4090 falls asleep rendering only one image at a time.
I don't think Deforum supports this currently.
@@rewbs Thx for responding, and for making this awesome software. Very strange that we have no batch size in Deforum; to me it feels kinda important. It would be so much easier to render 4 anims at once with different seeds so that in post I'd have more source material to work with.
my question is can you use Parseq with Deforum?
Yes, it is built for use with Deforum (specifically the a1111 extension for Deforum).
Can you add two instruments to this: if (f==info_match_last("snaredrum")) 0.25 else S
Yes, you can use the fact it's a regex by embedding the conditional in a single call like this:
*if (f==info_match_last("instrument1|instrument2")) 0.25 else S*
Alternatively, if you prefer, you can split the conditional out like this:
*if (f==info_match_last("instrument1") or f==info_match_last("instrument2")) 0.25 else S*
@@rewbs THAT"S HOW ITS DONE!!! I was doing it like this,
if (f==info_match_last("instrument1", "instrument2")) 0.25 else S
Can't tell ya enough but thank you! Or maybe I just tell you thank you mentally when I'm watching your tutorials for the 10th time in a day
godsend
I watched all 3 videos to see if I missed anything. Is there any way you might be able to help me? I keep getting this error: Error: ''NoneType' object is not subscriptable'. Check your schedules/ init values please. Also make sure you don't have a backwards slash in any of your PATHs - use / instead of \. Full error message is in your terminal/ cli.
Hi, I'd need to see the full error message from the console to be sure, could you share that? If you have .srt saving enabled in the deforum settings, you could try disabling that (there is a bug currently whereby srt saving can sometimes cause that error, fix is in progress).
@@rewbs Thank you for the reply. I just deleted and reinstalled SD 1111 and it works now. I'm so excited, thank you so much for your work. I will be donating a coffee or two.
idk if it's just me but using the event detector is really slow, it lags quite a bit. Other than that this is just amazing!
Bro it is so bad, this is exactly what I need to generate AI music videos, but with how laggy it is for me it's damn near unusable.
Hi, yes looks like there was a regression recently that has made things slow down after you load audio in some cases. I will look into it.
If you reload, you should now see Parseq at version 0.1.100, which includes a possible fix to the UI performance regression when working with audio. Give it a go and let me know if you see any improvement.
@@rewbs it is buttery smooth now! thank you for all the hard work!
As a music producer this is something I've been after for a while, but you completely lose me once you start adding the code with values and so forth. That's the bit I can't get my head round, so I might wait a bit longer till you update it to be more user-friendly... but great work my friend!!
I don't think it will become more user-friendly; to have flexibility and features you need the complexity. Just copy the numbers and you'll learn a bit each time. Start now! You'll be amazed how far you get if you stick to it.
@@ade4200 I feel the same as @gloxmusic74 I'm totally lost with the coding. I have basically no background in programming so I feel like this is just out of my reach. Do you have any advice for getting my feet on the ground and heading towards the goal of writing code for this tool?
@@echonomix_ I wrote a huuuge reply!!! It definitely showed as sent....just checked back here and....not here
My head just exploded
This addon should be integrated into the Deforum addon...
Cool
Instant sub, nice work man. Thank you!
ua-cam.com/video/fRNBgn8dhRs/v-deo.html
You said to send you the results of using your editor (which I think is the best thing god has invented since the bed 🛌) and here it is. This is my first "serious work" in Deforum, and I was just getting to know Parseq then. This is my song and I'm a musician basically, but I've always been interested in film and animation, and here I am
Great work!
can we marry?
Weird video taste
Looks like the info_match_last function changed to info_match_prev. I couldn't get it to work with info_match_last, but info_match_prev worked.
Interesting - both should work (they are aliases), and do in my tests. If you have an example where info_match_last doesn't work, feel free to share with me.
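For example, these two expressions should evaluate identically (a sketch based on the snare example earlier in the thread):
*if (f==info_match_last("Snare")) 0.25 else S*
*if (f==info_match_prev("Snare")) 0.25 else S*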