>sponsored by squarespace
>shows a sphere
Saved 😂😂😂
😂
default square
Spherespace
spherespace
**THAT WAS... TOO INTENSE FOR A TUTORIAL**
the dude speaks like those AI bots on some youtube channels
there has to be a simpler way of doing this
i'm 100% sure there is
i have the feeling this tutorial is a bit of a troll :))) still interesting
Jonathan L well yes...buuuuuut this realm is set to hardmode
there is and it's called object tracking. Issue is that you need at least 8 trackers on your object which is ridiculous. This is really just a tutorial on manual object tracking. Technically you could try speeding this up by running trackers on the 2 markers, using their position data to create a simple 1 bone rig that moves the sphere, and change scale with an expression and some drivers.
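For anyone wanting to try the "trackers on the 2 markers" route, here's a minimal Python sketch of the scale part, assuming you've already exported the two markers' per-frame pixel positions (the track data below is made up; in Blender you'd read it from the clip's trackers via bpy instead):

```python
import math

def marker_distance(a, b):
    """2D pixel distance between two tracked marker positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def scale_curve(track_a, track_b):
    """Per-frame scale factors relative to the first frame.

    If the two dots drift apart on screen, the ball moved toward the
    camera, so the CG sphere scales up by the same ratio.
    """
    base = marker_distance(track_a[0], track_b[0])
    return [marker_distance(a, b) / base for a, b in zip(track_a, track_b)]

# Hypothetical tracks: two dots moving apart over three frames.
track_a = [(100, 200), (98, 202), (95, 205)]
track_b = [(160, 200), (164, 201), (170, 203)]
print(scale_curve(track_a, track_b))  # ~[1.0, 1.10, 1.25]
```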
@@CGMatter Would it be possible/realistic to use three tracking dots to track position, scale and rotation?
I used a single dot on the ball and used the motion tracking tab to get a single tracking point on that dot, added a sphere, set the origin of the sphere to the tracked empty, and boom, location has been taken care of. now i just have to manually set the rotation, it works really well tho
tracking manually like a medieval peasant?! 😲
It's a good skill to know. In real production you don't always get footage with convenient tracking markers, so you have to wing it and do it the hard way.
yes.
Very ew
6 days late
🙄🙄🙄🙄🙄
I always figured you more of a Raid: Shadow Legends man rather than a Squarespace man.
Definitely Not Dan hmm..
Nah I’m more of a standing up man
Or maybe a Nord vpn
what about honey
More like bore ragnarok
Wow, 6 minutes! That's a full on documentary!
I got lost at “Open up blender” now I’m toasting a frog. Send help
Free the frog
Professional compositor here, so I've done my fair share of roto in my time. Keying every 20 or so frames is great. Even better is to key the start frame, then the end frame, then the middle, and keep subdividing that way.
Even better than that, and what I do on every rotoshape or anything I'm tracking manually, is do the first and last frames, and then keyframe the key-poses. Say the ball starts at the bottom of frame, goes up, and then down again. You would put a key at the extents of each of those motions. Depending on the motion, I will then usually put a key a few frames after/before those initial keyframes, to nudge the object forward a bit while it is still slowly accelerating to its next pose. That usually captures the ease in/ease out that real natural motion has. Then I will start to subdivide the keys, and get down to frame by frame level if need be.
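If the subdivision order is hard to picture, here's a throwaway sketch of the frame-visiting order that first/last/middle strategy produces (plain Python, nothing Blender-specific, and the function name is mine):

```python
def subdivision_order(first, last):
    """Yield frame numbers in binary-subdivision order: the endpoints
    first, then the midpoint of every remaining interval, recursively."""
    yield first
    yield last
    intervals = [(first, last)]
    while intervals:
        next_intervals = []
        for lo, hi in intervals:
            mid = (lo + hi) // 2
            if mid not in (lo, hi):  # interval still has an interior frame
                yield mid
                next_intervals += [(lo, mid), (mid, hi)]
        intervals = next_intervals

print(list(subdivision_order(1, 9)))  # [1, 9, 5, 3, 7, 2, 4, 6, 8]
```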
Senior Roto Artist here. It's easier to analyse the footage, figure out your key poses and mark those down, then fill in the ease-ins and ease-outs between your key poses. Then you've pretty much got a full track without having to go by 20s or 10s. It's much easier if you look at roto/matchmoving from an animator's perspective, as animation is and always will be a study of physics and movement.
I'm sure you knew that but you prob didn't want to spend 10 mins explaining how animation works LOL.
Also it feels bad that most big studios already have matchmoving software xDDD we're spoiled.
Heavy Metal this animation isn't smooth enough to define key positions
I wanted to say this too. Also, in Blender you've got the animation curves, which are easy to edit; with a few keyframes you can match the motion of your complete footage.
Can you do a tutorial or a forum post or something 😅
Blender has its own planar tracker and camera solver
how crap it is is up for debate, but still
@@aronseptianto8142 try pftrack or syntheyes
pftrack has a 'better' interface and is easy to solve with for easy shots, but it's total shit when you have a muddy shaky shot lol
This is a nice quick overview of the process, particularly the compositing, but suffers from one common error - you should not be changing scale unless the photographed object actually changed scale in the shot. It seems convenient, but you should always try to match the REALITY of the scene. The main reason is that you cannot properly match lighting, cast shadows, and environmental interaction if your object is not moving accurately through space, at the proper scale. Sure, it's more difficult to account for 3 more axes of motion, but that is the nature of the problem. Also, you are correct to break the sequence into large intervals, but you should find the extremes of position and key those first. Then break the intervals into powers of 2, such as starting at 64 or 32, depending on the length of the shot. Then continue breaking the intervals down by a factor of 2 until you are down to a single frame between keys. It's common to build a rig with the Z-depth axis oriented to the camera - that will account for your apparent change in scale. Then you can choose a point, track that, and lock the object's local X,Y position to that point. Then you are left with rotations, which are oriented around the single tracked point. The Z-depth axis passes through the tracked point as well.
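A rough bpy sketch of that camera-oriented rig, assuming a scene that already contains objects named "Camera" and "Sphere" (rename to match your file). The constraint keeps the rig's local Z pointed at the camera, so sliding the sphere along that axis is true depth motion instead of a scale cheat:

```python
import bpy

cam = bpy.data.objects["Camera"]
sphere = bpy.data.objects["Sphere"]

# An empty whose local Z axis is continually aimed at the camera.
rig = bpy.data.objects.new("DepthRig", None)
bpy.context.collection.objects.link(rig)
con = rig.constraints.new(type='DAMPED_TRACK')
con.target = cam
con.track_axis = 'TRACK_Z'

# Parent the sphere to the rig: keyframing the sphere's local Z now moves
# it straight toward/away from the camera at constant real-world scale,
# and rotations stay centered on the rig's origin (the tracked point).
sphere.parent = rig
```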
In particular, a linear movement in the z direction will require a non-linear movement along the "scale axis". Using scale can be considered quick and dirty for scenes with little (and more erratic) depth change and no intersection with other objects.
You would be right if he needed to re-render lighting to match the original scene, but his method more or less overlays the original lighting onto the new footage
He's not lighting
@@cynthetic4896 You need light to render. It's just the wrong way to do it.
@@aliensoup2420 the lighting comes from the scene and compositing, if CGmatter used lamps to approximate the lighting then the movement would be necessary
Ok, i will stick to microsoft paint after seeing this
lmao.
lmao.
lmao.
lmao.
oaml
Wow, this music is way too intense for a technically oriented-
*"This is some of my favorite background music"*
I can see we're growing apart, CGMatter... we're definitely growing apart.
No, I like it. It makes it suspenseful. And much more interesting.
I think this is the kind of thing that separates the CGMatter videos from the Default Cube videos. CGMatter is meant to create intrigue into various things you can do in Blender, Default Cube takes its time and breaks things down. At least that's my take on it.
@@Wander4P Now he's doing asmr shit and i hate it
what's the name of the music?
Thank you for the comment, Penny Gadget.
We can animate Pokeballs, now!
YES
The music was cool but the dude kept talking and talking in the background.
wow something I'll never use, but it's finally in a nice compact video so I can save time. Revolutionary
I think the "Go 20 frames" then subdivide the frame intervals is brilliant...
Seriously, how can he talk for like 4 minutes non-stop without breathing?
Something's wrong, I can feel it!
All women can do that.
@@REDxFROG I like how your comment somehow manages to be both insulting masculinity and insulting women through casual sexism.
@@flytrapYTP it's insulting when women have the skills and I state this as a fact?🤓
Cuts.
the first blender tutorial i followed and actually finished
Me with my potato PC:
I'ma try doing this.
Me one mistake later:
I shouldn't have done that!
I should not have done that!!!
Funny, I did something very similar for a short film i'm working on atm. I have one tip for you: you can use one tracker inside the motion tracking to match the position of the sphere, then import a tracked empty, and now you can keyframe the motion of the object you want to track in, as in this video. The thing i did another way was to keyframe the scale afterwards, which could also be tracked inside the motion tracking system. This pipeline made the location tracking very easy.
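For anyone wanting to script that pipeline, here's a minimal bpy sketch of the single-tracker idea; every name here ("footage.mp4", "Track", "Camera", "Sphere") is a placeholder for whatever your own file uses:

```python
import bpy

sphere = bpy.data.objects["Sphere"]

# Pin the sphere's screen position to one 2D track from the movie clip.
con = sphere.constraints.new(type='FOLLOW_TRACK')
con.use_active_clip = False          # use an explicit clip, not the active one
con.clip = bpy.data.movieclips["footage.mp4"]
con.track = "Track"
con.camera = bpy.data.objects["Camera"]
con.use_3d_position = False          # follow the raw 2D marker, no solve needed

# Location is now handled by the constraint; only rotation (and, as noted
# above, scale) is left to keyframe by hand.
```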
I also made a nice moon mapping, check it out on my channel; you can have it to play around with, if you want.
I don't know what half of this means, but I've been recommended this at least four times now. So, I'm going to nod and pretend I understand, because I have to respect how much work this video took.
Best beginner tutorial for blender
I don't even video edit but I find this style of beating stuff down people's throats very entertaining.
Why is this in my recommended? i don't even know what this means
That's a lot more work than I had imagined! Btw have you made a tutorial on object tracking?
Shutter Authority, a 2-freaking-million-subscriber channel, has a comment with no likes or replies.
This shutter guy reminds me of a default cube
Amazing and entertaining video! Thank you. #blender #3dtracking #vfx #matchmoving ps: yes. The background track is awesome.
the compositing trick was really nice!
These are more than just tutorials, I watch all of your videos even though I can't make most of them
there's a video called The Octo-Bouncer where the ball position is tracked by a camera; basically the only thing left is to add rotation tracking
Also using shortcuts will save you time....... Just Alt + mouse scroll wheel or the arrow keys to scroll between frames, and hit the Record button to automatically insert keyframes
Haha, overwhelmed? Naw! Bring it on! Thanks as always! Awesome tutorial.
I Refuse To Believe That You Really Absolutely And Utterly Undeniably Have To Do The Tracking Manually!
I don't even have a sphere
love your tracking video
which is rare on youtube for some reason
Note: if your ball rotates too far for two dots to account for, don't simply add more dots. Add more dots *of different colors*. That way you'll know at a glance in what orientation relative to the starting position you are in, which is important because your eye will constantly fool you regarding how far a sphere has rotated.
I thought it would be a whole bunch of masking, but this is much smarter, thanks!
I'm watching your video at 3AM because of your commentary, I don't understand a single thing about blender but you make it very interesting
This was exactly what I needed before I knew I'd need it.
I don't know for what purpose you'd need to track a sphere, but good job; time-consuming, but a really amazing result!
Scaling instead of moving the ball's position in the tracking process is a one-way street: you'll have to redo the work if you need z-depth or simulation later on.
love the split screen.
Okay, another approach: Take your entire image sequence and put it through a Hough transform to detect circular objects. You can even do this from within Blender by running a Python script that imports OpenCV and make it create/adapt a sphere that matches the state of the sphere in your sequence.
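A sketch of that Hough idea with plain OpenCV, run outside Blender on the exported frames (inside Blender you'd first have to install opencv into its bundled Python); the frame filename and the threshold numbers are just illustrative:

```python
import cv2

img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # HoughCircles is sensitive to noise

circles = cv2.HoughCircles(
    img,
    cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator at full image resolution
    minDist=100,   # we expect one ball, so suppress nearby detections
    param1=100,    # Canny edge threshold
    param2=30,     # accumulator threshold: lower finds more (false) circles
    minRadius=10,
    maxRadius=200,
)

if circles is not None:
    x, y, r = circles[0][0]  # screen position and radius of the best circle
    print(f"ball at ({x:.0f}, {y:.0f}), radius {r:.0f} px")
```

The (x, y) track gives you location keyframes and r gives you the apparent-size curve; rotation would still be manual.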
Bro you are not a cg artist, you are fully a magician!
Glad to see your face back!
I just want a really long compilation of lots of amazing tracked 3D animations-
You’re amazing at editing, my guy!
Me, at 4am who doesn’t even have blender and needs to go to sleep: oooOOOoh, sphere tracking???
Please do the explosion integration tutorial 🙏
Great tutorial, cool to see another tracking technique, thanks as usual, keep up all the awesome work you're doing! Does CGMatter? Oh yeah, it does
CGMatter: *0:20*
Me: *_*ElijahWoodLaugh.mp3*_*
This is just everyday bread for this mastermind, I wasn't sure what a sphere was before seeing this
compositing node network be like:➡️⬇️⤴️↩️↙️⬇️↩️⬅️↗️⬅️⬅️
Wow, blew my mind
thank you for sharing the compositing process, i always feel alone when it's time to render a vfx :)
I wanted to cry watching this tutorial
100k sub soon. Consistent uploads. 👏👏
spam
u are right. still some of the dopest background music around #Skaler
I like that it says"was" at 1:08
I think it requires at least 4 points to determine orientation, but also the distance from the camera. I tried something similar, but after linking empties to a track, all the empties are in a single plane. Now, an important question: is there a way (maybe with python programming) to move objects or empties based on the space between them? For example, if the object is closer, the empties are more spread apart; if it's far, the empties are closer to each other. Four markers (and four empties) for the four corners of the image, then some mathematical approach to reconstruct depth data? If the distances between the original points are known, then we should be able to use this data to reconstruct the distance between each point and the camera, not only between the points themselves.
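The depth part is doable with plain pinhole-camera math: if the real distance between two markers is known, their on-screen separation tells you how far the pair is from the camera. A sketch with made-up numbers, assuming the two markers sit at roughly the same depth and face the camera:

```python
def depth_from_markers(real_dist_m, pixel_dist, focal_mm, sensor_mm, width_px):
    """Distance from the camera (metres) for two markers real_dist_m apart
    in the world that appear pixel_dist pixels apart in the image."""
    f_px = focal_mm / sensor_mm * width_px  # focal length in pixels
    return f_px * real_dist_m / pixel_dist

# Dots 2 cm apart, 60 px apart on screen, 35mm lens, 36mm sensor, 1920px frame.
print(depth_from_markers(0.02, 60.0, 35.0, 36.0, 1920))  # ~0.62 m
```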
What you are describing is auto-tracking software that already exists in Blender. Yes, if you have sufficient data, use an auto-tracking algorithm - but the point of this video is how to do something when you don't have sufficient data to use auto-tracking.
if squarespace is the sponsor it should have been "default cube tracking"
what if instead of scaling the sphere, you set the 3d cursor to the camera position, set the 3d cursor as the origin of transformations, and set it to only change location? Then you would have the sphere moving in 3d space at least.
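The geometry behind this idea: scaling an object's *location* about the camera slides it along the camera-to-object ray, so it genuinely moves in depth rather than shrinking in place. A tiny vector-math sketch (no bpy, coordinates are arbitrary):

```python
def scale_about_camera(cam_pos, obj_pos, s):
    """Move obj_pos to s times its current distance from the camera,
    staying on the same viewing ray (so its screen position is unchanged)."""
    return tuple(c + s * (o - c) for c, o in zip(cam_pos, obj_pos))

cam = (0.0, -5.0, 2.0)
ball = (0.0, 0.0, 1.0)
print(scale_about_camera(cam, ball, 2.0))  # twice as far away: (0.0, 5.0, 0.0)
```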
man i was not expecting that trickshot
This is the guy who made Jesus walk on water
This guy's comedic jokes make motion tracking more interesting
I'd better watch this on the default cube channel. Man, about 80 percent of this went straight over my head.
Thanks. I am now a professional at whatever this is.
that ending made me giggle
ant needed this a week ago
This is so engaging.
I like how he purposefully doesn't want anyone to be able to follow along
this guy, so big brain
I wish I had the equipment and skill to do this. One day i'll try though. Hopefully, it would go well.
wow. easier than I thought
thank you cg matter, very epic
I legit thought this was Adam Ragusea when the SquareSpace ad appeared
Now show us how to track camera footage perfectly aligned into a 3D LiDAR Scan of the scene in the footage (and render the two together with some transparency or something). I can provide data if you want...?
Good job man, the result is very cool
One after effects user disliked this video
ok
imagine paying for after effects
i have a feeling this is gonna be recommended to a lot of random people very soon.
I feel nothing but squarespace
Mind blowing
PFTrack has a geometry tracker, and Cinema 4D has an object tracker
To all you "Ugh, couldn't we not do this manually" out there, note that CGMatter is an honest hard-working individual (who happens to make blender memes on the daily) and will not tolerate your laziness. You can go to channels like EWan Lazubert if you want to be lazy.
Is this one of those videos that will show up in my recommended 6 years later
Wait you're saying the real meatballs were fake?!?! O:
Godzilla had a seizure listening to this and died
Damn, I think that was the one thing in this video that I understand 5:15
Yes.
Instructions unclear. Still have blue balls
i did not breathe through the whole video
6 minutes for cgmatter is like 12 hours for another video
Pair a vertex on the ball to an empty, track the marker on the ball, parent the empty to the tracker?
3:22-3:35 That's nice
Majestic, but.. I guess I'll need a slower and a bit more detailed explanation) (yeah, newbie here ;) )
But by just scaling the balls and not tracking them in 3D you are limited in the interaction between them, aren't you? So if you use Metaballs and put them together and hold one ball behind the other in the video, what will happen is that they will melt together even though they are fairly far apart depth-wise in the video, since in Blender they are just a big and a small ball next to each other.
So next tutorial you'll make an explosion using the default cube?
It’s very interesting to see a man tries to explain 3D video technology and doesn’t even understand anything but still find it interesting
So, I tried with a yellow ball. Now I know why there aren't yellow chroma keys
what did he use to render it, was it eevee or cycles
Stay with blue or green; digital cameras have double the resolution for green because of the Bayer filter, and blue is not too bad because the color is very far from the human color scheme
I got Cadbury gems balls so maybe I could try this thing someday 😂 when an add-on drops in.
i had no need for this tutorial, it was just recommended to me, and i have a lot of spare time so i watched it
I've been doing a lot of experimentation with the new fluid sim explosions, it sure would be nice to get a tutorial on it.