SynthEyes Essentials - 04 Proper QC and Export [Boris FX]

  • Published 13 Jul 2024
  • In this next chapter of Getting Started with SynthEyes, we look at a proper QC workflow and how to export. We also look at the importance of lens distortion and the best ways to undistort and re-distort your shot. SynthEyes expert Matthew Merkovich shares his efficient techniques for building and refining meshes.
    LEARN MORE ABOUT SYNTHEYES HERE : bit.ly/3vO3j62
    / / D I S C O R D C H A N N E L : www.borisfxdiscord.com/ / /
    • After finishing the 3D track, use SynthEyes' rendering tools to create a preview movie to see if the camera track matches the footage.
    • The Lens Workflow button in SynthEyes helps automate the undistortion process and prepares the project for export.
    • When rendering a preview, it's recommended to use a black background for the 3D viewport.
    • You can export the 3D scene to other compositing applications like Fusion for further refinement.
    • Fusion uses nodes to represent different elements in the compositing process.
    • SynthEyes offers options for handling lens distortion during export, including dedicated distortion nodes or ST Maps (displacement maps); a small Python sketch of the ST Map idea follows this list.
    • After exporting a redistorted wireframe preview, it's important to perform Quality Control (QC) to ensure everything matches up as expected.
    • The final step is to deliver the undistorted footage (as an image sequence) and other necessary elements.
    • What to look out for when exporting from SynthEyes to Blender
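    A minimal Python sketch of the ST Map idea mentioned above (file names, channel layout, and the bottom-up V convention are assumptions, not details from the video): an ST Map stores, per pixel, the normalized source coordinate to sample from, so redistorting a CG render is just a remap.

      import os
      os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # let OpenCV read EXR files
      import cv2
      import numpy as np

      # Hypothetical file names for illustration.
      cg = cv2.imread("cg_undistorted.exr", cv2.IMREAD_UNCHANGED)
      st = cv2.imread("redistort_stmap.exr", cv2.IMREAD_UNCHANGED)

      # OpenCV loads channels as BGR: red (U) is index 2, green (V) is index 1.
      map_x = (st[:, :, 2] * (cg.shape[1] - 1)).astype(np.float32)
      # ST maps are usually bottom-up; flip V to OpenCV's top-down rows.
      map_y = ((1.0 - st[:, :, 1]) * (cg.shape[0] - 1)).astype(np.float32)

      redistorted = cv2.remap(cg, map_x, map_y, interpolation=cv2.INTER_LINEAR)
      cv2.imwrite("cg_redistorted.exr", redistorted)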
    #3Dtracking #SynthEyes #Fusion #VFX
    // C H A P T E R S //
    0:00 Intro
    0:41 Rendering a CG Stand-in Movie
    1:15 Switching Between Rooms
    4:18 Lens Room - Lens Grids
    7:27 Removing Lens Distortion
    11:24 Rendering a CGI Stand-in
    16:50 Re-distort in Blackmagic Fusion
    20:38 Adding Distortion Back to our CG Elements
    22:45 Working with ST Maps
    25:49 Slap Comping the Redistorted Wireframe
    29:39 Render Undistorted JPEGs from SynthEyes
    31:34 Exporting to Other Applications
    32:12 Thoughts about Exporting to Blender
    34:35 Conclusion
  • Film & Animation

COMMENTS • 42

  • @glmstudiogh
    @glmstudiogh A month ago +4

    I want to thank you, Matthew, for all your lessons, even from before Boris acquired this amazing program. I tried learning SynthEyes back then but dropped it because it was a bit confusing. Your tutorials have helped, and now I don't want to give up on learning it, because I can see how powerful it is for matchmoving.

  • @juanorea9215
    @juanorea9215 A month ago +2

    Thanks Matt! Really appreciate you going thoroughly through the settings and explaining the 'whys', not just the 'click here and there'. Loving this series!

  • @imtiazali6980
    @imtiazali6980 A month ago +4

    Sir, kindly continue this series and show us the distort/undistort/redistort CGI workflow and how to apply those operations in compositing, please.

    • @MatthewMerkovich
      @MatthewMerkovich A month ago +1

      This video pretty much covers it. You'd just run the lens workflow, export to your 3D animation app (Maya, Houdini, C4D, Blender, etc.), and then import your 3D assets and do your layout and lighting. Then you'd render at the overscan, undistorted resolution, and finally redistort that in comp, exactly like the CGI stand-in movie here. It's really that simple.
      Unless you had a more specific question?
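      If Blender is the 3D app in that chain, a minimal sketch of the render-size step (the resolution numbers here are invented; use the overscan size SynthEyes reports):

        import bpy

        scene = bpy.context.scene
        # Hypothetical overscan size; use the undistorted resolution from SynthEyes.
        scene.render.resolution_x = 2074
        scene.render.resolution_y = 1166
        scene.render.resolution_percentage = 100  # no extra scaling on top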

  • @estebancrop
    @estebancrop A month ago +2

    Amazing! Thanks, Matthew Merkovich and Boris!

  • @minimalfun
    @minimalfun A month ago +2

    Another gem from the master himself!

  • @williamreliford_vfx_artist
    @williamreliford_vfx_artist A month ago +2

    Wow! A master teacher. Well done!!

  • @merdovfx
    @merdovfx A month ago +1

    Thanks a lot for this tutorial, you covered everything I was looking for.
    Can't wait for the LiDAR and geo track tutorial ❤

  • @glmstudiogh
    @glmstudiogh A month ago +1

    Hi everyone. So I figured out the export for Houdini through a forum. Houdini expects a .hip file, which SynthEyes apparently doesn't write, so the solution is to open the .bat file in a text editor, copy the code, open the Textport in Houdini, and paste the code in. That's it.
    As for the sliding issue I had between SynthEyes and C4D, the problem was the frame number, not the fps: it's about which frame the project starts on compared to the actual plate. My plate starts on frame 1001 but the project in C4D starts on frame 0, so I just had to match that and it worked.
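    A minimal Cinema 4D Script Manager sketch of that frame fix, assuming a plate starting at frame 1001 (swap in your plate's start frame):

      import c4d

      doc = c4d.documents.GetActiveDocument()
      fps = doc.GetFps()
      start = c4d.BaseTime(1001, fps)  # assumed plate start frame

      doc.SetMinTime(start)      # project start now matches the plate
      doc.SetLoopMinTime(start)  # keep the preview range in sync
      c4d.EventAdd()             # refresh the timeline and viewport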

  • @muhammadbadran4896
    @muhammadbadran4896 A month ago +2

    As usual, it's a great tutorial!
    Could you make a tutorial about the coordinates window, especially Seed and Lock?

    • @MatthewMerkovich
      @MatthewMerkovich A month ago +2

      Absolutely! Next, I'm going to be talking about GeoH Tracking within a solved camera space, and there will be a lot to cover when it comes to constraints.

    • @muhammadbadran4896
      @muhammadbadran4896 A month ago

      @@MatthewMerkovich Thank you so much!

  • @kostas-fh1oi
    @kostas-fh1oi A month ago

    Thank you, just thank you!!!

  • @BrianMartees
    @BrianMartees 5 days ago

    Thanks for your tutorials, they really helped me understand SynthEyes. It would be great if you showed how to find "depth".
    Shots without much depth variation are fine, but when tracking a big cityscape, SynthEyes goes crazy with depth. Points that should sit on the same line, like the edges of a single building, end up a cosmic distance apart, so it's almost impossible to build good geometry around such a track.

    • @BorisFXco
      @BorisFXco 5 days ago

      This may be a scenario where setting up more constraints is a good idea. We have a couple of (admittedly rather old) tutorials available here: borisfx.com/videos/?tags=product:SynthEyes&search=constraint
      If you have some specific questions, please join our Discord community. It's good to get more voices there: www.borisfxdiscord.com/

  • @abdessamadhasnaoui6887
    @abdessamadhasnaoui6887 A month ago +1

    Thank you so much!
    Are you going to do any more in-depth tutorials about this?
    Path filtering... troubleshooting trackers...

    • @MatthewMerkovich
      @MatthewMerkovich A month ago +1

      I'm thinking about going into a bit of GeoH Tracking next. Many seem to find it perplexing, but it really is very powerful. I've also had an idea for a couple of years: a tutorial about path filtering and smoothing while also keeping your track from sliding.

    • @abdessamadhasnaoui6887
      @abdessamadhasnaoui6887 A month ago +1

      @@MatthewMerkovich thank you 😀

  • @the_shizon3322
    @the_shizon3322 A month ago

    Awesome video. Can you export the undistorted plate from SynthEyes to use as a camera plate background in your 3D app?

    • @BorisFXco
      @BorisFXco A month ago +1

      Yes, absolutely. Here's a link to the workflow for you: borisfx.com/documentation/syntheyes/SynthEyesUM_files/delivering_undistorted

  • @glmstudiogh
    @glmstudiogh A month ago

    Great lesson, Matthew. I'm a bit confused about the cropping. If the image has been upscaled a bit, wouldn't it look different from the original plate when comping, or is that an overscan workflow? And how do you tell how much cropping to do in SynthEyes per plate?

    • @MatthewMerkovich
      @MatthewMerkovich A month ago

      The image doesn't get "up-scaled" during the cropping operation, to be clear. We are increasing the *_canvas size_*, so that when the un-distortion gets applied, which *_does_* increase the image resolution, all the original photography has a space to be undistorted into.
      SynthEyes does all the calculations for how big to make the new canvas using the "cropping" tool (which, again, is not *_cropping!_*), based on the lens distortion calculations.
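      The arithmetic behind that canvas growth is simple; a sketch with invented padding values (SynthEyes derives the real ones from the solved distortion):

        plate_w, plate_h = 1920, 1080  # original plate resolution (hypothetical)
        pad_x, pad_y = 0.06, 0.06      # invented per-side padding fractions

        # The canvas grows so edge pixels have room to land after undistortion.
        canvas_w = round(plate_w * (1 + 2 * pad_x))
        canvas_h = round(plate_h * (1 + 2 * pad_y))
        print(canvas_w, canvas_h)      # 2150 x 1210 with these made-up numbers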

  • @glmstudiogh
    @glmstudiogh A month ago

    I also tried exporting USD, but the camera is static and doesn't move.

  • @glmstudiogh
    @glmstudiogh A month ago

    OK, so after doing many tests, I still can't get an accurate scale when I export to Cinema 4D. Any help with that, please? All the other programs work perfectly, but C4D is my main DCC.

  • @user-ij9nl3xf5k
    @user-ij9nl3xf5k 4 days ago

    I have a few questions; please help me.
    1. How do I export to the Fusion that's included in DaVinci Resolve?
    2. I run the lens workflow and then export to Blender. The exported file has the background and distortion as shown at 9:15. After placing the model in this state, should I render it un-distorted in Blender? And do I have to re-distort it again?

    • @BorisFXco
      @BorisFXco 3 days ago +1

      1. You can use the Fusion exporter from SynthEyes to create a .comp, then go to File > Import Fusion Composition inside of Resolve. It will create the scene for you with every node you need in the flow.
      2. Re-distortion is usually done at the compositing stage. If you're rendering a full comp from Blender, you can do it then. If it's going somewhere else, you'll probably want to render out un-distorted and use the calculated lens to bring all the CG elements together in comp.

    • @user-ij9nl3xf5k
      @user-ij9nl3xf5k 3 days ago

      @@BorisFXco Thank you for your kind reply.
      Let me ask you one more thing. If you shoot a video that involves turning your head, there will be a lot of motion blur. Is there a way to track this correctly? When tracking the traditional way, there are no trackers at all in areas with severe motion blur.

  • @sonnhost
    @sonnhost A month ago

    How do I export the un-distortion attribute for compositing in After Effects?

    • @BorisFXco
      @BorisFXco A month ago

      Here you go. We have a full video about that: borisfx.com/videos/whats-new-in-syntheyes-2024-1-part-2/

  • @zurasaur
    @zurasaur A month ago

    Amazing tutorial. I've used lens grids for years. Could you explain why we get better tracking results from the lens estimation than from properly undistorting with a grid?
    I've tested it, and you do indeed get poor results with a properly pre-undistorted plate compared with letting SynthEyes do the estimation.
    How is this possible?

    • @MatthewMerkovich
      @MatthewMerkovich A month ago

      SynthEyes isn't doing an "estimation," unless you want to define the word as any representation of the lens distortion by ***any*** means. It may help to think of it this way: SynthEyes calculates the lens distortion based on the behavior of the 2D trackers as they move through the frame, and how they deviate from the basically linear path they'd be on if the shot had been captured with a pinhole camera. So, as your 2D tracker moves to the corners and edges of your frame, the deviation can be graphed over time. In this tutorial series I am using the Std. Radial 4th Order lens model. You can read more about the math here: en.wikipedia.org/wiki/Distortion_(optics)
      Now take into consideration whatever lens grid you have. How was it shot? How big is it? That will determine the photographic focus distance of the lens grid, and that focus distance will introduce some amount of lens breathing. Now consider the VFX shot in which you are going to use the distortion derived from that lens grid. Was it shot at the same focus distance? Probably not, and if there is a rack focus, definitely not. So your lens grid deviates by some amount from the distortion that is actually in your shot. And then you have manufacturing errors, which might be small but absolutely exist, for the lens, your lens grid, the camera body, and the list goes on.
      As with all my answers, though, every shot is different. Having that lens grid to fall back on can be a huge help. (I'm using one right now.) But usually, nearly always (when there are plenty of 2D trackers covering the frame, with good parallax, and a good solve), the calculated lens distortion is more accurate than brute-forcing a lens grid into the solve calculations.
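      For anyone curious what a 4th-order radial model looks like in code, a toy Python sketch (the k1/k2 coefficients are invented; SynthEyes solves for the real ones from the tracker paths):

        def distort(xu, yu, k1=-0.12, k2=0.02):
            """Map undistorted (pinhole) normalized coords to distorted ones."""
            r2 = xu * xu + yu * yu                 # squared radius from optical center
            scale = 1.0 + k1 * r2 + k2 * r2 * r2  # 2nd- and 4th-order radial terms
            return xu * scale, yu * scale

        # Deviation grows toward the corners, which is what the 2D trackers
        # reveal as they sweep across the frame:
        print(distort(0.1, 0.1))  # near center: barely moves
        print(distort(0.9, 0.9))  # near a corner: noticeably displaced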

    • @ryanansen
      @ryanansen A month ago +1

      I may be completely wrong, but purely speculating, I'd imagine the reasoning is that a lens grid captures the lens distortion of a lens at one very specific point in time (i.e. the moment you shot the lens grid).
      The reason this might matter is that there could be very subtle micro-changes in the properties of the lens between the time the lens grid was shot and the time the principal photography was shot. Maybe the lens was locked into the camera slightly differently, or settled into the locking bracket in the subtlest of ways differently, enough to yield a microscopic difference in how the glass elements are positioned in relation to the sensor.
      Another reason I could imagine it's more accurate is the same reason he mentions an ST map might not be the way to go: properties of the lens might change throughout the duration of a clip, such as lens breathing, etc.
      Or, again, maybe I'm 100% wrong and Matthew will chime in with a correct answer hahaha. I'm just speculating is all!

    • @MatthewMerkovich
      @MatthewMerkovich A month ago +1

      I can only agree with everything you just said, @ryanansen. One point I’ll expand on is that to capture the changing distortion, you have to solve for all those variables, and the more you’re solving for, the more trackers you’ll need.
      But yes to everything you just wrote.

    • @zurasaur
      @zurasaur A month ago

      Thanks guys, I wasn't aware the distortion usually changes or breathes throughout the shot; I always work with a locked focal length and focus, so I thought it would be consistent.
      It's just a bit annoying to be using estimations instead of having a concrete method of undistorting any lens using real data.

    • @MatthewMerkovich
      @MatthewMerkovich A month ago

      @@zurasaur Lens breathing only happens when you change focus, so the breathing wouldn't necessarily be changing throughout the shot. But you see it all the time when you're watching movies or TV shows, especially ones shot with anamorphic lenses, which: 1) have the most lens distortion, and 2) exhibit a lot of lens breathing when pulling focus. Just look for when they rack focus a shot, and yowza! It looks like it's a zoom lens! 😂

  • @mr_bugzzz
    @mr_bugzzz 23 days ago

    I still don't understand the purpose of all these manipulations with distortion and re-distortion... Why should I do this? Or do I not need it at all?

    • @BorisFXco
      @BorisFXco 23 days ago

      Footage shot with real lenses has a certain amount of curvature to it, generally more pronounced towards the edges of the image. This can be almost unnoticeable, or very obvious if you think of a fisheye lens, for example. CG images, by contrast, are created with perfect, square pixels across the frame.
      If you're trying to composite CG with real footage and you don't compensate for this distortion, then your CG won't be matchmoved properly. Viewers may see the issue and _feel_ something is off, often without being able to say why.
      In this setup, we undistort the plate and have track points for that. It's now flat. When that is combined with the flat CG, we want to redistort the result so it looks like it was shot in camera. Usually, instead of redistorting the whole thing, we just redistort the added elements, but the reason is the same: so it looks like it was all shot with the same real camera and lens.
      Hope that makes sense.

    • @mr_bugzzz
      @mr_bugzzz 22 days ago +1

      @@BorisFXco Thanks a lot for such a detailed answer) Now I understand 😅

  • @manolomaru
    @manolomaru A month ago

    ✨👌😎🙂😎👍✨