Amazing videos! This is the first one I've seen that covers all the information nobody else bothers to explain; most just go straight into the processing. Thanks for the help!
I really appreciate the amount of time and effort that has gone into making this video, and thank you for your generosity in sharing your workflow. Really helpful and really useful. All the best!
13:13 After filtering by confidence, is there no need to Update Transform or Optimize Cameras? What do those operations actually do, anyway? When should or shouldn't we use those functions? I've heard people talk about calibrating photos, or having trouble calibrating photos. What is that? What's the correct workflow for adding GCPs to the project? Some people say to transform the default drone GPS coordinates to a projected coordinate reference system, and others say to deselect the cameras in the Reference pane. Is ASPRS a good place to study? I'd like to learn more about the things that aren't in the typical YouTube videos. If you feel like doing an atypical, deep-dive video into things like this, I'd love to watch it! It's autumn here now. Spring there? Thanks!
@geospatialtips Thank you, your videos are helpful. Could you do a video about merging chunks on larger datasets of images? When to merge, and how to remove erroneous points when merging. I find it difficult to get right on linear missions running 20 km at a time with GCPs. Thanks!
Thanks for a great video. Are the filtering steps needed when you use GCPs? Is later processing faster when you filter vs using all the points? I usually erase all the points outside my area of interest after aligning. For contours, I make a 30 cm, 50 cm or coarser per-pixel DEM and build contours from that. They are way smoother and better looking, not zig-zaggy, and that is what you really need. If you survey land with RTK GPS or a total station, you wouldn't measure a point every 5 cm, or even every 5 m on large surfaces. Thanks!
Thanks for the comments! Yes, I would always suggest the filtering steps as it cleans out points that have a lower certainty or may contain some errors. I am not sure that the later processes are any faster, but models will be cleaner. So I suggest always doing it as it is a fast process and won't cost you time, but will yield cleaner/better outputs. As for the contours, that is a valid point, thank you. I would usually create contours in Global Mapper, ArcGIS or something like that anyway, but your point is well made. I suppose one might consider whether we want contours that are aesthetically pleasing, or technically more accurate and create them accordingly. Thanks for watching.
Hi there. It's very good of you to help folks dig into Metashape. I'm curious what metrics you used to quantify the quality (error reduction) achieved with your particular settings? In my testing, I have come up with different general values for the gradual selection parameters, and also for some of the camera optimization settings. It's okay that you have settled on different settings, but how did you determine that they were best for you? In my case, I use the error that MS reports for both check points and pixel error values. For example, if I remove tie points (by gradual selection) and check point and/or pixel errors get worse, I back up and try a different value. Same with the optimize camera settings. By taking this approach, I have found the general workflow settings that work best for me. I'm not saying that your workflow does not work best for you; I'm just curious how you went about quantifying the improvement? Maybe I can learn something new! Thanks!
Hi Dave, thanks very much for your thoughts and comments.

When it comes to Gradual Selection, unless you are using the same camera each time, achieve the same GPS accuracy each time, fly the same terrain type etc., the numbers will need to fluctuate. As an example, when using a metric camera, you will almost always be able to select a Reconstruction Uncertainty value of about 5 or even less, whereas with a lower-grade drone camera, I find the starting value tends to be around 10-12 or even higher.

In a general application: Generally, I have target values in mind, such as those mentioned above, and will then evaluate how many points would be removed if those values were selected, keeping in mind the conditions of flight, the terrain being considered, image reflectance values etc. My honest view is that if we look only at the numbers and do not concentrate on what we are seeing in the results on screen, we're potentially opening ourselves up for trouble. Why, you might ask? Well, if I set my control point accuracy to 0.001, select the markers and then optimize the cameras, I will almost always get a very good residual in terms of numbers. However, MS will only be fiddling the mathematics to tend toward that 0.001 accuracy value. If my data is actually poor in regions beyond those control points, it won't be any good purely because the residuals look good - but I am sure this isn't news to you. Across all photogrammetric software packages, unless we keep all of the camera parameters fixed, the calculation of residuals serves to illustrate how well the software has done in manipulating the calibration values to achieve the target value we set for it.

However, this video was aimed at a non-GCP approach and, to answer your question, I arrived at values like those chosen after performing hundreds of projects with scores of sensors, and found these to be adequate starting values for my and other general applications.
If I were to do the video over again with 5 different sensors, I would likely use 5 different values at each step.

When using GCPs, and in your particular case (I think): where I assume you are using both control points and check points, observing the change in check point residuals makes sense, so long as they have been held independent of the control points. This would also be true if you are using no control points, only check points, and not constraining your data to the check points at all. In these cases, the check point residuals carry more weight and have valid meaning. Am I correct to assume you are using one of these two approaches?

Poor application of GCPs: In the other case, where one might only be using GCPs as control points and constraining the data to them, we can fiddle the gradual selection and optimize camera parameters all we like, achieve resulting residuals of close to 0, and then declare our job well done. However, we may simply have distorted the camera calibration so badly that it has forced the focal length, principal points etc. to such an extent that the data appears to fit the control perfectly, where in reality, if we were to compare this data to check points or a LiDAR survey, we would be found wanting. I have seen so many users do this with Metashape, Pix4D, UAS Master etc. Furthermore, it's odd to see how many people have an amazing level of faith in GCPs and fail to consider whether they are derived from static observations with a full network calculation, or RTK, or RTX. Are they levelled or not? What does the survey report say? Ellipsoidal versus orthometric... the list goes on!

Conclusion: At the end of the day, my feeling is that we need to apply good survey practices and principles. If we have done all of that correctly and it results in excellent residuals, then great.
If we have done all of that and we have poor residuals, we should then apply further good practice, investigate the source of error and try again. I don't want to say don't look at the residuals, but in my mind they should be what we look at last, as the result of a professional job. Apply sound survey and photogrammetric principles, look at the data, consider the starting versus final camera calibration values, then consider our check points independently of control points. If we process only to target good residuals so that we can hand over a glowing report, but pay no attention to what's really happening with our data, then in our respective professional worlds we would all soon have to appear before our representative councils and explain our actions once clients had found us to be lacking in our conduct.

Very long story short, I think there are a few different methods we can all use to come to a good result; yours and mine, I hope, are both good candidates. In this video I tried to convey a generic approach that serves as a good starting base for anyone new to MS or starting down their photogrammetric journey. Thanks for taking the time to chat.
@@geospatialtips Thank you for the thoughtful reply. All my use has been with lower-quality drone cameras: first the P4P and now a P4RTK (same camera), and an Autel Evo RTK, probably similar quality. I do use GCPs and check points, and my reference to watching residuals concerns the check points only. I also watch what MS reports for average pixel error on the images. I think check point residuals are the single most valuable indicator of the absolute accuracy of the surface model. I agree with you that there can be unaccounted-for error in the ground control, both GCPs and CPs - less so if the points were observed over different days and then averaged, which is rare. Considering all of the metrics I have in a dataset to evaluate accuracy, the CPs seem to me to be the best candidate. That is why I use their residuals to determine if the level of thinning of the sparse cloud (key points) is helping or is a detriment. I will fully admit that I am not knowledgeable enough to look at the camera calibration and determine if the values are reasonable or not. When not taking residuals into consideration, I guess I am still not sure what you are looking at to gauge the levels of points to remove and the different parameters you select in the Optimize Cameras steps? Thanks!
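For what it's worth, the check-point-residual approach discussed above boils down to a simple RMSE over points withheld from the adjustment. Here is a minimal sketch in plain Python (not the Metashape API; the residual values are made up for illustration):

```python
import math

def rmse_3d(residuals):
    """Root-mean-square of 3D residual vectors (dx, dy, dz)."""
    if not residuals:
        raise ValueError("no residuals supplied")
    sq = [dx * dx + dy * dy + dz * dz for dx, dy, dz in residuals]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical check point residuals in metres, measured after optimisation.
# Because these points were never used as control, this RMSE is an
# independent estimate of absolute accuracy.
check_residuals = [
    (0.012, -0.008, 0.020),
    (-0.015, 0.010, -0.025),
    (0.009, 0.004, 0.018),
]
print(round(rmse_3d(check_residuals), 4))  # -> 0.0257
```

If the RMSE gets worse after a gradual-selection pass or an Optimize Cameras run, you roll back and try a different threshold, exactly as described above.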
Hi - check the Metashape Preferences/Advanced tab and make sure the box is ticked to allow Metashape to read that kind of data on import. If it still is not there, right-click on one of your images in File Explorer, go to Properties and then Details. Scroll down the list and make sure the coordinates are saved in the raw image data. If none of that helps you, maybe send me one of your images and I will take a look. (geospatialtips@gmail.com)
Thanks for the video. How come the model viewed directly in the software looks nicer and sharper than the .fbx export, which looks less sharp? Thanks in advance.
Thank you very much for this video. It is very well detailed. But I'm having a bit of difficulty with step 3: building the Dense Cloud. Is this option not available in the Workflow menu of version 2.1? Thanks in advance for your response.
Great video. Highlights of buttons clicked (with on-screen text for each process in sequence) would help dramatically in the ability to follow along in future videos. Creating orthos and DTMs (with added GCPs... another lesson somewhere?) is my main goal too. So perhaps I can avoid the point cloud process completely. Pix4D seems not to demand attention to point clouds to build orthos, DTMs and DSMs in one hit!
Hi, thanks for watching, and for your comments and suggestions too. The point cloud is not a requirement in Metashape in order to generate the orthos, but because I wanted it for other outputs, I chose to use it in this case. One could generate a DSM from depth maps and then the ortho from that. However, the DTM should only be produced from a properly filtered ground model, and I would suggest that before the DTM is produced, the user (i.e. you and I) should check that model for correctness. :)
I need one large image of the whole scene from the top, but with the orthomosaic it's like a puzzle - I know, mosaic^^. Is there another option, like just rendering the current perspective as a high-res image?
I don't believe there is a render option, like you are describing, at least not one that I know of. But I am interested in the issues you are having with the mosaic. Maybe it is just how I am reading your message, so please correct me if I am wrong. The mosaic doesn't have to be like a puzzle (accurate alignment and blending) and could easily be created as a whole/single scene that is exported as a single file, rather than a tiled export. Let me know if I can help a little more?
Hi Paul - thanks for watching. For this example, I have used a modest Mavic Mini. The point really is that it comes down to how you collect and then process the data (my experience is really all in data processing and that's why I focus on it). So, in your case, with that equipment listed I would expect superior results, so long as you are careful about how you do things. *At the same time, I have seen many results of excellent and very expensive equipment yielding useless data because best practices were not followed.
Can you explain why you use adaptive camera model fitting and remove points by error before accuracy? I read the USGS guide and they suggest accuracy first then error, and they also recommend to use specific calibration parameters for each step. Thanks!
Hi - thanks for your question and comments. I'm not sure of the exact USGS workflow you are referring to, but I see some comments on the Agisoft forum as well. Anyway, unless you are using a high-end camera, or in fact a metric camera, I don't believe that fixed parameters will remain constant for a less expensive camera, such as those on a standard DJI Mavic. As you fly and the drone is buffeted by wind etc., there will be certain flexes in the sensor and lens, and they are thus (in my opinion) not fixed, hence the Adaptive model is used. I have also just found better results when doing this. If, however, you are using something like a $50,000 PhaseOne iXM150 - a fully metric camera with a fixed lens and an official calibration certificate - that's a different story. *I have seen a USGS article on coastal mapping with drones where they say to keep your key-point limit set to 0! This is a very bad decision, as there will be no filtering at all and the preliminary camera calibration will be based on bad data from those points not removed.* I filter the points in the order I do really because it's something I saw many years ago; it worked well and I stuck with it. Sometimes I run the whole process iteratively, and it never seems to mess anything up. Just remain focused, look at the results and you'll be fine. This method is what has worked for me and in my experience yields accurate and repeatable results. There are many ways to approach the workflow, but for me this is tried and tested, hundreds of times. Feel free to tinker with the approach, just keep checking the results and don't rely on the numbers - look at the data itself. Cheers!
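To make the "run the passes in order and watch how much each removes" idea concrete, here is a plain-Python sketch of sequential gradual-selection passes. This only mimics the logic; the metric names and thresholds are illustrative, and it is not the Metashape scripting API:

```python
def gradual_selection(points, passes):
    """Apply filter passes in order, reporting how many points each removes."""
    kept = list(points)
    report = []
    for metric, threshold in passes:
        before = len(kept)
        kept = [p for p in kept if p[metric] <= threshold]
        report.append((metric, before - len(kept)))
    return kept, report

# Hypothetical tie points with per-point quality metrics.
tie_points = [
    {"reconstruction_uncertainty": 8.0, "reprojection_error": 0.3},
    {"reconstruction_uncertainty": 55.0, "reprojection_error": 0.4},
    {"reconstruction_uncertainty": 12.0, "reprojection_error": 1.2},
    {"reconstruction_uncertainty": 9.0, "reprojection_error": 0.5},
]
kept, report = gradual_selection(
    tie_points,
    passes=[("reconstruction_uncertainty", 50.0), ("reprojection_error", 1.0)],
)
print(len(kept))  # -> 2
```

The point of the `report` is the practice described above: check how many points a threshold would remove before committing to it, rather than trusting the number blindly.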
@@geospatialtips I had the same problem as @josephagouze3770. I was able to generate the dense point cloud following your instructions, but now "Filter by confidence" is greyed out and stays that way. Any idea what to do?
Hi - yes, when you create the point cloud, select the Advanced option and there will be a check box you need to enable that allows you to calculate the point confidence. I suspect this was turned off for you.
Hi - contours are simple enough: File > Export > Shapes > select DXF. For points, there is no direct CSV option. I'd suggest exporting as TXT and turning off all additional options such as RGB, normals etc. (unless you want those). Then open that file in Notepad++ or UltraEdit etc. and change all spaces to commas. That should sort you out.
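If you'd rather script the space-to-comma step than do it in a text editor, a few lines of Python will do it (the file names here are placeholders, and the header assumes a plain X Y Z export):

```python
import csv

def txt_to_csv(src_path, dst_path, header=("X", "Y", "Z")):
    """Convert a whitespace-delimited point export to a CSV file."""
    with open(src_path) as src, open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        for line in src:
            fields = line.split()  # splits on any run of whitespace
            if fields:             # skip blank lines
                writer.writerow(fields)

# Usage (hypothetical file names):
# txt_to_csv("points.txt", "points.csv")
```

Unlike a blind find-and-replace, `line.split()` also collapses runs of multiple spaces or tabs into single delimiters.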
Hi, thank you for your tutorial. I followed most of your steps, but I noticed that when adding photos to Agisoft, the longitude and latitude information in the Reference pane is missing and all photos are greyed out (no tick in the box). Can you help me identify my problem or missing step? As a result, I'm unable to create a DEM. Thank you in advance for your response.
Hi - sorry to hear about your issues. I can think of two things off the top of my head: 1. Your images don't have any georeferencing for some reason, you can check this by right clicking on the file in File Explorer, then going to the Details tab and scrolling down. If the GPS info is available, you will see it. 2. There is a setting in Metashape to disable all metadata being imported. This might be active for some reason. Go to Tools/Preferences/Advanced and take a look. If you still can't find a solution, feel free to send me an image, or a few of them, and I'll take a look for you. geospatialtips@gmail.com Cheers
I'm working on a site with cranes. The tall boom of one of the cranes does not reconstruct well and it blocks details on the ground below. I've been deleting the points of the boom from the DPC. Then I create a 2.5D mesh from the DPC and an ortho from the mesh. Unfortunately the deleted points reappear in the ortho! How can I prevent this? In the past I think I have done this successfully, but it's not working now. I guess I must be doing something different, but I have no idea what that would be. Maybe in the past I used a 3D mesh, or made the ortho from the DEM... or something else? I'm lost, and I've reprocessed the project three times already 😞 I was able to assign different photos in the ortho to hide the crane, but I'd rather just delete the points. At times I'd like to delete trees, wires and other things, so this is an important task to learn. At this point I can't easily go back and classify the crane as something to leave out of the ortho, because those points have been deleted from the DPC. Please help. Thanks! Using version 1.7
Hi Jerseyshoredrones! Let's see if we can help... When you say that the "points reappear in the ortho", is it the points themselves (such that the crane is rectified in the ortho at crane height) or, as I might expect, is the crane still seen in the ortho but rectified onto the ground surface? To explain it this way: the points you use will create the 2.5D or 3D mesh, and yes, the crane should not be there if those points were deleted and the project then saved (which in turn saves the points without the crane in them). Then you have a clean mesh. The next step is the creation of the ortho, which is rectified onto that surface. However, the crane in the image may very likely still be rectified, because the image and the 2.5D mesh are two completely separate entities. At this stage, the crane is just a set of pixels in an image, not a 3D modelled crane. So the only way to get it out of the image is to use very selective images, or to use a seamline editor after processing to cut the crane out. Basically, some things we can get rid of automatically and others we have to do manually. But if that crane is there in 3D in the ortho, I'd suggest the point cloud hasn't been saved, or not saved properly. A crane in the image (maybe draped onto the ground or an underlying building etc.) is just something we need to manually edit out. I hope I have understood you correctly and offered some insight, but if I haven't, please feel free to email me with some screenshots and we can take a deeper look?
Thanks for this! Do you find that the dense point cloud doesn't have enough points to work with after filtering? My computer takes HOURS to process the point cloud. 800 photos, JPEG... not sure what I'm doing wrong.
Hi James - no, in fact I often filter it a bit more, or I might generate key points for certain tasks. What percentage of your points is it filtering out? What is your use case for the data? That might affect the number of points you need. If it is getting rid of the majority of your points, we might need to review the camera calibration and see what is happening there. Yes, 800 images might take a long time, especially at very high resolution. Also, if you have an older or less powerful workstation, processing times will be affected. If it will help, you are welcome to share some screenshots, a video, your data or anything else with me by email and I'll take a look, after which we can review it together? It's up to you, but let me know and maybe I can help. Cheers
I am using the new Mavic 3 Enterprise. My understanding is that it has a good, high-resolution sensor. I'm honestly not sure how much I'm filtering; I pretty much followed your instructions step by step. I am very new at this, so I'm on a very steep learning curve. My computer is a gaming laptop with 16 GB of RAM - it may not be enough.
@@jamesnahill7124 That is a good drone with a solid sensor, no problem there. 16 GB may be a little light for bigger jobs, so perhaps that is where the time is being spent in processing, but it'll work, just a little slower. I am just not sure why you'd be losing so many points; it's tough to guess without seeing the data. Can you confirm the elevation you flew above the ground? What percentage of image overlap do you have, both forwards and sideways? Are you able to tell me the camera focal length before and after adjustment? (You'll find it under Tools > Camera Calibration; we want to compare the initial and the adjusted "f" value.) My email is geospatialtips@gmail.com if you would like to send some data my way.
Hi, I'm currently trying to create a DEM but, for some reason, my DEM doesn't align with the rest of the models and comes out all stretched and completely out of the location I want it to be. I've also noticed that my cameras are misaligned with the control points (they are in the right location), despite everything being in the same coordinate system (ETRS89 / UTM zone 29N, EPSG:25829). Can you tell me what I can do to solve it?
You're very welcome - thanks for watching! This is just the way I handle processing in Metashape, not to say it's perfect, but tried and tested and seems to do the job (most of the time).
Great video, thank you. When in this workflow would you recommend locating GCPs, when they are being used? After the dense cloud, but before the filtering, perhaps?
Hi Malcolm. Thanks for watching. After alignment, I would clean the tie-points first using the "Gradual Selection" tool, then add the GCPs and optimise cameras. Following that, create the dense cloud (which will then be created off of a GCP corrected tie-point set) and finally, filter the dense cloud.
@@geospatialtips Hi! Thank you for sharing your knowledge! Could you please explain what makes you clean the tie point cloud first and then apply the GCPs? Would there be any conflict with cleaning the tie point cloud before doing a manual camera setup? My regards! O.
Hi - thanks for watching. This link will take you to Agisoft's recommendations for system specs; I suggest focusing on the "Advanced" configuration section: www.agisoft.com/downloads/system-requirements/ In short, a good config is the following: 64-128 GB RAM; Intel i7/i9 CPU, minimum 3 GHz (faster is better); a GPU with 8-12 GB of VRAM, e.g. an Nvidia 3080. (The AMD equivalent CPUs and GPUs will work as well; keep specs to these or better.)
@@geospatialtips This tutorial has helped me a lot; I've watched it a few times already. In 2018 I bought a Phantom 4 Pro and started using DroneDeploy for golf course mapping. I've just purchased a Mavic 3E and started using the Metashape Pro trial version. I'm still debating which software to use. Metashape Pro is expensive, but has reasonable prices for cloud processing and hosting. Reality Capture has a good PPI option, but no cloud processing and hosting.
After the point cloud process. The points collected in the tie-point process will be used to optimize your camera alignment. Thus, if you clean up too many points, you run the risk of having images that are no longer aligned. Also, cleaning the tie points will not affect the region for which the dense cloud is created, as that is determined by the aligned images and the region box. *You would still want to run the gradual selection steps to optimize your tie points though.
Hey, I downloaded version 2.0.2 and there's no "Build Dense Cloud" feature; I got "Build Point Cloud" instead. Are these two features the same, or what?
I have 9 chunks of 1,500 photos each. When I try to align using the point cloud, it says there is not enough RAM. There are about 14,500 photos in total. So is there any way to merge all the DEMs, DTMs, orthomosaics and 3D models using Agisoft? System specs: 64 GB DDR4 3600 MHz, i7 14700K, RTX 3060 12 GB, 1 TB NVMe 3.0
Hi - yes, there is a solution for your situation. You'll want to set up your computer as a local server/node. This video will explain what you need to do: ua-cam.com/video/BYRIC-qkZJ8/v-deo.html Using this technique I have processed 35,000 images on a PC with about the same specs as yours, without the need to use chunks. All the best!
Hi, thank you so much for the tutorial, it has helped me a lot. I just want to ask how I can improve the model when working with crops. I'm working with images of avocado trees and it looks like they're all cropped, and the ground is all distorted. I've followed this tutorial step by step, so I wonder if it's due to a lack of quality in my images (the overlap is 65%) or if I'm missing something. On the internet I've found that it might be due to the lack of overlap between the images and the movement of the leaves, and that it could be improved using ground point classification. Thanks!
Hi - thanks for watching and for your question. Are the trees quite far apart, with spaces in between where you can see the ground? Are you rectifying them onto the DTM (ground only) or the DSM (ground and above-ground features)? I would think the DSM gives you this odd result if the trees have spaces between them where you see the ground. In this case the model would be on top of the trees, then the ground, then back up on the trees etc., and this could cause a bad result. If that is the case, try classifying the ground points, create your model or mesh from ground only and retry. If that doesn't work or there are still issues, please send me screenshots and more info at my email, geospatialtips@gmail.com, and I will take a look for you.
@@geospatialtips Hi, thank you very much for your response! I'll check that then. Maybe there were ground points that I didn't classify well enough.
Hi Mark, I just purchased the Metashape Pro software and am using cloud processing. The project wouldn't open locally after processing in the cloud, on the last step of building the mesh. I finally deleted the project and started over. Lost a lot of money in processing hours. Hoping I can hire you for advice; I need to create an Upwork account.
Hi - if your resolution is very high, then it likely wouldn't make much difference. However, as a best practice, use the 3D option. Many volumetric packages will use "pillars" for their computation anyway, but if your input is already "pillar" data, such as the 2.5D would create, then you may end up with volume pillar and 2.5D model pillar offsets. If you feed in 3D, then it'll place pillars as it needs without the potential issue mentioned. I hope that makes sense.
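To make the "pillar" idea above concrete, here is a toy volume calculation over a gridded DEM, summing one rectangular prism per cell above a flat base plane. The numbers are made up for illustration; real volumetric packages also handle sloped bases, cut/fill and so on:

```python
def pillar_volume(dem, base, cell_size):
    """Sum of per-cell 'pillars': cell area times height above the base plane.
    Cells at or below the base contribute nothing in this simple sketch."""
    area = cell_size * cell_size
    return sum(
        (z - base) * area
        for row in dem
        for z in row
        if z > base
    )

# Hypothetical 3x3 DEM (elevations in metres) on 1 m cells, base at 100 m.
dem = [
    [100.0, 101.0, 100.0],
    [101.0, 103.0, 101.0],
    [100.0, 101.0, 100.0],
]
print(pillar_volume(dem, base=100.0, cell_size=1.0))  # -> 7.0
```

If the surface fed into this is already a 2.5D rasterisation, its own cell boundaries can land offset from the volume grid's pillars, which is the mismatch described above; a 3D input lets the software place its pillars as it needs.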
@@geospatialtips Thank you for the quick response. I'm new to photogrammetry and I'm not familiar with the "pillars" term, but I'll follow your instructions and then try experimenting to compare the results between 2.5D and 3D. Your videos are really helping me learn photogrammetry; I hope to see another video, especially on stockpile data processing and measurement. Thanks.
@@LumentaJr I'm very glad the videos are helping, thanks for watching! For stockpile calculations, have you had a look at these two videos: ua-cam.com/video/bfI8JAUKUpg/v-deo.html and ua-cam.com/video/9t-EJBVqgLE/v-deo.html
@@geospatialtips Yes, I've seen both of them, and now I need to practice collecting the data. My current role at work is as a geological engineer, so I never had a chance to use the company's drone, but right now I'm interested in learning coal stockpile measurement using a drone. However, I don't have the appropriate drone at the moment; I bought the Mini 3 so I can practice without the fear of losing it. What is your opinion on doing a stockpile measurement using a DJI Mini 3 (a small, non-enterprise drone)? I've seen a post on the internet comparing the DJI Mini 2 vs the Phantom 4 and the results were quite similar.
Great tutorial! I have a question about processing the orthomosaic using the 2.5D mesh. After doing this, I am noticing that the edges of buildings are a little jagged and blurry. Do you have an idea of what might cause that?
Hey Dennis, thanks for watching. It could be a few things. I suspect your 2.5D mesh isn't perfect on those edges either? I assume that's the origin of the issue, as the images are being rectified onto that mesh. To overcome it, you might try redoing your mesh at a lower resolution. By doing so, it'll have to simplify the edges of the mesh and may smooth out your orthoimage result. Failing that, go back to the filtering steps and see if you need to clean more points out of your cloud. Is the mesh from the dense point cloud or depth maps? Perhaps try the other option: if you made it from the dense cloud, try depth maps, and vice versa. Just a few things to try; without seeing the data I am guessing at reasons, but if you don't come right, maybe send me a screenshot or two to my email and I'll look at it for you. Let me know if any of that helps.
@@geospatialtips Hey thanks for the quick response! I will check through these suggestions and see if the map improves. Look forward to more videos from you
Not right now, as I don’t have any good datasets. If you have one you’d be willing to share, I’d happily create a video. Email me at geospatialtips@gmail.com if you do.
Thanks for watching! I'd suggest starting with this approach as a basis, but perhaps play around with the Gradual Selection steps a little. The principles of photogrammetry do not change, so you should see decent results with the approach outlined in the video.
@@geospatialtips Thank you for your attention. As for the type of flight plan, is one more advisable: a normal flight or a crossed flight? How much front and side overlap? And what drone speed should I use?
@@jacksonlima5821 If the buildings are tall or very close together, then yes a crossed flight plan would be advisable, a grid pattern covering the whole region of tall buildings is ideal. Then, if you are using a grid pattern, you can relax the side overlap a bit and use about 50-60% but keep 70-80% forward overlap. If you do not use the grid pattern, then increase your side overlap to about 80% as well. Overall, this may be more overlap than you need (both scenarios) but you would rather have more images, than too few and have to refly.
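As a rough sanity check, the overlap percentages above translate into exposure spacing and flight-line spacing once you know the image footprint. A quick sketch with hypothetical camera numbers (a simple pinhole/similar-triangles model that ignores terrain relief):

```python
def footprint(height_m, sensor_mm, focal_mm):
    """Ground coverage of one image dimension, by similar triangles."""
    return height_m * sensor_mm / focal_mm

def spacing(height_m, sensor_mm, focal_mm, overlap):
    """Distance between exposures (or between flight lines) for an overlap fraction."""
    return footprint(height_m, sensor_mm, focal_mm) * (1.0 - overlap)

# Hypothetical small-drone camera: 6.3 mm sensor width, 4.5 mm focal
# length, flying 80 m above ground.
print(round(spacing(80, 6.3, 4.5, 0.80), 1))  # forward spacing at 80% overlap
print(round(spacing(80, 6.3, 4.5, 0.60), 1))  # line spacing at 60% side overlap
```

Note how raising the overlap from 60% to 80% halves the spacing, i.e. roughly doubles the image count along that axis; that is the cost of the "rather too many images than a re-fly" margin.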
So with all these filters you actually filter out inaccurate points, leaving better points for the model? You don't lose important data for the process, because the data you remove is not accurate anyway? Great content, btw.
Yes, exactly correct. Keeping in mind that these are just tie points, they serve to “connect” the images together and establish common points. They play some role in depth recreation later on, but for that and other reasons you would always prefer a cleaner dataset. Thanks for watching and I appreciate the positive feedback!
Hi Usually estimated is a good option here. It'll perform an initial alignment, analyze the images and then correct the alignment. I think it also bases it on your GPS positions to begin with. Sequential just uses the images in the order they appear in the list, so if you have multiple and crossing flights, this may not be a good option for you.
@@retobrugger3270 A few reasons I can think of: The demo is only free for 30 days, after that it won't work. Secondly, I believe the terms and conditions state that the demo cannot be used for any commercial work or financial gain, so technically it shouldn't be used for such things. Nothing physically stops you from doing so and continuing to use it, but those are some reasons, nonetheless.
Hi - the value selected would depend on the quality of your point cloud, the camera used, accuracy of your positioning data etc. The software you use post-Metashape doesn't really play a role in this decision. I would suggest you test a few values, observe the results on the screen in Metashape, then reset the filter and try again until you achieve the optimal result. Hope that helps!
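For anyone wanting to see that trial-and-reset idea in plainer terms, here is a tiny illustration (plain Python with made-up error values — this is not the Metashape API; there you would do the same thing interactively in the Gradual Selection dialog):

```python
def points_removed(errors, threshold):
    """Count how many tie points a gradual-selection threshold would
    discard (those whose error value exceeds the threshold)."""
    return sum(1 for e in errors if e > threshold)

# Hypothetical per-point error values from a sparse cloud:
errors = [0.2, 0.4, 0.6, 1.1, 1.8, 2.5]

# Try a few candidate thresholds and see how aggressive each one is:
for t in (0.5, 1.0, 2.0):
    print(t, points_removed(errors, t))
# 0.5 would remove 4 points, 1.0 removes 3, 2.0 removes only 1
```

The point of the exercise is the same as in the video: watch how many points each value would take out, look at what remains on screen, and only then commit to a threshold.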
If we are only going to make an orthomosaic then we would just make the 2.5D model, right? But what if we ALSO need to calculate volumes from the DTM? In that case should we make a 3D for volume calculation and 2.5D for the ortho? Would just the 2.5D work well to make the DTM and calculate volumes? Thanks again
Yep - if only after the ortho, then 2.5D would be best in general. When it comes to volumes, you'll always want to go 3D, so the best solution is to do both. However, if you only have time to do either 2.5D or 3D and need ortho and volumes, choose 3D as it'll give an acceptable ortho and good volumes. I wouldn't trust 2.5D to be very accurate for volumes - but maybe that's a topic for another video ;)
@@geospatialtips I'm working on a slightly larger data set and I have a new problem..."Not Enough Memory". My old computer is struggling with the model. For volumes can we go straight from dense point cloud to the DEM and calculate the volumes, skipping over the 3D model? Any issues with doing that? Your video on Volume calculation doesn't mention the 3D model but it does refer us to this video on the complete process to get the DEM. Thanks again!
@@jerseyshoredroneservices225 Hi again :) Yes sure, you can go from cloud to DEM and there are no issues with that. If you still need to create a model for the whole region, you can follow the steps in this video and it'll probably get you around your hassles: ua-cam.com/video/BYRIC-qkZJ8/v-deo.html Or you can manipulate the size of your system pagefile as well, I know that sometimes helps.
I think it depends on the application. I am not very familiar with the intricacies of Reality Capture, but from what I have seen it models small objects or small regions really well. However, when it comes to expansive areas, Metashape is right at the top of the pile. I don’t believe RC scales as well into large mapping projects. I have also seen Metashape create some excellent smaller models too. If I could only choose one, I would probably go for Metashape as it suits my needs. The choice is yours of course.
Can I ask what processor you are using, and whether you use GPU acceleration or only the CPU? I have seen in the settings that I can select my Nvidia 3080, but there is a “warning” so I'm scared hahah
Sure - I use an older 8700K processor, and also a 3080 like you do. In Tools/Preferences you definitely want to make sure your GPU is selected there, and at the bottom of that box where it says "Use CPU when performing GPU accelerated processing", make sure that is not selected.
Hi, actually lower specs than you. i7 8700k 64GB DDR4 GeForce 3080 It may just be the images I used are lower resolution than yours or I have used fewer images. But generally, with your spec, you should be doing better than I am.
Hi - thanks for the feedback. This video is pretty old now and I think I've got the cursor visible in all videos that follow this one. Time for a redo of the Metashape walkthrough anyway, I think! Cheers
Hi - NDVI can only be calculated if the NIR band is captured, such as when a 4-band camera (RGBI) is used. This is only a 3-band camera (RGB) and so NDVI cannot be calculated unfortunately. If you do have the 4th band in your data, then you will use the raster calculator to get the job done, this article should get you started: agisoft.freshdesk.com/support/solutions/articles/31000161545-prescription-maps-generation-in-agisoft-metashape
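For reference, the formula the raster calculator implements is NDVI = (NIR - Red) / (NIR + Red). A minimal sketch (the band values are illustrative reflectances, not taken from any particular sensor):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one NIR/red pair.
    Result lies in [-1, 1]; healthy vegetation trends toward +1."""
    denom = nir + red
    if denom == 0:
        return 0.0  # guard against division by zero on dark pixels
    return (nir - red) / denom

# Vegetation reflects strongly in NIR and weakly in red:
print(round(ndvi(0.50, 0.08), 3))  # 0.724
```

This is also why a plain RGB camera cannot produce NDVI: without a NIR band there is simply nothing to put in the numerator.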
Hi, thanks for the comment. Yes, as a single texture file 16384 would be preferable to 4096 for a larger mesh, however it'll be slower to view on older PCs or online. The same "quality" can be achieved with 4x4096 textures. It's best to make the decision as to whether you want many files, or a single texture file, versus your GPU memory capability for display etc.
Looks like the programmers changed the menus. These days they just don't seem to care if they invalidate tutorials. There's no option to "build dense cloud" anymore.
Yip, they did change it up. It is now called Build Point Cloud (or something like that) a little further down the same menu. But, it does the exact same thing.
@@geospatialtips Thanks! That's very useful. I don't know if the recent trends in software to just change menu items at will, voiding UA-cam tutorials out there, is good or bad. I sure would have loved to ignore users back when I was stuck writing software that had user interfaces. But it still seems irresponsible to me! I suppose, with AI rising, it won't matter much.
Hi. Thanks very much for the feedback and suggestions. I have tried to implement some of these already in later videos and I hope they are easier to follow than this one.
How the heck do you free rotate the viewport. I can't for the life of me find the hotkey or button press combination to rotate freely. It's infuriating.
Hi Spikey Slayer - yes, volume calculations as you would do them with Metashape, Pix4D etc. are relative to the data (ground surface) and not usually absolute. What I mean is that you want to find the amount of material above the ground. Thus, if the ground sits high above, or far below, its real-world position, it does not matter, because your material (volume) would shift accordingly. This is of course reliant on you processing the data carefully, as I show in my videos. When it would matter is if you are using known bases, or base layers beneath your volume. In that case you would need to ensure that the two surfaces (the base and the ground level you processed) match well. If you need more help, feel free to mail me at geospatialtips@gmail.com and I'll try to explain in more depth - but I'm sure you get it.
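To make the "relative, not absolute" point concrete, here is a toy cut-volume calculation over a DEM grid (pure Python with illustrative values; Metashape does the equivalent internally when you measure a volume):

```python
def volume_above_base(surface, base, cell_area_m2):
    """Sum the volume of material sitting above a base surface,
    cell by cell, over two equally sized elevation grids."""
    total = 0.0
    for surf_row, base_row in zip(surface, base):
        for s, b in zip(surf_row, base_row):
            if s > b:
                total += (s - b) * cell_area_m2
    return total

surface = [[2.0, 2.0], [2.0, 2.0]]   # stockpile top, metres
base    = [[1.0, 1.0], [1.0, 1.0]]   # ground beneath it
print(volume_above_base(surface, base, 1.0))  # 4.0

# Shift both grids up by 100 m (a datum offset): the volume is
# unchanged, which is why absolute elevation does not matter here.
up_s = [[v + 100.0 for v in row] for row in surface]
up_b = [[v + 100.0 for v in row] for row in base]
print(volume_above_base(up_s, up_b, 1.0))  # 4.0
```

The caveat from the reply above still applies: if the base surface comes from a different survey, the two grids must share the same datum, or the offset no longer cancels out.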
@@geospatialtips Holy crap, thank you! OK, I need to give it a try. Do you have any specific tutorial for this? I already have Metashape free, but I think I need the Standard edition to do this.
@@SpikeySlayer No tutorial on this yet - I just make a few videos when my day-job is quiet... I suspect you would need Metashape Pro to calculate volumes, even the 30-day trial should work as I think it has full functionality.
I had some time to compare volume calculations with and without GCPs. ua-cam.com/video/UjTToqTkpEE/v-deo.html Full volume tutorial coming soon as well.
Before running a reclassification of the dense point cloud, always apply "Reset Classification" first.
Hi - how you capture will depend on what drone/equipment you have. For smaller drones, I have tested and used Copterus ua-cam.com/video/Kwr4hzrdT0U/v-deo.html and Map Pilot Pro ua-cam.com/video/kZRXH9uNhoA/v-deo.html To cover the general flight pattern and best practices, this video will describe what you should consider: ua-cam.com/video/5hW1aaV1Fks/v-deo.html
Please consider hitting the subscribe button 🙏 It helps me out SO MUCH and will allow me to create more helpful videos! 😊
waiting for more ... thank you very much
I keep referring back to this video. I've probably watched it 10 times.
Thanks again!
Please help me in doing a task like this. How can I contact you?
@riskaazzahrah4771 Hi - please email me at geospatialtips@gmail.com and I'll do my best to assist you.
Thank you for taking the time to do this - I learn something each time I come across a new video on using AM.
Glad it was helpful!
Amazing videos - this is the first one I've seen that has all the information nobody else wants to explain; they usually go straight to the processing. Thanks for the help!
Thank you so much for this content, your way of explaining things makes everything easy to understand.
You're very welcome!
Amazing video we are waiting for the RTK with GCPs workflow ❤
Thanks!
If you have a suitable dataset I could use, please send it along and I will make the video for you :)
geospatialtips@gmail.com
Hi, is there any way that you could make a similar guide using gcp's?
This was superb - I just started to use Metashape and this helped massively. Thank you!
You're very welcome! :)
Wow, this video is very clear!!!! Thank you! I'm a freshman who just started an internship at a survey company, and my mentor asked me to learn this skill.
Glad it was helpful!
More Metashape tutorials, please - quick and easy, great for new people. Great video 👍
Thanks for watching. Glad it is helpful!
Thank you so much for spending your time creating such a tutorial. It's extremely informative and helpful!
You're so welcome! Thank you for watching.
I really appreciate the amount of time and effort that has gone into making this video, and thank you for your generosity in sharing your workflow. Really helpful, really useful. All the best!
Great of you to say so, thanks, I appreciate it!
Thank you so much. May God bless you and your family.
Awesome Tutorial, so much detail, I would have just built what I needed and done, no fine tuning. Thanks again!!!
You're welcome!
13:13 After filtering by confidence, is there no need to Update Transform or Optimize Cameras?
What do those operations actually do, anyway? When should or shouldn't we utilize those functions?
I've heard people talk about calibrating photos or having trouble with calibrating photos. What is that?
What's the correct workflow for adding GCPs to the project? Some people say to transform the default drone GPS coordinates to a projected coordinate reference system, and others say to deselect the cameras in the reference window.
Is ASPRS a good place to study? I'd like to learn more about the things that aren't in the typical UA-cam videos.
If you feel like doing an atypical, deep-dive video into things like this, I'd love to watch it!
It's autumn here now. Spring there?
Thanks!
Excellent video! A great help in understanding the process and software. Getting amazing results!
Glad it was helpful!
Great Methodology of Teaching
Thanks a lot, I really appreciate your comment.
@geospatialtips Thank you, your videos are helpful. Could you do a video about merging chunks on larger datasets of images - when to merge, and how to remove erroneous points when merging? I find it difficult to get right on linear missions running 20 km at a time with GCPs. Thanks!
Thanks for a great video. Are the filtering steps needed when you use GCPs? Is the later processing faster when you filter vs using all the points? I usually erase all the points outside my area of interest after aligning. For contours, I make a 30 cm, 50 cm or more /px DEM and build contours from that. They are way smoother and better looking, not zig-zaggy, and that is what you really need. If you survey land with RTK GPS or a total station, you wouldn't measure a point every 5 cm or even 5 m on large surfaces. Thanks!
Thanks for the comments!
Yes, I would always suggest the filtering steps as it cleans out points that have a lower certainty or may contain some errors. I am not sure that the later processes are any faster, but models will be cleaner. So I suggest always doing it as it is a fast process and won't cost you time, but will yield cleaner/better outputs.
As for the contours, that is a valid point, thank you. I would usually create contours in Global Mapper, ArcGIS or something like that anyway, but your point is well made. I suppose one might consider whether we want contours that are aesthetically pleasing, or technically more accurate and create them accordingly.
Thanks for watching.
This is great tutorial for a newbie like me. Keep it up mate. 👏😊
Glad to hear that!
I like the rooster in the background 👍🐔
LOL - thanks. I can't get them to keep quiet at the best of times. Life on the farm I guess!
The best & easy to understand explanation.. Keep it up 🔥.
Thank you, I will
Great tutorial indeed. Would you please make another part of this video (with GCPs)?🙏
Hi - I've just uploaded a video explaining the right way to place GCPs and some problems to look out for
ua-cam.com/video/ytvc0euMeKM/v-deo.html
Congrats, excellent video! Extremely informative and helpful! Thanks!
I am so glad it is proving helpful! Thanks for watching.
thanks for sharing your knowledge 💗
You are so welcome. Thanks for watching
Hi there. It's very good of you to help folks dig into Metashape.
I'm curious what metrics you used to quantify the quality (error reduction) achieved with your particular settings? In my testing, I have come up with different general values for the gradual selection parameters, and also some of the camera optimization settings. It's okay that you have settled on different settings, but how did you determine that they were best for you?
In my case, I use the error that MS reports for both check points and pixel error values. For example, If I remove tie points (by gradual selection) and check point and/or pixel errors get worse, I back up and try a different value. Same with the optimize camera settings. By taking this approach, I have found the general workflow settings that work best for me.
I'm not saying that your workflow does not work best for you. I'm just curious how you went about quantifying the improvement? Maybe I can learn something new!
Thanks!
Hi Dave
Thanks very much for your thoughts and comments.
When it comes to Gradual Selection, unless you are using the same camera each time and achieve the same GPS accuracy each time, fly the same terrain type etc., the numbers will need to fluctuate. As an example, when using a metric camera, you will almost always be able to select a Reconstruction Uncertainty value of about 5 or even less, whereas with a lower-grade drone camera, I find the starting value tends to be around 10-12 and even higher.
In a general application:
Generally, I have target values in mind, such as those mentioned above, and will then evaluate how many points would be removed if those values were selected, keeping in mind the conditions of flight, the terrain being considered, image reflectance values etc. My honest view is that if we look only at the numbers and do not concentrate on what we are seeing in the results on screen, we're potentially opening ourselves up for trouble. Why, you might ask? Well, if I set my control point accuracy to 0.001, select the markers and then optimize the cameras, I will almost always get a very good residual in terms of numbers. However, MS will only be fiddling the mathematics to tend toward that 0.001 accuracy value. If my data is actually poor in regions beyond those control points, it won't be any good purely because the residuals look good - but I am sure this isn't news to you. Across all photogrammetric software packages, unless we keep all of the camera parameters fixed, the calculation of residuals serves to illustrate how well the software has done in manipulating the calibration values to achieve the target value that we set for it. However, this video was aimed at a non-GCP approach and, to answer your question, I have arrived at values like those chosen after performing hundreds of projects with scores of sensors and found these to be adequate starting values for mine and other general applications. If I were to do the video over again with 5 different sensors, I would likely use 5 different values at each step.
When using GCPs, and in your particular case (I think):
In your case, where I assume you are using both control points and check points, observing the change in check point residuals makes sense, so long as they have been held independent of the control points. This would also be true if you are using no control points, only check points, and not constraining your data to the check points at all. In these cases, the check point residuals carry more weight and have valid meaning.
Am I correct to assume you are using one of these two approaches?
Poor Application of GCPs:
In the other case, where one might only be using GCPs as control points and constrain the data to them, well then we can fiddle the gradual selection and optimize camera parameters all we like, achieve resulting residuals close to 0 and then declare our job well done. However, we may have ruined the camera calibration so badly, forcing the focal length, principal point etc. to such an extent, that the data appears to fit the control perfectly, when in reality, if we were to compare this data to check points or a LiDAR survey, we would be found wanting. I have seen so many users do this with Metashape, Pix4D, UAS Master etc.
Furthermore, it's odd to see how many people have an amazing level of faith in GCPs and fail to consider whether they are derived from static observations with a full network calculation, or RTK, or RTX. Are they levelled or not, what does the survey report say, ellipsoidal versus orthometric... the list goes on!
Conclusion:
At the end of the day, my feeling is that we need to apply good survey practices and principles. If we have done all of that correctly and it results in excellent residuals then great. If we have done that all and we have poor residuals, we should then apply further good practice, investigate the source of error and try again.
I don't want to say don't look at the residuals, but in my mind, they should be what we look at last, as a result of a professional job. Apply sound survey and photogrammetric principles, look at the data, consider the starting versus final camera calibration values, then consider our check points independently of control points. If we process only to target good residuals so that we can hand over a glowing report but pay no attention to what's really happening with our data, in our respective professional worlds we would all soon have to appear before our representative councils and explain our actions once clients had found us to be lacking in our conduct.
Very long story short, I think there are a few different methods we can all use to come to a good result; yours and mine, I hope, are both good candidates. In this video I tried to convey a generic approach that serves as a good starting base for anyone new to MS or starting down their photogrammetric journey.
Thanks for taking the time to chat.
@@geospatialtips Thank you for the thoughtful reply.
All my use has been with lower quality drone cameras. First the P4P and now using a P4RTK (same camera), and Autel Evo RTK, probably similar quality. I do use GCPs and Checkpoints. And, my reference to watching residuals is referencing the checkpoints only. I also watch what MS reports for average pixel error on the images. I think checkpoint residuals are the single most valuable indicator of the absolute accuracy of the surface model.
I agree with you that there can be unaccounted-for error in the ground control, both GCPs and CPs. Less so if the points were observed over different days and then averaged, which is rare. Considering all of the metrics I have in a dataset to evaluate accuracy, the CPs seem to me to be the best candidate. That is why I use their residuals to determine if the level of thinning of the sparse cloud (key points) is helping or is a detriment. I will fully admit that I am not knowledgeable enough to look at the camera calibration and determine if the values are reasonable or not.
When not taking into consideration residuals, I guess I am still not sure what you are looking at to gauge the levels of points to remove and the different parameters you select in the Optimize Camera steps?
Thanks!
Hi. I have pics from my camera, but no coordinates appear in the reference column when I add them for alignment. Why do they not appear?
Hi - check the Metashape Preferences/Advanced tab and make sure the box is ticked to allow Metashape to read that kind of data on import.
If it still is not there, right click on one of your images in File Explorer, go to Properties and then Details. Scroll down the list and make sure the coordinates are saved in the raw image data.
If none of that helps you, maybe send me one of your images and I will take a look. (geospatialtips@gmail.com)
Thank you for this very much to the point tutorial.
Thank you for this. Great walk through
Glad it was helpful!
Thanks for the video. How come the model that is viewed directly on the software looks nicer and sharper compared to the .fbx export which looks less sharp?
Thanks in advance
Thank you very much for this video. It is very well detailed.
But I am having a bit of difficulty with step 3: building the Dense Cloud.
Is this option not available in the Workflow menu of version 2.1?
Thank you in advance for your response.
I'm having the exact same problem. No more Dense Point Cloud in v 2.1.2
Hi - it's just called "Build Point Cloud" now, but it does the same job.
Thank you so much for this video. I learned a lot.
Excellent. I’m glad I could help. Thanks for watching, and please come back soon and watch some new content that is on the way.
Your tutorial is the best! Thank you!
Great video. Highlights of buttons clicked (with text on screen of each process in sequence) for future videos would help dramatically in the ability to follow along.
Creating orthos and DTMs (with added GCPs... another lesson somewhere?) is my main goal too. So perhaps I can avoid the point cloud process completely. Pix4D seems not to demand attention to point clouds to build orthos, DTMs and DSMs in one hit!
Hi thanks for watching, for your comments and suggestions too.
The point cloud is not a requirement in Metashape in order to generate the orthos, but because I wanted it for other outputs, I chose to use it in this case. One could generate DSMs from depth maps and then the ortho from that. However, the DTM should only be produced from a properly filtered ground model and I would suggest that at some point, before the DTM is produced, the user (i.e. you and I) should check it for correctness before producing it. :)
I need one large image of the whole scene from the top, but with the orthomosaic it's like a puzzle (I know - mosaic ^^). Is there another option, like just rendering the current perspective as a high-res image?
I don't believe there is a render option, like you are describing, at least not one that I know of.
But I am interested in the issues you are having with the mosaic. Maybe it is just how I am reading your message, so please correct me if I am wrong. The mosaic doesn't have to be like a puzzle (accurate alignment and blending) and could easily be created as a whole/single scene that is exported as a single file, rather than a tiled export. Let me know if I can help a little more?
What camera/ drone did you use. Would a a7r iv with a m300 rtk give similar quality
Hi Paul - thanks for watching.
For this example, I have used a modest Mavic Mini. The point really is that it comes down to how you collect and then process the data (my experience is really all in data processing and that's why I focus on it). So, in your case, with that equipment listed I would expect superior results, so long as you are careful about how you do things.
*At the same time, I have seen many results of excellent and very expensive equipment yielding useless data because best practices were not followed.
Can you explain why you use adaptive camera model fitting and remove points by error before accuracy? I read the USGS guide and they suggest accuracy first then error, and they also recommend to use specific calibration parameters for each step. Thanks!
Hi - thanks for your question and comments.
I'm not sure of the exact USGS workflow you are referring to, but I see some comments on the Agisoft forum as well.
Anyway, unless you are using a high-end camera, or in fact a metric camera, then I don't believe that fixed parameters will remain constant for a less expensive camera, such as those on a standard DJI Mavic. As you fly and the drone is shuffled by wind etc., there will be certain flexes in the sensor and lens and they are thus (in my opinion) not fixed, hence the Adaptive model is used. I have also just found better results when doing this. If, however, you are using something like a $50,000 PhaseOne iXM150 - a fully metric camera, with a fixed lens and an official calibration certificate - different story.
*I have seen a USGS article on coastal mapping with drones where they say to keep your key-point limit set to 0! This is a very bad decision as there will be no filtering at all and preliminary camera calibration will be based on bad data from those points not removed.*
I filter the points in the order in which I do really because it's something I saw many years ago, it worked well and I stuck with it. Sometimes, I run it iteratively, the whole process, and it never seems to mess anything up. Just remain focused, look at the results and you'll be fine.
This method is what has worked for me and in my experience yields accurate and repeatable results. There are many ways to approach the workflow, but for me this is tried and tested, 100s of times. Feel free to tinker with the approach, just keep checking the results and don't rely on the numbers - Look at the data itself.
Cheers!
Thank you very much for this video. It is very detailed. However, I'm having a bit of difficulty with step 3: Dense Cloud calculation. Is this option not available in the Workflow menu of version 2.1? Thank you in advance for your response.
Hi - it has been renamed "Build Point Cloud" and is a little further down the list than it used to be. It performs the same function however.
@@geospatialtips I had the same problem as @josephagouze3770. I was able to generate the dense point cloud following your instructions, but now "filter by confidence" is greyed out and stays that way. Any idea what to do?
Hi - yes, when you create the point cloud, select the Advanced option and there will be a check box you need to enable that allows you to calculate the point confidence. I suspect this was turned off for you.
@@geospatialtips thank you very much
Most valuable video, thank you so much. How do I export contours to DXF format, and the point cloud to CSV format? Please do the needful.
Hi
Contours are simple enough: File > Export > Shapes > Select DXF
For points, there is no direct csv option. I'd suggest exporting as txt, turn off all additional options such as RGB, normals etc. (unless you want those).
Then, open that file in Notepad++ or UltraEdit etc. and change all "spaces" to "commas".
That should sort you out.
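If you'd rather script that find-and-replace step than do it in a text editor, a few lines of Python will do the same job (the sample coordinates below are made up for illustration):

```python
def txt_to_csv_lines(txt_lines):
    """Turn space-delimited export lines (e.g. "X Y Z") into CSV
    rows, skipping any blank lines."""
    return [",".join(line.split()) for line in txt_lines if line.strip()]

sample = ["480012.31 6243155.02 102.47",
          "480012.36 6243155.08 102.45"]
for row in txt_to_csv_lines(sample):
    print(row)
# 480012.31,6243155.02,102.47
# 480012.36,6243155.08,102.45
```

Using `split()` with no argument collapses any run of whitespace, so it also copes with exports that pad columns with multiple spaces.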
Hi, thank you for your tutorial. I followed most of your steps, but I noticed that when adding photos to Agisoft, the longitude and latitude information seems to be missing in the reference box and all the photos are greyed out (no tick in the box). Can you help me identify my problem or missing step? As a result, I am unable to create a DEM. Thank you in advance for your response.
Hi - sorry to hear about your issues.
I can think of two things off the top of my head:
1. Your images don't have any georeferencing for some reason, you can check this by right clicking on the file in File Explorer, then going to the Details tab and scrolling down. If the GPS info is available, you will see it.
2. There is a setting in Metashape to disable all metadata being imported. This might be active for some reason. Go to Tools/Preferences/Advanced and take a look.
If you still can't find a solution, feel free to send me an image, or a few of them, and I'll take a look for you.
geospatialtips@gmail.com
Cheers
One idea: it could be nice to see your cursor when you're working :D! Thanks for the dedicated video.
Hi, thanks for watching.
Yes, that's a good idea and I have tried it in the more recent videos and will keep doing so!
Good explained, many thanks!
You're welcome!
I'm working on a site with cranes. The tall boom of one of the cranes does not reconstruct well and it blocks details on the ground below.
I've been deleting the points of the boom from the DPC.
Then I create a 2.5d mesh from the DPC and an ortho from the mesh. Unfortunately the deleted points reappear in the ortho! How can I prevent this?
In the past I think I have done this successfully, but it's not working now. I guess I must be doing something different, but I have no idea what that would be. Maybe in the past I used a 3D mesh, or made the ortho from the DEM... or something else? I'm lost, and I've reprocessed the project three times already 😞
I was able to assign different photos in the ortho to hide the crane, but I'd rather just delete the points. At times I'd like to delete trees, wires and other things, so this is an important task to learn. At this point I can't easily go back and classify the crane as something to leave out of the ortho because those points have been deleted in the DPC.
Please help. Thanks!
Using version 1.7
Hi Jerseyshoredrones!
Let's see if we can help...
When you say that the "points reappear in the ortho", is it the points themselves (such that the crane is rectified in the ortho at crane height) or, as I might expect, is the crane still seen in the ortho but rectified onto the ground surface?
Or, to explain it this way: the points you use will create the 2.5D mesh or 3D mesh and yes, the crane should not be there if those points were deleted and the points then saved (saving the project in turn saves the points without the crane in them). Then, you have a clean mesh. The next step is the creation of the ortho, which is rectified onto that surface. However, the crane in the image may very likely still be rectified, because the image and the 2.5D mesh are two completely separate entities. At this stage, the crane is just a set of pixels in an image, not a 3D modelled crane. So the only way to get it out of the image is to use very selective images, or use a seamline editor after processing to cut out the crane.
Basically, some things we can get rid of automatically and others we have to do manually. But if that crane is there in 3D in the ortho, I'd suggest the point cloud hasn't been saved, or not saved properly. A crane in the image (maybe draped onto the ground or underlying building etc. is just something we need to manually adjust out.)
I hope I have understood you correctly and offered some insight, but if I haven't please feel free to email me with some screenshots and we can take a deeper look?
Is there a way to hide the physical references from the photos? ...I mean those rectangles used for creating map references.
Thanks for this! Do you find that the dense point cloud doesn't have enough to work with after filtering? My computer takes HOURS to process the point cloud. 800 photos, JPEG... not sure what I'm doing wrong.
Hi James - no, in fact I often filter it a bit more or I might generate key-points for certain tasks.
What percentage of your points is it filtering out?
What is your use case for the data, this might impact on the number of points you need?
If it is getting rid of the majority of your points, we might need to review the camera calibration and see what is happening there.
Yes, 800 images might take a long time, especially if you have very high resolution. Also, if you have an older or perhaps a less powerful workstation, processing times might be affected.
If it will help you, you are welcome to share some screenshots, a video, your data or anything else with me on my email and I'll take a look, after which we can review together? It's up to you, but let me know and maybe I can help.
Cheers
I am using the new Mavic 3 Enterprise. My understanding is that it is a good sensor with high resolution.
I’m honestly not sure how much I’m filtering I pretty much followed your instructions step-by-step.
I am very new at this so I’m on a very steep learning curve.
My computer is a gaming laptop with 16 gigs of RAM; it may not be enough.
@@jamesnahill7124
That is a good drone with a solid sensor, no problem there.
16gigs may be a little light for bigger jobs, so perhaps that is where the time is being spent in processing, but it'll work, just a little slower.
I am just not sure why you'd be losing so many points. It's tough to guess without seeing the data.
Can you confirm the elevation you flew above the ground?
What percentage of image overlap do you have, both forwards and sideways?
Are you able to tell me the camera focal length before and after adjustments? (You'll find it under Tools>Camera Calibration, then we want to compare the initial and the adjusted "f" value)
My email is geospatialtips@gmail.com if you would like to send some data my way.
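As a rough illustration of that before/after "f" check: the values in the calibration dialog are in pixels, and a quick relative-difference calculation gives a feel for how far the adjustment moved things. The function and numbers below are made up for illustration, not taken from any real project:

```python
def focal_drift(initial_f: float, adjusted_f: float) -> float:
    """Return the relative change (%) between the initial and adjusted focal length."""
    return abs(adjusted_f - initial_f) / initial_f * 100.0

# Illustrative numbers only -- read the real ones from Tools > Camera Calibration.
drift = focal_drift(3666.67, 3702.10)
print(f"Focal length drift: {drift:.2f}%")  # ~0.97% here
```

A drift of under a percent or two is usually nothing to worry about; if the adjusted value is wildly different from the initial one, the calibration (and therefore the sparse cloud) is often worth a closer look.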
Hi, I'm currently trying to create a DEM but, for some reason, my DEM doesn't align with the rest of the models and comes out all stretched and completely out of the location I want it to be. I've also noticed that my cameras are misaligned with the control points (they are in the right location), despite everything being in the same coordinate system (ETRS89 / UTM zone 29N, EPSG:25829). Can you tell me what I can do to solve it?
Hi, thanks for the question.
Please will you send me a few screenshots to look at and I'll try to help?
You can email them to geospatialtips@gmail.com
I guess I've been doing it, ummm, let's say differently lol. Awesome in-depth tutorial! TYVM
You're very welcome - thanks for watching!
This is just the way I handle processing in Metashape, not to say it's perfect, but tried and tested and seems to do the job (most of the time).
@@geospatialtips Worked like a charm for me! I would kill for a contour smoothing tutorial too :D
Great video, thank you. When in this workflow would you recommend locating GCPs, when they are being used? After the dense cloud, but before the filtering perhaps?
Hi Malcolm. Thanks for watching.
After alignment, I would clean the tie-points first using the "Gradual Selection" tool, then add the GCPs and optimise cameras.
Following that, create the dense cloud (which will then be created off of a GCP corrected tie-point set) and finally, filter the dense cloud.
@@geospatialtips superb, thank you very much! 👍🙂
@@geospatialtips Hi! Thank you for sharing your knowledge! Could you please explain what makes you clean the tie point cloud first and then apply the GCPs? Will there be any conflict with cleaning the tie point cloud before doing a manual camera set-up?
My regards!
O.
Hi - I've just uploaded a video explaining the right way to place GCPs and some problems to look out for
ua-cam.com/video/ytvc0euMeKM/v-deo.html
Wonderful content about Agisoft, thanks a lot. Can you suggest a good, optimal system configuration for running Agisoft smoothly and faster?
Hi - thanks for watching.
This link will take you to the page of Agisoft recommendations for system specs. I suggest focusing on the "Advanced" configuration section. www.agisoft.com/downloads/system-requirements/
In short, a good config is the following: 64-128GB RAM, Intel i7/i9 CPU at minimum 3GHz (faster is better), and a GPU with 8-12GB of VRAM, e.g. an Nvidia 3080. (The AMD equivalent CPUs and GPUs will work as well; keep specs to these or better.)
Well done! Liked and subscribed.
Dense Cloud is now called Point Cloud and the command on the Workflow menu is now Build Point Cloud
Yes indeed. That’s a change for the new version 2 release. Thanks for letting us all know.
@@geospatialtips This tutorial has helped me a lot. I've watched it a few times already. In 2018 I bought a Phantom 4 Pro and started using DroneDeploy for golf course mapping. I've just purchased a Mavic 3E and started using the Metashape Pro trial version. I'm still debating which software to use. Metashape Pro is expensive, but has reasonable prices for cloud processing and hosting. RealityCapture has a good PPI option but no cloud processing and hosting.
@@agrengs0 what did you decide to go with?
Purchased Metashape Pro and using the cloud service. What about you? @@nathaneck4629
honestly very helpful !
Glad it was helpful!
Is it better to clean edge data during the tie point process or after in the point cloud process?
After the point cloud process. The points collected in the tie-point process will be used to optimize your camera alignment. Thus, if you clean up too many points, you run the risk of having images that are no longer aligned. Also, cleaning the tie points will not affect the region for which the dense cloud is created, as that is determined by the aligned images and the region box. *You would still want to run the gradual selection steps to optimize your tie points though.
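To picture what that gradual selection step does, here's a toy sketch in plain Python (not the Metashape API, and the point values are synthetic): tie points whose reprojection error exceeds a threshold are removed, and the survivors are what camera optimization then works from.

```python
# Toy tie points: (x, y, reprojection_error_in_pixels). Values are made up.
tie_points = [
    (10.0, 20.0, 0.3),
    (11.5, 22.1, 1.8),   # high error -> candidate for removal
    (14.2, 19.7, 0.5),
    (13.0, 25.4, 2.5),   # high error -> candidate for removal
]

def gradual_selection(points, max_error):
    """Keep only points whose reprojection error is at or below max_error."""
    return [p for p in points if p[2] <= max_error]

cleaned = gradual_selection(tie_points, max_error=1.0)
print(len(cleaned))  # 2 of the 4 toy points survive a 1.0 px threshold
```

In Metashape itself this is done interactively via Model > Gradual Selection, where reprojection error is one of several criteria you can filter on.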
Hey, i downloaded the 2.0.2 version and there's no "build dense cloud" feature and i got "build point cloud" instead. Are these two feature the same? or what?
Hi - yes, that's the new name for it, but it is the same function.
I have 9 chunks of 1500 photos each.
I try to align the chunks using the point cloud method and it says there is not enough RAM.
Total photos are about 14500.
So is there any way to merge all the DEMs, DTMs, orthomosaics and 3D models using Agisoft?
System specs : 64gb ddr4 3600mhz
i7 14700k
Rtx 3060 12 gb
1tb nvme 3.0
Hi - yes, there is a solution for your situation. You'll want to set up your computer as a local server/node.
This video will explain what you need to do: ua-cam.com/video/BYRIC-qkZJ8/v-deo.html
Using this technique I have processed 35000 images using a PC with about the same specs as yours, without the need to use chunks.
All the best!
Great tutorial!
Hi, thank you so much for the tutorial, it has helped me a lot. I just want to ask how I can improve the model when working with crops, because I'm working with images of avocado trees and they all look cropped, and the ground is all distorted. I've followed this tutorial step by step, so I wonder if it's due to a lack of quality in my images (the overlap is 65%) or if I'm missing something. On the internet I've found that it might be due to a lack of overlap between the images and the movement of the leaves, and that it could be improved using ground point cloud classification. Thanks!
Hi - thanks for watching and for your question.
Are the trees quite far apart, with spaces in-between where you can see the ground?
Are you rectifying them onto the DTM (ground only) or the DSM (ground and above-ground features)? I would think the DSM gives you this odd result if the trees have spaces between them where you see the ground. In this case the model would be on the top of the trees, then the ground, then back up on the trees, etc., and this could cause a bad result.
If that is the case, try classifying the ground points and create your model or mesh from ground only and retry.
If that doesn't work or there are still issues, please send me screenshots and more info on my email, geospatialtips@gmail.com and I will take a look for you.
@@geospatialtips Hi, thank you very much for your response! I'll be checking that then. Maybe there were ground points that I didn't classify well enough.
Amazing tutorial, thank you!
Thx. and will be waiting for an update.
Thx. for the video. However, Metashape is now on Ver. 2.1, where the menus have been changed/merged, and it would be a good idea to update the video!
Thanks for the comment and yes indeed, I think it's time for an update! I plan to work on one soon enough, once the day-job gets a little quieter.
Another very helpful tutorial! Thanks so much!
Glad to hear that!
Hi Mark, I just purchased Metashape pro software and using cloud processing. Project wouldn’t open locally after processing in the cloud on the last step of building the mesh. I finally deleted the project and started over. Lost a lot of money in processing hours. Hoping I can hire you for advice. I need to create an upwork account.
Hi Alex. Sorry to hear about your hassles. You can email me directly on geospatialtips@gmail.com and we can chat there if you like?
Thank you for the complete tutorial. I'm wondering which one I should choose for stockpile volume measurement: the Arbitrary 3D or the 2.5D one?
Hi - if your resolution is very high, then it likely wouldn't make much difference.
However, as a best practice, use the 3D option. Many volumetric packages will use "pillars" for their computation anyway, but if your input is already "pillar" data, such as the 2.5D would create, then you may end up with volume pillar and 2.5D model pillar offsets. If you feed in 3D, then it'll place pillars as it needs without the potential issue mentioned.
I hope that makes sense.
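For anyone wondering about the "pillar" idea: a 2.5D surface is a height field, storing a single elevation per (x, y) cell, so overhangs and vertical faces get flattened. A toy sketch in plain Python (made-up values, not Metashape code):

```python
# Toy 3D points: (x, y, z). The two points at (1, 1) represent an overhang --
# material at two different heights above the same ground cell.
points_3d = [(0, 0, 1.0), (1, 1, 2.0), (1, 1, 5.0), (2, 2, 3.0)]

def to_height_field(points):
    """Collapse 3D points into a 2.5D height field: one (highest) z per cell."""
    surface = {}
    for x, y, z in points:
        key = (x, y)
        surface[key] = max(surface.get(key, float("-inf")), z)
    return surface

surface = to_height_field(points_3d)
print(len(points_3d), "3D points ->", len(surface), "2.5D cells")
print(surface[(1, 1)])  # only the top of the overhang (5.0) survives
```

This is the loss the 2.5D option can introduce before the volume software ever sees the data, which is why starting from 3D is the safer default.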
@@geospatialtips Thank you for the quick response. I'm new to photogrammetry and I'm not familiar with the "pillars" term, but I'll follow your instructions and then try experimenting to compare the results between the 2.5D and 3D options. Your videos are really helping me learn photogrammetry; I hope to see another video, especially on stockpile data processing and measurement. Thanks.
@@LumentaJr I'm very glad the videos are helping, thanks for watching!
For stockpile calculations, have you had a look at these two videos: ua-cam.com/video/bfI8JAUKUpg/v-deo.html and ua-cam.com/video/9t-EJBVqgLE/v-deo.html
@@geospatialtips Yes, I've seen both of them and now I need to practice collecting the data. My current role at work is as a geological engineer, so I never had a chance to use the company's drone, but right now I'm interested in learning coal stockpile measurement using a drone. However, I don't have the appropriate drone at the moment; I bought the Mini 3 so I can practice without the fear of losing it.
What is your opinion about doing a stockpile measurement using a DJI Mini 3 (a small, non-enterprise drone)? I've seen a post on the internet comparing the DJI Mini 2 vs the Phantom 4 and the results were quite similar.
Great tutorial! I have a question when it comes to processing the orthomosaic using the 2.5d mesh. After doing this I am noticing edges of buildings to be a little jagged and blurry. Do you have an idea of what might cause that?
Hey Dennis, thanks for watching.
Could be a few things. I suspect your 2.5D mesh isn't perfect on those edges either? I assume that's the origin of the issue as the images are being rectified on that mesh, so that is my suspicion.
To overcome it, you might try to redo your mesh but at a lower resolution. By doing so, it'll have to simplify the edges of the mesh and may smooth out your orthoimage result.
Failing that, go back to the filtering steps and see if you need to clean more points out of your cloud. Is the mesh from the dense point cloud or depth maps? Perhaps try the other option: if you have made it from the dense cloud, try depth maps, and vice versa.
Just a few things to try, without seeing the data I am guessing at reasons, but if you don't come right, maybe send me a screenshot or two to my email and I'll look at it for you.
Let me know if any of that helps.
@@geospatialtips Hey thanks for the quick response! I will check through these suggestions and see if the map improves. Look forward to more videos from you
Any plans on how to process multispectral imagery with Metashape?
Not right now, as I don’t have any good datasets. If you have one you’d be willing to share, I’d happily create a video.
Email me at geospatialtips@gmail.com if you do.
Thank you, it is an incredible video!
Thank you so much, I'm glad you think so and I hope it is helpful!
Very good content. For processing in built-up urban areas, can you follow these steps or is there a better way?
Thanks for watching!
I'd suggest starting with this approach as a basis, but perhaps play around with the Gradual Selection steps a little. The principles of photogrammetry do not change, so you should see decent results with the approach outlined in the video.
@@geospatialtips Thank you for your attention. As for the type of flight plan, is any one more advisable: a normal flight or a crossed flight? How much front and side overlap? And what drone speed should I use?
@@jacksonlima5821 If the buildings are tall or very close together, then yes, a crossed flight plan would be advisable; a grid pattern covering the whole region of tall buildings is ideal.
Then, if you are using a grid pattern, you can relax the side overlap a bit and use about 50-60% but keep 70-80% forward overlap. If you do not use the grid pattern, then increase your side overlap to about 80% as well.
Overall, this may be more overlap than you need (both scenarios) but you would rather have more images, than too few and have to refly.
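If it helps to plan the spacing, the distance between exposures follows directly from the ground footprint and the overlap fraction. A small sketch in plain Python, using made-up numbers for a hypothetical 1-inch-sensor camera (13.2 x 8.8 mm sensor, 8.8 mm lens) flying at 80 m:

```python
def photo_spacing(altitude_m, sensor_mm, focal_mm, overlap):
    """Distance between exposures (m) for a given overlap fraction along one axis."""
    footprint = altitude_m * sensor_mm / focal_mm  # ground footprint along that axis
    return footprint * (1.0 - overlap)

# Forward spacing at 80% overlap (along-track uses the 8.8 mm sensor dimension):
print(round(photo_spacing(80, 8.8, 8.8, 0.80), 1))   # 16.0 m between exposures
# Side spacing at 60% overlap (across-track uses the 13.2 mm dimension):
print(round(photo_spacing(80, 13.2, 8.8, 0.60), 1))  # 48.0 m between flight lines
```

The camera dimensions here are illustrative only; substitute your own sensor size, focal length and flying height.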
So with all these filters you actually filter out inaccurate points, leaving better points for the model? You don't lose important data in the process because the data you remove is not accurate anyway? Great content btw.
Yes, exactly correct. Keeping in mind that these are just tie points, they serve to “connect” the images together and establish common points. They play some role in depth recreation later on, but for that and other reasons you would always prefer a cleaner dataset.
Thanks for watching and I appreciate the positive feedback!
@@geospatialtips So tie points serve to "connect" triangulations? With more cleaning, do we get better polygons?
In following these steps, I don't have the dense cloud option. Did I miss something or is it named something else?
Hi - if you are using the new version 2 of Metashape, it is now called "Build Point Cloud"
Hope that helps
On Align Photos I have Estimated and Sequential for the Reference Preselection... which one do I choose?
Hi
Usually estimated is a good option here. It'll perform an initial alignment, analyze the images and then correct the alignment. I think it also bases it on your GPS positions to begin with.
Sequential just uses the images in the order they appear in the list, so if you have multiple and crossing flights, this may not be a good option for you.
Hey, I love your videos 😍. Is it necessary to buy Agisoft Professional to calculate the volume of an excavation?
And another question: is it possible to use the demo version of Agisoft Metashape Professional?
Hi - yes, you can definitely use the demo version to calculate volumes and do all other things.
Hi - thanks so much! Yes, you'll need to have the Pro version for volume calcs.
Thank you. But why doesn't everyone do it with the demo version, which is free?
@@retobrugger3270 A few reasons I can think of: The demo is only free for 30 days, after that it won't work. Secondly, I believe the terms and conditions state that the demo cannot be used for any commercial work or financial gain, so technically it shouldn't be used for such things. Nothing physically stops you from doing so and continuing to use it, but those are some reasons, nonetheless.
What is the RMS error of this data? How can we improve the accuracy without GCPs?
What confidence filter value would be good when extracting the point cloud data for Autodesk ReCap?
Hi - the value selected would depend on the quality of your point cloud, the camera used, accuracy of your positioning data etc. The software you use post-Metashape doesn't really play a role in this decision. I would suggest you test a few values, observe the results on the screen in Metashape, then reset the filter and try again until you achieve the optimal result. Hope that helps!
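To illustrate that test-and-reset loop: in Metashape, a point's confidence is roughly the number of depth maps that agreed on it, and raising the cutoff discards more of the cloud. A toy sketch in plain Python (synthetic values, not the Metashape API):

```python
# Synthetic point cloud: each point carries a confidence value (roughly, how
# many depth maps contributed to it). These numbers are made up.
confidences = [1, 1, 2, 3, 4, 6, 8, 9, 12, 15]

def kept_fraction(confidences, min_confidence):
    """Fraction of points surviving a given minimum-confidence cutoff."""
    kept = [c for c in confidences if c >= min_confidence]
    return len(kept) / len(confidences)

for cutoff in (2, 3, 5):
    print(cutoff, kept_fraction(confidences, cutoff))  # 0.8, 0.7, 0.5
```

The practical workflow is exactly as described above: apply a cutoff, eyeball the result in the viewport, reset the filter, and repeat until the cloud looks clean without losing real surface detail.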
If we are only going to make an orthomosaic then we would just make the 2.5D model, right?
But what if we ALSO need to calculate volumes from the DTM? In that case should we make a 3D for volume calculation and 2.5D for the ortho?
Would just the 2.5D work well to make the DTM and calculate volumes? Thanks again
Yep - if only after the ortho, then 2.5D would be best in general. When it comes to volumes, you'll always want to go 3D, so the best solution is to do both.
However, if you only have time to do either 2.5D or 3D and need ortho and volumes, choose 3D as it'll give an acceptable ortho and good volumes. I wouldn't trust 2.5D to be very accurate for volumes - but maybe that's a topic for another video ;)
@@geospatialtips
Great, thank you!
@@geospatialtips
I'm working on a slightly larger data set and I have a new problem..."Not Enough Memory". My old computer is struggling with the model.
For volumes can we go straight from dense point cloud to the DEM and calculate the volumes, skipping over the 3D model? Any issues with doing that?
Your video on Volume calculation doesn't mention the 3D model but it does refer us to this video on the complete process to get the DEM.
Thanks again!
@@jerseyshoredroneservices225 Hi again :)
Yes sure, you can go from cloud to DEM and there are no issues with that.
If you still need to create a model for the whole region, you can follow the steps in this video and it'll probably get you around your hassles: ua-cam.com/video/BYRIC-qkZJ8/v-deo.html
Or you can manipulate the size of your system pagefile as well, I know that sometimes helps.
@@geospatialtips
Hello and thank you Sir!
I really appreciate all the guidance you give me/us here on UA-cam 🙂
Does RealityCapture generate better results or is Agisoft on par with it?
I think it depends on the application.
I am not very familiar with the intricacies of Reality Capture, but from what I have seen it models small objects or small regions really well.
However, when it comes to expansive areas, Metashape is right at the top of the pile. I don’t believe RC scales as well into large mapping projects.
I have also seen Metashape create some excellent smaller models too.
If I could only choose one, I would probably go for Metashape as it suits my needs. The choice is yours of course.
Please, I'd love to know how you got it to free rotate, because I hate using that ball to look around; it's so hard.
Hi - Sure, what I did is:
Go to Model->Navigation Mode->Terrain Mode (Default is Object Mode)
Also, Model->Show/Hide Items->Show Trackball (Turn off)
What version of Agisoft Metashape do you have?
Hi. This video was made with version 1.8. Currently I use 2.1 however.
Very helpful, thank you very much!
You're welcome!
Can I ask what processor you are using, and whether you are using GPU acceleration or only the CPU? I have seen in the settings that I can select my Nvidia 3080, but there is a "warning", so I'm scared hahah.
Sure - I use an older 8700k processor, and also a 3080 like you do.
In Tools/Preferences you definitely want to make sure your GPU is selected there, and at the bottom of that box where it says "Use CPU when performing GPU accelerated processing", make sure that is not selected.
@@geospatialtips Thank you, I'm working a lot now. Thanks to you I'm doing my dream job btw 🖤
I doubt I had much to do with your success and I'm sure it's due to your hard work. But that's great news, congratulations!
Are there any options for implementing this in code?
Can you attach an Arabic translation to the video?
Awesome, thanks for helping me out!
What are your PC specs? It seems you compute quite a bit faster than what I am running: i9 13900K, 64GB DDR5 RAM, GeForce 4080 16GB.
Hi, actually lower specs than you.
i7 8700k
64GB DDR4
GeForce 3080
It may just be the images I used are lower resolution than yours or I have used fewer images. But generally, with your spec, you should be doing better than I am.
it was fast forwarded
Good content, but I can't see your mouse pointer; it's a bit hard to follow.
Hi - Thanks for the feedback. This video is pretty old now and I think I've got the cursor visible in all videos that follow this one. Time for a redo of the Metashape walkthrough anyway, I think! Cheers
@geospatialtips Is there any other app you can recommend for photogrammetry that is not complicated and is free?
How can I calculate NDVI from this orthomosaic? Please help.
Hi - NDVI can only be calculated if the NIR band is captured, such as when a 4-band camera (RGBI) is used. This is only a 3-band camera (RGB) and so NDVI cannot be calculated unfortunately.
If you do have the 4th band in your data, then you will use the raster calculator to get the job done, this article should get you started: agisoft.freshdesk.com/support/solutions/articles/31000161545-prescription-maps-generation-in-agisoft-metashape
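For reference, the band math the raster calculator applies per pixel is NDVI = (NIR - Red) / (NIR + Red). A minimal plain-Python sketch with made-up reflectance values (in practice the calculation runs over the whole raster, not a short list):

```python
def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); returns 0 where both bands are 0."""
    return [
        (n - r) / (n + r) if (n + r) != 0 else 0.0
        for n, r in zip(nir, red)
    ]

# Toy reflectance values for a handful of pixels (made up for illustration).
nir_band = [0.50, 0.40, 0.10]
red_band = [0.10, 0.30, 0.10]
print([round(v, 2) for v in ndvi(nir_band, red_band)])  # [0.67, 0.14, 0.0]
```

Healthy vegetation reflects strongly in NIR and absorbs red, so it scores close to +1, while bare ground and water sit near or below zero.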
If you use 16384 instead of 4096 for the texture size in the Build Texture tool, you can get a better 3D model.
Hi, thanks for the comment.
Yes, as a single texture file, 16384 would be preferable to 4096 for a larger mesh; however, it'll be slower to view on older PCs or online. A similar quality can be achieved with multiple 4096 texture pages instead (pixel-for-pixel, one 16384 page holds as much as sixteen 4096 pages).
It's best to make the decision as to whether you want many files, or a single texture file, versus your GPU memory capability for display etc.
Thanks, is good!!
Thank you 😊
Welcome!
Looks like the programmers changed the menus. These days they just don't seem to care if they invalidate tutorials. There's no selection to "Build Dense Cloud" anymore.
Yip, they did change it up.
It is now called Build Point Cloud (or something like that), a little further down the same menu. But it does the exact same thing.
@@geospatialtips Thanks! That's very useful. I don't know if the recent trends in software to just change menu items at will, voiding UA-cam tutorials out there, is good or bad. I sure would have loved to ignore users back when I was stuck writing software that had user interfaces. But it still seems irresponsible to me! I suppose, with AI rising, it won't matter much.
Great info.
Thanks for watching!
If you had framed which button you were pressing, zoomed in and slowed it down, we could have watched those parts better. Thanks anyway.
Hi. Thanks very much for the feedback and suggestions. I have tried to implement some of these already in later videos and I hope they are easier to follow than this one.
More like this!
Working on it!
How the heck do you free rotate the viewport. I can't for the life of me find the hotkey or button press combination to rotate freely. It's infuriating.
Hi - Sure, what I did is:
Go to Model->Navigation Mode->Terrain Mode (Default is Object Mode)
Also, Model->Show/Hide Items->Show Trackball (Turn off)
Any way to calculate volumes without GCPs?
Hi Spikey Slayer - yes, volume calculations as you would do them with Metashape, Pix4D etc. are relative to the data (ground surface) and not usually absolute. What I mean is that you want to find the amount of material above the ground. Thus, if the ground is high above, or far below, the real-world position, it does not matter because your material (volume) would shift accordingly. This is of course reliant on you processing the data carefully, like I show in my videos.
When it would matter is if you are using known bases, or base layers beneath your volume. In that case you would need to ensure that the two surfaces (base and ground level you processed) match well.
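A toy sketch of why the absolute elevation doesn't matter for this kind of volume: the calculation sums (surface - base) differences over the cells, so shifting both surfaces by the same offset cancels out. Plain Python, with made-up heights:

```python
def volume_above_base(surface, base, cell_area):
    """Cut volume: sum of positive (surface - base) differences times cell area."""
    return sum(
        max(s - b, 0.0) * cell_area
        for s, b in zip(surface, base)
    )

# Toy 1D "DEM" of a stockpile on 1 m2 cells. Heights are made up.
surface = [100.0, 102.0, 103.0, 101.0]
base    = [100.0, 100.0, 100.0, 100.0]
print(volume_above_base(surface, base, 1.0))  # 6.0 m3

# Shifting BOTH surfaces by the same vertical offset leaves the volume unchanged:
shifted = volume_above_base([h + 50 for h in surface], [h + 50 for h in base], 1.0)
print(shifted)  # still 6.0 m3
```

This is exactly the "relative, not absolute" point above: as long as the pile and its base come from the same self-consistent reconstruction, GCPs aren't needed for the volume itself.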
If you need more help, feel free to mail me at geospatialtips@gmail.com and I'll try to explain in more depth - but I'm sure you get it.
@@geospatialtips Holy crap, thank you! OK, I need to give it a try. Have you any specific tutorial on how to do this? I already have Metashape free, but I think I need the Standard version to do this.
@@SpikeySlayer No tutorial on this yet - I just make a few videos when my day-job is quiet...
I suspect you would need Metashape Pro to calculate volumes; even the 30-day trial should work, as I think it has full functionality.
@@geospatialtips nice thx a lot, yeah sure when you have time, no stress🤙🔥
I had some time to compare volume calculations with and without GCPs.
ua-cam.com/video/UjTToqTkpEE/v-deo.html
Full volume tutorial coming soon as well.
Before trying to reclassify the dense point cloud, always apply "Reset Classification" first.
How do you capture the photos?
Hi - how you capture will depend on what drone/equipment you have. For smaller drones, I have tested and used Copterus ua-cam.com/video/Kwr4hzrdT0U/v-deo.html and Map Pilot Pro ua-cam.com/video/kZRXH9uNhoA/v-deo.html
To cover the general flight pattern and best practices, this video will describe what you should consider: ua-cam.com/video/5hW1aaV1Fks/v-deo.html
Thanks!
Very good!
I don't have the dense cloud option.
Hi!
From version 2 onward it is called Point Cloud and not dense cloud any longer.
good job
Thanks!
Thank you!