I always take something really useful away from your videos, and this was no exception. I have had problems with rejecting some stubborn satellite trails from an image I've just taken, and your selective rejection technique sorted it out immediately. Brilliant.
Thanks!
Awesome technique, Adam! I use the GAME script for lots of things, but this selective-rejection technique is great: it keeps bad data from being introduced into the final image in the first place, rather than dealing with it afterwards. That DBE bit is a great teaser. I've found various ways to do it, but yours is obviously much more advanced. I'd love to know how. Thanks for sharing!
Hi Adam. I'm here not only to thank you for the excellent tutorials and the depth in which you cover each topic, but also for the way you convey it. English is not my native language, and the way you pronounce the words, along with the demonstrations you give in PixInsight, helps us a lot. So thank you for all of this, directly from Brazil.
Thank you!
Hello Adam, very innovative technique. Congrats to Hartmut and you for a good collaboration on this one.
Thanks! Maybe you can highlight this technique for all of your German followers on your channel.
@@AdamBlock Would love to do that. I will scan my images for a dataset that fits the need.
Wow. I'm new to PI and perhaps easily impressed, but I had no idea things like this could be accomplished. I am staying tuned for more of your videos on UA-cam and will be checking out your other offerings. I find your presentation style very easy to follow: What, Why, How. Technical training done right. (Been there, done that, and I really appreciate finding training excellence in my retirement project.)
Well.... you really should check out my FastTrack Training which leads to much more... see ua-cam.com/video/BwrEqFS2Yd4/v-deo.html Thanks!!
I really appreciate and learn tons from your videos Adam. Took your Fast Track course and look forward to more videos like this!
Excellent!
Outstanding Adam, I'm about to include GAME script, thanks.
Adam - Thank you for this great tutorial! The detailed explanations are very useful (like always in your videos) and the described technique is really powerful. I would highly appreciate to see more of this kind of videos.
Thanks Joachim!
That's pretty amazing, and very well presented as always. Thanks!
Thank you very much!
Great Video!! I have data sets that have suffered with the dreaded unremovable donuts! This has given me ideas on not only how to solve this issue but others as well, thanks :)
You're welcome! Please consider more insights like this at AdamBlockStudios.com .
Very helpful technique, would love to see more of these!
Thanks Pascal C (two programming languages!)
Great technique and well explained as always!
Thanks Daniel.
Seems like a great tool for removing almost any artifact, perhaps even airplane trails. But there must be a limit on the percentage of frames involved, e.g. pulling data out of 75% of all frames in the same spot doesn't leave enough good data to make a good master light.
You are correct in general... but it depends! If the artifact happens to occur over an area of bright signal, then even if you have just a few frames that are artifact-free in that area... you WIN! But the fewer the good frames, of course, the lower the S/N becomes, down to the point where you are using a single good frame to remedy the issue (at which point the artifact really is in every other frame!).
Thanks for this useful technique, Adam! The vids on your website are beneficial too, well worth the price of admission :).
Very powerful Adam, thank you!
Great... I hope that when you need it, you will like using it. Of course, I also hope you never need to.
Great tip! One problem I did notice with the GAME script is that it ditches all FITS headers in the image. So when you write the masked version of the image, it doesn't contain any headers you might want to use e.g. for image weighting.
Yes... I wrote to Hartmut about this. I think this is an oversight that should be easy to fix.
This has now been implemented...so a new version should be available (automatically downloaded) from the repository. (It worked for me...)
Great video and technique. Can it be used for OSC? If so, when in the workflow does one apply it? Thanks, Des
Absolutely. As shown, after you register your images you can write the shapes over the artifacts (on the debayered and registered images). Just to stress: the artifacts need to be in a subset of the data. If they are in every frame, different techniques must be used.
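If it helps to see the concept outside of PixInsight, here is a rough numpy sketch of what "writing a black shape over an artifact" amounts to. This is only an illustration of the idea (the real work is done interactively with the GAME script on the registered, debayered frames); the array, coordinates, and sizes below are made up.

```python
import numpy as np

def zero_ellipse(frame, cx, cy, rx, ry):
    """Write zeros over an elliptical artifact in one registered frame.

    Conceptual stand-in for the black shape the GAME script paints; the
    zero-valued pixels are what rejection later throws out at this location.
    Single channel for simplicity.
    """
    h, w = frame.shape
    y, x = np.ogrid[:h, :w]
    inside = ((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0
    out = frame.copy()
    out[inside] = 0.0
    return out

# Hypothetical example: blank a dust donut near (520, 310) in one registered sub
sub = np.random.rand(1024, 1024).astype(np.float32)   # placeholder for a loaded frame
cleaned = zero_ellipse(sub, cx=520, cy=310, rx=40, ry=40)
```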
@adamblock Brilliant video as usual! I'm also an Adam Block Studios Horizons member and couldn't get by without your website. My question is: what if you're using NSG? I've been studying your chapter on NSG and wondering if the idea is to employ this technique right after the NSG script completes and before Image Integration? Many thanks and keep up the awesome website!
Yeah... I guess it depends on the dither amount. NSG will ignore zeros in images... so it is OK to write black pixels before NSG... no problem. But doing this on the normalized images might be better since you can clearly see the issues. However, you will need to have larger spots to accommodate the shifts between them.
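To picture why the spots need to be larger: dust is fixed to the sensor, so once the frames are registered to the stars, the artifact shifts from frame to frame by the dither amount. Here is a small numpy sketch of the idea, assuming you reuse one set of shapes across all frames (the mask size, image size, and 15-pixel dither are made-up numbers):

```python
import numpy as np

def grow_mask(mask, pixels):
    """Grow a boolean artifact mask by `pixels` in every direction (a simple
    dilation), so one set of shapes still covers the artifact despite the
    frame-to-frame dither shifts. np.roll wraps at the image edges, which is
    fine for shapes drawn away from the borders."""
    grown = mask.copy()
    for dy in range(-pixels, pixels + 1):
        for dx in range(-pixels, pixels + 1):
            grown |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return grown

mask = np.zeros((1024, 1024), dtype=bool)
mask[300:340, 500:540] = True            # artifact as seen in the reference frame
padded = grow_mask(mask, pixels=15)      # padded enough to cover it in every sub
```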
Thanks for this!!
Very nice, love this!
Thank you Adam. I can definitely make use of this tip. If I'm using WBPP, which set of files do I manipulate just before image integration so that I can redo that final set of master files?
Great video
This is great! If I have dust donuts that the flats don't remove from a single night of imaging, is there a good trick to remove them? Thanks
That is exactly the idea!
@@AdamBlock If I understand your process correctly, it assumes that some or most of the subs don't have the issue. If all of them do, how would it work in terms of rejection, since all the pixels would be zero and you'd be left with a hole?
@@jasondain8713 Yes (I clearly state this in my explanation). If the issue is in all of the images (in the same place), you need to resort to different kinds of techniques. This one, if you can do it, is one of the best, since you are substituting real data into the problem area through rejection. As for those other techniques... this is why people invest in my videos at AdamBlockStudios.com !! hint hint...
This is a nice technique if you happen to have only a subset of affected images. Unfortunately, I tend to find all of my subs are impacted. For example, I'll have some dust mote that for whatever reason is not properly calibrated out with flats, but it's on every single image. I'm still trying to figure out why the flat didn't calibrate it out in the first place. Of course, I could always tear my imaging train apart, clean all of the optics, reassemble, and then take another few nights of subs without the dust mote. If I do that, then I can use your technique :)
Yes, so you are on the right track. The most effective way to improve image quality is to first optimize the acquisition of the data: there is no substitute for higher-quality input data. Very rarely does processing out artifacts after the fact work well. If I understand correctly, *every* time you observe you have uncalibrated dust motes (in the same positions). So this means you need to work on flat acquisition. Everything from bad calibration data to incorrect calibration settings to scattered light in your system, etc., can be in play. You have to eliminate each as a source of the problem.
Thanks, Adam--and Hartmut! It seems to me that one could use this to "save" images one might have either forgotten to create flats for or lost the flats for, for whatever reason, as long as one had a similar "good" data set with which to combine. Say you imaged M42 in two different sessions, one of which you "forgot" to do flats for, each with, say, 20 subframes. If I understand the process correctly, as long as they are all registered, this should work to save the data you got without flats. Do I have it right?
Yes sir! Indeed, I have employed this very trick in the past to salvage some data for which there were no good flats. I applied flats from a different session, which still took care of the large-scale vignetting... but not the dust... which I zapped with this method.
Very useful technique
Thanks Shaun.
Very nice. Please, what is the link to the follow-up video on making the background uniform, as mentioned at the end of the video?
www.adamblockstudios.com/articles/dbe-you-are-control-not-the-other-way-around
Thank you :)
Awesome, Adam. I consider myself a neophyte with PixInsight and still use Batch Preprocessing for stacking. I understand conceptually what you did and am wondering if you can use this method with BPP; it looks like you can. I prefer short videos such as this one rather than long-winded ones. Cheers
I am not clear if you think this is a long-winded one. It might be, but if I leave even a small piece of information out or incomplete, I will get complaints! Take your question, for example. You will note that I created the black bits on the REGISTERED images. If you create the shapes and *then* register the images, it might not work out as well, because the registration process will smear the shapes and they will not reject as cleanly. Why do the shapes get smeared? Because registration interpolates pixels. What's interpolation? lol. I have to pick and choose what to say and not say. It's tough! But this is the reason you cannot let WBPP (BPP) automatically register and combine your images: you have to pause to create the black shapes after registration and before integration. Using WBPP (BPP) to calibrate only is the way to go.
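A tiny numpy example of the smearing, if that helps anyone (my own illustration, not from the video): a half-pixel bilinear shift, the kind of resampling registration performs, turns the hard edge of a black shape into intermediate values that no longer reject cleanly.

```python
import numpy as np

# One row of pixels through a frame with a hard-edged black shape (zeros) in it
row = np.array([0.8, 0.8, 0.0, 0.0, 0.0, 0.8, 0.8])

# A half-pixel bilinear shift simply averages neighboring pixels
shifted = 0.5 * (row[:-1] + row[1:])
print(shifted)   # [0.8, 0.4, 0.0, 0.0, 0.4, 0.8]

# The border of the shape is now 0.4 instead of 0.0: the "black" pixels have been
# smeared into the data, so they are no longer clean zeros to reject at integration.
```

That is why the shapes go onto the already-registered frames.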
@@AdamBlock Got it, thanks. No, I did NOT think this was long winded. I thought it was clear and to the point. I was speaking about some general videos that end up being over an hour long.
@@AstroQuest1 Ah... great. Technically I could have done this one a little faster- but then as I mention- some particulars are lost on some people- and I want to be as inclusive as possible when reaching my audience. :)
Thank you Adam for the nice video. Really powerful and helpful technique! Question, though: would doing this mean that those areas in the final integrated image will have a lower local SNR with respect to the rest? Essentially, you are integrating fewer frames in those areas, so it should be equivalent to having the total integration time lowered by "number of frames" * "exposure length", correct? Of course, it's still better than ending up with the artifacts/dust donuts, but it might be something worth considering, especially if the number of frames with the problems is high compared to the total number of frames.
Yes, you are exactly correct. This is true for any kind of rejection. However, if the artifact is in an area of significant signal (say, the galaxy), the local SNR will remain high. If it is in an area of low signal but on the sky and unimportant, no problem. The issue is where there is low signal and important information... *but* with enough frames even this situation will work out well. The key is to have enough good frames to complete the average.
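For anyone who wants to put a rough number on it, here is a back-of-the-envelope sketch, assuming roughly sky-limited subs (so SNR grows with the square root of the number of frames averaged). The frame counts and exposure are made up.

```python
import math

n_total   = 40     # frames in the stack (illustrative)
n_blanked = 10     # frames where this spot was painted black
exposure  = 300    # seconds per sub

t_full  = n_total * exposure                  # 12000 s over most of the field
t_local = (n_total - n_blanked) * exposure    #  9000 s over the blanked spot

snr_ratio = math.sqrt((n_total - n_blanked) / n_total)
print(f"local SNR is about {snr_ratio:.2f}x the rest of the image")   # ~0.87x
```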
Hi, this method is very, very interesting! But my problem is this: I have a session with two giant donuts that were not removed because I did not take flats. These donuts are present in ALL of the pictures, so after using your method I have not solved the problem; in the stack you can see the two black circles that were created by GAME. How could I solve this? Thanks
That is correct. This method is used only when the artifact is in a subset (small fraction) of the images thereby allowing you to reject the pixels mathematically. However, artifacts that are present in every frame require different techniques. I demonstrate these in my tutorials at AdamBlockStudios.com . For example: www.adamblockstudios.com/articles/repair-dust-donut-or-any-artifact-without-cloning and www.adamblockstudios.com/articles/Horizons_Dust_donut . Perhaps consider becoming a member of AdamBlockStudios.com !
@@AdamBlock Ok, thanks.
nice
Excellent video! I have a question: what if you have this kind of artifact in all subs? If you remove these from all subs, you would have a hole in your final stacked image, right? Appreciate your response.
Yes correct. This technique relies on rejection of values through ImageIntegration. If you don't have any good values left over... yep, that is a problem! There are different methods of trying to repair artifacts that are in every frame (but it isn't through this kind of rejection). I have a number of videos on this topic at AdamBlockStudios.com . Please consider becoming a member and see all of the secrets! :)
@@AdamBlock Hi Adam. I have the same issue as Grigory. Which package on your site has the tutorials on this subject?
@@austingstephens www.adamblockstudios.com/articles/Horizons_Dust_donut and www.adamblockstudios.com/articles/repair-dust-donut-or-any-artifact-without-cloning for examples
@@AdamBlock Thank you very much, Adam.
So I have tried this. It removed the donuts but left a shadow of the ellipses lol... I'll keep at it, I must be missing something.
You need to have enough frames that do *not* have the donut... otherwise the difference in S/N will be apparent. In addition, if the normalization between images is not good, that could be an issue as well. If you are a member of my site, you can show me the example through my forum (lay out all of the particulars). Also remember... use Winsorized Sigma Clipping as the rejection method...
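For the curious, here is a stripped-down sketch of what Winsorized sigma clipping does for one pixel's stack of values. It is my own simplification (it skips the sigma correction factor, the convergence test, and the normalization and weighting that ImageIntegration applies), so treat it as a picture of the idea, not PixInsight's implementation.

```python
import numpy as np

def winsorized_sigma_clip(values, k_low=4.0, k_high=3.0, iters=10):
    """Reject outliers from one pixel's stack across the registered frames.

    Winsorization pulls extreme values in to the clipping boundary before sigma
    is re-estimated, so a bright outlier (satellite, hot pixel) cannot inflate
    sigma and hide itself. Simplified sketch only.
    """
    v = np.asarray(values, dtype=float).copy()
    m, s = np.median(v), np.std(v)
    for _ in range(iters):
        v = np.clip(v, m - k_low * s, m + k_high * s)   # winsorize, don't discard yet
        m, s = np.median(v), np.std(v)
    keep = (values > m - k_low * s) & (values < m + k_high * s)
    return np.asarray(values)[keep]

stack = np.array([0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.52, 0.51, 0.49, 0.95])
print(winsorized_sigma_clip(stack))    # the 0.95 outlier is rejected, the rest kept
```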
@@AdamBlock I kind of figured it out a different way, by integrating the donut frames into one image, then integrating the rest and adjusting the low number. It worked. Probably not ideal... I think I will normalize everything and try again. Thanks for responding; if I find myself stuck I'll take advantage of my membership ;)
@@AdamBlock The normalization was the trick thanks !
@@yomichee Excellent!
Once again, thank you for a great video... now, if I have these devils in all my subs (first light with my RC6 just the other day, woo hoo!), is there a way to do something similar? I love fundamentals, by the way :)
If they are all in the same place... sorry this will not help you. You will need to solve the more fundamental issue of generating flats that better calibrate things.
@@AdamBlock I followed your instructions anyway and look forward to applying the technique in the future. Thank you for replying.
Ok, sorry if this is a dumb question, but if I have a donut in every frame, the GAME script is not going to help, right? It appears that it needs non-dust-mote data to average out the outlier and replace it with clean averaged pixels.
Correct. But you are NOT averaging. You are truly excluding the artifact's values. This is the point. If the artifact is in every frame... there is no "rejection" possible. At AdamBlockStudios.com I demonstrate many other ways to get rid of artifacts. However, when you have frames that are artifact free- this is the ONLY way to use *real* data to mitigate the artifact.
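A toy numpy example of that distinction, if it helps (my own sketch; ImageIntegration's rejection is of course far more sophisticated): the zeroed pixels never enter the average at all.

```python
import numpy as np

# One pixel position across five registered subs; two subs had the artifact painted to zero
pixel_stack = np.array([0.52, 0.49, 0.0, 0.51, 0.0])

good = pixel_stack != 0.0                   # the zeros are excluded, not blended in
plain_mean    = pixel_stack.mean()          # 0.304  ... averaging the painted zeros digs a dark hole
rejected_mean = pixel_stack[good].mean()    # ~0.507 ... only the artifact-free frames contribute

print(plain_mean, rejected_mean)
```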
@@AdamBlock Thank you for clarifying that !