Please consider subscribing if you are enjoying and finding my videos helpful. Thank you!
Thank you a thousand times!! Long-time photographer (I still have fixer stains on my fingers), new to astrophotography/PI. Color balance has been driving me insane!
This is by far the easiest, best and most reliable method of color calibration I’ve found. Thanks!
This method is great. Most reliable, fastest method I’ve found.
Wow. Lately when I run into a bad problem, you have "been there, done that". My RGB images color calibrated to pink (magenta) stars. With little experience, I struggled to fix it (I tried Curves one color at a time, which helped). I just threw out PCC since it flat out failed for this data.
I'm a weak processor and I didn't know about Linear Fit or Statistics. I'm going to try this technique right now! It's great to realize you're not the ONLY person who has had magenta stars.
Thank you! Your video about rendering the stars and the galaxy with separate masks was also very helpful. That will now be my regular technique. Also, your DBE technique of placing the samples around the edges worked better than I expected. I don't know how someone figured out to do it that way.
You can invert the image after combining (I think it's under Image); it will look white. Then run SCNR on the image and take out the green, which will make the magenta stars white, because magenta is the opposite of green. The other option is the ColorMask script (under Script > Utilities): create a mask for magenta, and make sure you blur the mask by 3 or 4. Apply the mask to your image, then go to Curves and lower the red, which takes out the magenta because only the magenta areas are masked. You can do this for different colors if you wish; just select the colors you need in ColorMask, and make sure you blur the mask.
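A rough numpy sketch of what that invert, remove-green, invert-back trick does to the pixel values. This is a conceptual illustration of the idea only, not PixInsight's actual SCNR implementation; the `rgb` array and `remove_magenta` function are made up for the example.

```python
# Conceptual sketch of the "invert, SCNR green, invert" trick for magenta stars.
# Assumes a float RGB image with values in [0, 1]; not PixInsight's SCNR code.
import numpy as np

def remove_magenta(rgb):
    inv = 1.0 - rgb                              # invert: magenta becomes green
    r, g, b = inv[..., 0], inv[..., 1], inv[..., 2]
    # "average neutral" style green reduction: cap green at the mean of R and B
    inv[..., 1] = np.minimum(g, 0.5 * (r + b))
    return 1.0 - inv                             # invert back

# A purely magenta pixel comes back neutral:
pixel = np.array([[[0.8, 0.2, 0.8]]])
print(remove_magenta(pixel))                     # -> roughly [0.8 0.8 0.8]
```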
You are doing a great job of simplifying a very complex program. Keep up the good work.
Glad you think so! Appreciate it. Clear skies!
Great explanation! I find myself using this method to color balance more often than not.
Hey Shawn! Yeah, a fellow astrophotographer and friend showed it to me recently. I found it interesting. I usually use PCC, but this way I don't have to enter all the details PCC requires. The LinearFit method is somewhat simpler to use. Thanks for tuning in! Hope all is well.
Another great video Shawn!
I often have issues with colour balancing as it is difficult to know where to place the preview on the main image for the best result. This method seems to work very well and I will probably use it for all my images. Thanks again for the video!
Thanks Phil. Appreciate the comment and tuning in to watch. Cheers!
Nice job...love when you make it look easy! and it is!
Thanks Greg! Clear skies!
Great video Shawn. Learned a lot. Thanks.
Shawn
I’ll give it a try
Cheers
I'll give this a try. I have a light pollution filter that does a good job upping my S/N ratio, but tends to unbalance my colors a bit (making red lower than the others). I'll try this method to re-align them. It looks much easier than my current method of doing it by hand.
Thanks for this tip Shawn!
Thank you for tuning in! Clear skies!
Thank you! You saved my image :)
Awesome!
Thanks for your great video, you are really doing a great job with all of your videos! One question, when should you do this color balance in the PI process?
Great tip Shawn
Thanks 👍
Very interesting video, and thank you.
Why not just linear fit your narrowband images before combining them?
Why the extra steps of combining, separating into RGB and then combine again?
Probably because when he combined them, he did not weight them equally (see 2:09). That's what it looked like in the video. As for LinearFit, I first saw it used in another tutorial: ua-cam.com/video/Xl036NZ3Eoc/v-deo.html. It's a pretty good process indeed.
My thoughts too. I guess we would have to test both routes and see how they turn out. Personally I just do the Linear Fit on the Ha, OIII and SII, which seems to work OK.
I'll have to try that way as well. It should work fine. I presented it this way, though, so those with an OSC master light would understand to split the channels, LinearFit, and then recombine.
@@VisibledarkAstro That makes sense! Thanks for the great video!
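For anyone curious what the split-channels, LinearFit, recombine step works out to numerically, here is a rough numpy sketch of the idea: each non-reference channel is rescaled by a least-squares straight line toward the reference. Illustration only; it ignores LinearFit's reject limits, and the `linear_fit_channels` function and `rgb` array are made up for the example.

```python
import numpy as np

def linear_fit_channels(rgb, ref_index=0):
    """Rescale each non-reference channel toward the reference with y ~ a*x + b."""
    ref = rgb[..., ref_index].ravel()
    out = rgb.copy()
    for c in range(rgb.shape[-1]):
        if c == ref_index:
            continue
        x = rgb[..., c].ravel()
        a, b = np.polyfit(x, ref, 1)             # least-squares fit: ref ≈ a*x + b
        out[..., c] = np.clip(a * rgb[..., c] + b, 0.0, 1.0)
    return out
```

After this the three channel medians land in roughly the same place, which is why the red, green and blue histograms end up lining up before the final ChannelCombination.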
Try tone mapping instead and get away from this basic green output which is still dominated by the Ha. Linear fit will not fix this (helps a little at the beginning maybe). And don’t use SCNR since this removes the green and in doing so throws away data.
That was very interesting and instructive, but I have another question: how is progress with NINA coming along? They clearly don't sit still over there; there are almost daily new updates.
NINA 1.11 nightly builds are progressing. The last one I installed/updated to was #8. I have grabbed the bull by the horns and become more familiar with how to use the new sequencer, and it's now making more sense. I did an imaging run the other night using it and it worked well. I didn't have any real problems.
Hi, thanks for an amazing video. Why did you choose the red channel as the reference? Is it because it's the lowest number or something? On my image red is the highest, then green, then blue, so with blue being the lowest, would that be my reference?
Big help Shaun, thank you. Subbed.
Damo
I would be interested to know how you came to the conclusion to use the image with the lowest median value. Logic tells me you've done this backwards. If you use the image with the lowest median value (rather than the highest) as your reference, you would potentially be clipping the data in the images with the higher median value by reducing them.
A friend and fellow astrophotographer showed this to me. He's not a social media lover, so he wished to remain anonymous. I'm not sure exactly why the lowest median value works. I did try using the highest and found it didn't colour balance as well; it leaned more to the green. That was my experience.
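For what it's worth, here is a synthetic-data sketch of the difference between the two choices of reference: fitting toward the lowest-median channel gives a scale factor below 1 (values pushed down), while fitting toward the highest gives a factor above 1 (values pushed up). Hypothetical data and function names only, nothing from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random((200, 200)) ** 3                 # mostly faint background, a few bright pixels
red   = np.clip(1.0 * signal + 0.05, 0, 1)           # highest median
green = np.clip(0.6 * signal + 0.03, 0, 1)
blue  = np.clip(0.3 * signal + 0.02, 0, 1)           # lowest median

def fit_slope(channel, reference):
    # slope of the least-squares line: reference ≈ a*channel + b
    a, _ = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return a

print("medians (R, G, B):", [round(float(np.median(c)), 3) for c in (red, green, blue)])
print("slope fitting red toward blue (lowest as reference): ", round(fit_slope(red, blue), 3))   # < 1
print("slope fitting blue toward red (highest as reference):", round(fit_slope(blue, red), 3))   # > 1
```

Whether anything actually clips then depends on which direction pushes pixel values past the ends of the [0, 1] range after the fit.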
When should I use this as opposed to the color calibration/PCC options in PixInsight?
Why should the peaks of the three histograms line up? If the image contained just white stars, yes, but not for most nebulosity.
This troubles me too. It would be nice if LinearFit could be forced to calibrate using only nominal black background pixels; otherwise truly colored features would seem to bias the results. Maybe the Reject low and Reject high limits could restrict which pixels are used for modelling. Once the model is built, it would then be applied to all pixels except those that are saturated. I think.
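A small numpy sketch of that idea: estimate the fit only from pixels inside a low/high rejection window (roughly the background), then apply the resulting scale and offset to everything that isn't saturated. This is a guess at the behaviour being described, not how PixInsight's LinearFit actually works internally; the function name and default limits are made up.

```python
import numpy as np

def background_linear_fit(channel, reference, reject_low=0.0, reject_high=0.1):
    # Use only pixels where BOTH images sit inside the rejection window
    mask = ((channel > reject_low) & (channel < reject_high) &
            (reference > reject_low) & (reference < reject_high))
    a, b = np.polyfit(channel[mask], reference[mask], 1)   # model from background only
    fitted = channel.copy()
    unsaturated = channel < 1.0                            # leave saturated pixels alone
    fitted[unsaturated] = np.clip(a * channel[unsaturated] + b, 0.0, 1.0)
    return fitted
```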
This could easily become a short and handy script.
The image you used to start working on, was it already color calibrated (PCC) before you ran the statistics and linear fit? My raw, combined RGB had gradient swaths of LP. I just used the raw RGB combination and found my red was considerably stronger in the median. I applied Linear Fit to blue and green after splitting the raw combined RGB image. I got a very blue/green image that remained very green/blue after STF with channels unlinked. Not usable. Should I have applied PCC to the original RGB before I split them? Leave it to an amateur to mess up your careful instructions, right?
You didn't say how you know red is the channel to use as the reference. Because it has the lowest median value?
Lowest median value, yes.