Adam has provided us, without any doubt, with the best tutorials on image processing worldwide for over twenty years (and many of them are free). The investment in your tutorials is the best expense an amateur can afford. Thanks a lot for your time and creativity.
Thank you!
Those tools are based on the idea of real-time adjustment toward a desired result, something we have always had in Photoshop, as opposed to, let's say, the "blind algorithmic runs" in PI, which of course are extremely powerful and dynamic. So it seems like PI is "Photoshopizing", in a way.
Well... this is still a third-party script. So there is a gateway for Photoshop-like algorithms and methods.
No, you misunderstood. What was missing from PI was the flexibility of real-time judgment while running specific tasks, like we had in Photoshop. Thanks.
And it's the exact target that I was processing at the time. Awesome tutorial, thanks.
Adam, you are solving my biggest gripe so far regarding LRGB imaging. I avoided taking L frames because producing an LRGB image was a 50-50 chance of turning out good or bad. Eventually I found it to be a waste of time, because I buried the L with the curves and it did nothing to the image. I'm gonna try this method ASAP! Thanks again!
Great... I hope this helps. This is exactly what is done in Photoshop... so it should work just fine. All of the behaviors I learned over the years (previous to PI) now have come back into play. :)
Perfect, this is what I'm looking for. LRGB was always difficult for me to combine; I tried different methods. Thank you, Adam, for your work. Greetings from Switzerland!
Thanks for watching!
Thanks for this free public teaser!
I just watched the entire series in Fundamentals on your website this weekend, and got a lot out of it - highly recommended!
Great... Thank you for being a member!
You can also blend two monochrome images. I did this to mix bin1 and bin2 masters.
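(For anyone curious what such a mono-to-mono mix amounts to numerically, here is a minimal sketch, assuming both masters are registered, equally scaled arrays in [0, 1]. The function name and the 0.5 opacity are illustrative, not ImageBlend's actual API.)

```python
import numpy as np

def opacity_blend(master_a, master_b, opacity=0.5):
    """Normal-mode blend of two registered mono masters:
    a straight weighted average controlled by opacity."""
    return (1.0 - opacity) * master_a + opacity * master_b
```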
Thanks Adam!
Thanks! I did write to you many times concerning GalaCell... but the e-mails would not go through! Do you have any other means of contact? I did get yours through my site.
Amazing Adam! This script is most welcome! Thank you
Thanks for watching!
Interesting new tool Adam, and certainly one to try out. However, I'm a bit underwhelmed by the comparison between the LRGB color channel combination method and the script. One thing the script rather clearly demonstrates is the need to ensure some form of DBE/gradient removal is done, and that the color image is color corrected (as would be a normal process), before proceeding to the 'stretch' and subsequent steps. If the component images have garbage, that seems to be enhanced in the combination process. The LRGB color combination image seemed to show an increase in black level and some reduction of the color bias/gradient relative to the script-created image (at least as was visible in the YouTube video).
How does the script tool work with starless images created with tools like SXT? I normally separate the stars out and work with them separately from the object (except when the object is a globular cluster or something similar). That allows the freedom to work with colors/saturation/contrast without impacting the star color or creating 'bloom'. How does it react when the Lum layer is also used as a mask to prevent changing/impacting the background during the combination process (see the mask-math sketch after this comment)?
I like that the tool provides a means to preview the result, something the LRGB channel combination tool lacks. At first glance that seems to be the biggest single benefit, since the LRGB CC tool requires trial and error to get things 'just right'.
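(For reference, applying a luminance mask during a blend reduces to standard per-pixel mask math, sketched below assuming a stretched mask in [0, 1]. This illustrates the idea the commenter is asking about, not how the script actually handles masks.)

```python
import numpy as np

def masked_blend(original, blended, mask):
    """Per-pixel mix: where the mask is bright the blended result shows
    through; where it is dark (background) the original is preserved."""
    m = np.clip(mask, 0.0, 1.0)
    return m * blended + (1.0 - m) * original
```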
Amazing work. Looking forward to trying the script on my data.
Great... I think you'll like it.
Excellent as usual, Adam.
Thanks for being so amazing Adam!
this helped me immensely
Perhaps consider joining my site as a member for more "immense" helpful information.
Thanks. Great tutorial!
That's cool. I bet this would work well for Ha+LRGB images.
Very nice. Thank you
One thing I find striking is the improvement in sharpness of the ImageBlend result compared to the LRGBCombination. It looks like your L image is sharper than the RGB, so is LRGBCombination not adequately retaining that level of detail in the result? I thought that was the whole point of the tool!
I've just been using the Color and Luminosity blend modes in Photoshop, which has been really effective, but I'm curious about trying it this way.
Nice! When in the process do you integrate them? At the start or after first processing both the luminance and rgb?
Now... that is a question that is answered on my site in my tutorials (you should consider becoming a member!). I am not certain I understand what you mean by "integrate"... you do this after you have permanently stretched them... then you put them together and work on the image from that point. Is this what you mean?
@AdamBlock Yes, that is what I meant, thank you very much!
This is so awesome.
Can I do this to blend luminance from mono with OSC data?
Yep. This is open-ended.
Looks great! Is LRGBCombination (the native PixInsight version) supposed to be a non-linear process? I've always used it in the linear stage with linear-fitted, calibrated, and background-neutralized images, which would give me a very flat result that I'd manipulate further in processing. This is because I have quite a practiced and developed RGB processing workflow, and I feel more comfortable using the regular PixInsight tools for my non-linear processes (curves, masked brightness adjustments, etc.). With your tool, would I still be able to do the L+RGB combination in a linear state, not a non-linear one as you showed in the video, so that I can stretch and process them together?
The way I demonstrated its usage is how it was designed to be used for this kind of LRGB result. There is a thread about this and an explanation by Juan Conejero to this effect.
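(For readers wondering what an LRGB combination does under the hood: conceptually, it replaces the lightness of the color image with the luminance image while keeping the chrominance. Below is a minimal numpy sketch of that lightness-replacement idea — not Adam's script and not PixInsight's implementation — assuming stretched, non-linear inputs scaled to [0, 1], and using scikit-image for the CIE L*a*b* round trip.)

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_lightness_replace(rgb, lum, amount=1.0):
    """Swap the lightness (L*) of a stretched RGB image for a stretched
    luminance image while keeping the RGB chrominance (a*, b*)."""
    lab = rgb2lab(rgb)                       # L* ranges over [0, 100]
    target = 100.0 * np.clip(lum, 0.0, 1.0)  # scale luminance into L* units
    lab[..., 0] = (1.0 - amount) * lab[..., 0] + amount * target
    return np.clip(lab2rgb(lab), 0.0, 1.0)   # back to RGB, clipped to range
```

Doing this on stretched data (as in the video) avoids the flat, low-contrast look that a linear-stage swap can produce, which is the point of Juan Conejero's explanation referenced above.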
Hi Adam, I'm very near the end of the Fundamentals path. There's a recurring question that I've never had answered satisfactorily: If you've shot enough RGB that color noise in your target is well controlled then why do you need L? Your SNR is, by definition, already high so what's the benefit? I often see images where L is used to bring out faint details more quickly and the overall structure looks fine. The color, however, is usually mottled or smeared, presumably from insufficient RGB. It appears that the color data gets heavily processed to control noise with tools like Blurxterminator or with convolution. While this can look ok superficially it doesn't look very good under even casual closer inspection. Is there something I'm missing? Thanks!
You aren't missing anything. You will note in my explanation for LRGB in my tutorial I talked about why LRGB was useful in the past (time savings through binning to reduce read-noise). However, this isn't true any longer with CMOS. So why did I make the video? Well... people are still doing LRGB! lol I do not think there is a big benefit other than some time savings where it isn't possible to acquire as much color as desired. That is really it I think.
@@AdamBlock Thanks! As an experiment I processed NGC4565 with a synthetic L composed of integrated RGB masters and then with a true L. The difference was negligible. For bright targets I think I'll shoot pure RGB. For faint stuff like IFN I'll shoot LRGB because some parts are just too faint to get SNR on without 40+ hours of combined RGB. Thanks again! Btw, your section on image calibration helped me solve an issue I was having with my ASI6200MM. 😌
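(For anyone wanting to replicate that experiment: a synthetic L is simply an average of the registered R, G, and B masters — in PixInsight a PixelMath expression along the lines of (R + G + B)/3. A minimal sketch, assuming linear masters in [0, 1]; the noise-based weighting mentioned in the comment is a common refinement, not a requirement.)

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Weighted mean of registered R, G, B masters; equal weights give
    the plain average, while SNR-based weights favor the cleanest channel."""
    w = np.asarray(weights, dtype=float)
    return (w[0] * r + w[1] * g + w[2] * b) / w.sum()
```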
I also don't see any benefit to using real L. Plus, I really don't care to stack 3x the images just to gain control over certain portions of the image. There are a lot of great tools for contrast control.
Sorry Adam, but I must have missed something. I saw in Fundamentals the whole process for developing M83. However, I was quite "shocked" by the lack of a calibration process. What happened to the famous dark/flat/bias frames? Maybe you explained it somewhere?
TelescopeLive provides only calibrated images. Other workflow examples show the calibration (WBPP processing). It is pretty much the same thing each time! :) I will be creating more content with more WBPP tutorials... the point of this section was the LRGB instruction (as well as more secret processing). Everyone always asks me to show what to do AFTER the initial processing... this is what I did.
Hi Adam, with OSC do you extract a luminance image from the RGB image?
You could... but the luminance (lightness) is the same information as when you keep it all together as a single color image. There isn't a benefit to extracting this information unless you plan to take a wildly different path with it... but I cannot think of a reason right now that would be beneficial.
Would you use This Image Blend script for adding continuum subtracted Ha data to RGB?
You could. However, the NB ColourMapper script can be used to manage the *color* of the blend. Usually you would blend with Screen, which is indeed one of the options in ImageBlend, but the color management part is done in NB ColourMapper.
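(The Screen mode mentioned here is standard blend math, sketched below for reference — the general formula, not ImageBlend's source. It assumes both images are stretched and in [0, 1].)

```python
import numpy as np

def screen_blend(rgb, ha):
    """Screen mode: invert both layers, multiply, invert back.
    Brightens wherever the Ha layer is bright and never darkens."""
    return 1.0 - (1.0 - np.clip(rgb, 0, 1)) * (1.0 - np.clip(ha, 0, 1))
```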
@@AdamBlock Thanks Adam, just what I wanted to know.
Anyone else having issues with the script download? I put the address in PI and checked for updates. It downloaded, but it doesn't appear anywhere to open. The site says it will open under Scripts > Utilities, but it isn't there. Any ideas?
Why don't you use a one-shot color image for reference only?
Why did you not background neutralize this one?
What is going on here? Was something in Pixinsight made EASIER to use?? I really don't understand how this can happen. It runs counter to the entire product philosophy behind Pixinsight!! I may have to delete my installation of it!
Ha ha... I have the same issues... Look, I made a 9-minute video... I am surprised you watched it given its brevity. :)