Your videos are seriously among the best I have ever encountered in astro imaging. Your vocal manner is perfect for a tutorial, and you always impart knowledge that I can immediately apply. I cannot thank you enough.👏👏👏
Thank you! I am glad it helps!
I love your videos. They are very informative. You mentioned using the levels, brightness and curves tools about nine and a half minutes into the video. I would love to see how you handle those.
Sorry for the delay. I live in the Canadian backwoods, and this is a very busy time, getting ready for the big winter snow. I hope to make a video showing the comprehensive addition of PL8 to my workflow shortly, and probably a separate video on PL8's light and color management power.
Brilliant; it helped me better understand your previous videos. Thank you.
Glad it helped.
You really are a great teacher and artist. Thanks again for another video.
I really think sharpness and resolution are something that a lot of imagers could pay more attention to. Shooting techniques that promote higher resolution are a great companion to what is presented here.
I agree. Sharpening is a loaded topic because, like focal length, it will bring out flaws as well as strengths in an image. Shooting technique is going to matter even more. Shorter subs, I think, are ideal, though that is often limited by light pollution and the response that may require from the imager. It's definitely a deep topic worth exploring further.
Thank you for your videos; they have been great as I implement the strategies. Could you explain the "default" mode in BlurXTerminator, please?
If you have clicked the check boxes or moved the sliders, just click the reset icon at the far lower right of the BXT tool. It will unselect all the check boxes and return the sliders to their default state. The default state is almost always perfect for initial deconvolution and sharpening procedures.
@@SKYST0RY Ahhh.. thank you.
I have heard that it is better to choose the frame with the lowest median as the reference frame in the Linear Fit tool, while you used the frame with the highest median as the reference. What is the rationale for choosing either the strongest or the weakest frame as the reference?
It was the outcome of experimentation. I tried developing a variety of DSOs using the lowest-, mid- and highest-median image as the Linear Fit reference, and the highest-median reference consistently produced better results.
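For anyone who wants to run the same experiment, here is a minimal sketch of the selection step in Python (assuming numpy and astropy are available; the filenames are illustrative, and the least-squares fit shows only the core idea behind PixInsight's LinearFit, without its rejection limits):

```python
import numpy as np
from astropy.io import fits

# Illustrative filenames standing in for your stacked channel masters.
names = ["R.fits", "G.fits", "B.fits"]
frames = {n: fits.getdata(n).astype(np.float64) for n in names}

# Pick the frame with the highest median as the reference.
ref_name = max(names, key=lambda n: np.median(frames[n]))
ref = frames[ref_name]

# Match each remaining frame to the reference with a least-squares
# linear model (gain a, offset b), the basic operation a linear fit performs.
for n in names:
    if n == ref_name:
        continue
    a, b = np.polyfit(frames[n].ravel(), ref.ravel(), 1)
    frames[n] = a * frames[n] + b
```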
I shoot in OSC and process my images using PixInsight (and all of the RC Astro tools) and Photoshop. I was wondering if there would be any benefit to separating the channels and doing the preprocessing steps you described here? I shoot using a ZWO ASI071, an APS-C sized sensor camera. Just curious about your thoughts.
I used to process images in Photoshop using a method I learned from Scott Rosen: creating a synthetic luminance channel from the RGB image, doing the brightening and sharpening on that image, then recombining it with the slightly blurred/saturated RGB image using the luminance blend mode.
Wondering if taking it a step further and processing the color channels/luminance separately would be beneficial or just more work. Like Cuiv the Lazy Geek, I tend towards the lazy side a little, lol.
It's a good question. In theory, I think you would get better results if you split and sharpened the RGB channels separately. I base this on the fact that a color camera is really just a mono sensor with some kind of Bayer layer over it. But I've never experimented to see.
OSCs do not give you a luminance channel; it's all in RGB. I have experimented with the benefits of creating a super luminance, but I did not find it useful. You will not get additional information that way because all the information is already in the RGB output. If you are looking to improve sharpening, I would suggest cloning the RGB image and separating out the high frequency information. On one clone, derive very fine HF info. On the next, derive fine HF info. On the third, derive moderate HF info. Then delete the low frequency layers and soft light composite the high frequency information back onto your image.
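If you want to try that outside Affinity, here is a rough sketch of the idea in Python (assuming numpy and Pillow; the blur radii and filenames are illustrative, and the blend uses a simplified soft-light formula rather than Affinity's exact one):

```python
import numpy as np
from PIL import Image, ImageFilter

def high_frequency(img, radius):
    """Original minus a Gaussian-blurred copy, centered on 0.5 gray
    so neutral gray means 'no detail'."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    hf = (np.asarray(img, dtype=np.float32) -
          np.asarray(blurred, dtype=np.float32)) / 255.0 + 0.5
    return np.clip(hf, 0.0, 1.0)

def soft_light(base, layer):
    """Simplified soft-light blend; both inputs in [0, 1]."""
    return np.where(layer <= 0.5,
                    base - (1 - 2 * layer) * base * (1 - base),
                    base + (2 * layer - 1) * (np.sqrt(base) - base))

img = Image.open("rgb_master.tif").convert("RGB")  # illustrative filename
result = np.asarray(img, dtype=np.float32) / 255.0

# Composite three layers of high-frequency detail back onto the image:
# very fine, fine and moderate (these radii are just examples).
for radius in (1, 3, 8):
    result = soft_light(result, high_frequency(img, radius))

Image.fromarray((result * 255).astype(np.uint8)).save("sharpened.tif")
```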
Hope that helps!
@@SKYST0RY Do you consider the micro, fine and regular sharpening macros in James Ritson's astro toolset in Affinity to be the same as your LF/HF technique here?
@@chrislee8886 It's not the same as frequency separation. The Ritson sharpening macros do not separate the information.
I find your workflow interesting, but how do you know when you have pushed the sharpness too far? Think about it: this is wispy gas in space. I feel that when sharpening goes this far, it no longer looks like gas; it looks too sharp. However, I recognize there is an element of personal taste. Your images do have high impact; my personal taste is for more toned-down processing.
Images will reveal when you have pushed sharpening too far by degrading in various ways. The most obvious sign is that images become gritty. In some cases, other artifacts appear that look like glitter across the image, and edges may be blown out to white or flattened to black.
While nebulosity is in fact wispy gas, it forms structure defined by temperature, clashes of interstellar winds and the push and pull of radiation, and the boundaries can be quite resolved. We see the same fluid dynamics in our own ocean, which is divided into various thermal layers with sharply defined boundaries that do not mix. When viewed at a distance (and we are many light-years away from the DSOs we observe), the boundaries between these structures look refined. Take a look at Hubble and JWST images of the Crab and Orion nebulae, as examples.
Another way of thinking about this is to imagine clouds. They have sharply defined boundaries. Cumulus clouds, as one example, have boundaries around each cloud, and the clouds sit upon a thermal or pressure layer in the atmosphere, also with sharply defined boundaries. Clouds often move in defined streams, depending upon which wind stream they are caught up in, and those wind streams also have defined boundaries. If you got very close to the boundaries, they would look fuzzier, but we see them at a distance.
Most persons shoot their DSOs in just a few hours and keep development of the information minimal. This limits their information. Due to light pollution, most shoot in narrowband, and narrowband typically captures only about 20 nm of the 320 nm of the visible spectrum, or about 6.25%. This means most images shot are information-poor. Persons develop these images and either accept the blur and grittiness, or keep the images darker and fuzzier to hide the flaws. This has led many persons to expect that unresolved images are the true appearances of these DSOs.
What I don't like about your videos is that you go off on tangents. You're doing work on the image while talking about the benefits of Affinity over PS instead of letting people know what you're doing in Affinity.
In the 29 seconds I compared Affinity Photo to Photoshop, what I said was that while Photoshop is very powerful, it is overpriced, and that Affinity Photo does everything I need for a one-time price of $50. Since people often tell me they find the software too expensive, I figure they will be happy to know there is an affordable alternative. If you watch the video, you will note that every procedure I describe is actually being done in the video. If I have to explain background theory, such as why frequency separation is important to sharpening, the video pauses while the background theory is covered. This is not a processing video. I have something like a hundred videos on processing, including a complete processing playlist. Since early processing is very repetitive, I don't cover it from scratch in every video.
@@SKYST0RY Not a processing video? Looks like that's exactly what you're doing.
@@yervantparnagian5999 This is a WORKFLOW video. It is in no way a comprehensive tutorial. What it really tells us is that maximizing the quality of an image is a pretty intense process, a process that cannot be totally fleshed out in a single 20-minute video.
These videos show me what can be done, but also that it is way more hands-on than I will ever be, and that I will simply have to settle for less finished images.
Even the amount of data that is thought to be 'enough' varies. I am happy if I can nail down 4 hours, whereas he likes 15-20 hours as a low end and 30+ hours as a high end.
I might shoot that much if I lived under dark skies, but I have to travel 5 hours to get that, and usually travel 1-2 hours to get somewhere darkish.
Watch the videos and get some hints and inspiration, then jump in and develop your own workflow.
I feel like this image, like many of your images, is oversharpened and overprocessed to the point of being almost unbearable to look at.
Understandable. LRGB shot under dark skies captures a LOT more information than the NB techniques many persons have to use to compensate for light pollution. In fact, typical NB filters only capture around 20 nm of the 320 nm visible spectrum, or about 6.25% of it, and many filter schemes capture even less. That simplifies an image. When viewed in LRGB, most objects are much busier than people are used to seeing. And since I tend to shun false color schemes, which greatly simplify light and shadow, my images will have a busier look in terms of color, light and shadow play, and detail.
Nonetheless, I prefer to go with truer color schemes and truer imaging. My images are not 100% true to visual because that isn't possible. The moment we stretch an image, it is altered. We humans, of necessity, place an observer effect on our images, one way or another.
But the fact is I don't even think of this image of Melotte 15 as well resolved yet. I should add another 10 hours of data to it to really flesh out the fine cloud structure that only shows as a haze in the darker regions at this time. It is best to think of many of my images as not aiming to look pretty but rather aiming to reveal structure and color. If they come out as beautiful, that is a nice side effect.