COMMENTS •

  • @charliemiller3884 · 10 months ago +5

    After 10 years of shooting mono LRGB-SHO frames, I have converted to only using an OSC camera plus a filter wheel with UVIR and Optolong L-eXtreme filters. This provides excellent RGB imaging and narrowband imaging with less imaging time and less processing time.

  • @Astro_Px · 3 days ago +1

    Hi Sascha, another awesome, well-presented video on a very important topic. All new and critical information; I never knew these details even after doing this for 9 years... lesson learned: take advantage and study it, as it takes too long (or maybe forever) to learn on your own. Follow-up please: you mentioned stretching before recombining, say in RGB, but I don't know the rationale behind it... I've seen some folks combine before stretching, and I did it that way myself, because I did not know any better, and the results were disappointing to the degree that the picture looked better without luminance. So again: why stretch first, and how best to stretch: (a) combine RGB in linear, stretch it, stretch lum, then add it in non-linear, (b) combine RGB+L in linear then stretch, or (c) stretch each of R, G, B, and L separately then combine in non-linear? Thanks

  • @OigresZevahc · 10 months ago +5

    Thank you very much for all you do for us, Sasha!

  • @bobc3144L · 10 months ago +3

    Outstanding explanation! Thank you.

  • @larryfine4719 · 5 months ago +1

    Ah, lots of things make sense here. While using RGB for luminance is not technically a bad idea, the reduced imaging time of LRGB definitely makes it the more efficient choice :-)

  • @davewilton6021 · 10 months ago +4

    Synthetic luminance can be very useful for SHO images and RGB images where you don't have real luminance subframes. After stretching, you create the synthetic lum and apply all your sharpening to that. Then you apply convolution (not deconvolution) to the color image. This blurs out the color noise. Then you recombine the images, with the sharpened synthetic lum as the new luminance channel. This sharpens the structure without sharpening the color noise. It doesn't increase your integration time, but it results in a better image without adding much complexity to the workflow.

    • @viewintospace · 10 months ago

      Great input, Dave! You describe nicely here what is preached by the ones promoting synthetic LUM. The issue is, the only advantage you can claim for doing this is blurring out color noise. And if THAT is really the only tangible effect synth LUM has (and I don't know of any other), then I know a MUCH faster way of achieving it.
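The workflow Dave describes above can be sketched in a few lines of NumPy. This is a rough illustration only, not anyone's actual pipeline: the box blur stands in for whatever convolution/sharpening tools you use, the equal channel weights for the synthetic lum and the ratio-based recombination are assumptions.

```python
import numpy as np

def box_blur(a, k=3):
    """Simple k x k mean filter (stand-in for a real convolution tool)."""
    pad = k // 2
    p = np.pad(a, [(pad, pad), (pad, pad)] + [(0, 0)] * (a.ndim - 2), mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def synthetic_lum_recombine(rgb, sharpen_amount=1.5):
    """Sketch of the synth-lum workflow: sharpen structure in a synthetic
    luminance, blur (convolve) the color, then recombine."""
    # 1. Synthetic luminance: plain channel average (the weights are a choice).
    lum = rgb.mean(axis=2)

    # 2. Sharpen only the luminance (unsharp mask as a stand-in sharpener).
    sharp_lum = lum + sharpen_amount * (lum - box_blur(lum))

    # 3. Convolve (blur) the color image to suppress chroma noise.
    soft_rgb = box_blur(rgb)

    # 4. Recombine: rescale the softened color so its luminance matches the
    #    sharpened synthetic lum (a simple ratio-based LRGB-style combine).
    soft_lum = soft_rgb.mean(axis=2)
    ratio = sharp_lum / np.maximum(soft_lum, 1e-6)
    return np.clip(soft_rgb * ratio[..., None], 0.0, 1.0)
```

The net effect matches the comment: structure is sharpened through the luminance while the color, and with it the color noise, stays smooth.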

  • @lukomatico · 10 months ago +3

    Hey Sascha! Very interesting video mate, well done! There are so many facets to this particular question that I take my hat off to anyone tackling the subject haha! :-D
    Thanks for the mention by the way! I'm glad my old vid was of some use :-)
    Clear skies!

    • @viewintospace · 10 months ago

      And thanks for inspiring this video with yours! I think it's always great how we can build on each other's work and push the thought process further one video at a time...

    • @Astro_Px · 3 days ago

      Hi guys - yes, two super guys. So for @lukomatico: I just saw one video of yours (LRGB galaxy processing) where you combine all channels and then stretch, whereas Sascha stretches first and then does the combination. Either way, the reason is unclear to me, and when I did the LRGB combination before stretching, adding the lum actually gave me less satisfactory results. Cheers, let's make America a Bortle 1 again, with you leading it 🙂

  • @darkrangersinc · 10 months ago +4

    Great video and explanation! I've never been a huge fan of synthetic L, or of Ha doubling as a luminance layer; I would rather just add more actual data. But I think you did a nice job highlighting when it can make sense to use a luminance layer.

  • @pcboreland1 · 9 months ago +2

    You're on the right track with IR, I think. Perhaps a blend of the two. This is what a number of people doing lucky DSO imaging have been doing for some time.

  • @MrPedalpaddle · 10 months ago +4

    The argument for synthetic luminance with narrowband would come from those who stretch and colorize each channel before combining. E.g., Steve @EnteringintoSpace would then apply convolution to the colored channels to remove noise, then restore the structure lost through the convolution with a synthetic luminance. Not sure offhand if @paulyman also does this.

    • @PaulymanAstro · 10 months ago +2

      I do. Exactly as you described. I do think carefully about how I do it, and whether I do it, as Sascha says, though. Sometimes I use the Ha data; sometimes I create a synthetic lum by integrating multiple channels if I feel they add structure. RGB stretching is, to me, 90% about maintaining good colour contrast; synthetic luminance to me is all about maximising contrast and sharpness as well as highlighting interesting structures.

    • @MrPedalpaddle · 10 months ago +1

      Thanks very much for the comment. I’m finding your tutorials very helpful. I hope you can update your Foraxx script to play with the new PI version. Cheers!

  • @starpartyguy5605 · 9 months ago +1

    For many years, going back to the early 2000s, I shot long lum and short color using very small (compared to today) cameras: ST7, ST8, STF-8300. This year I moved to the QHY268M with 50 mm filters. I'm using a C9.25 on a G11 Gemini 2. I switched from Maxim to NINA along with all the extra stuff to learn, including PixInsight. So, learning curve, culture shock... I got my Optec Lepus f/6.3 focal reducer configured with a special spacer that Optec made for me. Now I shoot 3-minute subs and no luminance. Images seem OK so far. But wow, so much new stuff to learn!

  • @paulbenoit249 · 10 months ago +3

    Great video... this is why I am planning to keep using my color camera to capture the color, while the equivalent mono camera is on its way to shoot luminance only (or Ha in some cases), to try to get the same results as shooting fully mono LRGB, but without the filters, filter wheel, and so on.

  • @pcboreland1 · 9 months ago +3

    As a British English speaker, it is loo-minance. English is so messed up! Great video, awesome!

  • @BruceMallett · 10 months ago +3

    Somewhere around 3:30 you say that luminance provides the detail and contrast. I've read this claim elsewhere, along with the suggestion that it is then sufficient to shoot RGB at a lower resolution (say, binned 2x2) as long as you keep the luminance at full resolution (1x1). Do you do this? It would save a lot of session time, would it not?

    • @viewintospace · 10 months ago

      I don't do this, but yes, it makes sense to me and should work fine. I have also heard of people who shoot the lum with a mono cam and the RGB with an OSC cam. Also a way to save time.
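The intuition behind binning the color is easy to demonstrate numerically (a toy sketch, not camera firmware: software 2x2 binning of a simulated flat frame with Gaussian noise, where the noise level and frame size are arbitrary assumptions):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 binning: average each 2x2 block. Resolution halves,
    and averaging 4 pixels cuts random noise by a factor of ~2."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim to even dims
    a = img[:h, :w]
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

rng = np.random.default_rng(1)
flat = rng.normal(0.5, 0.04, (400, 400))   # flat signal + Gaussian noise
binned = bin2x2(flat)
# binned.std() comes out at roughly half of flat.std()
```

That halved per-pixel noise is why binned RGB can get away with far less exposure time when a full-resolution luminance carries the detail.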

  • @dbakker7219 · 10 months ago +3

    Hi Sascha, very good explanations, thank you! I experimented with IR too, and another reason for less detail in your Andromeda is that IR has a longer wavelength and thus always a lower resolution than visible light in our amateur scopes. Also, using a refractor for IR does not work well (I think you used your FRA 400?), as the glass messes with your IR signal and diminishes its strength. I always use a reflector for IR imaging: no glass, no glass correctors in between. I get more small galaxies/clusters, but the resolution is less.

    • @viewintospace · 10 months ago

      That is really helpful - thanks!!!!

  • @davecurtis8833 · 7 months ago +1

    Great video. Pretty much matches my experience. For a very bright nebula with bright stars, like M42, would you use Lum as well as RGB?

    • @viewintospace · 7 months ago

      If you shoot RGB and not Narrowband, then Lum should be used.

  • @Phenolisothiocyanate · 5 months ago

    One thing that confuses me about luminance is: if the color data is good enough to assign a value to a pixel, then why do you need lum? Conversely, if the color data isn't good enough, won't lum just bring out noisy colors?

    • @viewintospace · 5 months ago +1

      There is no such thing as pure "color data" - it is simply light that passes a filter. What matters is the signal-to-noise ratio: where is there light at all (and how much), and where is it dark. Once I know that, I only need to know how to color it, and that is easier. In very dark areas, even if a color signal is there, the pixel will still be black, so there is no real issue.
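The point about dark areas can be shown with a toy example (purely illustrative; the multiplicative tint is a stand-in for a real LRGB combine, and the scene, colors, and noise level are all made up):

```python
import numpy as np

# Luminance carries the structure; color only tints it.
rng = np.random.default_rng(2)
lum = np.zeros((100, 100))
lum[40:60, 40:60] = 1.0                      # a bright "object" on a black sky

# Very noisy color data: reddish mean with heavy Gaussian noise per channel.
noisy_color = np.clip(rng.normal([0.8, 0.3, 0.3], 0.2, (100, 100, 3)), 0, 1)

# LRGB-style combine as a multiplicative tint of the luminance.
image = lum[..., None] * noisy_color
# Wherever lum is zero, the noisy color is irrelevant: the pixel stays black.
```

So even with a weak, noisy color signal, the dark background stays clean as long as the luminance says there is no light there.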