@@deepskydetail Are you going to do a video on the superiority of Japanese, European and USA-made equipment? Interestingly, this critical topic is not covered in any YouTube videos. You could be the first, but making such a video is "another ball game".
I don't know if I have enough experience with the different scopes from different countries. But it's something I could research. I can't make any promises, but we'll see :)
For a camera with HCG, is the gain at which the HCG kicks in more important than unity gain? For your camera, it looks like the HCG mode trigger is at unity gain, but that isn't the case with all cameras (e.g., ZWO ASI224). For example, if HCG kicks in at 60 and unity gain is 135, and the DR is a full stop higher at gain 60 compared to 135 (with lower read noise and higher full well depth), would this alter your conclusions about the effects of using lower than unity gain?
That's a good question! I'm not exactly sure, to be honest. I would say that if HCG is at 60 and you have more dynamic range there, then the question becomes how much less read noise there is at 135 than at 60. If the difference is really small, then I'd say go with 60. But you could run some tests to check!
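A rough way to frame that trade-off numerically is sketched below (the full-well and read-noise figures are made-up placeholders, not measurements of any particular camera): compare the dynamic range and the stacked noise you would get at each gain setting.

```python
import numpy as np

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops = log2(full well / read noise)."""
    return np.log2(full_well_e / read_noise_e)

def stack_noise_e(n_subs, sky_e_per_sub, read_noise_e):
    """Total noise (electrons) after stacking n_subs: sky shot noise plus one read per sub."""
    return np.sqrt(n_subs * (sky_e_per_sub + read_noise_e**2))

# Hypothetical numbers for two gain settings (placeholders only)
gain_low  = {"name": "gain 60",  "full_well": 40000, "read_noise": 2.0}
gain_high = {"name": "gain 135", "full_well": 20000, "read_noise": 1.5}

for g in (gain_low, gain_high):
    dr = dynamic_range_stops(g["full_well"], g["read_noise"])
    noise = stack_noise_e(n_subs=20, sky_e_per_sub=300, read_noise_e=g["read_noise"])
    print(f'{g["name"]}: DR = {dr:.1f} stops, stacked noise = {noise:.0f} e-')
```

With a bright enough sky, the read-noise difference barely moves the stacked noise, so the extra dynamic range at the lower gain can win; under very dark skies or narrowband, the lower read noise matters more.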
This was very informative! Does it make sense to keep sub exposure length to below the point where “pixel” saturation starts occurring or is it okay to saturate some pixels?
You generally want to keep the sub-exposure length short enough that your stars don't start saturating. It's probably OK for some pixels to saturate occasionally, and hot pixels will always be saturated, but as a rule I'd keep subs short enough that stars stay below saturation. Of course, you could also blend different sub-exposure lengths; some people do that to good effect to capture fainter detail without blowing out the stars.
Thanks. That makes sense. I noticed recently on M57 with the EdgeHD 8 at F10 and the 2600 MC DUO at gain 100, saturation started happening at about 25 seconds to my surprise.
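The "blend different sub-exposure lengths" idea mentioned a couple of replies up can be sketched roughly like this (a minimal, hypothetical example; the array names, scale factor and threshold are assumptions, and dedicated HDR-composition tools handle this far more carefully):

```python
import numpy as np

def blend_exposures(long_stack, short_stack, exp_ratio, sat_level=0.98):
    """
    Replace saturated pixels in a long-exposure stack with data from a
    short-exposure stack, scaled by the exposure-time ratio.
    Both stacks are assumed linear (unstretched), aligned, with values in [0, 1].
    """
    scaled_short = short_stack * exp_ratio      # put short data on the long stack's scale
    saturated = long_stack >= sat_level         # pixels blown out in the long stack
    return np.where(saturated, scaled_short, long_stack)

# Hypothetical usage: 120 s and 15 s stacks, ratio = 120 / 15 = 8
# combined = blend_exposures(stack_120s, stack_15s, exp_ratio=8.0)
```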
Does shooting 15 sec subs not clip the blacks? Also, I mostly shoot gain 0 and longer subs (whatever time it takes to get a good histogram of the ROI). Every time I bump up the gain, I feel I lose too much dynamic range
Good points! Not shooting long enough can make it very hard to bring out the faint stuff, and you can end up clipping the darks! Your strategy of keeping gain low and shooting longer is a good one, I think. Of course, every setup is going to be a bit different, so some people might want to have shorter exposures depending on their aperture, focal length, mount, etc. :)
Hello, I watched your video with great interest and I'm very impressed with the work you've done. However, I'd like to add a few nuances to what you presented. I use short exposures, i.e. exposure times well under 15 seconds, and my results show somewhat different trends. In my opinion, the 15-second unit exposure times you use are not optimized. When using short exposure times, you'd need to increase the gain significantly because read noise, which is important as you've shown, decreases with high gain. It's like running a 100-meter race in shorts and good shoes versus running the same 100-meter race in jeans and boots: despite your best efforts, your final time will be mediocre. It seems to me that, based on your results, the only thing you can really say is that with the 294 camera and your sampling, you'd have better detection with 120-second unit exposures than with 15-second exposures using low gains (121 is low). But you can't generalize from this. What's more, the total exposure time is too short, only 40 minutes. To get good detections with short exposures, you need a long overall exposure time, stacking, and a good final dynamic range that allows images to be sorted, for example. You also need cameras with low read noise, because in Lucky Imaging, read noise dominates over photon noise. In any case, with my own equipment and that of many other Lucky Imaging imagers, I can confirm that we get signal despite very short exposure times.
Thanks for the information! One thing the video also found is that gain 121 was better than gain 10. So maybe even higher gains would do even better (something to test!). I'll have to try more gain settings. Do you mind sharing what camera you are using for lucky imaging?
@@deepskydetail Good morning, at the moment I am using a 533M, a 585 and a 464 from PO. For each camera I use the gain at 70-85% for exposure times of 350 ms to 1 s; beyond that (2-5 s) I lower the gain, because otherwise I don't have enough image for the dynamic range to increase enough during final stacking. When I lower the exposure, my sorting becomes more severe. I capture in SER 8-bit when the gain is high, otherwise in 16-bit.
At 9:10 you explain how you divided the core area into 121 samples. What software are you using to measure SNR on those samples and how does it make the measurement?
I used R. To measure the SNR, I took the average brightness of the area and divided it by the cumulative standard deviation of the standard deviations as I stacked the images one at a time. I think I go over it here in this video: ua-cam.com/video/eRKk3lNyXO8/v-deo.html (about the 6 minute mark; although more context might be needed from about the 3-4 minute mark).
@@deepskydetail Thanks. I should have found this on my own but am new to your channel and haven't looked at all you've done. Excellent work and contributions to the astro community.
@@fugaterq thank you! I should have mentioned how I did the SNR calculations in this video (or at least point people to how it was done), but I wasn't expecting so many new viewers to be honest (i.e., I thought most of the people who would see this video had already seen the others)!
To clarify, the title answers the thumbnail. Just wondering if you've ever done astrophotography? Maybe I should have gone over stacking and sub frames a bit more, since both might not make sense to non-astrophotographers?
@@deepskydetail The thumbnail was broken, and on a lot of devices the video starts autoplaying. I hate seeing videos with nonsensical titles. Yes, I do, but I'm at a bit of a standstill with my current gear. My lightbucket is: astronomers/herotheelf
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/DeepSkyDetail/
You’ll also get 20% off an annual premium subscription.
The presentation by Tim Hutchinson is here (I really liked it!): ua-cam.com/video/1PUTWfWgD0g/v-deo.html
I'm really looking forward to that long exposure vs sufficient exposure experiment. The main drawback I see to very long exposures is you lose more exposure time per failed exposure. Given enough time, both strategies work fine, but we don't always have a lot of time. Summer nights are short and the clouds are relentless!
Yeah, I'm worried about that too. Guiding gremlins can ruin everything!
Very long exposures are problematic simply because it's more likely that you're going to have a gust of wind or a significant mount hiccup, or a cloud coming through or any number of other things that completely ruin the sub. Reality is going to intrude at some point.
Yeah, I'm actually really worried about that for the next video in this series, tbh! That video might come out, and I'll be like "well, I messed up!" lol
Hard to believe this topic is debated so heavily; it's extremely easy to see in the final results that longer is always better. AP is just full of myths and legends… Thank you for putting in the work…
All of this tracks - and in Tokyo, because of how strong the LP (and thus its shot noise) is, I get far less of a difference than you did (by the way, I find it a bit easier to image right before and right after the meridian with different settings, although your loop method works too). I would recommend repeating that experiment in narrowband, which nullifies a lot of the impact of light pollution.
There is one thing I don't really like in your method: reliance on the measured SNR of an image. While that sounds fair and quantifiable on the surface... when I measure the SNR of a target that is going through thin clouds, I get a better SNR reading than when the target doesn't go through clouds. I've seen far too much funky stuff with SNR calculation based only on the subframe to trust it. This is why, when I weight subframes, I avoid SNR measurements at all costs and rely almost exclusively on the number of stars detected with good parameters (poor tracking, clouds, lower altitude, etc. will all contribute to lowering that number).
I'm of the school of "the proof is in the pudding". While it is subjective, after I process an image to the best of my ability, which looks better than the other? In my opinion that is the only thing that matters :) but of course it is far less quantifiable than measures like calculated SNR - which I unfortunately don't trust based on my own empirical evidence :)
Keep going, this is fascinating!
Thanks for the comment, and there's good information there! How do you measure SNR, out of curiosity? Pixinsight?
It is nice to see some of the heavy hitters, such as Cuiv chiming in! It means we have an active community with a cross pollination of ideas from different players. This is good for science, technology AND astrophotography! Love it!
@@deepskydetail Yep, basically PixInsight - your method looks more robust but I haven't looked into it enough to know :)
Yeah, this isn't how I would've gone about measuring SNR. Trying to measure SNR off of a galaxy is not straightforward. It's an extended object, and that complicates things. I'm not totally sure how he did the SNR calculation.
Instead, you should do photometry on all of the stars in the image. When you convert your instrumental magnitudes to the AB magnitude system by deriving a zeropoint from a star catalog, any issues with clouds or conditions should be reflected in your uncertainties. Then, you'd compare the SNR of the magnitudes you calculated for the stars in the image for each subexposure. That would truly tell you. Just doing a raw SNR of the image leaves you open to all sorts of degeneracies I think.
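For illustration, here is a minimal sketch of the star-photometry approach suggested above (assumptions: fluxes are already background-subtracted and matched to catalog stars, and no colour terms are fitted):

```python
import numpy as np

def zeropoint(instr_flux, catalog_mag):
    """Median zeropoint, assuming catalog_mag = -2.5*log10(flux) + ZP."""
    return np.median(catalog_mag + 2.5 * np.log10(instr_flux))

def calibrated_mags(instr_flux, flux_err, zp):
    """Magnitudes and uncertainties from fluxes and a zeropoint."""
    mag = zp - 2.5 * np.log10(instr_flux)
    mag_err = 1.0857 * flux_err / instr_flux   # 2.5 / ln(10) ~ 1.0857
    return mag, mag_err

# Hypothetical example: three matched stars (counts are background-subtracted)
flux     = np.array([12000., 3500., 900.])
flux_err = np.array([  120.,   70.,  40.])
cat_mag  = np.array([ 14.2,  15.5,  17.0])

zp = zeropoint(flux, cat_mag)
mag, mag_err = calibrated_mags(flux, flux_err, zp)
print("zeropoint:", round(zp, 2))
print("per-star SNR:", np.round(flux / flux_err, 1))  # what you'd compare between sub lengths
```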
@@pumpedupchips I've got videos on how I calculate the SNR on a stack of subs. The TL;DR is this: the signal is the average brightness of the pixels at each point in the stack (minus the skyglow). The noise is the square root of the variance in the brightness as you stack. It's not based on a PixInsight script that measures the whole image, but is a true SNR measurement based on the actual data for specific pixels. The final numbers used in the statistics are the final SNR measurement of the last stacked sub frame. Counterbalancing the shooting (i.e., alternating in this case) for each type of sub frame (i.e., gain and exposure length) helps eliminate any of the potential confounds.
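As a rough reconstruction of the idea described in that reply (not the author's actual R code; the array layout and sky value are assumptions), a per-pixel stack SNR could be computed like this:

```python
import numpy as np

def stack_snr(subs, sky_level):
    """
    subs: array of shape (n_subs, height, width) of calibrated, aligned subs.
    Signal = mean brightness of each pixel across the stack, minus the sky glow.
    Noise  = standard deviation of that pixel's values across the stack.
    (Divide the noise by sqrt(n_subs) instead if you want the SNR of the
     averaged stack rather than of a single sub.)
    """
    signal = subs.mean(axis=0) - sky_level
    noise = subs.std(axis=0, ddof=1)
    return signal / noise

# Hypothetical usage on a small cutout around the galaxy core:
# snr_map = stack_snr(aligned_subs[:, 500:600, 700:800], sky_level=850.0)
# print(np.median(snr_map))
```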
This has been argued and debated for so long and by so many people. The crux of it is, TOTAL integration time is the key.
Total integration time is, I think, one of the most (if not the most) important things! That being said, other things still matter ;)
This is very true, but how the total time is obtained is also important. There will be a point when read noise always overwhelms the signal from a faint object and no matter how many subs are taken, the object will remain buried in the noise. So one does at least have to have longer subs to mitigate this and ensure sufficient photons from the said object are collected; which is one point of this video's argument I guess.
@@deepskydetail I think dark skies trump total integration time!
@mrrockenrock Oh, I agree! I was assuming that the integration time/sub-exp time argument was keeping the skies you're imaging under constant :) But imaging under dark skies is something that you can't beat!
Love the deep dives you get into for this stuff! A wealth of extremely useful information!
Thanks! Glad you enjoyed it!
It is nice to see some of the heavy hitters, such as yourself and Cuiv chiming in! It makes for a more vibrant community!
If I hadn't fallen off the playground ladder and landed on my head on asphalt when I was a child maybe I could've been as smart. That was about 66 years ago and last October I took up this hobby. I've been using 3 minute exposures at gain 90 on my C8 SCT with OAG, 7.1 focal reducer and ASI071MC Pro. My mount is an AVX and doesn't guide the best so I don't bother with longer exposures. Thank you for this video.
If it works, it works! There's so much more to exposure length than getting perfect SNR!
Great video and I love the 2x2 comparisons. And as you mention, it's always good to see the numbers crunched, rather than 'just' looking at the images for comparison. Re even longer exposures, the problem with a blanket 'longer is better' is that so many other variables come into play with longer exposures that are not easily measured. For me, with all the settings, it's finding a sweet spot that suits the current conditions, target, gear etc., hitting the top or bottom of a bell curve with each variable if you like. Videos like this one assist us in determining those sweet spots 🙂
Thank you! I agree 100%! There are so many things to consider when figuring out the correct exposure time!
I'm pretty new in astrophotography. With my 6" Celestron Evolution (alt-az mount), the longest sub recommended is 30s to avoid star movement at the corners. A couple nights ago, I got 4 hours on M101 with 30s exposures at approx. unity gain. The moon was bright and humidity was high, so results were just OK using the ASIair Pro stacking. This is a challenging target for my setup and experience.
Thanks for doing this analysis. I see from the comments below that there is still a lot of debate over the best settings. I will try longer subs.
Nice! I like Celestron SCTs. With an alt az mount, it is difficult to do longer subs without field rotation. So, even though sub length matters, it's not the only thing!
At a gain of 10 your read noise is likely higher (it is on my camera), so for equivalent exposure lengths the SNR will likely be better with the lower read noise. As your camera's graph shows, you want to pick a gain that gives you the lowest read noise and best dynamic range. Ten 120 sec subs will have 10 reads and hence more impact from read noise; one 1200 sec sub will have 1 read and hence less overall read noise. Longer subs are better, but longer subs can have tracking errors, wind, satellites, etc. that can cause the sub to be discarded.
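A back-of-the-envelope illustration of that point about the number of reads (all rates and noise values below are made-up placeholders, not anyone's measured data):

```python
import numpy as np

def stacked_snr(total_time_s, sub_time_s, obj_rate, sky_rate, read_noise):
    """SNR of the full stack for a faint object, per pixel, in electrons.
    obj_rate / sky_rate are e-/s; read_noise is e- RMS added once per sub."""
    n_subs = total_time_s / sub_time_s
    signal = obj_rate * total_time_s
    noise = np.sqrt((obj_rate + sky_rate) * total_time_s + n_subs * read_noise**2)
    return signal / noise

# Hypothetical: faint object 0.05 e-/s, sky 1 e-/s, read noise 3 e-
for sub in (120, 1200):
    print(sub, "s subs:", round(stacked_snr(1200, sub, 0.05, 1.0, 3.0), 2))
# Ten 120 s reads add 10 * 9 e-^2 of read-noise variance; one 1200 s read adds only 9 e-^2,
# so the single long sub comes out slightly ahead for the same total time.
```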
I realize this is an older video, but it just showed up on my YT recommendations. As someone who owns the 294MM Pro, I'm quite familiar with it. The reason why you see a better SNR at unity gain isn't because you've reached unity gain, but rather that's where the high conversion gain kicks in, in the 2x2 bin mode. As you showed in the graphs, you can clearly see the read noise drop significantly at that point. That's what provides you with the higher SNR. It's also why your dynamic range shoots back up to 13 stops. Again, it's not a function of the gain setting. Just increasing gain on a camera does not increase SNR. Had you done the same tests using the 1x1 bin mode, where the HCG is always on, you wouldn't have seen the SNR increase moving from gain 10 to gain 121. By the way, in the 1x1 mode, unity gain is at 108. Nothing magical about unity gain - it's just the point at which the e-/ADU = 1.
Another thing I'd like to point out is that you can't just say "exposing for 10 minutes is better than exposing for 2 because SNR is higher" as was implied in the clip from TAIC. The pixels have a limit to how much light they can capture while still maintaining integrity. Expose too long and you're clipping, thus losing any information. This is easy to test, too. Let's assume you take a flat for your luminance filter and get a mid-range average ADU of 32k at 3 seconds. That kind of flat is useful for calibrating your lights as it will represent the optical system well. Now, take a flat at 3 minutes. Every pixel will be completely saturated. The flat will be utterly useless as no data can be retrieved from it about your optical system. Sure, you've got a pretty high SNR, but it has zero value.
Hi thanks for the comment! I think in the video, I point out that the 294 is different than other cameras at 5:00, specifically stating that there are two gain modes at about 5:04. My point was that at unity gain it switches to the other gain mode. Sorry this wasn't sufficiently clear.
About the clip from TAIC, I agree with you that exposing longer might not get higher SNR for a multitude of reasons! Both because of clipping and diminishing returns from overcoming read noise. I've got a video coming up on it.
@@deepskydetail the 294 is unique in that upon its release, ZWO binned the pixels 2x2. If you looked at NINA (or SharpCap or other capture software), you would have seen it listed as bin 1 and 4.63 micron pixels. It wasn't until some time later that ZWO "unlocked" the native 2.3 micron pixels via firmware. Suddenly, what used to be the default mode was now accessible only if you chose to 2x2 bin in your capture software. Caused a bit of confusion :).
I would think that any camera sensor with a dual gain option would behave in the same fashion. ZWO tends to make the choice for you - at unity gain, kick in the HCG mode (the 294 is special since its bin 1x1 behavior has a different unity gain and HCG value than its bin 2x2 operation). Other manufacturers let you choose. For example, my Player One Poseidon lets me decide. Anyway, my point was that you don't experience a better SNR simply because you've changed the gain setting. It's all dependent on whether the gain setting you chose happened to kick in something like HCG, which drastically lowers the read noise.
Looking forward to the new video!
You are missing a few basic notions about the detection of astronomical objects. Way back in the time of photographic plates, A.G. Millikan divided detections into two classes. Class 1 is basically when the sky background is not relevant (i.e. spectroscopy and interference-filter photography), and class 2 is typically photography limited by the sky background. This still applies to digital imaging.
Second, I find it weird to measure the SNR on the galaxy itself. The sky background is the limiting factor in detecting the faintest stars. If the sky background is sufficiently high, you can basically omit the read noise of the camera (especially with today's cameras). If you have a pixel size of 1x1 arcsecond and the sky background is magnitude 21 per square arcsecond, then a mag 21 star (if your stellar image size is 1 arcsecond, just as an example) will double the signal on that pixel. Detecting a magnitude 26 star implies you need a very high SNR on the sky background in order to see a 1% change in the signal (because yes, the sky background is the signal). These are just numbers, but amateurs usually have larger pixel sizes, larger image sizes and a much higher sky background. So normally a single measurement on a large area of sky background (without any star) is the way to go. If you have 100x100 pixels, that's 10,000 points and you already get pretty good statistics.
I may have missed the part where you talk about your preprocessing: how many darks, how many flats, etc. And in the end, for detecting faint stars, much shorter exposure times, processed exactly the way one processes planetary images, would allow you to obtain sharper stars and therefore much better detection. With class 1 detection, it's yet another set of tests, and the results will mostly depend on the camera itself. Just saying.
Good points! There is a lot to think about other than SNR or the DSO, like the sky background. But, I still think the exercise is useful.
In terms of post processing, each stack had 20 flats, 20 dark flats, 20 darks applied.
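A quick worked example of the background-limited detection point raised in the comment above, using the same magnitude numbers (the 5-sigma detection threshold is an assumption added for illustration):

```python
# Flux ratio between a mag 26 star and a mag 21 /arcsec^2 sky background
# falling on the same 1x1 arcsec pixel:
ratio = 10 ** (-0.4 * (26 - 21))        # = 0.01, i.e. the star adds ~1% to that pixel

# To see a 1% bump at 5 sigma, the background itself needs SNR ~ 5 / 0.01 = 500,
# which (shot-noise limited) means ~500**2 = 250,000 sky electrons in that pixel.
sigma_needed = 5
sky_electrons_needed = (sigma_needed / ratio) ** 2
print(ratio, sky_electrons_needed)      # 0.01, 250000.0
```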
Good to see that your model and experiments line up with my own anecdotal observations.
Thanks! I'm glad things are consistent! :)
The data I've seen for my setup is that I need 1 hour 6 minutes of short exposures (8 seconds) to match 1 hour of long exposures (3 minutes). To me that is inconsequential. It is also a good thing, since it allows my astrophotography to be possible given I'm using a GoTo Dob with no guiding.
Very nice! Everyone's setup is different, and a dob with a decent size mirror and good F-ratio should collect light quite a bit faster I would think!
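The kind of calculation behind a figure like "1 h 6 min of 8 s subs vs 1 h of 3 min subs" can be sketched as follows (the read-noise and sky-rate values are placeholders, not the commenter's actual numbers):

```python
def extra_time_factor(read_noise, sky_rate, t_short, t_long):
    """How much longer you must image with short subs to match the SNR of long subs
    (faint-target approximation: noise ~ sky shot noise + one read per sub)."""
    var_per_sec_short = sky_rate + read_noise**2 / t_short
    var_per_sec_long  = sky_rate + read_noise**2 / t_long
    return var_per_sec_short / var_per_sec_long

# Hypothetical: 1.5 e- read noise, 3 e-/s/pixel sky, 8 s vs 180 s subs
factor = extra_time_factor(read_noise=1.5, sky_rate=3.0, t_short=8, t_long=180)
print(round(factor, 3), "x ->", round(60 * factor, 1), "minutes to match 60 minutes of long subs")
```

With these placeholder numbers the factor comes out around 1.09, i.e. roughly 65 minutes of 8 s subs to match 60 minutes of 180 s subs, which is in the same ballpark as the commenter's figure.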
Would be interesting to see the experiment include super short exposures, like sub-second ones. And super high gain, max gain or near max. Like what the result would be if you essentially filmed the DSO at max gain for an hour. I don't think the result would be good, but I'd like to see it. Might have to try it myself.
That would be interesting! If you do it, let me know!
Great video! Thanks for the scientific approach to tackling this. One thing I wasn't clear on was whether you ran the SNR statistics on the autostretched images or the raw stacks? Which makes me wonder more questions... If autostretched images were compared, does the autostretch apply the exact same stretch to each image? Do the statistics change when comparing raw, autostretched or same-stretched stacks? Thanks for the great work!
Thanks! The stats were compared on the raw stacks to avoid all the great questions you brought up ;)
Would the stats change if I compared the autostretched images? I honestly don't know. But they might. Running the stats on the raw images seemed like the best option. Plus, it was easier :)
@@deepskydetail I swear I replied to this last night, but I have no idea where it went.
Raw stack comparisons sound perfect. Excellent work!
I've done tons of 15 second exposures using a noisy DSLR (Canon t2i). Yes, I got an image, but every image was faint, with the full amount of camera noise. So I figured doing 3 minute exposures should give me way more signal, but still only 1 image worth of noise, and not even that much noise, because over the course of 3 minutes a lot of pixels that would have been noise were hit by photons, giving me less noise. I can even push the contrast of a single 3 minute image and get it to almost look like the desired result. That is impossible with 15 second images (too much noise, too little signal).
Yeah, I'm sure with a t2i a 15 second exposure would be pretty noisy! Is the camera modded?
Actually, before you revealed it, I immediately noticed that the first of four images was the best, and I was convinced that must be the one with the longest subexp and gain. And I was right.
Very cool! I wasn't sure how well YT compression would make things look, and I thought the three best all looked really similar. But it's good you could see the difference!
Many are less interested in “the perfect” picture vs the “best reasonable” that allows us to then go image other stuff. A law of diminishing returns. So while total exposure time might be king, a rule that could establish “getting the job done” would be cool. I wonder at what point does an image have so much noise that processing artefacts are inevitable. I am always amazed at what software tools can do for short (
I like your data driven approach - even though my statistics skills are abysmal. In the field, I usually use 180 - 300 seconds for DSO's. This helps mitigate the flub factor of losing 10 minutes of integration time after bumping into your tripod.
The other reason is I really don't know how much well depth I use up with a given exposure - so I try to be conservative.
Thanks for your video. It got me thinking!
@chrislee8886 Great comment! I wonder that too! I'm sure there are calculators out there that can help out with that. Tim's presentation gets into a lot of it. The "best reasonable" seems like a great video idea!
@ekalbkr thanks for the comment! I use around that range too. It feels really bad to throw out a long sub, doesn't it? It just goes to show you that there is so much that goes into finding the perfect sub length!
Another interesting comparison: unity gain at 300s vs gain 0 at 900s. Thanks for the clarification, as too many people say only total integration time matters.
Yeah, that would be interesting! Thanks :)
To really see the benefit of really long exposures, you need dark skies and a target with dust or IFN. If using NB filters then a target with some faint Ha or Oiii will show why and when you want long exposures.
I just discovered your channel today, and looking at the play lists I see lots of videos I want to watch! As others have said below, I appreciate your scientific approach as well as inclusion of actual images.😊😊😊
I'm glad you enjoyed the video! Thank you for the feedback!
Can you imagine how many satellite artifacts you would get with a 10 minute sub? Egads.
Yeah, that is a problem!
Thanks! Great stuff: I like numbers and I like comparisons. Subscribed and I will be watching many more of your videos. Thanks for all the work you put into this one!
Thank you so much! I'm glad you enjoyed it :)
I find that doing longer exposures than 30 seconds, where I live and with my equipment, overexposes the stars, and while I might get better SNR for the deep sky object, I lose star color. Of course that can be solved by combining images, but I'm lazy! Really enjoyed the video, looking forward to the next one.
Thanks!! Yeah the stars getting overexposed is a problem. I think one of the solutions Tim talked about was lowering the gain to increase sub length. But, I'm not sure exactly how that would work in practice (I think the idea is good, I would like to experiment with it though!)
Thanks! While my higher math skills are somewhat rudimentary (maybe I could benefit from Brilliant😊), I understand enough to appreciate the approach. This and another recent video got me to subscribe. Real numbers matter - even imaginary ones!
Thanks! Glad you've come aboard :)
Nice video! One thing I'd add is that perhaps you should've calculated the SNR using an ellipse, adjusting a/b to fit the shape of the galaxy; that way you could analyze the SNR at different radii to see the effect it would have on bright vs faint details.
That is a great idea! I'll keep that in mind next time :)
Excellent video. Looking forward to seeing how the super long subs compare.
Thank you! Hopefully I can make it soon. It's been cloudy for days here, though. Hopefully it'll clear up during the new moon.
My prediction is that it’ll be virtually the same as the 120s subs, but depending on the target, there might be some blown out spots that are harder to control. Really curious to see your results.
Wow, I didn't think I would see JASP being used in an astrophoto video :) Thanks for a great video.
Thanks! It's good to know someone recognized the program!
I think there are three important breaking points when it comes to exposure length.
1. Break even with read noise (as discussed).
2. Short enough exposures to achieve lucky images (1 s or less, based on an astrobiscuit video improving down to 10 ms).
3. The point when an important part of the image gets overexposed/saturated.
From 2 and 3 it's clear that it's not only a matter of equipment performance and natural phenomena, but also a subjective question of how many blown out star cores are you OK with? Are you OK with dealing with crazy amounts of data and read noise to achieve ultimate resolution with lucky imaging?
So, yeah, I very much disagree with "great images can only come from long subs [far beyond the read noise break-even]".
Definitely great points! Tim also pointed out those issues in his presentation too. There is a lot to think about when figuring out sub length!
Lucky imaging can sometimes yield a better result, but no matter what the math says, we are always at the mercy of the atmosphere and space weather activity, as well as our local weather (how hot, how cold, is there a breeze, etc.), for our SNR. The quality of optics, including filters, matters as well. The focal ratio also plays a huge role in exposure time. Imaging with a RASA or Hyperstar f/1.9-f/2.4 system, the longest exposure you can get away with is 180s, while somewhere around f/5 to f/10 the usable exposure length depends on aperture. I have played around with exposure times over the years. I have a similar camera but with the IMX571 mono, and because of my Bortle 6/7 light pollution I use 120s for L, 180s for RGB, and 180-300s for narrowband (Ha, OIII, SII), all using the high dynamic range mode you spoke about.
Yes, there are so many variables to keep track of!
@@deepskydetail I just spent 40+ hours imaging; the longest project I have done is 144 hours. Then I check the data and cull accordingly, usually keeping the best HFR/FWHM, unless it's summer time, in which case as long as the stars are round and there are no high clouds the data gets stacked. In summer the HFR always goes south (3.5-5), whereas if it's clear in the winter I can get down to an HFR of 2-3.
I liked the video, but I wonder if a stack of 40 minutes is enough for this kind of comparison. Would love to see a longer version of this.
Yeah, I think it's a good idea to try this with like 5 hours of data each. Maybe I'll do that with a narrowband target so I don't have to worry about the moon as much ;)
Dude, thanks so much. I always wondered about this, and you just answered the questions I had. That would be a good experiment to run for long vs. shorter exposures, but I also think you need to add cooled vs. normal camera, in case thermal noise impacts long exposures.
Thanks! Yes, cooled vs. non-cooled is a good idea too, and it will be especially helpful for those with DSLRs.
There’s an interesting video on the channel of @AstroFarsography which is a recording of a presentation of Robin Glover, the author of Sharpcap, basically with the same conclusion. He has some interesting numbers, also with the effects of filters. For this video, it’s a good one! I like the more scientific approach (and the time and effort you put into it). Thanks!
Thank you! That is a great video with really good information. :)
Really well done and interesting study, Mark! I’ve mainly just looked at SNR and have done my own testing of short vs long exposures. In my experience with the same total integration time I’ve found better results with longer exposures. With that said, 300s are the longest I take. That’s predominantly due to not having .3” guiding, generally more like .7” and I feared that difference would surface with longer exposures. I should try it though. I’m doing a Hypertune this week so after my initial testing I might give some 10min subs a try and actually see how it works out for me.
Cheers!
Doug
Thanks Doug! It's going to be tough for me to get very long exposures too! I'm really worried about guiding. Maybe we should team up!
@@deepskydetail yeah, of course! I’ll let you know how the results of Hypertune go. I’m planning on getting on a target this next week and I’ll try some 10min subs.
@@AstroAF I hope the hypertune goes well! Let me know how it goes! I'll try too, but it has been very cloudy here lately.
What prior distribution did you use for your Bayesian model? Also, was there the same number of replicates for each of the combinations, or was the total for the stacked image kept the same? The latter may impact the precision of the estimate of SNR for the 120s exposure combinations, as you would only have 20 reps per combo vs 160 per 15s combo. Thank you for your video. Steve
Good questions! Based on what I understand from JASP's documentation, the underlying distribution is just a normal distribution. Other distributions (e.g., ex-Gaussian or gamma) might be better, but looking at the overall histograms, a normal is a decent approximation (I'm also a bit lazy and didn't want to code things in JAGS or Stan assuming different priors). There were the same number of samples/replicates for each combination.
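For readers who want to reproduce something similar outside JASP, here is a minimal sketch of the comparison under a normal likelihood (not JASP's internals; it assumes a shared variance and flat priors on the means, in which case the posterior for the difference in mean SNR is a shifted, scaled Student-t):

```python
import numpy as np
from scipy import stats

def posterior_mean_difference(x, y, cred=0.95):
    """
    Posterior for mu_x - mu_y under a normal likelihood with a common variance,
    flat priors on the means and a Jeffreys prior on the variance:
    (mu_x - mu_y) ~ t_{nx+ny-2}(xbar - ybar, s_p * sqrt(1/nx + 1/ny)).
    Returns the posterior mean difference and a credible interval.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    df = nx + ny - 2
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / df)
    loc = x.mean() - y.mean()
    scale = sp * np.sqrt(1 / nx + 1 / ny)
    lo, hi = stats.t.interval(cred, df, loc=loc, scale=scale)
    return loc, (lo, hi)

# Hypothetical SNR samples for two exposure settings (made-up numbers)
snr_120s = np.array([8.1, 7.9, 8.4, 8.0, 8.3])
snr_15s  = np.array([6.9, 7.2, 7.0, 7.1, 6.8])
print(posterior_mean_difference(snr_120s, snr_15s))
```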
Maybe, for your personal "best exposure," you want to measure which exposure time starts giving you much less of a gain in image quality: the shortest exposure that gets you past a certain SNR, so to speak. I.e., if you shoot 120sec subs with an SNR of 8 and
240sec subs with an SNR of 9, it isn't worth it to double your sub length for that little benefit. Maybe there will be a video on finding "your personal setup's" best exposure time some day? Just to show there is a point of diminishing returns in quality as single-sub exposure time increases. 😊
Apart from this: good information in this video, with fundamental knowledge every astrophotographer should be aware of ❤
Thanks for the comment! Those are great ideas :)
Great video. Do you calibrate your data sets to actual photoelectrons when evaluating SNR? For example, detecting 100 photons would have an expected standard deviation of 10; 10,000 would be 100, etc. To make comparisons you have to normalize the ADU values to photons. You likely did this, but it deserves mention.
Thanks again for this thought provoking video!
Thanks for the comment! Yes, I calibrated each sub frame when evaluating the SNR! Great question :)
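For readers following along, here is what that ADU-to-electron conversion looks like in practice. The conversion factor and signal level below are illustrative assumptions, not values from the video:

```r
# Illustrative only: convert calibrated ADU to photoelectrons and check expected shot noise.
e_per_adu <- 0.25                  # electrons per ADU at the chosen gain (assumed value)
adu       <- 40000                 # mean signal in a patch, in ADU, after calibration (assumed)
electrons <- adu * e_per_adu       # detected photoelectrons
shot_sd   <- sqrt(electrons)       # Poisson statistics: sd equals the square root of the count
shot_sd / electrons                # fractional noise; e.g. 100 e- -> 10%, 10,000 e- -> 1%
```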
You’re my hero ❤
Thank you :)
Every video you do I end up rethinking my life 😀
I hope in a good way! 😉
@@deepskydetail always
As always: just a brilliant video! Thank you! I enjoy every one of your videos. It is exactly what I thought I would like to do once I have taken enough pictures to satisfy my “need-for-pictures” 😂 thank you again!
Thank you!! If you do it yourself one day, let me know what the results are! I'd like to see if they replicate :)
If you go down to the point of capturing individual photons (and any false detections that will inherently be present), and then stack the humongous number of samples that would produce, would the image converge towards better quality as the number increases? Or, because you're essentially going to the minimum exposure time possible, would the results still be inferior?
Great question! I'll answer that by first talking about faint signal. If the signal is too close to the read noise or too faint compared to light pollution, then it will take waaaaay too long to get decent SNR (maybe thousands of hours or more). Theoretically, SNR can keep increasing, but it takes longer and longer to get noticeable improvement.
So (and I might be wrong, of course), if exposures are so short that you're capturing 1 photon at a time, you could theoretically stack a humongous number of subs and it would eventually (and I do mean eventually, as in a very long time) get an OK image. But you could get good SNR with longer subs much, much, much more quickly!
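To make that intuition concrete, here is a toy model with made-up numbers (signal rate, sky rate, and read noise are all assumptions): the same one hour of total integration split into subs of different lengths, where read noise is paid once per sub.

```r
# Toy model: SNR = S*t / sqrt((S + B)*t + n_subs * RN^2) for fixed total time t.
signal_rate <- 0.2    # e-/s/pixel from the target (assumed)
sky_rate    <- 1.0    # e-/s/pixel from light pollution (assumed)
read_noise  <- 2.0    # e- RMS added once per sub (assumed)
total_time  <- 3600   # one hour total, in seconds

snr_for <- function(sub_len) {
  n_subs <- total_time / sub_len
  signal <- signal_rate * total_time
  noise  <- sqrt((signal_rate + sky_rate) * total_time + n_subs * read_noise^2)
  signal / noise
}

round(sapply(c(0.1, 1, 15, 120, 600), snr_for), 2)
# Very short subs pile up read noise; SNR climbs toward a ceiling as the subs get longer.
```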
You have one thing I can't buy. Patience! Thanks for the vid m8. 🙂
Lol! Maybe I just have a penchant for suffering :)
@@deepskydetail 🤔😁
One issue with your approach is the use of a relatively high signal area in the central region. The devil is in the shadows, so to speak. Target brightness and dynamic range. I would also prefer to see more sample points than 2x2 before arriving at any firm conclusions. Good subject to deep dive though!
Yes!! That's part of what I want to test next! My previous videos on filters and Bortle zones have taught me that it's the shadows/low-SNR areas that are most affected. Stay tuned :)
It doesn’t matter because of PixInsight. Are you ready to lose a 10-20 min sub?
I think, in this world, resolution and color rendition are what matter; the rest can be fixed in post-processing. It would be great to make a video on why Japanese, European, and USA-made scopes are consistently super quality.
Good points! A lot can be done in post processing! Thanks for the comment!
@@deepskydetail Are you going to do a video on the superiority of Japanese, European, and USA-made equipment? Interestingly, this critical topic is not covered in any UA-cam videos. You could be the first, but making such a video is “another ball game”.
I don't know if I have enough experience with the different scopes from different countries. But it's something I could research. I can't make any promises, but we'll see :)
Great Video!
What stat package are you using?
Thanks! A combination of R with tidyverse and JASP.
@@deepskydetail thanks, was looking for a replacement for minitab.
For a camera with HGC, is the gain at which the HGC kicks in more important than unity gain? For your camera, it looks like the HGC mode trigger is at unity gain, but that isn't the case with all cameras (e.g., ZWO ASI224). For example, if HGC kicks in at 60 and unity gain is 135, and the DR is a full stop higher at the 60 gain compared to 135 (with lower read noise and higher full well depth), would this alter your conclusions about the effects of using lower than unity gain?
That's a good question! I'm not exactly sure to be honest. I would say that if HGC is at 60, and you have more dynamic range, then the question becomes how much less read noise is there at 135 than 60? If the difference is really small, then I'd say go with 60. But you could run some tests to check!
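A quick back-of-the-envelope way to compare the two gain settings is dynamic range in stops, log2(full well / read noise). The numbers below are assumptions for illustration, not measurements of any particular camera:

```r
# Illustrative comparison of dynamic range at two gain settings (all inputs assumed).
dr_stops <- function(full_well_e, read_noise_e) log2(full_well_e / read_noise_e)

dr_stops(full_well_e = 40000, read_noise_e = 3.5)  # e.g. the gain where HGC kicks in
dr_stops(full_well_e = 16000, read_noise_e = 3.0)  # e.g. unity gain, smaller well depth
# If read noise barely drops at the higher gain, the extra stop of DR at the lower gain wins.
```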
This was very informative! Does it make sense to keep sub exposure length to below the point where “pixel” saturation starts occurring or is it okay to saturate some pixels?
You generally want to keep the sub-exposure length low enough so that your stars don't start saturating. I think it's probably ok for some pixels to be saturated sometimes, and hot pixels will always be saturated. But generally I'd keep sub length low enough so they don't saturate. Of course, you could blend in different sub-exposure lengths. Some people do it with good effect to help get fainter detail without blowing out the stars.
Thanks. That makes sense. I noticed recently on M57 with the EdgeHD 8 at F10 and the 2600 MC DUO at gain 100, saturation started happening at about 25 seconds to my surprise.
Yep! Those big mirrors are deceptive! They collect light faster than you think even though their focal ratio is pretty long.
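For anyone wanting a rough sanity check of their own saturation point, the time to fill the well is roughly the full well depth divided by the per-pixel electron rate from the star core. Both numbers below are made up for illustration:

```r
# Rough estimate (all inputs assumed): seconds until a bright star core saturates.
sat_time <- function(full_well_e, rate_e_per_s) full_well_e / rate_e_per_s
sat_time(full_well_e = 50000, rate_e_per_s = 2000)   # 25 s with these made-up numbers
```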
I missed a point. Did you take the experiment's measurements on the subframes or the integrated images?
I found the answer at just past 9:12. You took the measurements on the stacked images.
Thanks for the comment! I did use the sub-frames to calculate the SNR of the final stacked images. So it's actually kind of both!
Impressive
Thank you!
Does shooting 15sec subs not clip the blacks?
Also, I mostly shoot at gain 0 with longer subs (whatever time gets a good histogram for the region of interest). Every time I bump up the gain, I feel I lose too much dynamic range.
Good points! Not shooting long enough can make it very hard to bring out the faint stuff, and you can end up clipping the blacks! Your strategy of keeping gain low and shooting longer is a good one, I think.
Of course every setup is going to be a bit different. So some people might want to have shorter exposures depending on their aperture, focal length, mount etc :)
I wonder what effect the Bortle scale has on all of this...
I wonder too! So many variables to test!
Hello,
I watched your video with great interest and I'm very impressed with the work you've done. However, I'd like to add a few nuances to what you presented. I use short exposures, i.e. exposure times well under 15 seconds, and my results show somewhat different trends.
In my opinion, the 15-second unit exposure times you use are not optimized. When using short exposure times, you'd need to increase the gain significantly because read noise, which is important as you've shown, decreases with high gain. It's like running a 100-meter race in shorts and good shoes versus running the same 100-meter race in jeans and boots: despite your best efforts, your final time will be mediocre.
It seems to me that, based on your results, the only thing you can really say is that with the 294 camera and your sampling, you'd have better detection with 120-second unit exposures than with 15-second exposures using low gains (121 is low). But you can't generalize from this.
What's more, the total exposure time is too short, only 40 minutes. To get good detections with short exposures, you need a long overall exposure time, stacking, and a good final dynamic range that allows images to be sorted, for example. You also need cameras with low read noise, because in Lucky Imaging, read noise dominates over photon noise.
In any case, with my own equipment and that of many other Lucky Imaging imagers, I can confirm that we get signal despite very short exposure times.
Thanks for the information! One thing the video also found is that gain 121 was better than gain 10. So maybe even higher gains would do even better (something to test!). I'll have to try more gain settings. Do you mind sharing what camera you are using for lucky imaging?
@@deepskydetail Good morning,
At the moment I am using a 533M, a 585, and a 464 from PO. For each camera I use a gain of 70-85% for exposure times of 350ms to 1s; beyond that (2-5s) I lower the gain, because I don't have enough images for the dynamic range to increase enough during final stacking. When I lower the exposure, my sorting becomes more severe. I capture in SER 8-bit when the gain is high, otherwise in 16-bit.
At 9:10 you explain how you divided the core area into 121 samples. What software are you using to measure SNR on those samples and how does it make the measurement?
I used R. To measure the SNR, I took the average brightness of the area and divided it by the cumulative standard deviation of the standard deviations as I stacked the images one at a time. I think I go over it here in this video: ua-cam.com/video/eRKk3lNyXO8/v-deo.html (about the 6 minute mark; although more context might be needed from about the 3-4 minute mark).
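In case it helps anyone reading along, here is one way that running calculation could look in R. This is my own paraphrase of the description above, not the channel's actual script; `subs` is assumed to be a list of numeric matrices holding the same calibrated crop from each sub:

```r
# One possible reading of the running SNR calculation described above (not the exact script).
running_snr <- function(subs) {
  snr   <- numeric(length(subs))
  stack <- 0
  for (i in seq_along(subs)) {
    stack      <- stack + subs[[i]]
    mean_stack <- stack / i                         # average of the first i subs
    snr[i]     <- mean(mean_stack) / sd(mean_stack) # mean signal over spatial std dev
  }
  snr  # SNR estimate after each additional sub is added to the stack
}
```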
@@deepskydetail Thanks. I should have found this on my own but am new to your channel and haven't looked at all you've done. Excellent work and contributions to the astro community.
@@fugaterq thank you! I should have mentioned how I did the SNR calculations in this video (or at least point people to how it was done), but I wasn't expecting so many new viewers to be honest (i.e., I thought most of the people who would see this video had already seen the others)!
Excellent. Definitely a “subscribe”…
Thank you!! :)
Look man. All I need to know are where (photons) … they …. are.
Deeper and deeper into the snr hole we go... following the white rabbit
🕳 👈 🐇🚶♂️
We just keep following the rabbit ;)
Crap video, too much waffle. Never did see the results.
Thanks for trying out the video anyway!
... what the fuck does the title even mean?
"Does sub length matter?" It's in the thumbnail...
To clarify, the title answers the thumbnail. Just wondering, have you ever done astrophotography? Maybe I should have gone over stacking and sub frames a bit more; maybe they don't make sense to non-astrophotographers?
@@deepskydetail The thumbnail was broken, and on a lot of devices the video starts autoplaying. I hate seeing videos with nonsensical titles.
Yes, I do but I'm at a bit of a standstill with my current gear. My lightbucket is: astronomers/herotheelf
Ugh! Sorry! I didn't know the thumbnail was like that. I'll check it out.