Support me on Patreon!: www.patreon.com/cuivlazygeek There are quite a few imprecisions and shortcuts in the video, made for the sake of clarity. There may also be mistakes! If you find one, please add it as a reply and I'll try to summarize! FOV and resolution/Dawes Limit calculator: astronomy.tools/calculators/field_of_view/ Sampling vs Seeing calculator: astronomy.tools/calculators/ccd_suitability
Great in-depth but easy to understand explanations. I appreciate that when you state something as fact, you show references. When you're not convinced or unsure, you give examples of the conventional thinking. Thanks! Down the rabbit hole for more.
The mention at the very end about debayering vs mono is the first thing I actually noticed when I swapped to a mono version of the same sensor. The resolving power is much greater than I ever anticipated!
Cuiv, yet another outstanding video. You do such a great job of explaining things in a way that even my old brain can understand. I have learned so much from your videos. Thanks for all of your efforts.
I used to develop software to process space-based radar. We would create a synthetic aperture radar image by building digital filters that performed convolution over a series of images as the antenna swept over an area of the earth. We then created images with greater resolution than the Rayleigh limit implied by the diameter of the antenna. I have often wondered why we can't do the same with astro images. This could be done on planetary images as the planets rotate, or on alt/az images as the series of images rotates. The filter would need to be both spatial and temporal and use the rotation vector to recreate the image. This is similar to what is done when using an array of telescopes to create an image with the effective resolution of a much larger aperture (though that is done only spatially). Edit: I am not sure whether these techniques would require coherent light sources.
I think you should also mention that smaller pixels also result in a lower signal-to-noise ratio per pixel, hence longer total exposures are required. Excellent video.
Reduced signal to noise per pixel indeed, and usually higher read noise as well - in this video I wanted to only talk about the ability to get details otherwise it would have been a one hour video :)
You are definitely correct, but I think it’s better to be properly sampled and just do longer exposures. I’m using 2600mm/mc for 540mm and it’s a pretty good mix.
@@darkrangersinc That is of course ignoring the idea of Lucky Imaging that Cuiv detailed in the video. It would be interesting to see a comparison between the 2 methods!
@@darkrangersinc Nope - that's the whole point of this and Cuiv's last video - planetary uses similar rules (although slightly different regarding noise, since planets are bright targets), but more people are using this for DSOs now.
I so sincerely appreciate you taking the time and going into this level of detail on these topics, Cuiv! When I was first learning astro, it took me forever to learn and research these topics…usually one by one…and having them covered in a video like this is just awesome. Fantastic of you to help educate like this!! 🙏✌️
Great information Cuiv! Thanks. I use the calculator on rc-astro because it will also take into account drizzling and binning to correct over or under sampling.
An outstanding presentation! Yes, it is complicated and subtle, and a good reason I have struggled to get a good grip on the subject. Your ability to move through it in a logical and clear manner does wonders for my understanding. And, yes, your enthusiasm is infectious - the mark of a good teacher. More detailed presentations like this are always worth watching. A good topic suggestion: explain the ins and outs of guiding. How do seeing, the guide scope, sample rate, and PID gain affect guiding? What are the proper step sizes for calibration, and why? I'll be waiting....😊
Hi Cuiv! You are right on track with your explanation and I couldn't have explained it better myself. Bayer-interpolated data fills in the blanks for each color to create a completed RGB image. This is why drizzling even at 1x recovers data better than just debayering, as it "collects" data and reconstructs it instead of interpolating (guessing) what the pixel is supposed to look like. Yes, the OSC calculations are missing from most websites - the old way of thinking was that everyone is using mono CCDs, so they are out of date - but of course nowadays people are lazy and want to only use OSC 🙂 I'm the only madman that wants to do full frame mono on a RASA anymore LOL. Thanks for the video and clear skies!
Awesome video. So good to have someone break down those concepts and make sense of them all. It made me understand much more of the whole thing and connect the dots, thank you!
This video is very solid and important in explaining the ideas of resolution, seeing, and sampling. Even though I need more time to fully understand it, this video really, really helps. Thank you!
Very insightful explanation Cuiv! Love your channel and content. There is so much to learn in this hobby and I really appreciate your contribution to the community; it's been very helpful for me and likely others just getting started in the hobby.
Bravo Cuiv, Bravissimo !!! Here are the main criteria to consider before choosing a setup and save money. For me, your most important video. Local average seeing needs to match with image scale.
Fascinating stuff again, thankyou. I enjoy learning more about this although I have to say it’s rather hard to apply it when I’m pointing my telescope at a distant galaxy on a slightly cloudy night and doing my best to capture some beautiful images! I guess most of my decision making of ‘scope, reducer, camera has already been done and I’m down to gain, exposure time and binning. One area I’d like a bit more explanation about is the relationship between fwhm and psf. They seem to get talked about in one breath and it’s hard to ‘resolve the difference’!
I'm glad you mentioned that you want your camera to be capable of twice the resolution of your optics. I was curious about that because when using the CCD suitability calculator on Astronomy Tools I found a really good match for my Orion 10" f/3.9 astrograph. By the way, Orion is no longer making that astrograph, and it's a fine astrograph with a (theoretical) Dawes limit of 0.49 arc seconds. I decided on the QHY 183M, and using my Televue Paracorr Type 2 coma corrector, which has a multiplier of 1.15, the calculator gives me a lot of flexibility. I can couple the coma corrector with a Televue 2x Powermate, and with the right seeing conditions I could conceivably get as high as 0.17 arc seconds per pixel. That's crazy resolution! Of course, using the coma corrector with a 2x Powermate is over 3x magnification, so I would need very good guiding, but I've been averaging around 0.47 to 0.25 arc seconds total guiding error, which is pretty damn good for an Orion HDX110 mount with the 10" astrograph on it. I'll bet I could kill it doing planetary with that setup. Unfortunately, every time I've tried around here, near sea level, the seeing was terrible, so no usable planetary data yet. Really, that telescope doesn't even make that mount break a sweat. On calm, windless nights I could probably do even better with a different guide scope. The guide scope I originally bought for that setup was only 250 mm focal length - don't know why the guy didn't tell me that, except that I limited my options with a $250 budget for guide scope and guiding camera. More recently I decided to use my William Optics Zenithstar 61mm as my guide scope, which has a focal length of 460 mm - a better match for the 10" with its 1000 mm focal length. Experts say your guide scope should be half of your imaging scope's focal length, so I'm right there. I can't wait to try all these things when the QHY 183M arrives.
I'll be trying plate solving for the first time and NO MORE DSLR! HA! Clear skies Cuiv, and thanks again for the wonderful content.
Great video as always. Thank you for clearing things up for me. Two more things of complexity that can affect resolution are drizzling and binning. I guess discussing them would make the video 10 minutes longer (and five times more confusing?). Maybe a topic for future videos?
Excellent compilation of the important factors! One thing that I'd be tempted to mention is clearing up any confusion that resolution is the number of megapixels, as that's what the average person thinks when they hear the word "resolution" - although this is more geared towards semi-experienced+ astrophotographers, so maybe that's not necessary. Also, if you're looking for topics to cover during the rainy season, that wifi software you discovered during your recent dedicated astro PC video might be worth a short video, especially setting it up to auto-run on PC start. I got it working, but I'm sure some people would appreciate a walkthrough. Cheers
Great job, I really like the way you added all the grey areas in this. As you did with your small pixel experiment: use the numbers as a guide, but test for actual results.
Great video…thanks for summarizing a complex topic….. one recommendation: add the effects of binning on resolution…. Seems everyone ignores the implications of oversampling…. Including me
Seems to me you're the exact opposite of lazy, my brother. Brilliant video. I always learn from you. You always teach us more than we even need to know, but gotta love ya for it
Really interesting, brings back memories of when I was studying control engineering theory back in the nineteen eighties. I usually use a mono camera these days, but during the regular periods of cloudy weather we have in the UK I've started trying to improve my early attempts at imaging with a DSLR by combining the captured images with Ha / Luminance data taken with the mono camera. The improvement in detail is quite striking, but obviously not as good as a complete LRGB, SHO etc. image.
Always good to see these videos. Would appreciate an extra "add on" around the effects of binning with OSC and what difference this can make vs what cost. I've never been fully clear on how or when to go this route…
Excellent presentation Cuiv! I think traditionally larger pixel sizes have been touted because in CCD cameras at least, they have greater well depth. I'm not sure if this is still the case with modern CMOS cameras but even if it is, the combination of small pixels and many short exposures offers some interesting possibilities.
Great video as always, thank you. It added new info for me that I hadn't seen before on OSC vs mono in the calculations. I'm oversampled @ 0.28 on an OSC, so I've been exploring Superpixel debayering vs VNG, and adding drizzling in to confuse my tiny brain further. At the end of the day I find it difficult to see the differences by eye. Still gobsmacked at what beams into my backyard though. Love the hobby. Thanks again 🙏🏻
This is the stuff it’s great to sit down with fellow nerds and talk through these things over a sake, panache, beer or espresso! Or all at the same time ;)
Really good tutorial and I learned a lot! I found that there is also a calculator about guide telescopes. Any plan to make another video about how to choose the resolution ratio between the main telescope/camera and the guide scope/camera (and also why, both guide scope and oag)?
I love to look at tight binaries and multiple stars using my Royal Astro optics in my classic f/16+ 60mm, 72mm, or 90mm achromats. Having a near perfect doublet objective for these slow refractors makes the Airy disk large enough to appreciate the colors of double stars. Albireo in Cygnus is lovely, for example. The Double Double…. but I digress, as that is visual❤
Great video overall, but I'm not sure lucky imaging is all that relevant to deep sky astrophotography. My understanding, which may be very wrong and please correct me if so, is that very short exposures may improve resolution, but you sacrifice light-gathering capability and the ability to capture faint objects. Stacking improves the contrast by increasing the signal-to-noise ratio and makes the fainter objects that you do capture stand out, but there's still a limit to how faint an object you can capture at all. So lucky imaging is great for bright objects like planets, but of limited utility with nebulae.
It definitely is relevant to deep sky as well! Check Astrobiscuit's BAT. The only limiting factor to exposure length is sensor read noise, and these days it's very low (otherwise a photon really doesn't care which subframe it was captured in)
Great video -- you seem to be able to predict what questions I'm pondering in my AstroP journey. Possibly another related topic would be focus. What does "in focus" mean, and how precise do you have to keep your focus as temperature changes? I've been digging into this myself since I have just added a HyperStar to my SCT, and am learning the importance of "speed" and critical focus issues. Love looking at the focus curves and trying to understand what they are really telling me. Maybe you've already discussed it in another video, but maybe not. And a question all this raises -- why get those 14" f/10 scopes when you'll never be able to achieve the theoretical resolution they provide?
Aaah focus is another thing. Effectively your measured FWHM is a measure of your true image resolution (similar to Dawes limit). So if your FWHM doesn't change much you're all good!
The wavelength of the light affects the size of the Airy disk. This is true even across RGB, let alone infrared. The longer the wavelength, the larger the disk and the lower the possible resolution.
Cuiv, I also make no claims to being an expert at this, but I am unsure about your conclusion regarding the effective resolution of color cameras discussed after 23:31. As I understand it, a color sensor creates image color by filtering the light frequencies corresponding to colors through the Bayer layer, but the sensor itself is still monochrome and each of its pixels still reports a light value somewhere between 0 and 1. The data from each pixel thus serves as luminance data, and it is how the data compares with neighboring pixels that forms the chrominance channels. The end result is blended color laid over a mono image, very similar to how mono cameras work. This is why if you put a colored filter over a color camera you still get data on all channels. Or for that matter, why when using an OSC to image DSOs you still get a lot of data on the green channel even if there is actually little green in the object. Astrophotographers often confuse the way green works with the way a primary color works. Green is a non-primary, blended color and allows in the whole range of light between red and blue. It is a very active channel even in the absence of green objects in space. A study recently done by the Royal Canadian Astronomy Society found as much as 70% of the luminance data to be on the green channel of an OSC regardless of the lack of green color in DSO objects. There is a good discussion about this at photo.stackexchange: photo.stackexchange.com/questions/114269/bayer-color-effective-true-resolution If you see an error in this reasoning, please let me know, because it's a topic I've been wrestling with a lot lately.
I actually don't understand what you're trying to say :-) Debayering algorithms will always use data from neighboring pixels to deduce the missing R, G, or B components of each pixel (and of course the sensor itself is monochrome), so while the number of pixels will be the same (except for Debayering methods like Super Pixel), there is a loss in spatial resolution. That loss is limited due to the algorithms being very efficient (Red Cameras estimate roughly 20%), but is still there. I also don't get the fixation on green pixels in the second part of your comment - they're just pixels covered by a green filter, and will be used by the Debayering algorithm to estimate the green component of neighboring pixels, just like any other pixel covered by another color filter.
@@CuivTheLazyGeek Understandable. I didn't express my thoughts well, and I was reading on it further this morning in an article on debayering. Your conclusion is correct.
Thank you! Fantastic info! Exactly what I was looking for to better match a given camera to given lens or scope! ...as well as match both to my expectations ;)
Excellent material Cuiv, I would probably add one more aspect: the limit of the mount's guiding accuracy for each setup - sometimes it's worse than the atmosphere))
Blur due to seeing and guiding adds as the square root of the sum of both squared, as you know. Doing the math, one might think guiding errors are not that important unless you have great seeing conditions, but I have found that at the resolutions Cuiv discussed here, guiding likely needs to be better than 0.5 arcsec total RMS, and ideally under 0.4" RMS. The stars seem most affected, especially if the RA and Dec guide errors are not the same value.
@@CuivTheLazyGeek I'm a great fan of taking short exposures, especially for RGB stars. There are many bright targets where 5s RGB subs, taken with a monochrome camera for higher resolution, yield stellar results on reflection nebulae and globular clusters. I started what is quite a lengthy discussion on this technique on the CN forum, titled "Alnitak and the Flame Nebula: RGB lucky imaging", a while back. It's a vigorous discussion of the method with lots of image examples. Purists wince at calling 5s exposures lucky imaging, but there is no doubt the FWHM star values are very much smaller than with longer exposures. Like you said, it's best done with no guiding and good polar alignment. It is interesting that out of 100 exposures, you will be lucky to get 25 good frames. Seeing effects are frozen in many of the images, with sometimes bizarre star distortions. After BlurXTerminator, one can achieve FWHM values on the order of 1.6 arcsec. Keep these great videos coming!
Great information-packed video! The point about debayering algorithm is very relevant indeed. What's your take on super-pixel method (as in DSS) vs interpolation or AHD?
Reducing OSC pixel size might help but I wonder. In the end the ability of the smaller pixel to resolve details is limited by the seeing (and other factors as you describe). Thus the added information one is seeking may not really be present. The situation is far more clear in the case of a mono camera. In reality, for the vast majority of hobbyists in my view, the imaging pixel Scale to strive for is about 1/3 the seeing limit (assuming the scope Dawes limit is small enough to support seeing limited resolutions), for that matter 1/2 is probably fine given guiding issues and other non idealities in the overall system. This brings up the issue of guiding camera resolution. I’ll just throw out a number for people to shoot at. Your guide camera/scope pixel resolution should be no worse than about 3x the imaging pixel resolution. This is because the guiding seems to be able to achieve guiding about 3x better than the resolution. People sometimes throw fl ratios around for the main and guide scope, but in my view it is best to compute the two pixel scales and make sure you have no worse than 3x in the guide system.
One shot color resolution should be indistinguishable from mono w/ RGB for broadband targets. Photosites are photosites. They have a bad reputation because a lot of software uses sub-par debayering algorithms (nearest neighbor, etc.) For narrow-band imaging, though, they have a major disadvantage. Hydrogen-alpha filters on a mono camera will register four times the response of a color sensor with a Bayer matrix. Dual band filters can bridge this performance gap somewhat, as you’ve demonstrated very well in your previous videos.
There is still some loss in spatial resolution (Red Cameras - for standard daytime use - estimates around 20% with the best Debayering algos) as neighboring pixel data is used, although Bayer drizzle can help get even better spatial resolution!
@@CuivTheLazyGeek This seems like it would be a straightforward thing to test if you have OSC and mono versions of the same camera and a test chart. If anyone does, it’s probably you…don’t demosaic the image, just take a mono decode so as to measure spatial not spectral resolution of the OSC image and compare the results with a mono camera image. A 20% difference should be easy to see side by side.
Wow, excellent video. I'll watch it again (and again!) as there was so much detail in it that it didn't all sink in. Where does drizzling, binning and dithering fit in to all this?
They fit in it complexly :) binning just lowers the resolution, drizzling tries to "artificially" increase resolution (in particular it makes decon work better), dithering is necessary for drizzling
Hi Cuiv, this is a very nice and informative video on a topic I have been thinking about for months. But sadly there is one big mistake in it. Your signal is not the Dawes or Rayleigh criterion, it's the Airy disk. The picture you see through an optic is a combination of infinitely many Airy disks. So you need to sample the Airy disk with 2 or 3 pixels, not the Dawes criterion. In fact, if your pixel size is between the Rayleigh criterion (half an Airy disk) and a little bit smaller than the Dawes criterion, then you are well sampled. In normal photography, if you close the aperture too much you get a blurry image. This is for the same reason, and it starts at a sampling of around 2.5 pixels per Airy disk, becoming more visible at 3 pixels or more. For the seeing you said it right, because the seeing blurs the stars, so their FWHM can be seen as the Airy disk of a telescope, which you should sample with 2 or 3 pixels for maximum detail. And I should mention: Dawes is the limit. You can't get much more detail than your Dawes limit, even with BlurXTerminator, because you don't get finer information out of your scope. Higher sampling just costs more light. Best regards from Germany, Markus
Greetings Cuiv and Markus, Thank you Cuiv for a delightful and instructive presentation. Be glad Markus, not sad. It's true the pixel size can be compared to the Airy disk directly (ignoring for now the fact that the disk is circular for typical scopes and pixels are rectangular), each expressed as a linear dimension such as microns. Visualizing pixels overlaying the Airy disk can aid in defining a sampling model. However, Cuiv correctly used the calculated pixel resolution (not the linear dimension at the focal plane) to compare with the Dawes limit, each expressed as an angle of sky view. He also correctly described some of the factors influencing each. Now here is a curve ball for Cuiv. The linear dimension of the Airy disk at the focal plane is a function of the F# (ratio of focal length to objective diameter) of the scope. An F# of 10 gives an approximate Airy disk size of 11 micrometers, for example, and an F# of 20 gives approximately a 22 micrometer Airy disk. This may help in understanding why some large observatory scopes can use sensors with relatively large pixels. But with our faster (smaller F#) scopes available to amateurs today, it is cool that we also have sensors with smaller pixels, higher sensitivity, and especially lower read noise. By the way Cuiv, I am now a proud owner of a QHY5III-715. Your video was partly to blame. Anyway, I enjoyed how you made this topic approachable and even fun. John Barry, from smoke-filled skies in Tennessee
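John's F# relationship can be sketched numerically. This is a minimal illustration assuming the standard diffraction formula (linear Airy disk diameter ≈ 2.44 · λ · F#); the 550 nm wavelength below is an assumed green value, and John's ~11 µm figure for f/10 corresponds to a somewhat bluer wavelength:

```python
# Linear diameter of the Airy disk at the focal plane.
# Standard diffraction formula: d = 2.44 * wavelength * focal_ratio.
def airy_disk_diameter_um(wavelength_nm: float, focal_ratio: float) -> float:
    return 2.44 * wavelength_nm * 1e-3 * focal_ratio  # nm -> um

# At an assumed 550 nm (green), f/10 gives a ~13.4 um disk and
# f/20 gives ~26.8 um: the disk scales linearly with F#.
for f_ratio in (5, 10, 20):
    print(f"f/{f_ratio}: {airy_disk_diameter_um(550, f_ratio):.1f} um")
```

Note that the linear disk size depends only on F#, not on aperture alone, which is why slow observatory scopes can tolerate large pixels.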
It has been sunny during the day and cloudy at night here in Santa Fe for so long I may become a solar guy soon! Many things can affect solar imaging, and the comparison with DSO imaging and resolution is interesting.
All dedicated astronomy cameras are amazing, all of them. It’s all about reducing noise, not collecting photons. Keep your camera cold, your pixels small and your exposures short. Then tracking isn’t a critical issue.
Well it is about both still :-) but yes if you reduce the read noise in particular, the manner in which you gather the photons doesn't matter anymore :)
What a fantastically informative video! Thank you. Yet more support for lucky imaging, and a let-off for us colour camera users - we needn't feel quite so inferior after all😊
Hi Cuiv, Thank you again for your videos. It's always very helpful to be reminded of the basics and the numbers! :) Concerning guiding, what level of perfection should be achieved? The guiding error adds to the seeing to blur our images, so we want the guiding to be better than the seeing. Does it make sense to try to achieve an RMS error no bigger than the Dawes limit? Or does it have a good impact on our pictures if we can do twice as well?
Consistent guiding will add to the seeing as total blur = sqrt(seeing squared + guiding RMS squared), assuming the same guiding error on both axes (that's an unlikely assumption, though). I'd say if you're going to go for the most detail, you would use very short exposures without guiding! Then use FWHM to select the best frames.
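The quadrature sum Cuiv describes can be sketched as a quick calculation; the seeing and guiding numbers below are illustrative assumptions:

```python
import math

# Quadrature sum of seeing and guiding blur (both in arcsec),
# assuming independent, roughly Gaussian error sources.
def total_blur(seeing_arcsec: float, guiding_rms_arcsec: float) -> float:
    return math.hypot(seeing_arcsec, guiding_rms_arcsec)

# 2.0" seeing with 0.5" RMS guiding barely degrades the total...
print(total_blur(2.0, 0.5))   # ~2.06"
# ...but under 1.0" seeing the same guiding error matters much more.
print(total_blur(1.0, 0.5))   # ~1.12"
```

This is why guiding errors only start to dominate once seeing approaches the guiding RMS, as the earlier comment about sub-0.5" guiding suggests.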
Cuiv, great video! Just what I was looking for tonight. I have a request: NINA has a LuckyImaging plugin, could you possibly investigate and do a video on that for DSOs? Is it a possible solution worthy of use with long focal lengths? I have one scope (12" LX200) that is difficult to not oversample with my 533MC. I have it on an EQ6 with an OAG and it tracks well, but at its limit. We are also looking at imaging with our 24" RC at the local club observatory, and oversampling is a problem with that as well. Thanks!
one thing about seeing that I still haven't understood fully is: in general you want your guiding numbers to be below your pixel scale so you don't get trailing or bloated stars, but if your seeing is for example 2" vs a .8"/pix scale, would having worse guiding (like 1", 1.5") actually ruin your subs or not?
That's actually a really difficult topic with no clear answer as it will also depend on the shape of the guide curve, and not just the RMS. A high frequency oscillation of low amplitude that keeps the stars round would be indistinguishable from seeing - I would assume the total impact in that case would be something like sqrt(seeing squared+RMS squared). If you have wider oscillations from time to time causing oblong stars in a direction then it would be fully separate, and lead to loss of details.
@@CuivTheLazyGeek I was assuming a smooth guide curve but yeah it still remains a complex topic. Eventually I'll try with my mak at 2000mm and see if I can get anything good out of it
Nice video. I would like to ask: assuming the optical sampling and the choice of sensor are optimal in either case, which do you recommend to see details better: A) a lens with a large diameter (D) and a high ratio between the focal length and the diameter (f/D), or B) a lens with a smaller diameter (D) but a lower f/D focal ratio? Grazie
Thanks for calling out the impact of the Bayer matrix on resolution. As you said, this rarely comes up. Along the same lines, I wish camera manufacturers would eliminate the term "binning" in conjunction with color sensors. It's pure marketing and completely misrepresents what this "feature" actually does.
@@CuivTheLazyGeek Oh for sure. But this is really just software-based resampling - not true CCD-style binning. According to ZWO: "Because CMOS cameras all use software BIN during deep space shooting, image post-processing can also complete this process." Given this, I'd rather do it in post.
At 18:38, in the 'CCD Suitability Calculator', why is only the focal length important? At 06:46 you mention that the aperture is the (main?) factor. They're probably closely related and I'm stupid, but I wonder why aperture isn't included in the calculator. Awesome video!
The CCD compatibility calculator is only a very rough indication of image scale as the sensor will see it. As such it only relies on how large the image is (which comes from focal length) and how large each pixel is.
So, summarizing all the points to take into account: I have a lens (not a telescope) of 600mm with a 95mm diameter, a QHY5III715C camera (same as yours), and a 2x barlow lens. The calculator says my resolution would be 0.25" per pixel, and my Dawes limit is 1.22 arcsec. Do you think if I do lucky imaging for planetary and use deconvolution (in this case the wavelet-regularized Van Cittert algorithm recommended for planetary in PixInsight), I could get the most out of my equipment? Or would I really, really need a larger-aperture telescope? I'm asking only for lunar and planetary, since for deep space objects I have a camera with much bigger pixels and smaller lenses :D Thanks in advance for your insight and opinion!!
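The numbers in the question above can be checked with the usual rule-of-thumb formulas. This sketch assumes the QHY5III715C's roughly 1.45 µm pixels and the common approximations scale = 206.265 · pixel(µm) / FL(mm) and Dawes = 116 / aperture(mm):

```python
# Image scale and Dawes limit for the setup described above.
# Pixel size (~1.45 um) is an assumed spec for the QHY5III715C.
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    # arcsec per pixel: 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_mm

def dawes_limit(aperture_mm: float) -> float:
    # Dawes limit in arcsec: 116 / aperture (mm)
    return 116.0 / aperture_mm

focal = 600 * 2                      # 600 mm lens with a 2x barlow
print(pixel_scale(1.45, focal))      # ~0.25 "/px, matching the comment
print(dawes_limit(95))               # ~1.22", matching the comment
```

With ~0.25"/px against a 1.22" Dawes limit, the setup samples the optics at nearly 5 pixels per Dawes spacing, i.e. somewhat oversampled for planetary, which is consistent with the question.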
For visual use, people have always told me my 20" scope should be more affected by seeing than their smaller scope. But side by side it has never proven true. You are still only seeing light from the same angular size of the sky; you just 'catch' more of it. Maybe imaging is different because of the sensitivity of the cameras, but I'm not sure that old trope that smaller is better for seeing holds in a practical sense.
Will Deep sky stacker automatically select the “lucky “ or better images when stacking ? Or are there specific settings that need to be set ? I am referring to time stamp 21:33 - where you say to use short exposures and have the images with poor detail excluded Thank you !
For me the question is: within a city (like Bortle 7), using a 130mm Newton. Should I do short exposures (15s-30s), increase ISO/gain and take much more images to get details, or less exposures with longer single exposure time and lower the gain?
It really depends on what you want to achieve. You always want to be sure you are overwhelming your read noise (I have some old videos on the topic), which is unfortunately easy to do in light-polluted areas. For lucky imaging you'd need 5s or shorter (on my systems in Tokyo that still overwhelms read noise), but the easiest is to do longer exposures so stacking will be easier, etc.
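Cuiv's point about overwhelming read noise can be sketched with a common rule of thumb (sky signal per pixel of at least roughly 10× the read noise squared). The read noise and sky rates below are illustrative assumptions, not measured values:

```python
# Rough minimum sub-exposure so sky shot noise swamps read noise,
# using the common rule of thumb: sky electrons per pixel >= 10 * RN^2.
def min_exposure_s(read_noise_e: float, sky_rate_e_per_s: float,
                   swamp_factor: float = 10.0) -> float:
    return swamp_factor * read_noise_e**2 / sky_rate_e_per_s

# Bright, light-polluted sky: even short subs swamp read noise.
print(min_exposure_s(read_noise_e=1.5, sky_rate_e_per_s=20.0))  # ~1.1 s
# Dark sky: the same camera needs much longer subs.
print(min_exposure_s(read_noise_e=1.5, sky_rate_e_per_s=0.5))   # ~45 s
```

This illustrates why 5s lucky-imaging subs can still swamp read noise under Tokyo-level light pollution, while the same strategy is costly under dark skies.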
I was wondering if you could explain something about the Nyquist theorem. I understand you say you need a sample rate 2x the measured frequency, so for a 1000Hz signal you need a 2000Hz sample rate. However, what if the wave is not a perfect sine wave? For example, a sine wave at 0 degrees has a value of 0, at 90 it is 1, at 180 it is 0, and at 270 it is -1. So for one complete sine wave, we have 4 indisputable sample values. However, it could be a triangle wave with these same 4 values, not a sine wave. I never understood that, especially in relation to audio and the 44kHz sample rate chosen for CDs. I am not sure if we could see a difference if light were transmitted as a triangle wave, but a 1000Hz sine wave sounds very different from a 1000Hz triangle wave. Thank you!
Ah, but we're not talking about A frequency of the signal, we're talking about the maximum frequency of the signal. So take a signal and perform a Fourier transform on it; that gives you its component frequencies. Take the maximum frequency, and you need a 2x sample rate to fully represent the signal. A perfect triangle wave as you mention has an infinite maximum frequency (due to the sharp angles), and as such you can't really sample it enough to capture it fully!
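Cuiv's point about the triangle wave's harmonic content can be demonstrated numerically; here is a minimal sketch using NumPy's FFT, with the frequencies and sample rate chosen arbitrarily for illustration:

```python
import numpy as np

# A triangle wave at frequency f is not "just f": its Fourier series
# contains odd harmonics (3f, 5f, ...) with amplitudes falling as 1/n^2.
# Sampling at only 2f would capture the fundamental, not the full shape.
fs, f0, n = 1000, 5, 1000                     # sample rate, fundamental, samples
t = np.arange(n) / fs                         # 1 second of samples
tri = 2 * np.abs(2 * (t * f0 % 1) - 1) - 1    # triangle wave in [-1, 1]

spectrum = np.abs(np.fft.rfft(tri)) / n       # 1 Hz per bin over a 1 s window
fund = spectrum[f0]                           # bin of the 5 Hz fundamental
third = spectrum[3 * f0]                      # bin of the 15 Hz harmonic
print(third / fund)                           # ~1/9, as the 1/n^2 law predicts
```

The higher harmonics are what make a 1000Hz triangle wave sound different from a 1000Hz sine; a CD's 44.1kHz rate captures those harmonics up to about 22kHz, roughly the limit of human hearing.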
Am I correct in interpreting these results: with my 1600mm attached to a c6 I'm good when seeing is OK and slightly oversampled when seeing is poor, but since I'm over the Dawes limit (.83 vs .77) seeing would have to be very poor before it would make a difference for me?
Great video! I've the same Quattro 150P with a 0.95x coma corrector and a 294MC Pro, so my resolution is 1.66". Following your advice it seems that I'm undersampling a lot! Am I correct? Is it worth changing the camera for one with smaller pixels, like the 183 (2.4 micron)?
You are undersampling vs the telescope, but not sure vs the seeing. Also, undersampling isn't a bad thing in and of itself! It just means you're not capturing as many details as you could - that's it. Not a big deal!
Great video Cuiv as always! One thing you didn't mention was adaptive optics used by bigger telescopes in Chile to combat seeing which is not available to us. How does that work?
That works with pneumatic actuators behind the secondary mirror that deform it in the opposite way to how the atmosphere blurs the image. This is done by measuring the Airy disc (the blur caused by the atmosphere) of a star at very high frequency and sending that correction to the actuators on the secondary mirror (yes, very high tech). Cheers.
Already answered better than I ever could! Fun fact, the adaptive optics "guiding" star can be an artificial star image made via laser (in the case there is no neighboring star bright enough to serve the purpose!)
Very informative, thank you! But I have a question from my old, partially fossilized brain: so what if you're oversampled? Where I live (Maryland, USA), it's always poor seeing. I get excited when average seeing is predicted. Actually, I get ecstatic when we get a forecast of clear skies at all. I haven't been able to get my scope out (Askar V, thanks to your two videos) for over a month…
"I'm not an expert..." he says... My eyes roll, my lips stick out going brbrbrbrbrrr while my finger brushes up and down against them. 🙄😜 I have this video downloaded and marked to watch later. 👍👍
Possible, yes. Good results, maybe not? The ai was trained for features specific to telescope aberrations, with specific point spread functions. Don't know how different the psf is for microscopes.
Rainy season? lol. Cuiv, I live in Scotland; every season is the rainy season lol. My main problem is that at 56N the long summer twilight prevents imaging until late August. Still, I don't need to deal with the Tokyo light pollution, I feel for you man!! Great video.
Resolution and aperture go hand in hand... I wonder how many people have bought cameras that they didn't 'size' for their scope setup because they just had to have the newfangled camera that just came out...
Bortle doesn't affect sampling, but it does affect how quickly you can overwhelm read noise with LP shot noise, and thus how short your exposures should be (so how much you could freeze seeing!)
Support me on Patreon!: www.patreon.com/cuivlazygeek
Quite a few imprecisions and shortcuts in the video for clarity. Also potentially mistakes! If you find something, please add it as a reply, I'll try to summarize!
FOV and resolution/Dawes Limit calculator: astronomy.tools/calculators/field_of_view/
Sampling vs Seeing calculator: astronomy.tools/calculators/ccd_suitability
Great in-depth but easy to understand explanations. I appreciate that when you state something as fact, you show references. When you're not convinced or unsure, you give examples of the conventional thinking.
Thanks! Down the rabbit hole for more.
This means so much to me, I'm glad this is helpful!
The very end mention about debayering vs mono is the first thing I actually noticed when I swapped to mono of the same sensor. The resolving power is much greater than I ever anticipated!
Cuiv, yet another outstanding video. You do such a great job of explaining things in a way that even my old brain can understand. I have learned so much from your videos. Thanks for all of your efforts.
Thanks so much Marvin!! This is great to read :)
I could rewatch your vids forever... an endless amount of information here. You're the best at this, Cuiv!
Thanks so much!
I used to develop software to process space-based radar. We would create a synthetic aperture radar image by creating digital filters that performed convolution over a series of images as the antenna swept over an area on the earth. We then created images with a greater resolution than the Rayleigh limit of the antenna based on the diameter of the antenna. I have often wondered why we can't do the same with astro images. This could be done on planetary images as the planets rotate, or on alt/az images as the series of images rotates. The filter would need to be both spatial and temporal and use the rotation vector to recreate the image. This is similar to what is done when using an array of telescopes to create an effectively larger-resolution image (that is done only spatially).
Edit:
I am not sure that these techniques would require coherent light sources.
Oh wow that's such a cool experience you have! Thanks for this feedback!
I think you should also mention that smaller pixels also result in reduced signal to noise, hence longer total exposures are required. Excellent video.
Reduced signal to noise per pixel indeed, and usually higher read noise as well - in this video I wanted to only talk about the ability to get details otherwise it would have been a one hour video :)
You are definitely correct, but I think it’s better to be properly sampled and just do longer exposures. I’m using 2600mm/mc for 540mm and it’s a pretty good mix.
@@darkrangersinc That is of course ignoring the idea of Lucky Imaging that Cuiv detailed in the video. It would be interesting to see a comparison between the 2 methods!
@@peterwebster2555 yeah I’ve never used lucky imaging. It was my understanding it’s mostly for planetary, correct?
@@darkrangersinc Nope - that's the whole point of this and Cuiv's last video - planetary uses similar rules (although slightly different re noise, as those targets are bright), but more people are using this for DSOs now.
I so sincerely appreciate u taking the time and going into this level of detail on these topics, Cuiv! When I was first learning astro, it took me forever to learn and research these topics…usually one by one…and having them covered in a video like this is just awesome. Fantastic of you to help educate like this!! 🙏✌️
Glad it was helpful!
Great information Cuiv! Thanks. I use the calculator on rc-astro because it will also take into account drizzling and binning to correct over or under sampling.
An outstanding presentation! Yes, it is complicated and subtle, and a good reason i have struggled to have a good grip on the subject. Your ability to move through it in a logical and clear manner does wonders for my understanding. And, yes, your enthusiasm is infectious - the mark of a good teacher.
More detailed presentations like this are always worth watching.
A good topic suggestion: explain the ins and outs of guiding. How seeing, guide scope, sample rate, and PID gain affect guiding. What are the proper step sizes for calibration, and why? I'll be waiting....😊
Thanks for the feedback!! And yes I do need to get to guiding at some point!
Hi Cuiv! You are right on track with your explanation and I couldn't have explained it better myself. Bayer-interpolated data fills in the blanks for each color to create a completed RGB image. This is why drizzling even at 1x recovers data better than just debayering, as it "collects" data and reconstructs it instead of interpolating (guessing) what the pixel is supposed to look like. Yes, the OSC calculations are missing from most websites, as the old way of thinking was that everyone uses mono CCDs, so they are now out of date; but of course nowadays people are lazy and want to only use OSC 🙂 I'm the only madman that wants to do full frame mono on a RASA anymore LOL. Thanks for the video and clear skies!
Have fun with that full frame :-)
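As an aside on the super-pixel method mentioned in this thread: a minimal sketch in Python/numpy, assuming an RGGB mosaic (real capture software handles other patterns and normalization). Each 2x2 Bayer cell collapses into one RGB pixel, halving linear resolution but avoiding any interpolation.

```python
import numpy as np

def superpixel_debayer(raw: np.ndarray) -> np.ndarray:
    """Super-pixel debayer for an RGGB mosaic.

    Each 2x2 cell (R, G / G, B) becomes one RGB pixel, averaging the two
    greens. Output is half the resolution in each axis, but every channel
    value is measured data, not an interpolated guess.
    """
    r  = raw[0::2, 0::2]   # top-left of each cell: red
    g1 = raw[0::2, 1::2]   # top-right: green
    g2 = raw[1::2, 0::2]   # bottom-left: green
    b  = raw[1::2, 1::2]   # bottom-right: blue
    return np.dstack([r, (g1 + g2) / 2.0, b])
```

By contrast, interpolating debayers (bilinear, VNG, AHD...) keep the full pixel count but must estimate the two missing color components at every pixel from its neighbors.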
Awesome video. So good to have someone breakdown those concepts and making sense of them all. Made me understand much more of the whole thing and connect the dots, thank you!
Glad it was helpful! Always love to see this feedback
Oh man, thank you so much. My brain was hurting trying to work out the astronomy tools calculator, but this made it so much easier to comprehend.
You're very welcome!
This video is very solid and important in explaining the ideas of resolution, seeing, and sampling.
Although I'll need more time to fully understand it, this video really, really helps.
Thank you!
Great video and perfect timing - having just found the diffraction limit/Airy discs of my scope and thinking about ways to get around it.
This is the best overview of the topic I've seen. Thank you for this!
Very insightful explanation Cuiv! Love your channel and content. There is so much to learn in this hobby and I really appreciate your contribution to the community, its been very helpful for me and likely others just getting started in the hobby.
I appreciate that!
Bravo Cuiv, Bravissimo !!!
Here are the main criteria to consider before choosing a setup and save money.
For me, your most important video.
Local average seeing needs to match with image scale.
Although you can still be wildly undersampled and have amazing images ;)
@@CuivTheLazyGeek that's right.
Fascinating stuff again, thank you. I enjoy learning more about this, although I have to say it's rather hard to apply when I'm pointing my telescope at a distant galaxy on a slightly cloudy night and doing my best to capture some beautiful images! I guess most of my decision making on scope, reducer, and camera has already been done, and I'm down to gain, exposure time, and binning. One area I'd like a bit more explanation about is the relationship between FWHM and PSF. They seem to get talked about in one breath and it's hard to 'resolve the difference'!
I love learning from you Cuiv. Thank you.
My pleasure!
I'm glad you mentioned that you want your camera to be capable of twice the resolution of your optics. I was curious about that because when using the CCD suitability calculator on Astronomy Tools I found a really good match for my Orion 10" f/3.9 astrograph. By the way, Orion is no longer making that astrograph; it's a fine astrograph with a Dawes limit of 0.49 arc seconds (theoretically).

I decided on the QHY 183M, and using my Televue Paracorr Type 2 coma corrector, which has a magnification factor of 1.15, the calculator gives me a lot of flexibility. I can couple the coma corrector with a Televue 2x Powermate, and with the right seeing conditions I could conceivably get as high as 0.17 arc seconds per pixel resolution. That's crazy resolution! Of course, using the coma corrector with a 2x Powermate is over 3x magnification, so I would need very good guiding, but I've been averaging around 0.47 to 0.25 arc seconds total guiding error, which is pretty damn good for an Orion HDX110 mount with the 10" astrograph on it.

I'll bet I could kill it doing planetary with that setup. Unfortunately, every time I've tried around here, near sea level, the seeing was terrible. So no usable planetary data yet. Really, that telescope doesn't even make that mount break a sweat. On calm, windless nights I could probably get it even better by using a different guide scope. The guide scope I originally bought for that setup was only 250 mm focal length; I don't know why the guy didn't tell me that, except that I limited my options with a $250 budget for guide scope and guiding camera. More recently I decided to use my William Optics Zenithstar 61mm as my guide scope, which has a focal length of 460 mm, a better match for the 10" with 1000 mm focal length. Experts say your guide scope should be half of your imaging scope's focal length, so I'm right there. I can't wait to try all these things when the QHY 183M comes.
I'll be trying plate solving for the first time and NO MORE DSLR! HA! Clear skies Cuiv, and thanks again for the wonderful content.
Oh wow that's a crazy pixel scale!! You may want to go for short exposures and eschew guiding in that case!
Great video as always. Thank you for clearing things up for me. Two more things of complexity that can affect resolution are drizzling and binning. I guess discussing them would make the video 10 minutes longer (and five times more confusing?). Maybe a topic for future videos?
Yes indeed!
Excellent compilation of the important factors!
One thing that I'd be tempted to mention is clearing up any confusion that resolution means the number of megapixels, as that's what the average person thinks when they hear the word "resolution" - although this is more geared towards semi-experienced+ astrophotographers, so maybe that's not necessary.
Also if you're looking for topics to cover during the rainy season, that wifi software you discovered during your recent dedicated astro pc video might be worth a short video especially setting it up to auto run on pc start, I got it working but I'm sure some people would appreciate a walkthrough.
Cheers
You're absolutely right, I should have called it pixel scale! Thanks for the idea btw!
Outstanding informative video. Very well presented. It helped me immensely to understand the variables regarding resolution.
Thank you, glad it was helpful!
Great job, I really like the way you covered all the grey areas in this. As you did with your small pixel experiment, use the numbers as a guide but test for actual results.
Great video…thanks for summarizing a complex topic….. one recommendation to add is the effects of binning on resolution…. Seems everyone ignores the implications of oversampling…. Including me
Seems to me you're the exact opposite of lazy, my brother. Brilliant video. I always learn from you. You always teach us more than we even need to know, but gotta love ya for it
Thank you! Cheers!
What a great topic! I was already using Astronomy Tools and trying to figure this out before I found this video.
Really interesting, brings back memories of when I was studying control engineering theory back in the nineteen-eighties. I usually use a mono camera these days, but during the regular periods of cloudy weather we have in the UK I've started trying to improve my early attempts at imaging with a DSLR by combining the captured images with Ha / luminance data taken with the mono camera. The improvement in detail is quite striking, but obviously not as good as a complete LRGB, SHO etc. image.
Always good to see these videos. Would appreciate an extra "add-on" around the effects of binning with OSC and what difference this can make vs what it costs. I've never been fully clear on how or when to go this route…
Yesssss but that's a tough video as there are several competing binning methods, and I would only be able to try a single one out!
A fine analysis, much more complete and pragmatic, as usual.
Always captivating.
Thank you very much!!
Really love the regular Uploads, thanks! 🙌
Excellent presentation Cuiv! I think traditionally larger pixel sizes have been touted because in CCD cameras at least, they have greater well depth. I'm not sure if this is still the case with modern CMOS cameras but even if it is, the combination of small pixels and many short exposures offers some interesting possibilities.
It's perfectly fine to be very undersampled as well - large FOV but undersampled can still give great images!
Great video as always, thank you. It added new info for me that I hadn't seen before on OSC vs mono in the calculations. I'm oversampled @ 0.28" on an OSC, so I've been experimenting with super-pixel debayering vs VNG and adding drizzling in to confuse my tiny brain further. At the end of the day I find it difficult to see the differences by eye. Still gobsmacked at what beams into my backyard tho. Love the hobby. Thanks again 🙏🏻
This is the stuff it’s great to sit down with fellow nerds and talk through these things over a sake, panache, beer or espresso! Or all at the same time ;)
Hahaha all at the same time of course!
Loved this. Thanks for explaining complex topics with great clarity.
Glad it was helpful!
Really good tutorial, and I learned a lot! I found that there is also a calculator for guide telescopes. Any plan to make another video about how to choose the resolution ratio between the main telescope/camera and the guide scope/camera (and also why, for both guide scope and OAG)?
I love to look at tight binaries and multiple stars using my Royal Astro optics in my classic f/16+ 60mm, 72mm, or 90mm achromats. Having a near-perfect doublet objective for these slow refractors makes the Airy disk large enough to appreciate the colors of double stars. Albireo in Cygnus is lovely, for example. The Double-Double… but I digress, as that is visual❤
I still love visual :D was fun reading!
Great video overall, but I'm not sure lucky imaging is all that relevant to deep sky astrophotography. My understanding, which may be very wrong (please correct me if so), is that very short exposures may improve resolution, but you sacrifice light-gathering capability and the ability to capture faint objects. Stacking improves contrast by increasing the signal-to-noise ratio and makes the fainter objects that you do capture stand out, but there's still a limit to how faint an object you can capture at all. So lucky imaging is great for bright objects like planets, but of limited utility with nebulae.
It definitely is relevant to deep sky as well! Check Astrobiscuit's BAT. The only limiting factor to exposure length is sensor read noise, and these days it's very low (otherwise a photon really doesn't care which subframe it was captured in)
Wow, such a beautiful explanation of Nyquist theorem!
Glad you think so!
Great video -- you seem to be able to predict what questions I'm pondering in my AstroP Journey. Possibly another related topic would be focus. What does "in-focus" mean, and how precise do you have to have your focus as temperature changes? I've been digging into this myself since I have just added a HyperStar to my SCT, and am learning the importance of "speed" and critical focus issues. Love looking at the focus curves and trying to understand what they are really telling me. Maybe you've already discussed in another video, but maybe not. And question all this raises -- why get those 14" f/10 scopes when you'll never be able to achieve the theoretical resolution they provide?
Aaah focus is another thing. Effectively your measured FWHM is a measure of your true image resolution (similar to Dawes limit). So if your FWHM doesn't change much you're all good!
Great video, especially for astrophotographer wannabes. Many questions answered quite fluently. A textbook reference video!
Woohoo! I'm so happy to see so many such comments, it was a difficult video to make :)
Thank you Cuiv for this content. Very helpful as always.
My pleasure!
The wavelength of the light will affect the size of the Airy disk. This is true even across RGB, let alone infrared. The longer the wavelength, the larger the disk and the lower the possible resolution.
Cuiv, I also make no claims of being an expert at this, but I am unsure about your conclusion regarding the effective resolution of color cameras after 23:31. As I understand it, a color sensor creates color by filtering the light frequencies corresponding to colors through the Bayer layer, but the sensor itself is still monochrome, and each of its pixels still reports a light value somewhere between 0 and 1. The data from each pixel thus serves as luminance data, and it is how the data compares with neighboring pixels that forms the chrominance channels. The end result is blended color laid over a mono image, very similar to how mono cameras work. This is why if you put a colored filter over a color camera you still get data on all channels, or for that matter why, when using an OSC to image DSOs, you still get a lot of data on the green channel even if there is actually little green in the object. Astrophotographers often confuse the way green works with the way a primary color works. Green is a non-primary, blended color and lets in the whole range of light between red and blue. It is a very active channel even in the absence of green objects in space. A study recently done by the Royal Astronomical Society of Canada found as much as 70% of the luminance data to be on the green channel of an OSC regardless of the lack of green color in DSOs. There is a good discussion about this at photo.stackexchange.com/questions/114269/bayer-color-effective-true-resolution If you see an error in this reasoning, please let me know, because it's a topic I've been wrestling with a lot lately.
I actually don't understand what you're trying to say :-) Debayering algorithms will always use data from neighboring pixels to deduce the missing R, G, or B components of each pixel (and of course the sensor itself is monochrome), so while the number of pixels will be the same (except for Debayering methods like Super Pixel), there is a loss in spatial resolution. That loss is limited due to the algorithms being very efficient (Red Cameras estimate roughly 20%), but is still there.
I also don't get the fixation on green pixels in the second part of your comment - they're just pixels covered by a green filter, and will be used by the Debayering algorithm to estimate the green component of neighboring pixels, just like any other pixel covered by another color filter.
@@CuivTheLazyGeek Understandable. I didn't express my thoughts well, and I was reading on it further this morning in an article on debayering. Your conclusion is correct.
Thank you! Fantastic info! Exactly what I was looking for to better match a given camera to given lens or scope! ...as well as match both to my expectations ;)
Excellent material Cuiv! I would probably add one more aspect: the limit of the accuracy of the mount's guiding for each setup; sometimes it's worse than the atmosphere))
Blur due to seeing and guiding adds as the square root of the sum of their squares, as you know. Doing the math, one might think guiding errors are not that important unless you have great seeing conditions, but I have found that at the resolutions Cuiv discussed here, guiding likely needs to be better than 0.5 arcsec total RMS, and ideally under 0.4" RMS. The stars seem most affected, especially if the RA and Dec guide errors are not the same value.
Yep, I think if I'm trying to get the most details, I would limit myself to short exposures without any guiding...
@@CuivTheLazyGeek I'm a great fan of taking short exposures, especially for RGB stars. There are many bright targets where 5s RGB subs, taken with a monochrome camera for higher resolution, yield stellar results on reflection nebulae and globular clusters. I started what is quite a lengthy discussion on this technique on the CN forum, titled "Alnitak and the Flame Nebula: RBG lucky imaging", a while back. It's a vigorous discussion of the method with lots of image examples. Purists wince at calling 5s exposures lucky imaging, but there is no doubt the FWHM star values are very much smaller than with longer exposures. Like you said, best done with no guiding and good polar alignment. It is interesting that out of 100 exposures, you will be lucky to get 25 good frames. Seeing effects are frozen in many of the images, with sometimes bizarre star distortions. After BlurXTerminator, one can achieve FWHM values in the order of 1.6 arc secs. Keep these great videos coming!
Great information-packed video! The point about debayering algorithm is very relevant indeed. What's your take on super-pixel method (as in DSS) vs interpolation or AHD?
Again good information for everybody to use. Thanks
Reducing OSC pixel size might help, but I wonder. In the end, the ability of the smaller pixel to resolve details is limited by the seeing (and other factors, as you describe). Thus the added information one is seeking may not really be present. The situation is far more clear in the case of a mono camera. In reality, for the vast majority of hobbyists in my view, the imaging pixel scale to strive for is about 1/3 the seeing limit (assuming the scope's Dawes limit is small enough to support seeing-limited resolutions); for that matter, 1/2 is probably fine given guiding issues and other non-idealities in the overall system. This brings up the issue of guiding camera resolution. I'll just throw out a number for people to shoot at: your guide camera/scope pixel scale should be no worse than about 3x the imaging pixel scale. This is because guiding seems to be able to achieve about 3x better than the guide resolution. People sometimes throw focal-length ratios around for the main and guide scopes, but in my view it is best to compute the two pixel scales and make sure you have no worse than 3x in the guide system.
One shot color resolution should be indistinguishable from mono w/ RGB for broadband targets. Photosites are photosites. They have a bad reputation because a lot of software uses sub-par debayering algorithms (nearest neighbor, etc.) For narrow-band imaging, though, they have a major disadvantage. Hydrogen-alpha filters on a mono camera will register four times the response of a color sensor with a Bayer matrix. Dual band filters can bridge this performance gap somewhat, as you’ve demonstrated very well in your previous videos.
There is still some loss in spatial resolution (Red Cameras - for standard daytime use - estimates around 20% with the best Debayering algos) as neighboring pixel data is used, although Bayer drizzle can help get even better spatial resolution!
@@CuivTheLazyGeek This seems like it would be a straightforward thing to test if you have OSC and mono versions of the same camera and a test chart. If anyone does, it’s probably you…don’t demosaic the image, just take a mono decode so as to measure spatial not spectral resolution of the OSC image and compare the results with a mono camera image. A 20% difference should be easy to see side by side.
Wow, excellent video. I'll watch it again (and again!) as there was so much detail in it that it didn't all sink in. Where does drizzling, binning and dithering fit in to all this?
They fit in complexly :) Binning just lowers the resolution, drizzling tries to "artificially" increase resolution (in particular it makes decon work better), and dithering is necessary for drizzling.
Hi Cuiv, this is a very nice and informative video on a topic I've been thinking about for months. But sadly with one big mistake in it.
Your signal is not the Dawes or Rayleigh criterion, it's the Airy disk. The picture you see through an optic is a combination of infinitely many Airy disks. So you need to sample the Airy disk with 2 or 3 pixels, not the Dawes criterion. In fact, if your pixel size is between the Rayleigh criterion (half an Airy disk) and a little bit smaller than the Dawes criterion, then you are well sampled.
In normal photography, if you close the aperture too much you get a blurry image. This is for the same reason, and it starts at a sampling of around 2.5 pixels per Airy disk, becoming more visible at 3 pixels or more.
For the seeing you told it right, because the seeing blurs the stars, so their FWHM can be seen as the Airy disk of a telescope, which you should sample with 2 or 3 pixels for a maximum of detail.
And one thing I should mention: Dawes is the limit. You can't get much more detail than your Dawes limit, even with BlurXTerminator, because you don't get finer information out of your scope. Higher sampling just costs more light.
Best Regards from Germany, Markus
Thanks Markus! This goes counter to the research I did while preparing this video - do you have a source I could check?
Greetings Cuiv and Markus,
Thank you Cuiv for a delightful and instructive presentation.
Be glad, Markus, not sad. It's true the pixel size can be compared to the Airy disk directly (ignoring for now the fact that the disk is circular for typical scopes while pixels are rectangular), each expressed as a linear dimension such as microns. Visualizing pixels overlaying the Airy disk can aid in defining a sampling model.
However, Cuiv correctly used the calculated pixel resolution (not the linear dimension at the focal plane) to compare with the Dawes limit, each expressed in angle of sky view. He also correctly described some of the factors influencing each.
Now here is a curve ball for Cuiv. The linear dimension of the Airy disk at the focal plane is a function of the F# (the ratio of focal length to objective diameter) of the scope. An F# of 10 gives an approximate Airy disk size of 11 micrometers, for example, and an F# of 20 gives approximately a 22 micrometer Airy disk. This may help in understanding why some large observatory scopes can use sensors with relatively large pixels. But with our faster (smaller F#) scopes available to amateurs today, it is cool that we also have sensors with smaller pixels, higher sensitivity, and especially lower read noise.
By the way Cuiv, I am now a proud owner of a QHY5III-715. Your video was partly to blame. Anyway, I enjoyed how you made this topic approachable and even fun.
John Barry,
from smoke filled skies in Tennessee
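The linear Airy disk sizes John quotes above follow from the standard relation d ≈ 2.44 · λ · F# for the first-minimum diameter at the focal plane; the wavelength choice explains why quoted figures differ (~11 µm at F/10 corresponds to blue light near 450 nm, while green light at 550 nm gives about 13.4 µm). A quick sketch:

```python
def airy_disk_diameter_um(f_ratio: float, wavelength_um: float = 0.55) -> float:
    """Airy disk diameter (to the first minimum) at the focal plane,
    in micrometers: d = 2.44 * wavelength * focal ratio."""
    return 2.44 * wavelength_um * f_ratio

print(round(airy_disk_diameter_um(10, 0.45), 1))  # 11.0 um at F/10 in blue light
print(round(airy_disk_diameter_um(20, 0.45), 1))  # 22.0 um at F/20
```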
It has been sunny during the day and cloudy at night here in Santa Fe for so long I may become a solar guy soon! Many things can affect solar imaging, and the comparison with DSO imaging and resolution is interesting.
Have fun going for it!
All dedicated astronomy cameras are amazing, all of them. It’s all about reducing noise, not collecting photons. Keep your camera cold, your pixels small and your exposures short. Then tracking isn’t a critical issue.
Well it is about both still :-) but yes if you reduce the read noise in particular, the manner in which you gather the photons doesn't matter anymore :)
What a fantastically informative video! Thank you. Yet more support for lucky imaging, and a let-off for us colour camera users - we needn't feel quite so inferior after all😊
Yes OSC is easy and fun after all ;)
Great video Cuiv!
Hi Cuiv,
Thank you again for your videos. It's always very helpful to remind the base and the numbers! :)
Concerning guiding, what level of perfection should be achieved? The guiding error adds to the seeing to blur our images, so we want the guiding to be better than the seeing. Does it make sense to aim for an RMS error no bigger than the Dawes limit? Or does it have a noticeable impact on our pictures if we can do twice as well?
Consistent guiding will add to the seeing as total blur = sqrt(seeing squared + guiding RMS squared), assuming the same guiding error on both axes (that's an unlikely assumption though). I'd say if you're going to go for the most details, you should use very short exposures without guiding! Then use FWHM to select the best frames.
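The quadrature sum in the reply above is easy to play with numerically (the example figures are illustrative, not measurements), and it shows why guiding only starts to matter once its RMS approaches the seeing:

```python
import math

def total_blur(seeing_arcsec: float, guiding_rms_arcsec: float) -> float:
    """Combined blur, assuming independent Gaussian-like contributions
    that add in quadrature: sqrt(seeing^2 + guiding^2)."""
    return math.hypot(seeing_arcsec, guiding_rms_arcsec)

# Under 2" seeing, 0.5" RMS guiding inflates the total blur by only ~3%...
print(round(total_blur(2.0, 0.5), 2))  # 2.06
# ...but under 1" seeing the same guiding error costs ~12%:
print(round(total_blur(1.0, 0.5), 2))  # 1.12
```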
Cuiv, great video! Just what I was looking for tonight. I have a request: NINA has a Lucky Imaging plugin, could you possibly investigate and do a video on that for DSOs? Is it a possible solution worthy of use with long focal lengths? I have one scope (12" LX200) that is difficult to not oversample with my 533MC. I have it on an EQ6 with an OAG and it tracks well, but at its limit. We are also looking at imaging with our 24" RC at the local club observatory, and oversampling is a problem with that as well. Thanks!
Great video Cuiv, thanks!
Glad you liked it!
Excellent presentation
Thanks a lot!
One thing about seeing that I still haven't fully understood: in general you want your guiding numbers to be below your pixel scale so you don't get trailing or bloated stars, but if your seeing is for example 2" vs a 0.8"/px scale, would having worse guiding (like 1" or 1.5") actually ruin your subs or not?
That's actually a really difficult topic with no clear answer as it will also depend on the shape of the guide curve, and not just the RMS. A high frequency oscillation of low amplitude that keeps the stars round would be indistinguishable from seeing - I would assume the total impact in that case would be something like sqrt(seeing squared+RMS squared). If you have wider oscillations from time to time causing oblong stars in a direction then it would be fully separate, and lead to loss of details.
@@CuivTheLazyGeek I was assuming a smooth guide curve but yeah it still remains a complex topic. Eventually I'll try with my mak at 2000mm and see if I can get anything good out of it
At last correctly presented theory! Well done.
Glad you enjoyed it!
Nice video. I would like to ask you: assuming that the optical sampling and the choice of sensor are optimal in either case, to see details better do you recommend:
A) using a lens with a large diameter (D) and a high ratio between the focal length and the diameter (f/D),
B) or is a lens with a smaller diameter (D) but a low f/D focal ratio better?
Thanks
Thanks for calling out the impact of the Bayer matrix on resolution. As you said, this rarely comes up. Along the same lines, I wish camera manufacturers would eliminate the term "binning" in conjunction with color sensors. It's pure marketing and completely misrepresents what this "feature" actually does.
Yep, I'm not a big fan of color binning either. It does increase SNR, but at the cost of a lot of resolution
@@CuivTheLazyGeek Oh for sure. But this is really just software-based resampling - not true CCD-style binning. According to ZWO: "Because CMOS cameras all use software BIN during deep space shooting, image post-processing can also complete this process." Given this, I'd rather do it in post.
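For what it's worth, the software "binning" being discussed here amounts to simple block averaging, which anyone can reproduce in post. A minimal NumPy sketch on hypothetical image data (the noise figures are made up for illustration):

```python
import numpy as np

def software_bin2x2(img):
    """Average each 2x2 block of pixels. For uncorrelated noise this
    improves SNR by ~2x, at the cost of half the linear resolution."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]  # drop an odd trailing row/column
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 10, (1000, 1000))  # flat field + Gaussian noise
print(round(noisy.std() / software_bin2x2(noisy).std(), 2))  # ~2.0
```

The ~2x noise reduction is exactly what averaging four independent samples predicts, and since it's pure arithmetic there is no advantage to doing it in-camera rather than in post.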
At 18:38, in the 'CCD Suitability Calculator', why is only the focal length important? At 06:46 you mention that the aperture is the (main?) factor. They're probably closely related and I'm stupid, but I wonder why aperture isn't included in the calculator. Awesome video!
The CCD compatibility calculator is only a very rough indication of image scale as the sensor will see it. As such it only relies on how large the image is (which comes from focal length) and how large each pixel is.
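The image scale the calculator reports is just a small-angle conversion from those two numbers. A quick sketch in Python (the 206.265 constant converts radians to arcseconds for the µm/mm unit mix; the example pixel size is illustrative):

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel, from pixel size in
    micrometers and focal length in millimeters."""
    return 206.265 * pixel_um / focal_mm

# e.g. 3.76 um pixels at 600 mm focal length
print(round(pixel_scale(3.76, 600), 2))  # ~1.29
```

Aperture doesn't appear because it sets the optical resolution (Dawes/Rayleigh limit), not how many arcseconds land on each pixel.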
Great video, very informative, keep them coming buddy
Thank you, interesting! I am over-sampled, can I just shoot with 2x2 binning instead of buying another camera?
So, summarizing all the points to take into account:
If I have a 600mm lens (not a telescope) with a 95mm diameter, a QHY5III715C camera (same as yours), and a 2x Barlow lens, my resolution would be 0.25" per pixel, and my Dawes limit is 1.22 arcseconds. Do you think that if I do lucky imaging for planetary and use deconvolution (in this case the wavelet-regularized Van Cittert algorithm recommended for planetary in PixInsight), I could get the most out of my equipment? Or would I really, really need a wider telescope?
I'm asking only for lunar and planetary, since for deep-sky objects I have a camera with much bigger pixels and smaller lenses :D
Thanks in advance for your insight and opinion!!
Well, you're oversampled, so you'll get the most out of your equipment in terms of resolution, but not in terms of SNR!
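To make the numbers in this exchange concrete: the empirical Dawes limit is roughly 116/D arcseconds for an aperture D in millimeters, and the quoted 0.25"/pixel figure comes out if one assumes ~1.45 µm pixels for the 715C behind the 2x Barlow (both values assumed here, not stated in the video):

```python
def dawes_limit(aperture_mm):
    """Empirical Dawes limit in arcseconds for an aperture in mm."""
    return 116.0 / aperture_mm

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

# 95 mm aperture, 600 mm focal length doubled by a 2x Barlow,
# assuming ~1.45 um pixels for the QHY5III715C
print(round(dawes_limit(95), 2))             # ~1.22
print(round(pixel_scale(1.45, 2 * 600), 2))  # ~0.25
```

At roughly five pixels per Dawes limit, the setup is indeed oversampled, which matches the reply: no resolution left on the table, but each pixel collects less light.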
@6:34 LOL I had to watch that part twice.
My gf walked by and asked what I was watching when she heard that.
Airy disks touching are definitely very erotic lol
For visual use, people have always told me my 20" scope should be more affected by seeing than their smaller scopes. But side by side it has never proven true. You are still only seeing light from the same angular size of the sky; you just 'catch' more of it. Maybe imaging is different because of the sensitivity of the cameras, but I'm not sure the old trope that smaller is better for seeing holds true in a practical sense.
Look up the Fried parameter!!
Will DeepSkyStacker automatically select the "lucky" (better) images when stacking, or are there specific settings that need to be set? I'm referring to timestamp 21:33, where you say to use short exposures and exclude the images with poor detail. Thank you!
For me the question is: within a city (like Bortle 7), using a 130mm Newtonian, should I do short exposures (15-30s), increase ISO/gain, and take many more images to get details, or take fewer, longer exposures at lower gain?
It really depends on what you want to achieve. You always want to be sure you are overwhelming your read noise (I have some old videos on the topic), which is unfortunately easy to do in light-polluted areas. For lucky imaging you'd need 5s or shorter (on my systems in Tokyo that still overwhelms read noise), but the easiest is to do longer exposures so stacking will be easier, etc.
Very well described! Nice work
Glad it was helpful!
I was wondering if you could explain something about the Nyquist theorem. I understand you need a sample rate 2x the measured frequency, so for a 1000Hz signal you need a 2000Hz sample rate. However, what if the wave is not a perfect sine wave? For example, a sine wave has a value of 0 at 0 degrees, 1 at 90, 0 at 180, and -1 at 270, so for one complete cycle we have 4 indisputable sample values. However, a triangle wave could share those same 4 values. I never understood that, especially in relation to audio and the 44.1kHz sample rate chosen for CDs. I'm not sure we could see a difference if light were transmitted as a triangle wave, but a 1000Hz sine wave sounds very different from a 1000Hz triangle wave. Thank you!
Ah, but we're not talking about A frequency of the signal, we're talking about the maximum frequency of the signal. Take a signal, perform a Fourier transform on it, and that gives you its component frequencies. Take the maximum of those, and you need a 2x sample rate to fully represent the signal.
A perfect triangle wave like you describe has an unbounded maximum frequency (because of the sharp corners), so you can't really sample it enough to capture it fully!
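The point about the triangle wave can be checked numerically: its spectrum has energy at every odd harmonic, with amplitudes falling off as 1/n², so its bandwidth is unbounded. A small NumPy sketch (parameters chosen for illustration):

```python
import numpy as np

fs = 1_000_000             # 1 MHz sample rate, one second of signal
t = np.arange(fs) / fs
f0 = 1000                  # 1 kHz triangle wave, peak at t = 0
tri = 2 * np.abs(2 * ((t * f0) % 1) - 1) - 1

spec = np.abs(np.fft.rfft(tri)) / len(t)  # bin k corresponds to k Hz
h1, h3 = spec[1000], spec[3000]           # fundamental and 3rd harmonic
print(round(h1 / h3, 1))  # ~9.0: odd harmonics scale as 1/n^2
```

A band-limited recording such as 44.1 kHz CD audio simply discards the harmonics above its Nyquist frequency, which is why a sampled "triangle" is never quite the ideal one, though for a 1 kHz tone the first 22 harmonics survive and the ear hears it as a triangle wave.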
Great video, and thanks for the detailed explanations.
No problem 👍
Am I correct in interpreting these results: with my ASI1600MM attached to a C6, I'm good when seeing is OK and slightly oversampled when seeing is poor. But since I'm over the Dawes limit (0.83 vs 0.77), seeing would have to be very poor before it would make a difference for me?
Are the Dawes limit and pixel size the reason we can't see or take pictures of the lunar module from Earth, even with big telescopes?
Great video! I have the same Quattro 150P with a 0.95x coma corrector and a 294MC Pro, so my resolution is 1.66". Following your advice, it seems that I'm undersampling a lot! Am I correct? Is it worth changing the camera for one with smaller pixels, like the 183 (2.4 micron)?
You are undersampling vs the telescope, but not sure vs the seeing. Also, undersampling isn't a bad thing in and of itself! It just means you're not capturing as many details as you could - that's it. Not a big deal!
Great video Cuiv as always! One thing you didn't mention was adaptive optics used by bigger telescopes in Chile to combat seeing which is not available to us. How does that work?
It works with pneumatic actuators that deform the secondary mirror in the opposite way to how the atmosphere blurs the image. This is done by measuring the distortion of a star's Airy disc (the blur caused by the atmosphere) at very high frequency and sending that correction to the actuators in the secondary mirror (yes, very high tech). Cheers.
Already answered better than I ever could! Fun fact, the adaptive optics "guiding" star can be an artificial star image made via laser (in the case there is no neighboring star bright enough to serve the purpose!)
@@CuivTheLazyGeek Thanks! Oh yes, that's the photogenic yellow laser that you usually see in long-exposure nightscapes of the telescopes.
Nice video Cuiv.
Thank you!
Now wait, you said I should be trying to get a resolution of half the Dawes limit, did I hear that right?
Very informative, thank you! But I have a question from my old, partially fossilized brain: so what if you're oversampled? Where I live (Maryland, USA), it's always poor seeing. I get excited when average seeing is predicted. Actually, I get ecstatic when we get a forecast of clear skies at all. I haven't been able to get my scope (Askar V, thanks to your two videos) out for over a month…
In a system like the Dragonfly telescope array, do you calculate the aperture as a single lens or as the combined total of all the lenses?
I think that's the point of the telescope (probably some variation of the drizzling method), but not sure!
"I'm not an expert..." he says... My eyes roll, my lips stick out going brbrbrbrbrrr while my finger brushes up and down against them. 🙄😜 I have this video downloaded and marked to watch later. 👍👍
Hahahaha thank you!
Thank you for a great video! Lets lucky image!
Great video Cuiv! You should come to visit me some time at the chilean mountains, Cheers.
Oh man that sounds awesome!
Interesting to know if it's possible to use BlurXTerminator in microscopy as well.
Possible, yes. Good results, maybe not?
The ai was trained for features specific to telescope aberrations, with specific point spread functions. Don't know how different the psf is for microscopes.
You can try, but in theory it shouldn't work, as it tries to deduce the PSF from star shapes.
Rainy season, lol. Cuiv, I live in Scotland, so every season is the rainy season lol. My main problem is that at 56N the long summer twilight prevents imaging until late August. Still, I don't need to deal with the Tokyo light pollution, I feel for you man!! Great video.
You must be looking forward to both clear skies and dark nights, hang in there!!
Great!
excellent
Resolution and aperture go hand in hand... I wonder how many people have bought cameras that they didn't 'size' for their scope setup because they just had to have the newfangled camera that just came out...
It's not a must though! Great results can be achieved undersampled!
Waiting for the video series "Creating a NINA plugin to collimate newton telescopes" 😅
Ha! I wish!
Hahaha "I'm not an expert" said the expert!
About sampling: what Bortle is OK?
Bortle doesn't affect sampling, but it does affect how quickly you can overwhelm read noise with LP shot noise, and thus how short your exposures should be (so how much you could freeze seeing!)
@CuivTheLazyGeek I'm in Bortle 6, and with narrowband filters I do 300 sec with my William Optics and EdgeHD 8. I've got a ZWO ASI294MC Pro.
Nice one!
Thank you! Cheers!
👍👍
"Tsuyu no kisetsu" (梅雨の季節): "the rainy season"
Exactly!
If you're not an expert....damnnnnnn
Thanks, that's high praise :)
Huh?
Ha?