Thank you for setting the standard for educational YouTube videos in astronomy.
Haha. Not sure about that. But I do like digging into the numbers. Challenging hobby we have here!
This is a wonderful exposition of resolution - a topic often overlooked with respect only to focal length and focal ratio. Well done indeed.
Thanks for watching a video on such a boring topic!
@@Aero19612 On the contrary!
Wow, and here I was dialing everything in. I had grasped about a fifth of this; time to do some more analysis. Always something to learn, which keeps it fresh and challenging.
Because of videos like these I have come so very far these past two years, and still I find myself in awe of what I can achieve.
This is a great video, and the link now resides on my desktop.
Thank you very much.👍
Thanks for watching! Glad you found it useful. Absolutely: soooo much to learn. That’s just one of the reasons I like this hobby. That, and being a glutton for punishment. Clear skies!
You only need to achieve Nyquist based on atmospherics, which is seldom better than 2 arc-sec. As your data shows, for deep space astrophotography any scope over 4" aperture is a waste of money unless you live in an exceptional location. I wish I had bought an ED102 rather than an EdgeHD 8". I appreciate your analytical treatment. There are too many YouTube "astrophotographers" intent on hawking the latest gadget rather than offering real guidance.
Thank you for putting together such a well thought out and well described video. It's the first discussion I've seen which links seeing conditions, pixel size, FWHM and guiding performance in one place!
Thanks for watching!
I always come back to your videos! You are my reference in astrophotography!
Great, Kayed!
Nice breakdown of what should be base optical knowledge for most of us.
Always learn something from your videos!
Thanks for watching again, Douglas!
I have the EdgeHD 8” with focal reducer and this convinced me that binning 2x2 is the perfect combination! This is so extremely informative and you explain it so well! Thanks again and clear skies from Sweden! /Filip
Great! I hope it works out for you, Filip. Good luck!
Thanks for the analysis. This was extremely informative.
Glad it helped, John! Thanks for watching!
Very nice discussion. I'm currently using a RASA 11 and ZWO 6200, so I'm interested in running it through binning and seeing the effect on FWHM.
Thanks
Thanks for watching, Bob! You might run some of your 1x1 binned images through Pixinsight’s SubframeSelector process to see what FWHM you’re getting (in units of arcsec). If your RASA were in space, the best possible FWHM = 0.42 arcsec. Depending upon your mount and your seeing, your “real” FWHM may be around 2 arcsec. At 615 mm focal length, your bin-1x1 pixel scale is around 1.26 arcsec/pixel. So your Nyquist limit is twice the pixel scale at 2.52 arcsec. If you are truly getting 2 arcsec when not binning, then you should stay at 1x1 bin mode. If the vast majority of 1x1 binned images come in at more than 2.5 arcsec for FWHM, then binning is a good choice. Never hurts to stay at bin 1x1 if you can handle the larger file sizes and longer processing time. Good luck!
The graph with the moving pixel array really helped. Thanks James, gonna try my 533 in 2x2 with the 9.25 Edge/focal reducer.
Great! Glad it helped.
Packed with in-depth information. I'll probably have to re-watch it a couple of times to really connect all the dots you've laid out here. Thanks for sharing.
Haha. Rewatching is free! Thanks for watching the first time.
This is excellent, so well put together… thanks for doing this 👍🏻
Thanks for watching, Olly!
Another great video James! Congrats on the 294MM PRO as well. I love mine, it's super versatile. I wish they would make a 16bit 4/3 sensor similar to this one.
Thanks, Joe. The 294 just arrived about the time your comment did (a "railroad mechanical problem" - whatever that is - delayed it a couple of days). I think "versatile" is a good description: a smaller pixel size than the 1600 and, in 2x2 mode, a pixel size a little larger than the 1600. Yet it has the same sensor size and setback as the 1600 so I don't need larger filters or have to change the back focus. Plug and play.
Excellent followup to your previous video. You've nailed the primary reasons I went with the 294MM last year: 2.315um pixels in its native mode pair up very well with my GT81. I really liked the versatility this offers. The biggest drawback is the sub size. In that bin1 mode, you've got 47MP subs that are just about 93MB each. With PixInsight, the calibrated subs are 187MB. So, hope you've got a decently fast processing box and a bunch of storage! One of the other areas you sacrifice is bit depth. In the bin1 mode, you've got a 12 bit ADC. Also, full well depth is considerably lower... so you're typically taking more, shorter subs. See my previous comment about processing power and storage :).
I'm excited about the 294. The smaller pixels at 1x1 and the larger "pixel" at 2x2 relative to the 3.8um of the 1600 presents options. And then, as you say, the 14-bit ADC in 2x2 mode is interesting. I want to experiment with Lum at 1x1 and RGB at 2x2, then upscale the RGB to match the 1x1 of the Lum. A lot more to learn...
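For anyone curious what that experiment looks like in practice, here is a rough sketch (my own, not from the video) of upscaling a 2x2-binned RGB stack by a factor of two so it lands on the same pixel grid as the unbinned Lum. The array shape is a placeholder, and a nearest-neighbour repeat stands in for whatever resampling tool you actually use (e.g., PixInsight's resampling processes).

```python
# Rough sketch only: 2x upscale of a binned RGB master to match an unbinned Lum frame.
import numpy as np

rgb_binned = np.random.rand(100, 150, 3)                    # stand-in for a 2x2-binned RGB master
rgb_upscaled = rgb_binned.repeat(2, axis=0).repeat(2, axis=1)
print(rgb_upscaled.shape)                                    # (200, 300, 3): same grid as the 1x1 Lum
```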
That's a very interesting idea; I'd never thought to plot PHD2 guiding error versus FWHM.
Thanks for watching, Andrew!
My new 294MM should arrive Thursday. First time in mono realm for me too. Clear skies!
Great! I've been doing mono for a while-you'll appreciate the flexibility. Good luck with your 294. I'm looking forward to seeing the difference between it and my 1600.
Thank you very much for making this video. Very informative. I have been scratching my head on this topic over the last few weeks. The question I keep having: "If I have 2 camera/telescope setups, one 203mm aperture and one 80mm aperture. They both have similar FOV and both have similar resolutions. The 80mm has a faster f ratio by about one stop. Everyone keeps telling me that the 203mm scope is the better scope to use because of etendue." So many variables. LOL
Hey, Jim. In theory, you should get better resolution with the 203-mm scope IF your pixels are small enough and your seeing/guiding is good enough. Which camera(s) are you pairing with these scopes?
@@Aero19612 So with the 80mm Takahashi I am using the ASI183MM to get a tighter FOV, and with the 203mm (EdgeHD 8) I am using an ASI6200MM.
Hi James. Great video. I have recently tried my C9.25 without the 0.7x reducer and I will tell you that guide stars (OAG) are a nightmare to find. With the reducer (and the ASI174MM) I have never had any difficulty guiding. I know this is a different issue, but I had never imaged without the reducer. Both of my cameras have 3.8um pixels. My ASI2600MC has a 16-bit ADC and is much nicer than the 12-bit ADC of my ASI533 (although both are very nice). I have had issues binning my ASI2600MC which I believe are related to the OSC Bayer pattern. Thanks again. Bob
Hey Bob
Yep. I think the reducer is going to be a permanent attachment for my C9.25 from now on. With my seeing, I’m getting no benefit from that aperture+magnification. Might as well get some other benefits from the reduced focal length. How does binning work with an OSC? Is it 2x2 of a 4-pixel Bayer matrix? It may be best to shoot 1x1, debayer, then bin 2x2 the debayered stack.
@@Aero19612 Hi James. I am not sure of the details of 2x2 binning for OSC. I tried it a couple of times and had very frustrating results (although my ever-decreasing memory has caused me to forget exactly what the frustrations were). In one of Quiv TLG's videos, he suggested that binning (2x2, 3x3...) of CMOS sensors was best left to post-processing. He did not elaborate or give references. Unfortunately for me, my primary reason for wanting to use higher binning was to ease my storage problems during post-processing. PS I suspect you will have few problems finding guide stars with your reducer, especially with the 174MM (which I will still say was one of the most helpful hints I got from you). Thanks. Bob
Great video. I had a feeling camera selection was very important in getting the most out of your scope and overall system. I was about to buy an ASI533MC today, but heard they are releasing an ASI533MM soon. I guess I'll wait since I have heard mono cameras are more sensitive. Great video! I always learn so much from your content! Thank you.
Interesting. Yes, mono gives you better results at the expense of more "pieces" (e.g., filters and filter wheel) and longer processing time. But well worth it.
I like your analytical approach, please keep them up...
Re: binning the 1600. I'm not sure I see the benefit at a FL < 1400mm, apart from limiting the resulting image to something less than is possible with your setup. Is it perhaps a better approach to keep 1x1 binning and allow the image selection and stacking process to make the best of the varying quality? The better subs will hopefully be selected and weighted more highly, resulting in an image resolution somewhere between the 1x and 2x binning modes. (FYI - I image with a 250mm/1200mm Newt and the ASI1600. Seeing at my location is very variable; I can have total RMS anywhere from 0.4 to over 1.2 in an evening session. Since clear skies are rare here I take what I can get.)
Thanks for watching, Paul. I agree. The binning helps for the C9.25 with and without the focal reducer (focal lengths of 1460 mm and 2280 mm). I need small pixels for the other scopes: 250 mm, 385 mm, and 700 mm.
Great video James, lots of food for thought. I just checked my TS106 (700mm FL) and 8” EdgeHD with reducer (1474mm FL) and, similar to you, the FWHM was 2.2 for the 106 and 2.4 for the EdgeHD (both with an ASI2600MC). But, in pixels, obviously the EdgeHD has more pixels per FWHM than the 106 so, after deconvolution, there is still way more detail and better resolution with the EdgeHD than the triplet.
This is not to disagree at all with your excellent analysis on optimising the optical train, just to add a further consideration as the resolution can be improved more with the higher pixels/FWHM on the EdgeHD than the resolution improvement on the 106mm aperture (I think!)
Thanks for the feedback, James. Interesting. A good thread to pull: "what do we mean when we talk about resolution?" If you look at individual stars, the TS106 will show blockier stars, whereas the Edge will show a smoother roll-off of star brightness. At the limit of resolution where we look for the dip in brightness between 2 closely spaced stars, you'll see a coarse bright-less bright-bright variation with the TS106 and maybe a more gradual variation with the Edge. But can you truly see more detail? Are you dithering/drizzle? Maybe it's difficult to separate blocky vs smooth from actual detail size in our interpretation of "resolution". I don't know the answer. I like your findings though! Gives me a reason to keep using my C9.25! Thanks for watching!
So this is very useful information. Very nicely described.
I have a redcat 51 which I'm struggling with in sharpness.
I'm in Bortle 6 to 7 and use an ASI533MC Pro.
Recently I decided not to go monochrome as our skies are cloudy too often to have much time to deal with so many filters etc.
So, I purchased the best filters I could - Altair 4nm Ha-OIII and SII-OIII combos.
I'm using an AM3 mount and getting between .4 to .7 guiding.
I have always felt that the 533 is wrong for the RedCat due to pixel size - although 99% of the web seems to say it's the best combination for the price.
I am now thinking I went the wrong route with the filters and should have made the move to monochrome at this point but - oh well - money is now spent!
The question is, do you think there is any benefit in getting even smaller pixels than the 294 for the redcat 51 given the poor conditions i have? I know some have tried planetary cameras (not cooled) and had some reasonable success but I'm concerned that the lack of cooling and sub length will not suit my poor conditions.
Thanks for watching! And sorry for the late reply. Looks like you're getting good guiding performance out of your AM5. Your ASI533 is fine. Instead of spending more money right now, I would suggest that you dither and use drizzle integration. This technique reduces noise AND effectively cuts your pixel size in half for better resolution. Let me know how things go!
Thanks for responding... I have always employed dithering and drizzle but possibly I'm not dithering enough. I'm using an ASIAir plus and dithering by 1 pixel. I did a calculation at some point recently which suggested I should dither by 3 ish pixels but I have a choice of 1,2 or 5 I think. I recently set it to dither at 2 pixels and will see if I have any improvement. Any ideas if there is much of a difference between dithering settings?
I did get a nice hour or two the other day between clouds and moon and noticed a dramatic improvement so possibly I'm just struggling with poor seeing etc.
It's early days on the new filters so I think I'll concentrate this year on enjoying the new mount and filters plus getting much more data per target.
Many thanks for sharing your knowledge!
I should add that after your advice on the dithering, I reviewed my dithering calculation:
Main scope & camera:
Redcat 51 250mm focal length
ASI533mc pro (3.76um pixels)
Focal ratio (from astronomy tools calculator) = 6.4
Guide scope & camera
ZWO f/4, 120mm focal length
ASI120MM camera (3.75um pixels)
Focal ratio (from astronomy tools calculator) = 6.45
The assumption, based on the technical data, is that the two cameras have the same (almost) focal ratio so to dither by my desired amount on the main scope, I should dither by the same amount on the guide scope.
I'm going to assume that I should dither by about 10 pixels - I've no idea why (more research is needed!) - I'm blindly following the web on this magical figure!
This is way different from the 1 pixel I was dithering by originally so I will change to that setting now.
I guess the next question (if anyone knows) is... If you change the dithering amount during your capturing of data for an object, can you still use all the existing data or should you start again? I'm using PixInsight to process the data with the weighted batch script.
Once again, many thanks for your video - I'd probably have blindly continued dithering at the wrong rate without it! I'll be bookmarking this video for future reference in case I can ever afford to build a new rig!
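For reference, here is a rough sketch (mine, not from the video) of the pixel-scale relationship that is usually used to convert a dither amount between the guide camera and the main camera. The numbers are taken from the comment above, and the assumption that the dither setting is applied in guide-camera pixels is mine; treat the result as illustration only.

```python
# Rough sketch: how many guide-camera pixels correspond to a desired dither
# on the main camera, via the two pixel scales. Values are from the comment
# above; the unit convention for the dither setting is an assumption.
def pixel_scale(pixel_um, focal_mm):
    """Pixel scale in arc-sec/pixel."""
    return 206.265 * pixel_um / focal_mm

main_scale  = pixel_scale(3.76, 250)   # RedCat 51 + ASI533: ~3.10 "/px
guide_scale = pixel_scale(3.75, 120)   # 120 mm guide scope + ASI120: ~6.45 "/px

desired_main_px = 3                    # desired shift on the main sensor, in main-camera pixels
guide_px = desired_main_px * main_scale / guide_scale
print(f"~{guide_px:.1f} guide-camera pixels gives ~{desired_main_px} main-camera pixels of dither")
```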
Thanks James! Your first principles approach to this hobby is SO helpful! I watched the first video many times trying to decipher how you created the summary plots at the end that show the knee in the curves and the guiding error band (Observable FWHM vs PHD2 Total Error). I'd greatly appreciate if you could explain how I can reproduce these plots for my scopes and cameras.
Thanks for watching these "boring" videos! I've got to work on an executable version of the program I wrote to do this and maybe upload a description of the analysis approach.
Far from it James! Your stuff is fascinating and a real service to those looking for a deeper understanding. Appreciate you very much.
Hi James, seems like your seeing as judged by FWHM is similar to mine here in the UK. I agree that generally a pixel scale of 1 arc sec/pixel gives ideal sampling, but I could counter and say you may be limiting resolution for those excellent, but infrequent nights. I also have a few scopes, but will take my Esprit 150 as an example. It has a FL of 1040, so with an ASI1600, the scale is 0.74 arc/pixel with a best resolution of 1.5 arc secs. In April 2020 when everyone was locked down, we had glorious weather with many clear skies. Because it was galaxy season, I used my ASI183 with 2.4um pixels and a scale of 0.48 arc sec/pixel and a best resolution of near on 1.0 arc secs. Maybe because there were no planes and minimal pollutants (nice guess?) the atmosphere was very steady and I achieved 1.1 arc sec FWHM, the best resolved image I have ever achieved. I am so proud of this, yet it would have been missed with the ASI1600. So my motto is that it is better to over sample in the hope you may get an excellent night with good seeing and if it is more typically average, you can bin in post to make the image look more pleasing and nothing is lost.
Yep. I could certainly acquire the data at Bin1x1 and do the down-sample in Pixinsight. That would preserve the option for higher resolution in case of a miracle...or, you know, another plague. Thanks for watching, Adrian!
A great v2 of the original video James. Have you given any further thought to if Drizzling changes any of the conclusions made in this analysis?
Hey Kyle. A bit. Dither/drizzle is like introducing pixels between the pixels you have (plus the noise reduction benefit). So, from a resolution perspective, dither/drizzle should help when your system is Nyquist limited (i.e., your pixels are too large - like my RC 51 and GT 81 in the video). If your system is Dawes (aperture/diffraction) limited, dither/drizzle may not improve resolution (but you still get the noise reduction benefit). For example, I have tons of pixels in the C9.25, but they don't help because seeing limits the FWHM to 2 arc-sec. That's my story...today
Excellent content, James.
Thanks for watching, Bill!
Thanks for creating this updated video, James! It was really insightful to me. Would be interesting to compare the options for my 8” EdgeHD which I am pairing with my ASI294MC Pro at the moment. Are you planning to share an excel-spreadsheet or a website tool in the future so your followers can try it on their setup?
Hi Rudi
Local seeing is a huge variable in long focal length scope performance. I think the best thing you can do is go back to your data and let Pixinsight (or ASTAP) measure the FWHM for your subs (in arc-sec units) like I show at 9:30 in the video. If your best FWHM = 2 arc-sec, then target a pixel scale between 0.6 arc-sec/pixel and 1 arc-sec/pixel. If your best FWHM = 1 arc-sec, then target a pixel scale between 0.3 arc-sec/pixel and 0.5 arc-sec/pixel. With an 8" EdgeHD (2032 mm focal length and 4.65 um pixels) and no focal reducer, you have 0.47 arc-sec/pixel. Looks like a very good match if your FWHM is between 1 arc-sec and 2 arc-sec. If you are consistently near 1 arc-sec, then maybe shell out for an ASI2600 with 3.8 um pixels, but that's a lot of $'s. Thanks for watching!
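As a quick aid, here is a small sketch (not from the video) that restates the rule of thumb in this reply: target a pixel scale of roughly 0.3x to 0.5x your best measured FWHM. The EdgeHD/ASI294MC numbers are the ones quoted above.

```python
# Restating the rule of thumb: target pixel scale of ~0.3-0.5 x best FWHM.
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm          # arc-sec per pixel

def target_scale_range(best_fwhm_arcsec):
    return 0.3 * best_fwhm_arcsec, 0.5 * best_fwhm_arcsec

lo, hi = target_scale_range(2.0)                  # best measured FWHM of 2 arc-sec
scale = pixel_scale(4.65, 2032)                   # 8" EdgeHD + ASI294MC, no reducer
print(f"target: {lo:.2f}-{hi:.2f} \"/px, actual: {scale:.2f} \"/px")
```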
@@Aero19612 thanks a lot for crunching the numbers for me. Very kind!!!
I’ll rewatch the video again tonight to let it settle in. How did you get the curves with the “shoulders” you go through towards the end of the video?
The "shoulder" curves are an analysis I set up in Python. A bit too involved for Excel. Also, the big variable in that analysis is the relationship between guiding and seeing. Seeing is the real issue. So I hate to make that Python code available without doing more work to establish a relationship between seeing and guiding. That's why I point to the 9:30 mark in the video. That FWHM assessment is the BIG variable and that's the best way to characterize it for your system and your location.
This video seems like an honest explanation, but I stopped at 1:18... I would love to continue on this topic (Dawes vs Nyquist) but first I wanted to ask: what do you mean by "d"? The diameter of the telescope lens, the diameter of the entrance pupil, or something else?
Currently I also use optical systems (lenses and apertures) for photography at remote distances, so I asked this question!
Regards
Yes, "d" is the diameter of the aperture. My RedCat 51 has a 51 mm diameter lens, so d = 51 mm. My C9.25 SCT has a primary mirror diameter of 9.25 in (235 mm), so d = 235 mm for that scope.
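For anyone who wants the numbers, here is a small sketch (mine) of the diffraction limits that depend on this d: the Rayleigh criterion 1.22*lambda/d and the Airy-core FWHM of roughly 1.03*lambda/d, which is consistent with the "best possible FWHM" figures quoted elsewhere in this thread. The 550 nm wavelength is an assumption.

```python
# Diffraction limits from aperture diameter d (sketch; 550 nm assumed).
import math

RAD_TO_ARCSEC = 180 / math.pi * 3600

def rayleigh_arcsec(d_mm, wavelength_nm=550):
    return 1.22 * wavelength_nm * 1e-9 / (d_mm * 1e-3) * RAD_TO_ARCSEC

def diffraction_fwhm_arcsec(d_mm, wavelength_nm=550):
    return 1.03 * wavelength_nm * 1e-9 / (d_mm * 1e-3) * RAD_TO_ARCSEC

for name, d in [("RedCat 51", 51), ("C9.25", 235)]:
    print(f"{name}: Rayleigh {rayleigh_arcsec(d):.2f}\", best FWHM {diffraction_fwhm_arcsec(d):.2f}\"")
```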
Love your work James!
Thanks, Simon!
I was pulling my hair out trying to understand this, until someone on CN pointed me to this video! Amazing job, thanks a million! Subscribed!
PS. 533M/MM vs 294MM for the RedCat 51? 😅
Hey Divye, thanks for watching and subscribing! The 533 has 3.76um pixels, a 14-bit ADC, and a smaller sensor size (smaller field of view). I've decided to move from my 1600MM (3.8um, 12-bit ADC) to the 294 for the smaller pixel size (Nyquist made me do it!). The 294 also has the same sensor size as the 1600, so I won't give up FoV.
So, there are many trade-offs. You may not care about FoV with the Redcat. You might check Stellarium with those two cameras to see whether the 533's FoV allows you to fit the targets you're interested in. If it's spatial resolution you want, go with the 294; if it's tonal resolution, go with the 533. Good luck!
Terrific video! I get different FWHM from Sharpcap (during focusing) and Pixinsight (during processing). Sharpcap numbers depend heavily on exposure time. Which one is correct? How do you take into account exposure time?
You should get different FWHM for different exposure times. The shorter the exposure, the lesser the effect of seeing, so FWHM should be smaller. As exposure time increases toward the durations we typically use for imaging, seeing has more time to "move the star" around on the sensor, so FWHM should be larger.
@@Aero19612 Great! Do you calculate FWHM from pictures at the same scale? For example, you would need to magnify the 102 scope's picture to the scale of the 9.25” scope's pictures, then calculate FWHM. Pictures taken with the 100 Esprit don't come even close to old C8 pictures when the scale of the pictures is the same.
People are often surprised that I don't want to get a refractor, despite being a beginner. This is the main reason. Pictures look great until magnified. Then they fall apart.
Very interesting analysis, James. In the end, it shows that not necessarily the biggest aperture will help to gather the best resolution, but nevertheless with the biggest aperture you'll get the most possible signal which might help to reach the faint details of a DSO.
Any thoughts on that?
I also have a C9.25 (Edge).
Regards,
Alfredo
Hey Alfredo. Well, you should first use an image analysis tool like Pixinsight or ASTAP to measure your actual FWHM. Maybe your seeing is much better than mine and you can actually get an advantage with your C9.25 that I can’t. If your system performs like mine, perhaps try the focal reducer (drops the F ratio to 6.3ish compared to the ED102 at 6.9 = small benefit) and binning the sensor (provides tonal resolution you can’t get at 1x1). I’m definitely going to try the FR+2x2 with my SCT. For all the trouble of imaging with the SCT, I’d like to think I’m getting some image benefits. Geez. Thanks for watching!
Sure, your video encourages me to do such an analysis with the EdgeHD at f10 and Hyperstar. It will be worth seeing the results.
Great presentation but a question, at 7:31 and thereafter, you display the Nyquist Limits, referring to the camera pixel size but I don't see where you are getting your numbers from. Can you clarify?
I’ll try. There are two limits to resolution in our images: (1) FWHM from Pixinsight and (2) pixelation from our cameras, or the Nyquist limit. Take the case of the Redcat (250 mm focal length paired with 3.8 micrometer pixels in the camera). The pixel scale in arc-sec is
(3.8/1000/250)*(180/pi)*3600 = 3.14 arc-sec/pixel. But we need 2 pixels to resolve a feature, so we have a Nyquist limit to the resolution of
2*3.135 = 6.27 arc-sec. So my scope/camera combo effectively limits me to a resolution of 6.27 arc-sec while seeing is giving me about 5ish arc-sec. If I want more resolution, then I have two options: (1) dither during imaging and perform drizzle integration or (2) get a camera with smaller pixels to pair with this short focal length scope. A later video shows what I’m getting using the ASI294MM (smaller pixels) and this scope. Hope that helps a bit! Thanks for watching!
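Here is the same arithmetic as a tiny script, simply restating the formula above for the Redcat numbers:

```python
# Pixel scale and Nyquist limit for the Redcat 51 (250 mm FL, 3.8 um pixels).
import math

pixel_um, focal_mm = 3.8, 250
pixel_scale = (pixel_um / 1000 / focal_mm) * (180 / math.pi) * 3600   # arc-sec/pixel
nyquist = 2 * pixel_scale                                             # smallest resolvable FWHM

print(f"pixel scale = {pixel_scale:.2f} \"/px, Nyquist limit = {nyquist:.2f}\"")
# -> pixel scale = 3.14 "/px, Nyquist limit = 6.27"
```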
@@Aero19612 Aah, okay got it thanks so much.
One more question if you don't mind. Starting at 13:39 you introduce a sequence of graphs where the observable FWHM is plotted against RMS error, with the lines being flat with steps. In the case of the Redcat (the most simple), this initial flat value at 3.14 for 0 error is logical due to the pixel size relationship. But it stays flat until 2.0 then suddenly kicks up, so that by 2.25 arc-sec error the FWHM value is up over 9, and again reverts to a flat line. I understand the principle of the graphed line, but don't quite get where those profiles are being generated from. This is skipped over in the video.
My initial thinking is that, yes, I expect to see a flat line at the bottom, based on each scope and camera, but once guiding error becomes excessive, then I'd expect to see a continual, almost linear ramping up of FWHM values.
I know this is an old post, but after watching it when it first came out it made me start to question the utility of my 8" EDGE w/.7 reducer vs. my AT115EDT w/.8 reducer. They are both used with my ASI2600MM camera, and going by this video, binning 2x2 with the EDGE at 1400mm focal length gave me a 1.09 arcsec/pixel resolution. The AT115EDT at 644mm focal length and no binning gave me a 1.2 arcsec/pixel resolution. They are so close I thought, what is the point of the SCT for DSOs? The 115EDT is faster (f5.6 vs f7), a little lighter, easier to guide (guide stars no problem), and gives a much bigger field of view. I can crop down by 50% at the end before I get down to what the 2x2 binning gave me from the start on the SCT. I just shot the whole galaxy season on the 115 because I could come up with no good reason to use the SCT. SURELY I am missing something... If you are still reading comments from this old (and excellent) video, I'd like to know where my thinking went off track. Thanks.
Hi Jeff. First thing to note is that just because you have a pixel scale of 1.2 arc-sec/pixel, doesn’t mean your scope+seeing can provide that. At best, your 115EDT will provide a best-possible FWHM = 1.02 arc-sec. That’s if it’s in space (if it were, you would need 0.51 arc-sec/pixel to truly see that resolution in a photograph per the Nyquist limit). Seeing will double that or more. Run your subframes through Pixinsight’s SubFrameSelector and have it calculate the FWHM for each image. Get the lowest value and divide by 2. That is the pixel scale you need to get the resolution that the seeing is giving you. I bet your 115EDT is about right on the money.
For your 8”, the best possible FWHM is about 0.57 arcsec. And then seeing takes over. Would be nice to see what FWHM you get from Pixinsight when not binning. If that FWHM is greater than 2x1.09 (Nyquist) = 2.18 arcsec, then binning is the way to go and your binned imaging isn’t costing you any resolution.
I bet these two scopes are very close in terms of true image resolution. The 8” scope will provide larger images in the field of view, but not necessarily better resolution. You will just have better TONAL resolution of an equally blurred Image.
So, to summarize, I don’t think you’re missing anything, haha.
@@Aero19612 I really appreciate you reaching back to your old post to answer my question. When you say better TONAL resolution are you saying that the light averaged from the 4 (2x2) pixels vs the 1 pixel is likely to give me a more precise /accurate shade of gray or do you mean something else by "TONAL"?
Lol, again falling in the rabbit hole.
The 294 is a good camera and I'm pretty sure you will like it.
Again, great video.
Thanks for watching, David! I'm looking forward to playing with the 294.
Hello James, thanks for the informative video. I find binning confusing, as well as sampling. The ASI294MM camera has pixels of 4.63 microns (larger than the ASI1600, which are 3.8 microns). So are your calculations correct with the ASI294 with a pixel size of 2.xx microns?
Hey Bill. The ASI294MM has 2.315 um pixels and, for the monochrome sensor, ZWO provides an "unlocked" mode that allows you to image in 1x1 mode. There is no such option for the color camera version. Also - and this is also confusing - the sensor ZWO uses for the 294MC is different from the sensor used for the 294MM. When ZWO first released the ASI294MM, they did not allow users to use the camera in 1x1 mode, hence, you had 4.63 um pixels. They gave in shortly after the chorus of complaints. Thanks for watching!
Thank you, I can see I have a date with an Excel spreadsheet after this ;-)
Keep those spreadsheets handy!
One quick question... In your slides beginning at 13:38 you identify a "knee in the curve..." How did you derive this curve (and the others like it for your other scopes)? And (sorry, one more...) what effect does a focal reducer have on your analysis?
The knee in the curve analysis comes from a Monte Carlo analysis that simulates the random guiding-affected motion of the star across the pixel array and adds up the light in each pixel. I then go find the observable FWHM. Steps/knees in the curve happen when the FWHM jumps from 1 pixel, to 3 pixels, to 5 pixels, etc. There's a certain level of "garbage in, garbage out" with that analysis because of the assumptions. So it's probably best not to rely on that analysis to make purchasing decisions. I do look at the actual achieved FWHM compared with Nyquist limit to conclude the ASI294 might give me some resolution that I'm giving away with my ASI1600.
My GT81 has the focal reducer installed. I did do an assessment of the C9.25 with the focal reducer-it's a better option for that scope. Focal reducers are good when Dawes limit controls because it increases the pixel scale without affecting spatial resolution. Then you get the other benefits such as larger FoV, easier to find guide stars in an OAG, more reliable focus with more stars available, etc. Thanks for watching, James!
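This is not the author's code, but the kind of Monte Carlo described above can be sketched roughly like this: a star (Gaussian PSF here) wanders by a random guiding offset each sub, the light is summed into discrete pixels, and the observable FWHM is read off the pixelated profile in whole-pixel steps. All parameter values are placeholders, and it is a 1-D simplification for illustration only.

```python
# Rough 1-D sketch of a guiding/pixelation Monte Carlo (not the author's analysis).
import numpy as np

def observable_fwhm(pixel_scale, seeing_fwhm, guide_rms, n_subs=2000, seed=1):
    rng = np.random.default_rng(seed)
    sigma = seeing_fwhm / 2.355                              # Gaussian sigma from FWHM
    edges = np.arange(-15, 15 + pixel_scale, pixel_scale)    # pixel boundaries (arc-sec)
    x = np.linspace(-15, 15, 3001)                           # fine sampling of the sky
    profile = np.zeros(len(edges) - 1)
    for _ in range(n_subs):
        offset = rng.normal(0, guide_rms)                    # guiding error for this sub
        psf = np.exp(-0.5 * ((x - offset) / sigma) ** 2)
        profile += np.histogram(x, bins=edges, weights=psf)[0]   # light collected per pixel
    half = profile.max() / 2
    width_px = np.count_nonzero(profile >= half)             # pixels above half maximum
    return width_px * pixel_scale                             # FWHM in whole-pixel steps

print(observable_fwhm(pixel_scale=3.14, seeing_fwhm=2.0, guide_rms=1.0))
```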
Thank you again for a great video. Can you help me with something. I bought an ASI294MM Pro when it first came out. Great Camera. The ZWO Manual claims the sensor is a SONY IMX492 Mono CMOS. What is interesting is that SONY publishes the ADC depth of the IMX492LLJ as 12 Bits max. Not 14. I am thinking that ZWO's claim of 14 Bit depth when they 2X2 Bin the images, is simply the benefit of software binning 4 pixels (X4 adds two bit depth). So this camera would not be a true 14 bit Camera. I am not sure this could be a disadvantage for noise. What are your thoughts ?
Very clear thinking, Claude! I suspect you are correct. Also, with CMOS, the binning is done in software (as you say) after the pixel values are read. So you will get 4x the read noise (I suspect there is some random cancelling, so not really 4x), but I do think you will see the improved tonal variation when the 4 pixel values are averaged. But that should be true of 2x2 bin mode with the ASI1600 as well, which is also a 12-bit ADC, yet I never hear anyone talk about “14-bit ADC” in the bin2 mode. Hmm. Very good question! Have you confirmed you are getting improved tonal discrimination with your camera?
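A trivial check of that reasoning (nothing specific to ZWO's firmware, just arithmetic): summing four 12-bit values spans a 14-bit range, which is one plausible source of the "14-bit" label for 2x2 binning.

```python
# Summing four 12-bit pixel values (0..4095) gives 0..16380, which needs 14 bits.
max_12bit = 2**12 - 1
binned_sum_max = 4 * max_12bit
print(binned_sum_max, binned_sum_max < 2**14)   # 16380 True
```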
I'm using an ASI2600 with my EdgeHD 8", it has 3.76um pixels, very similar in size to your ASI1600. Curious if you have any thoughts on software vs. hardware binning? I've read in comments on Cloudy Nights that for CMOS cameras the benefits of binning are the same in software or hardware, but the downside of hardware is that your produced source material is always binned, but in software you can have the option of still working with the original data.
That is correct: for CMOS sensors, binning is done in the camera by averaging the values of the four pixels. Same for doing the binning in Pixinsight (for example). The only real benefit of letting ZWO do the binning, is that smaller-size images are downloaded so less time/space required while acquiring images (may not matter much). You will also save time during post processing since the images are smaller size AND you don't have to spend time converting all of the 1x1 images to 2x2 images. Based on this video, I've concluded there is zero benefit to working with the 1x1 pixels with my system/seeing so I might as well let the camera bin for me and save me some time/disk space. Thanks for watching, Jason!
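For completeness, a minimal sketch of doing the 2x2 average yourself in post (a mono frame is assumed; odd edge rows/columns are simply trimmed, and in PixInsight you would normally reach for its own resampling tools instead):

```python
# Minimal 2x2 software binning by averaging each 2x2 block of a mono image.
import numpy as np

def bin2x2(img):
    h, w = img.shape
    trimmed = img[:h - h % 2, :w - w % 2]                     # drop odd edge row/column if present
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(img))                                            # each value is the mean of a 2x2 block
```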
Great video!
Thanks, Dan!
How do you come up with the Nyquist limit for each scope? 7:50 You show FWHM/2 on the formula slide at the beginning but the Nyquist limits you show don't match that formula afterwards.
Yeah, it's a bit confusing. For the Redcat, I have an image scale of about 3.1 a-s/pixel, which means I can only resolve two closely-spaced stars for a FWHM = 2 x 3.1 a-s = 6.2 a-s or greater. But my actual FWHM is 4.83 a-s, so I need to reduce pixel size to take advantage of that FWHM.
The only difference in the two equations for the limit of resolution is the coefficient, 1.22 vs 1.029, and I can’t understand how the coefficient is determined to be 1.22 when taking the zero point of the first with the peak of the second.
Correct. I'm showing a Moffat star model, which approximates the real point spread function. See this video for an explanation of these terms:
ua-cam.com/video/N60vzaIZq6Y/v-deo.html
Thanks for watching!
Binning is winning! You will see a huge difference binning your 1600mm on your SCT, give it a shot 😎
Thanks Bill! Yep, gotta do some binning with that scope. Or, if you believe the numbers, toss it and just use the ED 102. Haha.