Excellent analysis James! I can shed some light into how NINA computes *HFR* (vanilla NINA not the Hocus Focus plugin, whose logic I don't know!) since I wrote its current version (almost surprised to see it hasn't changed even in the latest NINA versions!). It basically uses the formula from the "Half flux diameter" Wikipedia. Some points to know/remember:
- I amended the HFR calculation for the purpose of getting robust autofocus (primary purpose, and tweaks went in that direction - robustness over accuracy, so it would work great for out of focus stars, for which FWHM in theory doesn't work)
- The HFR calculation doesn't care about eccentricity (as per HFR formula), so the measure effectively is an average of min and max you mention, but due to how HFR is calculated, this actually skews towards the max (in theory it shouldn't, in practice it does - this is due to "background" choice below)
- The formula relies on a critical metric: the brightness of each pixel above the background. How "background" is defined varies. In earlier versions of NINA, "background" was the mean of the whole image (which obviously includes a lot of stars, i.e., not a good representation of the background, but still commonly used). To make the calculation more accurate and robust for autofocus purposes, I changed the background to be the mean of a bounding square around the star, with dimensions three times the detected star diameter (max star diameter). Others could have used the median (which changes the result). Because this naturally lowers the "background" variable of the HFR formula, the calculated HFR is greater than when defining "background" as the mean of the whole image.
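The local-background choice described above can be sketched roughly like this (a toy illustration, not NINA's actual code; the function name and array conventions are my own):

```python
import numpy as np

def local_background(image, cx, cy, star_diameter):
    # Mean of a bounding square centered on the star, sized at three
    # times the detected (max) star diameter, as described above.
    half = int(round(1.5 * star_diameter))  # 3x diameter => half-width of 1.5x
    y0, y1 = max(0, cy - half), min(image.shape[0], cy + half + 1)
    x0, x1 = max(0, cx - half), min(image.shape[1], cx + half + 1)
    patch = image[y0:y1, x0:x1]
    return float(patch.mean())  # np.median(patch) would give a different result
```

Swapping `patch.mean()` for `np.median(patch)` shows why the background choice shifts the computed HFR: the median typically lands lower when faint stars fall inside the box, which in turn raises the HFR.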
- The pixels used in the calculation are within the detected star radius... plus 20%. This is a number I arrived at empirically, and which I added to make the HFR calculation of out-of-focus stars more robust for systems with a large central obstruction far from ideal focus (where star detection can underestimate the total radius of the out-of-focus doughnut). In theory this shouldn't change the computed HFR (because those additional pixels would be at background level and thus end up not counting in the HFR formula), but in practice it does, due to how stars and their radii are detected. So this change also increases the computed HFR.
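Putting the points above together, the half-flux-radius computation might look something like this (a hedged sketch of the Wikipedia-style formula, not the actual NINA source; the names and conventions are my own):

```python
import numpy as np

def half_flux_radius(image, cx, cy, detected_radius, background):
    # HFR as the flux-weighted mean distance of above-background pixels
    # from the star center, using pixels out to the detected radius
    # plus 20%, per the description above.
    r_use = 1.2 * detected_radius
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    dist = np.hypot(xs - cx, ys - cy)
    inside = dist <= r_use
    flux = np.clip(image[inside] - background, 0.0, None)  # brightness above background
    total = flux.sum()
    if total == 0:
        return 0.0
    return float((flux * dist[inside]).sum() / total)
```

In a perfect world, pixels beyond the true star radius sit at the background level, so `np.clip` zeroes them out and the extra 20% changes nothing; in practice, imperfect background and radius estimates make them contribute, which is why the computed HFR grows.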
But that's all for HFR, not the Hocus Focus FWHM calculation :)
Thanks for the insider insight, Cuiv! I was not aware of the HFR/HFD definition in Wikipedia. Interesting. Also, I totally understand the special needs of a real-time focusing algorithm versus an image evaluation algorithm executed with all the time in the world. I believe the Hocus Focus FWHM uses a PSF curve fit (since I have to activate that option if I want access to FWHM), but that PSF fit isn't used while focusing - it still uses HFR. As long as the metric provides a reliable trend from out of focus to in focus, it should be OK. Haha - that's why they're still using your original approach: It works! The nitty gritty of which stars are selected and how the background is estimated is probably at the root of the differences between NINA FWHM and PixInsight FWHM. Wouldn't surprise me at all if FITSView uses the formula you mention.
Wow I had no idea you wrote NINA Cuiv.
@@AshA-ww8hc I didn't ! I contributed code to it
Thanks for making the unclear clear, James! I never could make sense of the numbers from one program to another, or even one setup to another, yet I've seen other people do just that. I'll just keep comparing the numbers within a single night. Cheers
Thanks for watching, Kurt! I must say, I’m surprised there isn’t more consistency between NINA/HocusFocus and PixInsight. Both use PSF fitting and report results in arc-sec. Still a big difference. I’ll continue to use PixInsight for grading, but I have no idea what the “true” FWHM is for my images. Oh well. BTW: after a bit of a back order delay, I should be getting my Celestron dew heater ring before the next break in the clouds…looking forward to testing it out. The dew strap works but I always feel I’m on the ragged edge with it. I seem to have so few imaging hours available this time of year that I hate to waste one on dew.
Great video. I just watched a talk on the AstroImaging Channel by Russell Croman, where he explained that BlurXTerminator hasn't been trained to correct some of the abnormalities in our stars, like astigmatism and defocus. However, in the next month or so, a new version of BlurXTerminator is being released that will be able to deal with these other abnormalities, and the example he showed of it working looked like absolute magic. He did say that things like astigmatism are made worse in the current version because it can't get a PSF from these abnormally shaped stars, so it tends to make those stars more irregular. It was a great talk. Maybe that has something to do with some of your readings looking more eccentric?
Hello, could it also be a collimation problem? Some residual coma in the center of the field of view? I suspect that's what's happening to me working with a Newtonian.
Hey Logan. I saw that talk. Yes, very interesting. There are many possibilities. First, I'm applying BlurX to single subframes with high noise. Second, there are very few stars to work with given it's an SCT image. And, of course, there are the image aberrations that mostly affect the corners. I was pleased to see that he does have a "RA stretch" PSF that's intended to deal with my primary concern. Maybe the next version will bring improvements to images acquired with long focal lengths (he seemed to concentrate on shorter focal length images in his presentation). The bottom line is that deconvolution relies on many stars in order to define a good PSF, and that's just not the world I live in during galaxy season.
Always an excellent video! Thank you James
Thanks, Olivier! And thanks for watching!
Yay!! I've also been noticing that NINA FWHM is markedly larger than in Pix... and since they disagree, I'm unsure exactly how much I trust the calculations of either for doing image selection. But at least you've shown they are reasonably correlated, appreciated! How can it be that there is no gold-standard metric for image sharpness in astrophotography? I'm baffled. Thanks for following through on this request, though, another fantastic video!
Agree. Sure seems like there should be some standardized approach for calculating FWHM (or HFD) for an image. The details are buried in the algorithms...
I hadn't really considered the difference in the FWHM when compared to the RA. LOL I guess I'll spend the rest of the weekend experimenting on past data to determine a possible new way to do image rating. Thanks!
Ahhh. You have clouds too, huh? Thanks for watching, Neil!
It was very insightful of you to mention RA oscillating at a higher frequency than Dec. I am using a CGX-L mount to drive a 1960mm telescope, and my stars are really oblong. Looking at the frequency analysis in PHD2, I noticed that I have a distinct oscillation with a period of 21.4 seconds in addition to the worm gear oscillation. This RA defect is stretching my stars really badly. My intent is to tune PHD2's guiding PEC algorithm to compensate for the higher-frequency oscillation, leaving the lower frequency to the mount's built-in PPEC.
Be aware that at those focal lengths, if you use a cooled camera, the fan vibrations can actually elongate your stars in a random-ish direction (easy to check by turning off the fan and comparing).
Hey Oleg. Your CGX-L is a much better mount than my old CGEM, but I used to see a lot of gear harmonics with shorter periods than the main worm gear period of 478 sec. I also saw a period near 21 sec (see ua-cam.com/video/Im5qVO6He3s/v-deo.html). Give PHD2's PPEC a shot, but it's really intended to address worm gear harmonics at much longer periods. I always used PPEC once it was introduced (still do). You might also contact Celestron - you could have a bad bearing or gear tooth somewhere. 1960 mm is really tough. Good luck!
I really appreciate you for making this video. Have a like, cause I'm already subscribed.
@@_tyler-_- thanks for watching! And the like and the sub.
James, your presentation format is spot on and very helpful, specifically in this case where there are so many variables. I've gone through all 3 related videos now and have a general grasp of the concepts thanks to you. There is almost nothing out there that truly takes this topic on. For people like myself that only have one or two scopes and a mount, it's challenging to formulate some kind of expectation and baseline. As I find myself struggling to ask the right question, I've simplified my original post to this: if I know the resolution and focal ratio of a lens, the current seeing conditions, and the total guiding results in RMS, is there a way to estimate the FWHM?
Hi James. Thanks for another great video. I have no doubt that if you keep making videos, you will answer ALL of my questions. I too have wondered why PixInsight thinks I am a better astrophotographer than NINA does. I will be interested to hear what you learn about BlurXTerminator's eccentricity issue. I wonder if your observations about target altitude vs FWHM can be leveraged in some way for OSC photography. You always leave me with more questions. You would have been a great professor.
Nope. Just more questions for you to ponder. On the OSC question: I suppose you could stack a reduced number of the best images and then develop a luminance image from those for detail processing. Then create a larger stack with more of the lower-quality images for color processing. Then combine via LRGBCombination.
After listening to this, I believe I'm going to go back to regular photography. It seems more like mental torture than a beautiful hobby.
Hi James. Thanks for the interesting video. How do you get subframeselector to append the FWHMmax value as a prefix? Sorry for the basic question.
Hey Greg. Good question. I don't. I save the SubframeSelector CSV file then use a Python program I wrote to read the csv file, perform the calculations, and re-name each file in the csv list. Always looking for ways to make life more complicated! Thanks for watching!
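For anyone wanting to replicate that workflow, a minimal sketch might look like the following (assuming a plain one-header-row CSV with `File` and `FWHM` columns; check your actual SubframeSelector export, whose header layout and column names may differ):

```python
import csv
import os

def rename_from_csv(csv_path, fwhm_column="FWHM", file_column="File"):
    # Prefix each measured file with its FWHM value, in the spirit of
    # the workflow described above. Column names here are guesses.
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            path = row[file_column]
            prefix = f"FWHM_{float(row[fwhm_column]):.2f}_"
            folder, name = os.path.split(path)
            os.rename(path, os.path.join(folder, prefix + name))
```

After running it, files sort by quality in any file browser, e.g. `FWHM_2.50_sub1.fits`.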
@@Aero19612 Thanks for your reply. I tried everything in SubframeSelector's prefix formatting.
Mr. Lamb, when is the blur XT rabbit hole coming?
Would love to hear your consolidation of what Russ Croman says, together, with Conejero latest recommendation of drizzling every single time in PixInsight.
Apparently, an FWHM of 3 is the interesting cutoff line on drizzle upsampling.
Not sure if this is your cup of tea, but it certainly would be an interesting rabbit hole .
#delicategenious
Those are very interesting topics. Thanks! I'll start thinking (and learning!).
Very interesting!
or, just depressing? Haha. Thanks for watching!
SharpCap in livestacking mode gives similar numbers for FWHM (in pixels) as PI.
Very interesting. Thanks for letting me know.
I guess that after BlurXTerminator deconvolution, the star size in pixels goes down, introducing undersampling. This adds quantization noise to the eccentricity calculation, spreading the results. Try again by drizzling by 2 before BlurXTerminator.
I often wonder if some measure using Moffat and others Gaussian. Or, I thought it had to do with the star count in the sub, and if that varied from PixInsight to NINA, etc. - with different star counts, the average could change.
Hi Eric. Yes, I know PixInsight and NINA/Hocus Focus use PSF fitting with either the Moffat or the Gaussian model. Not sure what FITSView and ASTAP are doing. I agree, the main difference likely comes down to which stars are being selected. I wonder if I'd get better results with more of an oversampled image? Thanks for watching!
This is an essential topic!
"Cannot see details smaller than FWHM." Is that right? I thought resolution is FWHM/3.
I recently broke the FWHM 2" "barrier". The analysis was done with software from Herbert Walter's web site. Nothing changed; it just happened. I could magnify an image taken with the 9.25 Edge (no reducer) 5 times. I could see clusters in M60.
I think that shows how important seeing is. However, I still don't understand what the seeing portion of my FWHM is.
About eccentricity: when FWHM is reduced, eccentricity grows. My average eccentricity is 0.35-0.42; it grew to 0.45-0.47 when FWHM was 1.75". So the BlurXT effect is logical.
Check out this video on resolution: ua-cam.com/video/H-cAbF25gcI/v-deo.html
The ideal FWHM for a 9.25" aperture is 0.5 arc-sec. The rest is seeing, guiding, and maybe undersampling based on OSC vs mono and pixel size.
I don't agree that eccentricity increases when FWHM is reduced. They are independent quantities.
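The 0.5 arc-sec figure for a 9.25-inch (235 mm) aperture follows directly from the FWHM = 1.029 * lambda/D relation quoted later in this thread, assuming ~550 nm (green) light:

```python
import math

def ideal_fwhm_arcsec(aperture_m, wavelength_m=550e-9):
    # Diffraction-limited FWHM: 1.029 * lambda / D, converted
    # from radians to arc-seconds.
    radians = 1.029 * wavelength_m / aperture_m
    return radians * 206265.0  # radians -> arc-seconds

print(round(ideal_fwhm_arcsec(0.235), 2))  # ~0.5 arc-sec for a 9.25" aperture
```

The chosen wavelength is an assumption; shifting it across the visible band moves the result only slightly.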
@@anata5127 Not everything written on CloudyNights is reliable. I believe we are discussing the difference between "correlated" and "cause/effect". For example:
1. Larger aperture (D) telescopes provide smaller FWHM (FWHM = 1.029 lambda/D)
2. Telescope aperture tends to be larger for longer focal length telescopes.
3. Longer focal length telescopes provide higher magnification.
4. Higher magnification is more sensitive to mount gear imperfections.
5. Mount gear imperfections produce eccentric stars.
Therefore, one concludes (incorrectly) that eccentricity increases as FWHM decreases. That's a "correlation" but not "cause/effect".
I'm talking about images acquired with one telescope rather than different telescopes with different apertures. If I use a better mount (with fewer gear imperfections), I get lower eccentricity, and the maximum FWHM gets closer to the minimum FWHM. So FWHM reduces as eccentricity reduces. Eccentricity comes from the mount, and FWHM comes from the aperture/seeing.
Correct. Seeing and resolution are not the same. However, seeing will increase your effective FWHM, and the final FWHM (from all sources) determines your final resolution. In excellent conditions, you can definitely get lower than 2". Unless you live on a mountain, that's a low probability occurrence. If you have a really good mount (with encoders) -- sounds like you do -- guiding won't help much if you're polar aligned really well.
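One rough way to combine the sources mentioned above into a final FWHM estimate (my own rule-of-thumb addition, not something stated in this thread) is to add the independent blur contributions in quadrature:

```python
import math

def estimated_fwhm(seeing_arcsec, optics_fwhm_arcsec, guiding_rms_arcsec):
    # Rough estimate: independent blur sources add in quadrature.
    # Guiding RMS is a 1-sigma error, so it is scaled by ~2.355 to
    # express it as an FWHM-like contribution (an assumption).
    guiding_fwhm = 2.355 * guiding_rms_arcsec
    return math.sqrt(seeing_arcsec**2 + optics_fwhm_arcsec**2 + guiding_fwhm**2)
```

For example, with 2" seeing, a 0.5" diffraction limit, and 0.5" total guiding RMS, this predicts an FWHM around 2.4" - the seeing term dominates, which matches the experience described in this thread.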
@@Aero19612 Not cause/effect, but I always observe it. Always. For my tracking accuracy, I see it at FWHM
@@Aero19612 Yes, I do. 10Micron. Polar alignment is done within the star model. I don't know how it works. When the model is built, the mount reports pointing accuracy RMS, telescope orthogonality, and polar alignment. They ask for below 10' polar alignment. Most essential is pointing accuracy; it has to be below 5-7". I had a C8 and could never get below 25". Locking the mirrors on the Edge helped.
All-in-all, limiting point is seeing for me.
Thanks James. Excellent video and well done. But I am confused by your FWHM/ HFR(D) "relationship" formula. It just doesn't "add up". As we all know there really is no actual formula since the parameter(s) are based on entirely different approaches. We all know and accept that. But that aside, there still tends to be a rough mathematical relationship. But based on the following thinking it should be a very different formula:
A perfect Gaussian is HFD = 2 x (0.675) x sigma = (1.349) x sigma and FWHM = (2.355) x sigma. So this would yield a different relationship which is closer to:
FWHM = 1.75 HFD (or 3.5 HFR)
Yes?
Of course, neither parameter occurs as a perfect Gaussian in the real world.
Curious of your thoughts
Hi Howard. The ideal star diffraction shape (point spread function) is discussed in this video: ua-cam.com/video/N60vzaIZq6Y/v-deo.html
You are correct that HFD and FWHM are different; however, using the true PSF, the final mathematical forms of FWHM and HFD are simply different scale factors applied to the ratio of light wavelength (lambda) and telescope aperture (D). So, we have:
FWHM = 1.029 * lambda/D and
HFD = 0.942 * lambda/D
so it becomes a simple matter to ratio the HFD value by (1.029/0.942 = 1.09) to get the corresponding FWHM for the ideal PSF. Of course, the PSF curve fits used by NINA and PixInsight are based on the simpler Moffat or Gaussian models rather than the theoretical Bessel function. The Moffat and Gaussian models do a decent job capturing the star profile in the brightest portion of the star, and not so well in the dimmer outer annular region. That's probably where the big difference between "my" 1.09 and "your" 1.75 comes from. If we used the 1.75, the ASTAP results would be worse and the FITSView results would be better.
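Laying out that scale-factor arithmetic explicitly (the wavelength and aperture cancel in the ratio, and HFD = 2 HFR):

```latex
\mathrm{FWHM} = 1.029\,\frac{\lambda}{D}, \qquad
\mathrm{HFD} = 0.942\,\frac{\lambda}{D}
\;\Longrightarrow\;
\frac{\mathrm{FWHM}}{\mathrm{HFD}} = \frac{1.029}{0.942} \approx 1.09,
\qquad \mathrm{FWHM} \approx 1.09\,\mathrm{HFD} \approx 2.18\,\mathrm{HFR}.
```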
Thanks for watching, as always
@@Aero19612 Thank you for responding. I have to ponder this.
@@howardmaron Keep in mind: I'm not saying you're wrong and I'm right. I'm just laying out my reasoning based on the ideal PSF. Haha. It's also possible I missed your point and just sent us off on a wild goose chase. Don't ponder too long--might not be worth it in the end!
As crystal, ty