Errors in DFT based fringe analysis


Mladen, Florida, USA
 

I think there's an easy way to check for unintentional "fudge factors" created by the software, as opposed to human and environmental factors. If we run a simulated 6" (Diam 150 mm) f/8 (RoC 2400 mm) parabolic mirror through DFTFringe with Artificial Null checked, we should get essentially an RMS of 0, a Strehl of 1 and a conic of -1. Anything else in the results is software error. Well, DFTFringe agrees with theory very closely (RMS 0.005, Strehl 0.999, Best fit Conic -0.975), but it's not without unwanted residuals.

If you run a 16-diameter wavefront chart you'll notice that the software returns some faux edge errors, and if you reduce the contour line height until they begin to show up, you'll find that the error is around 0.04 waves, or 1/25 wave, which is entirely due to the software. To me that says that any result better than that (i.e. smaller than 1/25 wave RMS) is an artifact.

Now, a 0.04 wave RMS wavefront error is not bad, but it's not snap-to-focus either. It's close to what ATMs usually refer to as 1/7 wave peak to valley (not RMS!) on the wavefront. It's what I would call a "mid-way" quality mirror (better than the usual 1/4 or 1/5 wave PTV wavefront and worse than the super 1/10-1/15 wave PTV wavefront mirrors).
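
(For that conversion I'm assuming the rule of thumb many ATMs use of roughly PTV ≈ 3.5 x RMS for smooth wavefronts: 0.04 wave RMS x ~3.5 ≈ 0.14 wave, i.e. about 1/7 wave PTV.)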

To be sure, an ideal simulated mirror should not have any residual errors. It's therefore prudent to set limits of accuracy (tolerances) on our results, lest we uncritically begin to believe the numbers. Similar issues arise in other test methods, from the Ronchi test to the Foucault and Ross null, and others. As Clint Eastwood said: "A man's gotta know his limitations" :o)
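
To make the check concrete, here is a rough sketch (my own, not DFTFringe code; all names are hypothetical) of what the Artificial Null case boils down to: for an ideal mirror behind a perfect null the wavefront error is zero everywhere, so the simulated igram is nothing but straight tilt fringes. Feed such an image into any analysis program and it should report an RMS of essentially zero; whatever it reports instead is numerical or software residue.

    #include <vector>
    #include <cmath>

    // Straight-fringe igram for an ideal mirror under a perfect (artificial) null.
    // "fringes" is the number of full tilt fringes across the image width.
    std::vector<double> idealNullIgram(int width, int height, double fringes)
    {
        const double pi = 3.14159265358979323846;
        std::vector<double> img(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                // intensity 0..1, pure X tilt, no wavefront-error term at all
                img[y * width + x] = 0.5 + 0.5 * std::cos(2.0 * pi * fringes * x / width);
        return img;
    }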


George Roberts (Boston)
 

Mladen, I did the same test and got about a quarter of the error you got when looking at the 16 diameters.  I used 1000 pixels and 10 waves of tilt in the simulated igram part - how many pixels did you use?  1000 pixels doesn't seem unreasonable (an approximately 1 megapixel camera can generate this, and most cameras are > 10 megapixels these days).

Also I had Gaussian blur at 10%.  I had to lower this to 1% to get similar results to you, but then I got some visible print-through which I didn't see in your igram.

Also, increasing the tilt to 40 (closer to my typical igrams), even with no Gaussian blur, I get about 5X less peak-to-peak error than you do.  With 1% Gaussian blur it gets quite a bit better.


Dale Eason
 

On Thu, Jul 23, 2020 at 09:28 AM, Mladen, Florida, USA wrote:
If you run a 16-diameter wavefront chart you'll notice that the software returns some faux edge errors, and if you reduce the contour line height until they begin to show up, you'll find that the error is around 0.04 waves, or 1/25 wave, which is entirely due to the software. To me that says that any result better than that (i.e. smaller than 1/25 wave RMS) is an artifact.
Having done similar tests and found no error, I would say it is more likely that you have done something wrong.

What is a 16-diameter wavefront chart?  How was it made?  What are your DFTFringe settings?


Dale Eason
 

Mladen,
Assuming you created the igram from DFTFringe, what format was it in?  If .jpg, then the JPEG compression done by the Windows JPEG library will cause distortion.  When I do that I use .png as the file type.


Mladen, Florida, USA
 

On Thu, Jul 23, 2020 at 11:37 AM, George Roberts (Boston) wrote:
Also I had Gaussian blur at 10%.  I had to lower this to 1% to get similar results to you, but then I got some visible print-through which I didn't see in your igram.
My Gaussian blur is also 10%. All the information on my setup is in the image. I didn't use excessive tilt; I would say 10 fringes at most. Maybe more fringes would have yielded better results and less edge error. I will try it when I get home later today. The point I was making is that we must know the software error, if any, and that a simulated mirror must have no error whatsoever (that's how it is in raytracing software -- ideal mirrors give ideal results). No "fudge factor."


Mladen, Florida, USA
 

On Thu, Jul 23, 2020 at 12:09 PM, Dale Eason wrote:
Having done similar tests and found no error, I would say it is more likely that you have done something wrong.
What is a 16-diameter wavefront chart?  How was it made?  What are your DFTFringe settings?
Dale, all my settings are visible in the image. If you copy those you should get the same results. I have 10 fringes (X tilt = 5) on my simulated igram. I imagine a higher tilt will yield better results (higher resolution). That would be good, because then all errors would be due to operators (testers) and environmental factors and not to the software.

My estimate is that you need about 50-60 fringes (X tilt = 25 to 30). But I will have to check when I get home. 


Mladen, Florida, USA
 

What format was it in?
.png


Dale Eason
 

On Thu, Jul 23, 2020 at 11:45 AM, Mladen, Florida, USA wrote:
Still not sure what you mean when you say you used a "wavefront chart".

If you did not save the interferogram then I think you will not have the .jpg issue I posted about.  In that case ignore those comments.  

There is an accuracy limit to interferogram analysis software based on the size of the pixels and the number of pixels.  In this case your data created errors because you used a smaller fringe density and image size.  It's the same issue, caused by pixelation, that gives jagged lines when drawing on a grid.  We get jagged edges.  One of the ways that is dealt with in real data is to use different fringe spacings and orientations to move the jagged edges around.
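
As a rough illustration of that averaging idea (my own sketch, not what DFTFringe actually does internally; names are mine): once each igram has been reduced to a wavefront map, the maps can simply be averaged, and because the jagged pixelation artifacts land in different places for each fringe spacing and orientation, they tend to cancel.

    #include <vector>
    #include <cstddef>

    // Average several wavefront maps (all the same size, values in waves).
    // Assumes "maps" is non-empty and all maps have equal length.
    std::vector<double> averageWavefronts(const std::vector<std::vector<double>>& maps)
    {
        std::vector<double> avg(maps.front().size(), 0.0);
        for (const auto& m : maps)
            for (std::size_t i = 0; i < m.size(); ++i)
                avg[i] += m[i] / maps.size();
        return avg;
    }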


Sorin
 

"The point I was making is that we must know the software error is any, and that a simulated mirror must have no error whatsoever"

Mladen,

I think your assumption is not correct. The simulated mirror may have no relation to the wavefront created from an interferogram.

I am not sure about the internal mechanisms of DFTFringe, but let's make a supposition. Let's say that the simulation part of the software is completely independent of the analysis part. Dale could tell us if this is the case.

From what I can figure, the simulation part analytically produces a surface starting from some Zernike terms, but the analysis part should be very different. It should take an interferogram, use a numerical phase-unwrapping mechanism, create a matrix of values and then apply some statistics to it. In such a scenario I see no point in doubting the numbers produced by the analysis part on the basis of a simulation.
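
Just to illustrate the last step in that chain (a sketch of my own, not DFTFringe code): once the unwrapped phase has become a matrix of wavefront values in waves, the reported numbers are ordinary statistics over those samples, something like:

    #include <vector>
    #include <cmath>
    #include <algorithm>

    struct Stats { double rms; double pv; };

    // RMS (about the mean) and peak-to-valley of the valid wavefront
    // samples (assumed non-empty), values in waves.
    Stats wavefrontStats(const std::vector<double>& w)
    {
        double mean = 0.0;
        for (double v : w) mean += v;
        mean /= w.size();

        double sumSq = 0.0;
        for (double v : w) sumSq += (v - mean) * (v - mean);

        const auto [lo, hi] = std::minmax_element(w.begin(), w.end());
        return { std::sqrt(sumSq / w.size()), *hi - *lo };
    }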

A real search for fudge factors in the software would mean analysing the same interferometric data with a different program and comparing the result with the output of DFTFringe.

Sorin




Dale Eason
 

On Thu, Jul 23, 2020 at 02:38 AM, Diego Dabrio-Polo wrote:
The second logical step is to have that mirror tested at a reference facility:
To be useful, the data from that test should be computer readable, or converted to computer readable, so that it can be fed into DFTFringe to display comparison plots.  Hopefully the whole wavefront and not just a few Zernike values.


Dale Eason
 

On Thu, Jul 23, 2020 at 12:13 PM, Sorin wrote:
A real search for fudge factors in the software would mean analysing the same interferometric data with a different program and comparing the result with the output of DFTFringe.
Yes.


vladimir galogaza
 

 "If we run a simulated 6" (Diam 150 mm) f/8 (RoC 2400 mm)  ...........
we should get essentially an RMS of 0, a Strehl of 1 and as conic of -1.   
Anything else in the results is the software error.  "

Apart from the loosely defined combination of "essentially" and "everything else", I have tried this several times,
but simulated interferograms for an ideal mirror never produced "an RMS of 0, a Strehl of 1 and a conic of -1."
Whether it was "essentially" good enough I do not know. Perhaps somebody will try so that we can compare results.

Vladimir.



Arjan
 

Apart from that, the simulation itself could also contain errors that are then perfectly analyzed.

"If we run a simulated 6" (Diam 150 mm) f/8 (RoC 2400 mm) ...........
we should get essentially an RMS of 0, a Strehl of 1 and as conic of -1.
Anything else in the results is the software error. "

Apart from loosely defined combination of the "essentially" and
"everything else", I have tried this several times
but simulated interferograms for ideal mirror never produced " RMS of 0, a
Strehl of 1 and as conic of -1. "
Was it "essentially" good enough I do not know. Perhaps somebody will try
so that we can compare results.

Vladimir.

On Thu, Jul 23, 2020 at 4:28 PM Mladen, Florida, USA <
mkvranjican@...> wrote:

I think tere's an easy way to check for unintentional "fudge factors"
created by the software as opposed to human and environmental factors.
If
we run a simulated 6" (Diam 150 mm) f/8 (RoC 2400 mm) parabolic mirror
through DFTFRinge with Artificial Null checked, we should get
essentially
an RMS of 0, a Strehl of 1 and as conic of -1. Anything else in the
results
is the software error. Well, DFTFRinge agrees with theory very closely
(RMS 0.005, Strehl 0.999, Best fit Conic -0.975), but it's not without
unwanted residuals.

If you run a 16-diameter wavefront chart you'll notice that the software
returns some edge faux errors, and if you reduce contour lines height
until
they begin to show up you'll find that the error is around 0.04 waves or
1/25 wave, which is entirely due to software. To me that says that any
result better than that (i.e. smaller than 1/25 wave *RMS*) is an
artifact.

Now, a 0.04 wave RMS wavefront error is not bad, but it's not
snap-to-focus either. It's close to what the ATMs usually refer to as a
1/7
wave peak to valley (*not* RMS!) on the wavefront. It's what I would
call
a "mid-way" quality mirror (better than the usual 1/4 or 1/5 wave ptv w
and
worse than the super 1/10-1/15 wave ptv w mirrors).

To be sure, an ideal simulated mirror should not have any residual
errors.
It's therefore prudent to set the limits of accuracy (tolerances) on our
results, lest we uncritically begin to believe numbers. Similar issues
arise in other test methods, from the Ronchi test to the Foucault and
Ross
null, and others. As Clint Eastwood said: "A man's gotta know his
limitations" :o)





Dale Eason
 

On Thu, Jul 23, 2020 at 12:13 PM, Sorin wrote:
I am not sure about the internal mechanisms of DFTFringe, but let's make a supposition. Let's say that the simulation part of the software is completely independent of the analysis part. Dale could tell us if this is the case.
It is, except that it uses the same Zernike generator algorithms for both simulation and analysis.

The code for creating wavefronts or igrams is the same code.  That is, for each x,y location on the mirror it generates an error value based on the Zernike values input by the user.  Then for the igram it uses the formula

 intensity = cos(spacing * 2 * M_PI * S1) + a constant to prevent negative intensity

where S1 is the error value in wavelengths of the laser.  For the wavefront it just saves the S1 value.
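
In code form that would be something like the following (my own paraphrase of what Dale describes, not the actual DFTFringe source; the names are mine):

    #define _USE_MATH_DEFINES   // for M_PI on MSVC
    #include <cmath>

    // S1 = wavefront error at this pixel, in waves of the test laser,
    // built from the user's Zernike terms.  "spacing" sets the fringe density.
    double igramIntensity(double S1, double spacing)
    {
        // cosine fringe plus an offset so the intensity never goes negative
        return std::cos(spacing * 2.0 * M_PI * S1) + 1.0;
    }
    // For the wavefront output the same S1 value is simply stored directly.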


Dale Eason
 

Next you need to know that the best error-value resolution one could get is governed by the image sensor itself.  Assuming the data is linear (which it usually is not), it depends on the max intensity value a pixel can have.  For many of our sensors that is 255, so the error height can only be known to 1/255 wave.  However, our images usually don't use the full intensity range; values less than half of that are typical.  So the resolution for most of our Bath data can never be better than 1/100 wave or thereabouts.  That works out to +/- 1/50 wave.  Hmmm, that is about the same as the best variation on the RR, isn't it?
 

My simulation limits the pixel range to 100. So the resolution is limited to +/- 1/50 of a wave.
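
The arithmetic behind that, as I understand Dale's point (my own back-of-the-envelope sketch, not DFTFringe code):

    // If the fringe intensity only spans N usable grey levels, the error
    // height encoded in that intensity can't be resolved much finer than
    // about 1/N wave.
    double heightResolutionWaves(int usableLevels)
    {
        return 1.0 / usableLevels;   // 255 levels -> ~1/255 wave; 100 -> ~1/100 wave
    }

With roughly 100 usable levels, that gives the 1/100-wave ballpark Dale quotes above for typical Bath data.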


Mladen, Florida, USA
 

On Thu, Jul 23, 2020 at 01:09 PM, Dale Eason wrote:
Still not sure what you mean when you say you used a "wavefront chart".
That area in the results section at the bottom where you have a surface or wavefront graph.

There is an accuracy limit to interferogram analysis software based on the size of the pixels and the number of pixels.
So there's a limit not only on how many fringes one can have, but also on how few. Is there a formula to calculate that range? Can't the program calculate the ideal number of fringes you should be setting your igrams to? Have any of the RR testers provided images of their igrams with their reports?

I just repeated the 150/2400 f/8 paraboloid simulation with 60 fringes (X tilt = 30) and I get perfect results. No artifacts whatsoever (see enclosed image). I imagine that the number of suitable fringes will depend on the total OPD wavefront error, which means it will have to be calculated for each mirror separately.


Mladen, Florida, USA
 

On Thu, Jul 23, 2020 at 01:34 PM, vladimir galogaza wrote:
I have tried this several times,
but simulated interferograms for an ideal mirror never produced "an RMS of 0, a Strehl of 1 and a conic of -1."
Whether it was "essentially" good enough I do not know. Perhaps somebody will try so that we can compare results.
Maybe you should try again. Just remember, this time X tilt = 30.


Mladen, Florida, USA
 

On Thu, Jul 23, 2020 at 01:45 PM, Arjan wrote:
Apart from that, the simulation itself could also contain errors that are then perfectly analyzed.
I don't know what that means. A simulated mirror in any optical raytrace analysis program will yield a theoretically perfect result. A paraboloid will focus a point source at infinity to a perfect dot. In reality this is not true due to diffraction, but theoretically it is. To get diffracted images you need different simulation software.


Dale Eason
 

Perhaps you don't know, or are forgetting, that there is a difference between analysis based on continuous functions, as ray tracing uses, and the discrete, sample-based (pixel-based) analysis that igram analysis uses.  Discrete sample-based resolution is limited by the sample density and the max value of a pixel.

In answer to your other question about how many fringes to use: that has been discussed several times, and the answer is that it depends on several factors, with the general answer for DFT analysis being as many as you can get without violating the rule that a fringe needs to be at least 3 pixels wide.  In practice around 100 is a good number for a mirror that is 640 x 640 pixels on the image.
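
A back-of-the-envelope version of that rule (my own sketch, hypothetical names):

    // Upper bound on fringe count if each fringe must stay at least
    // minPixelsPerFringe (~3) pixels wide across the mirror image.
    int maxFringes(int mirrorWidthPx, int minPixelsPerFringe = 3)
    {
        return mirrorWidthPx / minPixelsPerFringe;   // 640 / 3 ~ 213
    }
    // That is only the hard limit; in practice Dale suggests staying well
    // below it, around 100 fringes for a 640 x 640 pixel mirror image.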

Yes, people have published sample igrams from their RR work.  Not all, but most, I think.  They have done so on this list at times.  Usually they use more than 30 fringes and closer to 100, IIRC.


Dale Eason
 

On Thu, Jul 23, 2020 at 01:42 PM, Mladen, Florida, USA wrote:
That area in the results section at the bottom where you have a surface or wavefront graph.
That is titled "Profile of wavefront error".  I usually say "profile plot"  when discussing it.

So you did not mean "using"; you meant inspecting it when you discussed it.  By "using" I took you to mean that you used it as input to the program.

Dale