Topics

errors in "error" models

 

#108 : A Proposed Pre-Virtual 2-Port of NanoVNA-0.6.0
Hello,
Allow us, please, to propose--FACUPOV in our SOW--the following 4-port as the Pre-Virtual 2-Port of NanoVNA-0.6.0:
- - - - - - - - - - - - - (c) gin&pez@arg (cc-by-4.0) 2019 : start - - - - - - - - - - -
https://www.op4.eu/nvu/2020.02.22/NanoVNA.Pre.Virtual.2.Port.gpa.cc.by.4.i.2020.png
- - - - - - - - - - - - - finish : (c) gin&pez@arg (cc-by-4.0) 2019 - - - - - - - - - - -
Sincerely,
gin&pez@arg
REFERENCE
#92.2 : Our General Picture of [TheLeastVNA] - Update 2:
1 January 2020 - https://groups.io/g/nanovna-users/message/9026
:108#

 

#107': On the Missing Terms in NanoVNA firmware - ERRATUM

Dear Erik,

Thank you very much indeed for your valuable comment,
since you forced us to recheck the related code.

No, we cannot explain that, simply because we erroneously
reported this absence.

We are terribly sorry for the inconvenience.
Please accept our apologies.

Hence, we withdraw this proposition:
https://www.op4.eu/nvu/2020.02.20/modifs.to.nanovna.0.6.0.gpa.cc.by.4.i.2020.png

Best regards,

gin&pez@arg

:107'#

erik@...
 

Can you explain why they are needed given the current content of eterm_calc_er?

--
Erik, PD0EK

 

#107 : On the Missing Terms in NanoVNA firmware

Hello,

Allow us, please, to propose the following modifications, marked by "//":

https://www.op4.eu/nvu/2020.02.20/modifs.to.nanovna.0.6.0.gpa.cc.by.4.i.2020.png

Sincerely,

gin&pez@arg

REFERENCES

[I] : errors in "error" models :
[#08] : 25.09.2019 : https://groups.io/g/nanovna-users/message/3004
[#11] : 25.09.2019 : https://groups.io/g/nanovna-users/message/3049
[#11]' : 25.09.2019 : https://groups.io/g/nanovna-users/message/3041, by Gary O'Neil, N3GO
[#15] : 26.09.2019 : https://groups.io/g/nanovna-users/message/3147

[II] : error model(s) : 18.09.2019 : https://groups.io/g/nanovna-users/message/2553 :
REF [2] : https://www.op4.eu/code/HPseminar1989p3-9.png

 

On Tue, 7 Jan 2020 at 22:17, John Ackermann N8UR <jra@...> wrote:

Gary, just a guess (I'm not a VNA designer) but it might be because it's
easier to design and characterize an "absolute" (open or short) with
nominally infinite impedance than something that needs to match some
arbitrary value. And how would you choose the arbitrary values?
Different users have different requirements.

I think, but am not sure, that using arbitrary values also would prevent
any pretense at corrected measurements beyond those arbitrary values.
When your limits are infinity, nothing stands in your way. :-)

73,
John
You can certainly perform a calibration with arbitrary values - all the
electronic calibration units do this - they can't generate anywhere near
perfect opens and shorts using electronic switches.

Best accuracy occurs when the standards are as different as possible. In
principle, you could calibrate with 1.0, 1.0001 and 1.00002 ohms. But such
a calibration would be very unstable. The open and short have the greatest
phase difference that it is possible to make.
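A minimal numeric sketch of that instability (my own, assuming the usual three-term one-port error model with ideal raw readings g = Gamma): each standard contributes one row [1, G, g·G] to the 3x3 linear system for the error terms, so the matrix's condition number shows how strongly noise is amplified.

# Sketch (assumed model, not from this thread): conditioning of the 3x3
# one-port calibration system. Model: g = a + b*G + c*(g*G), with a = e00,
# c = e11, b = e10e01 - e00*e11; for an ideal instrument g = G, so each
# standard contributes the row [1, G, G**2] (Vandermonde in Gamma).
import numpy as np

def gamma(z, z0=50.0):
    """Reflection coefficient of impedance z in a z0-ohm system."""
    return (z - z0) / (z + z0)

def cal_condition(gammas):
    """Condition number of the calibration system for three standards."""
    A = np.array([[1.0, g, g * g] for g in gammas])
    return np.linalg.cond(A)

print(cal_condition([-1.0, 1.0, 0.0]))   # short/open/load: well conditioned
print(cal_condition([gamma(1.0), gamma(1.0001), gamma(1.00002)]))  # enormous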

They are also the easiest to characterise, as you can determine their
properties from physical measurements and EM simulation, whereas you can't
do that with resistors or capacitors.

You can also calibrate with three shorts or three opens if you want. I have
done it with 3 shorts myself, but it is not suitable for use over a wide range
of frequencies. If you look at the attached PDF, of a 110 GHz Keysight
85059A calibration and verification kit, you will see the opens and loads
are only rated for use to 50 GHz. Between 50 and 110 GHz, you use multiple
shorts. There are 4 different ones in the kit.

Dave

erik@...
 

There is also a physical explanation of why the imperfections of SOL standards are described (or parameterized) in a certain way.
The connection from the VNA goes through the connector (which has a certain characteristic impedance) into the S, O or L.
The connector contains a center conductor, a dielectric and the outer solid metal wall. The characteristic impedance of the connector (and of the cable, if one is used) depends on the size of the center conductor, the thickness of the dielectric (and its dielectric constant) and the outer wall. Now, as long as this geometry continues, the impedance stays at the characteristic value. So what does an S, O or L do?
It makes a transition to a new impedance (0, infinite and Z0 for the perfect Short, Open and Load).
But this transition is difficult to make perfect.
The simplest is the Short. You stop the dielectric at a well defined place (called the "reference plane") and make a massive metal (solder or some other metal) connection between the center conductor and the outer wall. But as the connection is not perfect, it can have a bit of inductance, and therefore the imperfections of the short are often modeled as a fixed inductance (in henries) in series with the Short. The inductance of the short can be modeled (e.g. calculated from the physical characteristics), measured as described in the last document I shared, or "compared" to a known golden-standard Short (and yes, even the kilogram has to start somewhere).
So the inductive terms used to describe the short are there because they are actually present in the short! They are not "invented" to compensate for imperfections. They describe, in as few parameters as possible, the actual impedance of the short.
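As a rough illustration (a sketch; the inductance value below is assumed purely for illustration, not taken from any real kit), such a series inductance rotates the short's reflection coefficient away from -1 as frequency rises:

# Sketch: reflection coefficient of a short modeled as an assumed series
# inductance L in a Z0 = 50 ohm system: Gamma = (jwL - Z0) / (jwL + Z0).
import numpy as np

Z0 = 50.0
L = 20e-12  # 20 pH: an assumed, illustrative value

for f in (10e6, 1e9, 6e9):
    zs = 2j * np.pi * f * L                 # impedance of the inductance
    g = (zs - Z0) / (zs + Z0)               # magnitude ~1, but phase rotates
    print(f"{f/1e9:5.2f} GHz: angle(Gamma) = {np.degrees(np.angle(g)):8.3f} deg")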

The Open is also fairly easy to make at a well defined place (called the "reference plane") by stopping the center conductor, the dielectric and the outer wall. Of course you will immediately understand this can never be a perfect "Open": yes, at zero Hz the impedance will be huge (infinite?), but there is a tiny capacitance, because the center conductor and the outer wall can still "see" each other through the air, as the dielectric constant of air is not zero. So this imperfection is calculated from a physical model and specified as a capacitance, because that is what is actually making the "Open" not perfect.

For the "Load" you have to replace the metal of the Short with some material with a certain resistance to create exactly the right resistance which is easy at zero Hz but from the physical reality you can easily understand there is possibly some inductance (the resistance material has a certain "length" to cover from center conductor to outer wall) or capacitance (the resistor has some "depth" and you no longer have the characteristic impedance of Z0 so there is some extra capacitance.

As a small amount of parallel capacitance has about the same impact as some extra length, the capacitive imperfections are sometimes described as shifts of the reference plane.
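A quick numeric check of that equivalence (a sketch with assumed values for C and the phase velocity): an open with fringe capacitance C has Gamma = (1 - jwCZ0)/(1 + jwCZ0), whose phase is about -2wCZ0, while an ideal open behind an extra length l of line is rotated by -2wl/v, so the two agree when l is about v*C*Z0.

# Sketch: a small fringe capacitance on an "Open" versus the equivalent
# reference-plane shift. C and v are assumed, illustrative values.
import numpy as np

Z0, C, v = 50.0, 50e-15, 2.0e8    # 50 fF fringe C; v of a typical PTFE line
l = v * C * Z0                    # equivalent extra length (0.5 mm here)

for f in (1e9, 3e9, 6e9):
    w = 2 * np.pi * f
    g_cap = (1 - 1j * w * C * Z0) / (1 + 1j * w * C * Z0)   # open with C
    g_len = np.exp(-2j * w * l / v)                         # offset ideal open
    print(f"{f/1e9:.0f} GHz: {np.degrees(np.angle(g_cap)):8.3f} vs "
          f"{np.degrees(np.angle(g_len)):8.3f} deg")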

So, in one sentence: the parameters used to model the real impedance of the calibration loads are chosen to match real physical imperfections present in the calibration loads.

Now to your question about "real high frequencies". As you can understand, extra capacitance has more impact at higher frequencies, so that is why you sometimes see calibration loads that deviate substantially at very high frequencies from their perfect impedance at zero hertz.
This is not a problem, it is reality! And as long as you use the real impedances of the calibration loads (O, S, L) in the G formula there is no problem, as the G formula will still be able to calculate G from the measured g, the measured s, o, l and the real impedances S, O and L.
Now what happens if you calibrate the VNA using the real OSL and then measure one of the calibration standards? You get O, S and L with all their deviations from perfection, which you do not care about, because they have no impact on your measurement.
Example: if you have a load with a fringe C, you see a resistance with a small C, which may become VERY visible at very high frequencies, even though it is small.

In the documents I added you can see there are many more ways to calibrate out and compensate the internal imperfections of the VNA, using various (imperfect) calibration standards and complicated measurement "tricks".
This implies you can physically model and precision-manufacture calibration standards as "gold" standards (like the meter, defined from the speed of light and the second), with impedances calculated from these models, and then build a metrology chain down to the much more imperfect calibration standards we normal people use.

Sorry for the long post, I got carried away.....
--
Erik, PD0EK

Jeff Anderson
 

Hi Gary,

Regarding fringe capacitance, HP states that fringe capacitance can have an effect on measurement accuracy above about 300 MHz.
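For a sense of scale (a sketch assuming a typical 50 fF of fringe capacitance; the value is illustrative, not HP's figure):

# Sketch: reactance and Gamma rotation of an assumed 50 fF fringe
# capacitance at the ~300 MHz threshold, in a 50 ohm system.
import numpy as np

C, Z0, f = 50e-15, 50.0, 300e6
w = 2 * np.pi * f
print(f"reactance: {1 / (w * C) / 1e3:.1f} kOhm")                    # ~10.6 kOhm
print(f"Gamma rotation: {np.degrees(2 * np.arctan(w * C * Z0)):.2f} deg")  # ~0.54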

This should explain “why” knowing fringe capacitance is important. (And I hope it is already clear why you need to accurately know your standards’ Gammas).

By the way, different types of standards will have different values of fringe capacitance. The Gammas of different Opens aren’t simply a difference in “length change”.

Finally, I would like to add...

Someone once told me that it took HP 10 years to develop VNA error correction. If true, that would have been a tremendous amount of effort by a group of very talented scientists and engineers.

I’m just a retired engineer with a tangential interest in VNA’s. I won’t have the answers to all your questions, but I’ll try to answer what I can.

Best regards,

Jeff, k6jca

erik@...
 

And here is another introduction. Read section 2.1.3.
This document from 1990 also mentions the ratio of cross-ratios and describes various different approaches to calibration next to the well-known SOLT.

--
Erik, PD0EK

erik@...
 

The attached document should give some answers.

The impedance model of the calibration load represents the actual impedance of the load, either as calculated from theoretical modelling or from measurements as in the attached document.

--
Erik, PD0EK

Gary O'Neil
 

Interesting.....

My question still doesn't seem to be getting through... Yet it remains a simple one. :-) Let me try this again.

Question: What justifies characterizing the calibration standards?
Answer: Because it improves measurement accuracy.

Question: How does it do that?
Answer: It makes post calibration measurements of the standards plot with the profile of the standards and not plot as though the standards were perfect.

Question: How does doing this make the measurements more accurate?
Answer: Because HP says characterization of the standards improves accuracy, everybody agrees, this is the way it's always been done, and it actually works.

Question: How much more accurate are the measurements after calibrating with the characterized standards?
Answer: Close to absolute.

Question: How do you know the measurements are close to absolutely accurate?
Answer: Because they were characterized by a certified test lab, who provides us with the correction coefficients.

Question: Does the certification lab provide a tolerance on the accuracy of the coefficients they give you?
Answer: I don't know... Probably.

Question: So you are confident that your results are as accurate as you can make them?
Answer: Yes.

Question: Why?
Answer: Whatever do you mean???

Question: Isn't there some degree of uncertainty remaining?
Answer: Well sure...

Question: How much?
Answer: I don't know, but I know it's not very much once I've calibrated appropriately with my characterized calibration standards.

Question: What impedance does the open circuit standard represent?
Answer: Oh I don't know, but it's pretty high... maybe a few k ohms.

Question: I think I read something around 50 femtofarads being used to compensate for the fringing capacitance of the open standard as a typical correction. Does that sound about right?
Answer: Yeah! I might have heard or read something like that. It's a very small number.

Question: I'll say... but 50 femtofarads is about 32 ohms at 100 GHz. If the open circuit impedance drops to 32 ohms, how far does that move the dot?
Answer: I don't know, but it isn't very far?

Question: But it shows that it has a noticeable span on the display. Why is that?
Answer: Because it probably moves that far after it's been calibrated. The open circuit might be offset by the connector length... and it only happens at really high frequencies. In the GHz range maybe.

Question: I'm guessing it's an unavoidable shunt capacitance and maybe there's some angular displacement in play also?
Answer: Probably... Maybe... It has to be something, or else it would be just a dot.

Question: So that isn't an error in the calibration?
Answer: No... It shows that it has been calibrated because it's displaying what the open circuit really looks like.

Question: Then the open circuit doesn't look like a real open circuit?
Answer: Correct. It's not possible to make a perfect open circuit.

Question: So I've been told... That's about as close to an open circuit as we can manufacture though, am I right?
Answer: I think so.

Question: Then why isn't the standard used to represent a precise open circuit after calibration? Isn't this a real open circuit in the real world?
Answer: Because then it wouldn't be accurate, and not all open circuits are the same. They might be at a slightly different offset.

Question: That's still just a length change though, correct?
Answer: Yes, but now we can measure it and use the data to measure others like it.

Question: You can't manufacture a precise open circuit, but you need to measure them accurately?
Answer: Correct.

Question: Why?
Answer: Huh?

Question: Why? What's the point? How does it manifest its value?

--
73

Gary, N3GO

Jeff Anderson
 

Hi Gary,

Great questions. I have no idea what the answers are. But Erik’s and John’s replies seem reasonable.

Perhaps Dr. Kirby might know.

Best regards,

Jeff

John Ackermann N8UR
 

Gary, just a guess (I'm not a VNA designer) but it might be because it's
easier to design and characterize an "absolute" (open or short) with
nominally infinite impedance than something that needs to match some
arbitrary value. And how would you choose the arbitrary values?
Different users have different requirements.

I think, but am not sure, that using arbitrary values also would prevent
any pretense at corrected measurements beyond those arbitrary values.
When your limits are infinity, nothing stands in your way. :-)

73,
John
----

On 1/7/20 2:48 PM, Gary O'Neil wrote:
Hi again Jeff;

I believe I now sufficiently understand the technical aspects of the discussions in this thread to forego the wizardry behind the pursuit of high accuracy. It appears sufficiently sound.

On that happy note… I will state my one remaining question succinctly. Why the obsession over accuracy at the two most unstable phase regions of highest Q and unreachable limits of infinity and zero?

A reasonable and credible answer will be a bounded tolerance of impedance or phase in those regions, and an estimate of the consequence of exceeding the tolerance boundaries.

I will reiterate… There is nothing wrong with how this is treated, what is being done, or the rationale behind the obsession. The only question is simply... Why?

erik@...
 

Could it be because these are most easy to manufacture? A short and an open?

--
Erik, PD0EK

Gary O'Neil
 

Hi again Jeff;

I believe I now sufficiently understand the technical aspects of the discussions in this thread to forego the wizardry behind the pursuit of high accuracy. It appears sufficiently sound.

On that happy note… I will state my one remaining question succinctly. Why the obsession over accuracy at the two most unstable phase regions of highest Q and unreachable limits of infinity and zero?

A reasonable and credible answer will be a bounded tolerance of impedance or phase in those regions, and an estimate of the consequence of exceeding the tolerance boundaries.

I will reiterate… There is nothing wrong with how this is treated, what is being done, or the rationale behind the obsession. The only question is simply... Why?

--
73

Gary, N3GO

Jeff Anderson
 

Hi Gary,

I just wanted to make sure that I answered your question as to why a Short and an Open are not plotted at -1 and +1 after calibration. (The short answer is: because their actual Gammas do not equal -1 and +1.)

As to the math, don't be daunted! It is more straight-forward than you might think. Keep in mind:

1. The basic formula for one-port error correction is based upon the one-port signal-flow graph.
2. Deriving an equation from a signal-flow graph might seem awkward, but there are a number of sites on the web that will give you the rules (if I could do it, I'm sure you can, too).
3. The result will be an equation that, after rearranging, will give you an actual Gamma in terms of a measured gamma and three error terms.
4. But you cannot use this equation to find an actual Gamma until the three error terms are known.
5. To find these error terms, you first make three S11 (Gamma) measurements, each of a device with a *known* Gamma (thus you need 3 devices with different known Gammas).
6. For each measurement, plug the measured Gamma and the "known" Gamma into the equation derived in step 3, above. This will give you three equations with three unknowns.
7. Solving for the unknowns (i.e. the errors) is linear algebra (a small numeric sketch follows below).
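Here is a minimal sketch of steps 5-7 (the error-term values are invented for the example; lowercase g is the raw reading, uppercase G the actual Gamma, following the thread's notation):

# Sketch of steps 5-7: three measurements of known standards give three
# linear equations in the three error terms. Error values are made up.
import numpy as np

e00, e11, e10e01 = 0.05 + 0.02j, 0.10 - 0.03j, 0.90 + 0.10j  # invented

def measure(G):
    """Raw gamma the uncorrected VNA reports for an actual Gamma G."""
    return e00 + e10e01 * G / (1 - e11 * G)

knowns = np.array([-1.0, 1.0, 0.0])     # known Gammas of the 3 standards
raw = measure(knowns)                   # step 5: measure the standards

# Rearranged model  g = a + b*G + c*(g*G)  with a = e00, c = e11,
# b = e10e01 - e00*e11: one linear equation per standard (step 6).
A = np.column_stack([np.ones(3), knowns, raw * knowns])
a, b, c = np.linalg.solve(A, raw)       # step 7: linear algebra

g_dut = measure(0.3 - 0.4j)             # now measure an unknown device...
print((g_dut - a) / (b + c * g_dut))    # ...and recover ~ (0.3 - 0.4j)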

Best regards,

- Jeff, k6jca

Gary O'Neil
 

Jeff, Erik, and John;

Thank you all for your patience with me on this. I at least feel comfortable with what I think I understand so far.

Once again I failed to express myself correctly and used the word devices in lieu of system. My bad... It inspired an answer to a question I wasn't asking. :-) I apologize for that and genuinely appreciate your (Jeff) response. That issue aside, I don't have any issues with your explanations, and they remain consistent with my understanding. My problem lies in following the math behind all of this to confirm or enhance my understanding of how calibration accuracy is assured. All signs point to this being done correctly, the results achieved are as desired, and the reasoning is rational. I'm not trying to be or to sound critical here... and at the risk of again inaccurately expressing myself, I'm not looking for cookbook summary descriptions. The math is involved and challenging for an ole' timer to follow before losing concentration and falling asleep, and being new at the game of scrutinizing VNA performance to this level of detail makes it all the more daunting. I've resisted trying to figure out signal flow diagrams, but I sense learning how to use them may be less tedious than continuing to crawl through the equations and running spreadsheet examples. Learning is one of the perks of retirement though... and it's all fun. :-)

Thanks again guys.

--
73

Gary, N3GO

erik@...
 

Not claiming I am competent to do this, I would like to try to summarize in a limited number of words what this thread has provided:

It adds value for VNA measurement, due to its internal transform, to understand the impact of the magnitude of measurement errors (such as noise) or not-well-characterized calibration standards on the calculated values, and in particular to understand how that impact depends on the position on the Smith chart (referring to the DERR part of the communication).
It is possible to formulate an elegant, rather compact formula to calculate G solely based on g, s, o, l, S, O and L (a sketch of one such formula follows below).
It is possible, and it makes sense, to compare the one-port (S11) measurement performance of two VNAs measuring the same load, provided they have been calibrated using the same calibration standards and approach (regardless of whether this was done using SOL or any other calibration approach; this includes using the same description of the calibration standards, whether as perfect, parameter-modeled or data-based), as the results of these measurements should then be equal.
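One way to write such a compact formula (a sketch, not necessarily the thread's derivation): the raw reading g is a bilinear (Möbius) function of the actual G, and bilinear maps preserve cross-ratios, so the cross-ratio of (g, s, o, l) equals that of (G, S, O, L) and can be solved directly for G.

# Sketch: G from g, s, o, l (raw readings) and S, O, L (actual values of
# the standards) via the cross-ratio, which the VNA's bilinear error map
# leaves invariant. The error map in the self-test below is invented.
import numpy as np

def cross_ratio(z1, z2, z3, z4):
    return (z1 - z3) * (z2 - z4) / ((z1 - z4) * (z2 - z3))

def corrected(g, s, o, l, S, O, L):
    """Solve cross_ratio(G, S, O, L) = cross_ratio(g, s, o, l) for G."""
    w = cross_ratio(g, s, o, l)
    # w = (G - O)(S - L) / ((G - L)(S - O))  =>  linear in G:
    return (w * L * (S - O) - O * (S - L)) / (w * (S - O) - (S - L))

def err(G):                              # an invented bilinear error map
    return (0.05 + 0.90 * G) / (1 - 0.10 * G)

S, O, L = -1.0, 1.0, 0.0                 # actual Gammas of the standards
G_true = 0.3 - 0.4j
print(corrected(err(G_true), err(S), err(O), err(L), S, O, L))  # ~0.3-0.4j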
--
Erik, PD0EK

Jeff Anderson
 

On Sun, Jan 5, 2020 at 08:04 PM, Gary O'Neil wrote:
Hi Gary,

I don't know if I can answer your questions, but let me start with the last ones...


This brings up yet another question. If the devices are "measured and
remembered" as the math clearly dictates, what would cause a short and an open
(defined as such) to appear anywhere other than the locations -1 and 1 without
biasing the algorithm to place them differently? If they are intentionally
placed at a different location, what is the justification for doing so, since
this would seem to create a need to compensate for the induced post
calibration offset errors ?
First, it isn't the "devices" that are "measured and remembered", it is the "system errors" that are measured and remembered, so that their effect on measurements can be compensated for. (These "system errors" are separate from any characterized imperfections the Standards might have.)

The Standards are the means by which the system errors are determined. These standards are assumed to be perfect and without error, but not perfect in the sense that the short's reflection coefficient is -1+j0 or the open's equal to +1+j0. Rather, they are considered perfect in the sense that their electrical attributes (delay, loss, parasitic effects) have been accurately characterized and are known to the VNA system. In other words, perfect, yet imperfect.

If one of these "perfect yet imperfect" standards is then measured on the VNA, prior to the VNA's calibration, the position of its Reflection Coefficient, plotted on the Smith Chart, will be quite different from what it should be. This difference is due to the VNA's "system errors".

There are three system errors associated with one-port (i.e. S11) measurements. Thus, to determine what these three errors are, three different "known" standards are used, creating three equations with three unknowns, those unknowns being the unknown system errors. These equations are then solved, and the unknown errors become known.

(For more on this, see here: http://k6jca.blogspot.com/2019/12/vna-notes-on-12-term-error-model-and.html)

Now that these three errors are known, they can be compensated for (i.e. corrected) in future measurements. And if I now take one of my "perfect yet imperfect" standards and measure it on the VNA, the VNA should now accurately place its Reflection Coefficient on the Smith Chart.

But should it be placed at -1+j0 or 1+j0?

Let's say that this standard is a Short standard, and let's say there is some inherent, yet well characterized, delay within the short itself, between the actual implementation of the short and the calibration reference plane (that lies within the short's connector). The VNA should not plot this short at -1+j0, but should instead plot the short at the point on the Smith Chart that represents the *actual* impedance of the Short at its reference plane, which is *not* -1+j0 (because the delay will cause rotation). Dr. Kirby's example is a good illustration of this concept.
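A small numeric illustration of that rotation (a sketch; the delay value is assumed, not taken from any real standard): an ideal short behind a one-way delay tau measures as Gamma = -exp(-j*2*w*tau), so its point walks around the rim of the Smith Chart as frequency rises.

# Sketch: an ideal short behind an assumed one-way delay tau appears as
# Gamma = -exp(-j*2*w*tau), rotated away from -1 + j0.
import numpy as np

tau = 30e-12   # 30 ps one-way delay: an assumed, illustrative value

for f in (0.1e9, 1e9, 3e9):
    g = -np.exp(-2j * 2 * np.pi * f * tau)
    print(f"{f/1e9:4.1f} GHz: angle(Gamma) = {np.degrees(np.angle(g)):8.2f} deg")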

Let's take another example: the Open standard. These standards have some amount of fringe capacitance, and so let's assume that our Open has some fringe capacitance but no delay.

Now, after our calibration procedure, let's say we were measuring an unknown capacitor that, by coincidence, had *exactly* the same amount of capacitance as the fringe capacitance of the Open. Would we want this capacitor to be plotted at +1+j0? Or would we want it to be plotted at some point, not 1+j0, that represents the actual Reflection Coefficient for that capacitance?

We would want it to be accurately plotted at the point representing the value of the capacitance. And since, in this example, there is no difference between my "Open" standard and the capacitance I later measured, if I then measured my Open standard on the VNA, its Reflection Coefficient should also appear at that same point on the Smith Chart as my unknown capacitor, not at 1+j0.

I hope the above explanation helps answer your two questions. Please let me know if I've been confusing or not clear. And then, once we get through this concept, we can tackle your other questions.

Best regards,

- Jeff, k6jca

P.S. it is rare for a standard to have zero delay from its reference plane. APC-7 standards, being sexless, have 0 delay, but almost all other standards have some sort of non-zero delay and thus, when measured, should not appear at -1+j0 or 1+j0. So almost all open or short standards, when plotted, should plot rotated from -1+j0 or 1+j0.

John Ackermann N8UR
 

I hesitate to jump into this, but...

The "correction coefficients" that professional VNAs apply to compensate for, e.g., fringing capacitance in the opens, are not just electrical measurements of selected components.  Things like fringing capacitance are not flaws in the standards, but inevitable results of the physical realization of the electrical concept of an "open".

The coefficients do not come from electrical measurement of some "gold" standard.  Instead, they are derived from the physical characteristics of the standards, which are manufactured with extremely tight mechanical tolerances.   From these characteristics the coefficients can be determined by using fundamental equations like the capacitance formula more accurately than is possible by electrical measurements.  (And of course they are sanity-checked electrically as well.) 
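For example (a sketch; the dimensions below are illustrative, not those of any actual kit), the capacitance per unit length of a coaxial section follows directly from its measured geometry:

# Sketch: capacitance per metre of a coaxial geometry from first
# principles, C' = 2*pi*eps0*eps_r / ln(b/a). Radii are assumed values.
import numpy as np

eps0, eps_r = 8.854e-12, 1.0         # F/m; air dielectric
a, b = 1.52e-3 / 2, 3.50e-3 / 2      # inner/outer radii of a 50 ohm air line

C_per_m = 2 * np.pi * eps0 * eps_r / np.log(b / a)
print(f"{C_per_m * 1e12:.1f} pF/m")  # ~66.7 pF/m for this geometry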

So, the coefficients are not really correcting for "imperfections", but are instead acknowledging at a fundamental level the properties of physical objects, to improve the mathematical models of those objects being used in the correction equations.

John
----

On Jan 5, 2020, 11:04 PM, at 11:04 PM, Gary O'Neil <n3go@...> wrote:
Hi Jeff;

Per my post:
@ Gary O'Neil - https://groups.io/g/nanovna-users/message/9184

I don't find any source of disagreement in your posts:
@ Jeff Anderson - https://groups.io/g/nanovna-users/message/9178
@ Jeff Anderson - https://groups.io/g/nanovna-users/message/9181

I will also confess that I overstated a Hackborn quote in a way that
altered its more accurate interpretation. He didn't dismiss anything,
but rather makes the statement that all of the errors and
uncertainties in the system are measured and remembered.

By that inexcusable but excellent example of my inability to make and
defend my point; I will attempt instead to understand your
understanding of the process, and search for where the two will
hopefully converge.

After several reads and re-reads of your and Erik's posts; I think you
two may be on the same page. Your post, and another by Dr. Kirby:
@ Dr. Kirby - https://groups.io/g/nanovna-users/message/9183

hint at a possible disconnect in "my" understanding, which may be
linked to a vagueness in the use of jargon, or more pathetically, my
lack of understanding of the jargon in use.

The way I am interpreting your posts, I see the use of the terms
calibration, characterization, and correction. You also identify the
noise and imperfect characterizations of the standards as not being
corrected by the error correction process.... referring to a Hand
quote.

You also make reference to HP and Keysight quotes... both of which I
agree with as being correct. To my point; any statement that the
"accuracy" of something (anything) used for the purpose of improving
the accuracy of the measurement must itself be accurate cannot be
argued. It is made true by the way it is stated and/or presented.

Clearly there is no argument that even with the highest of quality in
the standards, at some upper limit of frequency, the manufacture of
standards sets to the exacting dimensional tolerances required to
guarantee that the reference plane remains constant becomes
unachievable; significant rotational errors then occur, and
corrections for the known and well defined imperfections are needed in
the calibration in order to make meaningfully accurate measurements.

So my lack of understanding seems to lie in the question being what's
the point of attempting to model imperfect standards of uncertain
accuracy, and using that model to corrupt the ability of the algorithm
to accurately measure and remember all of the system errors and
uncertainties with uncertain guesses at what the ones that are measured
have been characterized to be? Are not the errors that manifest
themselves as problematic, only problematic because they result from
differences in the location of their respective reference planes? The
uncertainties of the parasitic reactance properties associated with
each of the standards are measurable, and thus they will be "measured
and remembered". As such, they are all present and accounted for in the
calibration. Characterization of the standard reference plane location
(degrees per GHz) would seem to be a more precise and accurate manner
to compensate (not calibrate) for their respective rotational offsets
without compromising the integrity of the calibration algorithm. After
that; how precise does the rotational compensation need to be in order
to sufficiently orient the regions of infinity to the VNA user such
they are presented with the most accurate measurement the VNA is
capable of providing?

This brings up yet another question. If the devices are "measured and
remembered" as the math clearly dictates, what would cause a short and
an open (defined as such) to appear anywhere other than the locations
-1 and 1 without biasing the algorithm to place them differently? If
they are intentionally placed at a different location, what is the
justification for doing so, since this would seem to create a need to
compensate for the induced post calibration offset errors ?


--
73

Gary, N3GO

Gary O'Neil
 

Hi Jeff;

Per my post:
@ Gary O'Neil - https://groups.io/g/nanovna-users/message/9184

I don't find any source of disagreement in your posts:
@ Jeff Anderson - https://groups.io/g/nanovna-users/message/9178
@ Jeff Anderson - https://groups.io/g/nanovna-users/message/9181

I will also confess that I overstated a Hackborn quote in a way that altered its more accurate interpretation. He didn't dismiss anything, but rather makes the statement that all of the errors and uncertainties in the system are measured and remembered.

By that inexcusable but excellent example of my inability to make and defend my point; I will attempt instead to understand your understanding of the process, and search for where the two will hopefully converge.

After several reads and re-reads of your and Erik's posts; I think you two may be on the same page. Your post, and another by Dr. Kirby:
@ Dr. Kirby - https://groups.io/g/nanovna-users/message/9183

hint at a possible disconnect in "my" understanding, which may be linked to a vagueness in the use of jargon, or more pathetically, my lack of understanding of the jargon in use.

The way I am interpreting your posts, I see the use of the terms calibration, characterization, and correction. You also identify the noise and imperfect characterizations of the standards as not being corrected by the error correction process.... referring to a Hand quote.

You also make reference to HP and Keysight quotes... both of which I agree with as being correct. To my point; any statement that the "accuracy" of something (anything) used for the purpose of improving the accuracy of the measurement must itself be accurate cannot be argued. It is made true by the way it is stated and/or presented.

Clearly there is no argument that even with the highest of quality in the standards, at some upper limit of frequency, the manufacture of standards sets to the exacting dimensional tolerances required to guarantee that the reference plane remains constant becomes unachievable; significant rotational errors then occur, and corrections for the known and well defined imperfections are needed in the calibration in order to make meaningfully accurate measurements.

So my lack of understanding seems to lie in the question being what's the point of attempting to model imperfect standards of uncertain accuracy, and using that model to corrupt the ability of the algorithm to accurately measure and remember all of the system errors and uncertainties with uncertain guesses at what the ones that are measured have been characterized to be? Are not the errors that manifest themselves as problematic, only problematic because they result from differences in the location of their respective reference planes? The uncertainties of the parasitic reactance properties associated with each of the standards are measurable, and thus they will be "measured and remembered". As such, they are all present and accounted for in the calibration. Characterization of the standard reference plane location (degrees per GHz) would seem to be a more precise and accurate manner to compensate (not calibrate) for their respective rotational offsets without compromising the integrity of the calibration algorithm. After that; how precise does the rotational compensation need to be in order to sufficiently orient the regions of infinity to the VNA user such that they are presented with the most accurate measurement the VNA is capable of providing?

This brings up yet another question. If the devices are "measured and remembered" as the math clearly dictates, what would cause a short and an open (defined as such) to appear anywhere other than the locations -1 and 1 without biasing the algorithm to place them differently? If they are intentionally placed at a different location, what is the justification for doing so, since this would seem to create a need to compensate for the induced post calibration offset errors ?


--
73

Gary, N3GO