Re: Newbie material on HF+(more)

Leif Asbrink
 

Hi Joanne,

Can you apply the technology that developers using the
ANAN SDR transmitters have for dramatically reducing IMD?
In principle yes, but there is no reason. People can use
the standard software for ANAN for transmit and Linrad
for receive if they want the enhancements.

And without that reduction why is a fancy SDR needed for
reception?
Because it allows better interference suppression and
some other tools that allow copying weaker signals.
It also allows a highly sensitive waterfall and perhaps
CW skimmer.

The FFT calculated number is an average already. In fact,
if you make a really fine-grained FFT and don't use overlapped
computation techniques, you're averaging over syllables, which
won't catch the peak.
NO NO!!
The RMS power is computed in the time domain on the
filtered signal.
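To make that concrete, here is a minimal Python sketch of the idea (the 700 Hz tone, the 3 kHz sampling rate and the amplitude are made-up assumptions for illustration): the S-meter power is the mean of |I+jQ|^2 over a user-chosen window of the already-filtered signal, with no FFT involved.

```python
import numpy as np

def rms_power(iq, fs, t_avg):
    """Mean power of filtered I/Q samples over a t_avg second
    window, computed in the time domain (no FFT involved)."""
    n = int(fs * t_avg)
    return np.mean(np.abs(iq[:n]) ** 2)

# Made-up test signal: a 700 Hz tone of amplitude 1.0, already
# filtered to the passband and sampled as I/Q at 3 kHz.
fs = 3000.0
t = np.arange(3000) / fs
z = np.exp(2j * np.pi * 700.0 * t)
p = rms_power(z, fs, 0.5)    # mean of |I+jQ|**2, about 1.0 here
```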

All I'm saying is that measuring
the peak of a signal is not as simple as falling out of bed.
It takes some thought.
Well, I strongly disagree. We were discussing S-meters.
They measure the power within a selected passband. You
cannot do that with an FFT unless you do very special
(and uninteresting) things to make each bin have the response
of the IF filter. That means you would need a bin separation
of something like 25 Hz with a narrow window that makes
the response of a perfect carrier about 100 bins wide.
Transforms would have to overlap by 99%. Really silly...

Many digital modulation techniques
featuring multiple tones have a very high peak to average
power. That's why they are best run at power levels well
below the peak the transmitter can develop. The human
voice also has that problem. By comparison CW is really
simple. And will your RMS give the same reading for an
extended key down, a series of Morse code dahs, and a
series of Morse code dits?
Yes.

Of course, the average power over a period of seconds
is probably more important for the transfer of
information for most modes. But you need to know the
peaks in order to avoid overloading the A/D or D/A
elements in the system.
Now you are on really thin ice...
The signal of interest is rarely (or, more accurately, NEVER)
the dominating signal in the wideband signal fed to the
ADC in an SDR.

If I make a single 10 ns wide I and Q sample of RF as
filtered to the bandwidth of interest, what do I have?
The bandwidth of interest is perhaps 2.4 kHz for SSB.
A "single 10 ns wide I and Q sample" has no meaning;
it implies that you represent the 2.4 kHz bandwidth
with a sampling frequency of 100 MHz. Grotesque!!
That is on the order of the sample rate for the ANAN SDRs.
What is grotesque about that?
To believe that the power for the S-meter is computed
from a signal filtered to 2.4 kHz bandwidth and presented
at a sampling rate of 100 MHz.

I can square I and Q, add them, and square root the result.
What do I have? If the RF carrier is say 1 MHz how long
do I have to average successive 10 ns wide samples to get
a decent true RMS reading by squaring I's and Q's, adding
I's and Q's, dividing by the number of I and Q pairs, and
square rooting the result? What happens if I average 125
samples instead of 100 samples? Does it matter what part
of the 1 MHz sine wave I start with?
Dear Joanne, this question is irrelevant. Nobody would
try to do something like this.
It is done every day.
I suggest you give me a single example. I am convinced
you will not be able to do that. (Please remember we are
discussing S-meters in receivers.)

But usually the filter is placed at
the lower frequency after filtering and decimation to
minimize computation.
I would say always.

But the result is equivalent to
filtering at the D/A output.
That is not the impression your way of writing will
give to the readers of this list. If you would apply
a 2.4 kHz wide filter on a signal with 100 MHz
sampling rate, it would be totally stupid to compute
the squares of I and Q for every sample. Exactly
the same information would be obtained if you made
the computation every 50000th sample provided you
have frequency-shifted the signal to zero IF (which
is trivial in an SDR).
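A toy Python illustration of that point, with made-up numbers far below 100 MHz (the tone frequency, sampling rate and decimation factor are arbitrary assumptions, and a real design would of course low-pass filter before decimating): shifting the signal to zero IF and keeping only every Nth sample gives exactly the same power figure as squaring I and Q for every sample.

```python
import numpy as np

fs = 1_000_000.0      # "wideband" sampling rate (toy value)
f0 = 123_456.0        # frequency of the narrowband signal
dec = 100             # decimation factor

t = np.arange(100_000) / fs
x = 0.5 * np.exp(2j * np.pi * f0 * t)     # complex tone, amplitude 0.5

# Shift to zero IF: multiply by a complex oscillator (trivial in an SDR)
z = x * np.exp(-2j * np.pi * f0 * t)

# Keep every dec-th sample (a real design would filter first)
z_dec = z[::dec]

# The power computed at the low rate equals the power at the full rate
p_full = np.mean(np.abs(x) ** 2)       # about 0.25
p_dec = np.mean(np.abs(z_dec) ** 2)    # about 0.25 as well
```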

The problem remains even if
I gave 10,000 theoretically instantaneous samples per
second with a 2.4 kHz bandwidth SSB signal. One sample
can give you anything from 0 to the power at the peak
of a sinewave. It takes a fairly long term sample to
extract the average power.
NO. The average power is the average over a time that the
operator has to decide. Someone might want the average
power over 10 minutes (for the signal from a beacon)
while someone else wants the average power over 0.1
second.

And the peak power is of use
when trying to gain control so that ranges of D/A
converters on the audio outputs are not exceeded.
You are mixing up things in a way that I am afraid causes
confusion. We are on a public forum and I do take these
things very seriously. Suddenly you introduce an aspect of
AGC. Do not saturate the loudspeaker. That would not affect
the computation of signal power.

You say you have I and Q samples separated by 10 ns. That
means the clock is 100 MHz and the bandwidth is 200 MHz.
Anyone with the slightest insight in SDR technology knows
that a 2.4 kHz wide filter at a 100 MHz sampling rate
is too demanding even for the best processors we have.
No, it means I used real hardware and sampled at some
unstated rate that may be as low as 4.8 ksps for a
2.4 kHz filter. You read into what I stated more than
I intended.
Dear Joanne, I am afraid your provocative way of writing
is confusing to many readers. We are on an international
forum where many readers do not have English as a native
language. I read (carefully) what you write - and it
seems that you use irony in a way that goes beyond my
understanding.

You might modify your statements to make them understandable:

1) Mentioning the ADC is totally irrelevant. It has nothing
to do with the evaluation of a narrowband signal. Keeping
the ADC below saturation is a different problem.

2) To evaluate the power of a signal with a bandwidth
of 2.4 kHz we need to sample I and Q at least at 2.4 kHz.
Nothing is gained by sampling at a higher rate.

3) To compute average power we need to collect more
than a single I/Q pair. The average will of course
depend on the time we average over. The user has to
decide depending on what he is interested in.

4) To compute peak power we need to collect more
than a single I/Q pair. The user has to decide
what time interval he wants to know the peak (RMS)
power for.
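Points 3 and 4 can be sketched in Python like this (the keyed-carrier test signal, its 700 Hz tone, the 3 kHz sampling rate and the 0.1 second block length are all hypothetical numbers, chosen only for illustration):

```python
import numpy as np

def avg_and_peak_power(iq, fs, t_block):
    """Average power of the whole record, plus the peak of the
    block-wise mean power; the block length is the user's choice."""
    n = int(fs * t_block)
    nblocks = len(iq) // n
    p = np.abs(iq[: nblocks * n].reshape(nblocks, n)) ** 2
    per_block = p.mean(axis=1)       # mean power of each block
    return per_block.mean(), per_block.max()

# Hypothetical keyed carrier: on for 1 s, off for 1 s, sampled at 3 kHz.
fs = 3000.0
key = np.concatenate([np.ones(3000), np.zeros(3000)])
z = key * np.exp(2j * np.pi * 700.0 * np.arange(6000) / fs)
avg, peak = avg_and_peak_power(z, fs, 0.1)   # avg about 0.5, peak about 1.0
```

The average depends on the keying duty cycle, while the peak of the 0.1 s block powers recovers the key-down power, which is the distinction points 3 and 4 draw.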

Yes, now think in terms of the pathological cases and tell
me what range of answer you get with a 1.5 kHz sine wave
in a 300Hz to 2700 Hz filter sampled at say 10kHz after
all the decimation. For various averaging times what is
the peak to peak variation in the measurement with time?
It is zero if the signal is strong enough to make noise
irrelevant.
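A small Python check of that claim, under the assumption that the signal is available as complex I/Q samples (the record length and the choice of averaging lengths are arbitrary): for a noise-free 1.5 kHz tone the block-averaged power is identical for every block, whatever the averaging length and wherever in the sine wave the average starts.

```python
import numpy as np

fs = 10_000.0
t = np.arange(10_000) / fs                 # one second of samples
z = np.exp(2j * np.pi * 1_500.0 * t)       # 1.5 kHz tone as I/Q, amplitude 1

spreads = []
for n in (100, 125, 1000):                 # different averaging lengths
    p = np.abs(z[: (len(z) // n) * n].reshape(-1, n)) ** 2
    per_block = p.mean(axis=1)
    spreads.append(per_block.max() - per_block.min())
# every spread is zero (to rounding): the reading does not depend on
# the averaging length or on where in the sine wave the average starts
```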

When you stop tilting at windmills I did not pose as
examples, and concentrate on what I did say, it might all
be more obvious.
Please do not use this kind of language. I am not
a native speaker of American English. What you wrote here
can not be decoded by me :-(

Even if you want to imagine sampling a
single 1.5 kHz signal out of a precision generator at some
frequency at 100 Msps what happens if you average over a
variable period of time and plot the output of the average
for several seconds before changing the averaging period?
The result would always be the same (assuming noise
is negligible.)

This is easier to visualize, and probably more accurate,
than working with a low sample rate above the Nyquist
limit.
I do not agree. I believe it is confusing to almost
everyone on this list.

Even an FFT-derived S-meter would show some variance
in measured signal level from FFT to FFT on a signal
at random frequencies.
No. Not if processing is properly done and the noise power
is very small compared to the signal power.

It seems to me you are making some incorrect assumptions.
Maybe using un-windowed FFT or something else. Anyway you
are wrong here.

And human voices are a particularly
vicious thing on which to try to get a good peak reading
without resorting to the relatively meaningless largest
single sample.
Why do you say this? "The largest single sample" out of
perhaps 1024 samples is not meaningless at all. It actually
provides very meaningful information. Assume sampling
at 3 kHz (for a 2.4 kHz wide signal.) "The largest single
sample" out of 4096 samples would be the peak power (RMS)
taken at a rate of about 0.7 Hz.
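A Python sketch of this peak-reading idea (the envelope shape, the 700 Hz tone and the amplitudes are invented for illustration): the largest |I+jQ|^2 within each 4096-sample block at 3 kHz sampling approximates the peak envelope power, reported once per block, i.e. at 3000/4096, roughly 0.73 Hz.

```python
import numpy as np

fs = 3000.0                        # sampling rate for a 2.4 kHz wide signal
t = np.arange(2 * 4096) / fs
env = 0.2 + 0.8 * np.abs(np.sin(2 * np.pi * 2.0 * t))   # varying envelope
z = env * np.exp(2j * np.pi * 700.0 * t)                # voice-like test tone

n = 4096
p = np.abs(z[: (len(z) // n) * n].reshape(-1, n)) ** 2
peak_per_block = p.max(axis=1)     # "largest single sample" per block
# each value approximates the peak envelope power (about 1.0 here),
# reported once per 4096-sample block, i.e. at fs/n, roughly 0.73 Hz
```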

And that number does matter with respect
to limiting in equipment. But for things like SSB or
high peak to average digital waveforms the average power
over seconds is what matters more in the information
theory world.
Sure, given a max PEP power (1.5 kW) one would get the
best information transfer with very hard compression
and an average power of about 900 W. That would be
when the signal has a constant level in a background
of white noise. Such a signal is heavily distorted and
not easy to copy even at higher signal levels. If there
is some QSB, or if signal levels are a little higher,
it is much better to use less compression and perhaps
500 W average power.

So to get back to where we started, what
number do we report to the person who just asked us for
an S-Meter reading? With SDRs we have more practical
answers to that question than with analog equipment.
Yes. I am afraid there is no consensus on this issue.
Personally I would report the level of the peak envelope
power evaluated at perhaps 0.1 second and averaged over
something like 10 seconds.

On HF, as I understand it, the report is always 59 or
599 (to save time.)

73

Leif
