> Can you apply the technology that developers using the...

In principle yes, but there is no reason. People can use
the standard software for ANAN for transmit and Linrad
for receive if they want the enhancements.
> And without that reduction why is a fancy SDR needed for...

Because it allows better interference suppression and
some other tools that allow copying weaker signals.
It also allows a highly sensitive waterfall and perhaps
> The FFT calculated number is an average already. In fact...

NO NO!!
The RMS power is computed in the time domain on the
> All I'm saying is that the measuring...

Well, I strongly disagree. We were discussing S-meters.
They measure the power within a selected passband. You
can not do that with an FFT unless you do very special
(and un-interesting) things to make each bin have the response
of the IF filter. Means you would need a bin separation
of something like 25 Hz with a narrow window that makes
the response of a perfect carrier about 100 bins wide.
Transforms would have to overlap by 99%. Really silly...
> Many digital modulation techniques...

Yes.
> Of course, the average power over a period of seconds...

Now you are on really thin ice...
The signal of interest is rarely (or more adequately NEVER)
the dominating signal in the wideband signal fed to the
ADC in an SDR.
> If I make a single 10 ns wide I and Q sample of RF as...
> I can square I and Q, add them, and square root the result.
> It is done every day.

Dear Joanne, this question is irrelevant. Nobody would...
That is on the order of the sample rate for the ANAN SDRs.
The bandwidth of interest is perhaps 2.4 kHz for SSB.
To believe that the power for the S-meter is computed
from a signal filtered to 2.4 kHz bandwidth and presented
at a sampling rate of 100 MHz...
I suggest you give me a single example. I am convinced
you will not be able to do that. (Please remember we are
discussing S-meters in receivers.)
> But usually the filter is placed at...

I would say always.
> But the result is equivalent to...

That is not the impression your way of writing will
give to the readers of this list. If you would apply
a 2.4 kHz wide filter on a signal with 100 MHz
sampling rate, it would be totally stupid to compute
the squares of I and Q for every sample. Exactly
the same information would be obtained if you made
the computation every 50000th sample provided you
have frequency shifted the signal to zero IF (which
is trivial in an SDR).
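The equivalence argued here can be sketched numerically. This is a minimal illustration with scaled-down rates (100 kHz instead of 100 MHz, decimation by 25 instead of 50000); the filter length and tone frequency are my own choices, not from any real SDR:

```python
import numpy as np

fs = 100_000   # sample rate, scaled down from 100 MHz for the sketch
decim = 25     # keep every 25th sample -> 4 kHz, still above 2.4 kHz
n = 50_000
t = np.arange(n) / fs

# test signal: a tone already shifted inside the passband at zero IF
z = 0.5 * np.exp(2j * np.pi * 500 * t)

# simple low-pass FIR standing in for the 2.4 kHz channel filter
taps = np.sinc(2 * 1200 / fs * (np.arange(401) - 200)) * np.hamming(401)
taps /= taps.sum()
zf = np.convolve(z, taps, mode="valid")

p_full = np.mean(np.abs(zf) ** 2)            # I^2+Q^2 for every sample
p_decim = np.mean(np.abs(zf[::decim]) ** 2)  # I^2+Q^2 for every 25th sample

print(p_full, p_decim)  # essentially identical
```

After the narrowband filter the decimated samples carry the same power information as the full-rate stream, which is the point being made about computing I^2+Q^2 at every sample being wasted effort.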
> The problem remains even if...

NO. The average power is the average over a time that the
operator has to decide. Someone might want the average
power over 10 minutes (for the signal from a beacon)
while someone else wants the average power over 0.1...
> And the peak power is of use...

You are mixing up things in a way that I am afraid causes
confusion. We are on a public forum and I do take these
things very seriously. Suddenly you introduce an aspect of
AGC. Do not saturate the loudspeaker. That would not affect
the computation of signal power.
>> You say you have I and Q samples separated by 10 ns. That...

> No, it means I used real hardware and sampled at some...

Dear Joanne, I am afraid your provocative way of writing
is confusing to many readers. We are on an international
forum where many readers do not have English as a native
language. I read (carefully) what you write, and it
seems that you use irony in a way that goes beyond my...
You might modify your statements to make them understandable:
1) Mentioning the ADC is totally irrelevant. It has nothing
to do with the evaluation of a narrowband signal. Keeping
the ADC below saturation is a different problem.
2) To evaluate the power of a signal with a bandwidth
of 2.4 kHz we need to sample I and Q at least at 2.4 kHz.
Nothing is gained by sampling at a higher rate.
3) To compute average power we need to collect more
than a single I/Q pair. The average will of course
depend on the time we average over. The user has to
decide depending on what he is interested in.
4) To compute peak power we need to collect more
than a single I/Q pair. The user has to decide
what time interval he wants to know the peak (RMS)
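Points 3 and 4 can be sketched as follows. This is a minimal illustration assuming a 3 kHz complex sample rate for the 2.4 kHz channel; the function names and block lengths are my own, not taken from any particular SDR:

```python
import numpy as np

FS = 3000  # complex sample rate for a 2.4 kHz wide channel

def avg_power(iq, seconds):
    """Point 3: average power over the interval the operator chose."""
    n = int(seconds * FS)
    block = iq[-n:]  # most recent `seconds` worth of I/Q pairs
    return np.mean(block.real ** 2 + block.imag ** 2)

def peak_power(iq, rms_seconds):
    """Point 4: peak of the short-term (RMS) power. Split the record
    into `rms_seconds` blocks, average the power in each, take the max."""
    n = int(rms_seconds * FS)
    nblocks = len(iq) // n
    blocks = iq[:nblocks * n].reshape(nblocks, n)
    return np.max(np.mean(np.abs(blocks) ** 2, axis=1))

# a signal whose envelope steps from 1 to 2 halfway through 2 seconds
iq = np.concatenate([np.ones(3000), 2 * np.ones(3000)]).astype(complex)
print(avg_power(iq, 2.0))   # 2.5  (mean of 1.0 and 4.0)
print(peak_power(iq, 0.1))  # 4.0  (the strongest 0.1 s block)
```

Both numbers need many I/Q pairs, and both depend on the averaging time the user picks, which is exactly what points 3 and 4 say.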
> Yes, now think in terms of the pathological cases and tell...

It is zero if the signal is strong enough to make noise...
> When you stop tilting at windmills I did not pose as...

Please do not use this kind of language. I am not
a native speaker of American English. What you wrote here
cannot be decoded by me :-(
> Even if you want to imagine sampling a...

The result would always be the same (assuming noise...
> This is easier to visualize, and probably more accurate...

I do not agree. I believe it is confusing to almost
everyone on this list.
> Even an FFT derived FFT would show some variance...

No. Not if processing is properly done and the noise power
is very small compared to the signal power.
It seems to me you are making some incorrect assumptions.
Maybe you are using an un-windowed FFT or something else. Anyway you
are wrong here.
> And human voices are a particularly...

Why do you say this? "The largest single sample" out of
perhaps 1024 samples is not meaningless at all. It actually
provides very meaningful information. Assume sampling
at 3 kHz (for a 2.4 kHz wide signal.) "The largest single
sample" out of 4096 samples would be the peak power (RMS)
taken at a rate of about 0.7 Hz.
> And that number does matter with respect...

Sure, given a max PEP power (1.5 kW) one would get the
best information transfer with very hard compression
and an average power of about 900W. That would be
when the signal has a constant level in a background
of white noise. Such a signal is heavily distorted and
not easy to copy even at higher signal levels. If there
is some QSB or if signal levels are a little higher
it is much better to use less compression and perhaps
500W average power.
> So to get back to where we started, what...

Yes. I am afraid there is no consensus on this issue.
Personally I would report the level of the peak envelope
power evaluated at perhaps 0.1 second and averaged over
something like 10 seconds.
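That suggested report could be sketched like this. It is only my illustration of the rule of thumb above, not any standard S-meter algorithm; the 3 kHz rate and the function name are assumptions:

```python
import numpy as np

FS = 3000  # complex sample rate for the 2.4 kHz channel

def report_level(iq, eval_s=0.1):
    """Peak envelope power within each 0.1 s block, averaged over the
    whole record (intended to be roughly 10 s long), in dB."""
    n = int(eval_s * FS)
    nblocks = len(iq) // n
    env2 = np.abs(iq[:nblocks * n].reshape(nblocks, n)) ** 2
    pep = env2.max(axis=1)            # peak envelope power per 0.1 s block
    return 10 * np.log10(pep.mean())  # average of the peaks, in dB

# a constant-envelope carrier of amplitude 1 for 10 s -> about 0 dB
iq = np.exp(2j * np.pi * 700 * np.arange(10 * FS) / FS)
print(report_level(iq))
```

Evaluating the envelope peak over 0.1 s and then smoothing over the 10 s record gives one stable number per transmission rather than a needle that follows the voice.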
On HF as I understand the report is always 59 or
599 (to save time.)