Re: Newbie material on HF+(more)

jdow
 

On 20180409 21:35, Leif Asbrink wrote:
Hi Joanne,

You wrote "2) There are rather few radios in the hands
of hams that are accurate with their S-Meter readings."
and I disagree. There are many SDRs in the hands of
hams today - and most of them have extremely accurate
S-meters.
Many, yes. I suspect "most" is still, by far, older
analog tube and solid state equipment with no A/D
converters involved.
I just think "There are rather few radios in the hands
of hams that are accurate with their S-Meter readings."
is an inappropriate statement.
In order of magnitude:
There are very few radios in the hands... (maybe 0.1% have one)
There are rather few radios in the hands... (maybe 1% have one)
There are few radios in the hands... (maybe 10% have one)
As I understand it many more than 1% of the ham community
owns an SDR.
Please correct me if my understanding of English is wrong.
For one, none of those terms implies the numbers you cite.
Second, if it sits in a drawer as a curiosity it doesn't
count, except as a means to distort conversations.

The ham market needs a complementary HF transmitter to
match the HF+. That should lead into a complementary
transmitter for VHF on up.
Why is that? The old analog transmitter will continue
to serve well. The SDR can be added to the IF of an
analog radio or be used directly on the antenna in rx
mode.
After reading here for as long as I have I rather doubt
this is taking place all that often given the level of
technical expertise I hear on the ham bands when I listen
around.

(And both will lead to annoying interaction with
the FCC in the US, so I can understand his holding
off on it. "In the USA we only send to the address
found on the FCC web site for the call letters supplied."
The PITA level is sufficient to keep a lot of
entrepreneurs out of the market, thus favoring the
big boys already in the market.)
I do not think the FCC would create problems for someone
selling test instruments (signal generators) capable
of delivering 20 dBm of power. There is the Softrock
that can transmit 1 W on any frequency within an octave
frequency range. It is a kit, but I do not build with
surface mount components so I bought an assembled unit.
Even if it develops 20 dBm the FCC could determine it is
a transmitter that can transmit outside ham bands and as
such faces FCC type acceptance and other blathers of FCC
regulations. (I have a poor opinion of the FCC likening
the intelligence of its leadership to that of a randy
hedgehog. Never Ever make laws or in their case regulations
you do not enforce. Lack of enforcement leads to lack of
respect for those laws and regulations. That leads to the
overgrown patches of chaos I hear on VHF and HF these days.)
In bands where it is permitted, 50 mW is the typical power
limit for something sold as a transmitter.

The softrock is not a transmitter. It is a building block
by which a radio amateur can build a transmitter. Amateurs
building their own equipment have full responsibility for
what their equipment puts on the air.
Not sold as a transmitter, although it can be used as such.
If so used it would be illegal unless used by a ham within
ham bands.

I do not think the modest number of SDR transmitters is
because of FCC and PTT regulations. It is because amateurs
already have transmitters and replacing them with an SDR
would add no benefit. If you run Linrad you could use
the Linrad speech processor to increase the average
transmitted power from a conventional SSB transmitter
while reducing the splatter:
https://www.youtube.com/watch?v=VmxaZe3MM2A
There is no need for a Tx SDR.
Can you apply the technology that developers of the ANAN
SDR transmitters use for dramatically reducing IMD?
And without that reduction, why is a fancy SDR needed for
reception?

Well, I wrote peak power, but of course the meter
gives the peak RMS power since the detector is a
true RMS detector. There should be no subtraction
of 3 dB.
True - depending on whether it is done sample by sample
or effectively millisecond by millisecond. RMS power in
the AF passband requires a millisecond by millisecond
average to get close. 10 ms is close. 100 ms starts
averaging out syllable level peaks. It's not as easy
as it sounds.
It is actually easier than it sounds. RMS power is computed
in the IF passband. The user is free to set the averaging
time as he wants.
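The idea of computing RMS power in the baseband passband with a user-selected averaging time can be sketched as below. This is a hypothetical illustration, not Linrad's actual code; the envelope-voltage calibration (|iq| as RF envelope in volts) is an assumption for the demo.

```python
import numpy as np

def rms_power_dbm(iq, fs, tau, r=50.0):
    """Sliding RMS power of complex baseband (I/Q) samples, in dBm.

    Assumes |iq| is the RF envelope in volts, so the instantaneous
    power into resistance r is |iq|^2 / (2*r).  tau is the
    user-selectable averaging time in seconds.
    """
    n = max(1, int(round(tau * fs)))               # samples per average
    inst = np.abs(iq) ** 2 / (2.0 * r)             # watts, sample by sample
    avg = np.convolve(inst, np.ones(n) / n, mode="valid")  # mean over tau
    return 10.0 * np.log10(avg / 1e-3)             # watts -> dBm
```

For example, a 0.1 V peak tone into 50 ohms is 100 µW, i.e. -10 dBm, and the reading is the same whatever averaging time is chosen, since the envelope of a steady tone is constant.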
The FFT calculated number is an average already. In fact
if you make a really fine grain FFT and don't use overlapped
computation techniques you're averaging over syllables which
won't catch the peak. All I'm saying is that measuring
the peak of a signal is not as simple as falling out of bed.
It takes some thought. Many digital modulation techniques
featuring multiple tones have a very high peak to average
power. That's why they are best run at power levels well
below the peak the transmitter can develop. The human
voice also has that problem. By comparison CW is really
simple. And will your RMS give the same reading for an
extended key down, a series of Morse code dahs, and a
series of Morse code dits?
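The keying question has a concrete numerical answer that depends on the averaging window. A small sketch, using a made-up 50 ms on / 50 ms off keying pattern as a stand-in for a string of dits: averaged over many keying elements, 50% duty-cycle dits read about 3 dB below steady key-down, while an average shorter than one element would read the same as key-down during the "on" portions.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                      # one second of samples
tone = np.exp(2j * np.pi * 600 * t)         # 600 Hz CW tone

key_down = np.ones(fs)                      # steady carrier envelope
# hypothetical keying: 50 ms on / 50 ms off, i.e. dits at 50% duty cycle
dits = (np.floor(t / 0.05).astype(int) % 2 == 0).astype(float)

def mean_power_db(envelope):
    # long-term mean power, averaged over the whole second
    return 10 * np.log10(np.mean(np.abs(envelope * tone) ** 2))

diff_db = mean_power_db(key_down) - mean_power_db(dits)
print(round(diff_db, 2))  # 3.01 -- dits read 3 dB below key-down
```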

Of course, the average power over a period of seconds
is probably more important for the transfer of
information for most modes. But you need to know the
peaks in order to avoid overloading the A/D or D/A
elements in the system. They would set your compression
level in a simple manual or automatic gain control operation.

If I make a single 10 ns wide I and Q sample of RF as
filtered to the bandwidth of interest, what do I have?
The bandwidth of interest is perhaps 2.4 kHz for SSB.
A "single 10 ns wide I and Q sample" has no meaning;
it implies that you represent the 2.4 kHz bandwidth
with a sampling frequency of 100 MHz. Grotesque!!
That is on the order of the sample rate for the ANAN SDRs.
What is grotesque about that?

I can square I and Q, add them, and square root the result.
What do I have? If the RF carrier is say 1 MHz how long
do I have to average successive 10 ns wide samples to get
a decent true RMS reading by squaring I's and Q's, adding
I's and Q's, dividing by the number of I and Q pairs, and
square rooting the result? What happens if I average 125
samples instead of 100 samples? Does it matter what part
of the 1 MHz sine wave I start with?
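The question of averaging length and starting point can be checked numerically. The sketch below uses real-valued sampling of the 1 MHz tone (for complex I/Q samples of a pure tone, I²+Q² is constant and the issue disappears): 100 samples at 100 Msps is exactly one cycle, so the RMS comes out exact regardless of starting phase, while 125 samples is 1.25 cycles and the answer wobbles with the starting point.

```python
import numpy as np

fs = 100e6            # 100 Msps, as in the text
f0 = 1e6              # 1 MHz real-valued sine

def rms(nsamp, phase):
    k = np.arange(nsamp)
    x = np.sin(2 * np.pi * f0 * k / fs + phase)
    return np.sqrt(np.mean(x ** 2))

phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
r100 = [rms(100, p) for p in phases]   # exactly one cycle per average
r125 = [rms(125, p) for p in phases]   # 1.25 cycles per average
print(max(r100) - min(r100))  # essentially zero: whole cycles average exactly
print(max(r125) - min(r125))  # clearly nonzero: depends on where you start
```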
Dear Joanne, this question is irrelevant. Nobody would
try to do something like this.
It is done every day. But usually the filter is placed at
the lower frequency after filtering and decimation to
minimize computation. But the result is equivalent to
filtering at the A/D output. The problem remains even if
I have 10,000 theoretically instantaneous samples per
second of a 2.4 kHz bandwidth SSB signal. One sample
can give you anything from 0 to the power at the peak
of a sine wave. It takes a fairly long-term average to
extract the average power. And the peak power is of use
when setting gain so that the ranges of D/A
converters on the audio outputs are not exceeded.

You say you have I and Q samples separated by 10 ns. That
means the clock is 100 MHz and the complex bandwidth is 100 MHz.
Anyone with the slightest insight in SDR technology knows
that a 2.4 kHz wide filter at a 100 MHz sampling rate
is too demanding even for the best processors we have.
No, it means I used real hardware and sampled at some
unstated rate that may be as low as 4.8 ksps for a
2.4 kHz filter. You read into what I stated more than
I intended.

What we do is to decimate, apply a filter with a bandwidth
of perhaps 40 MHz and then use every 4th data point.
(means 40 ns or 25 MHz bandwidth.) Then, filter and
decimate once more to get perhaps 640 ns or 1.5625 MHz
sampling. Then filter and decimate again to get perhaps
40.96 µs or 24.414 kHz sampling. It is reasonable to apply a 2.4
kHz wide filter at this sampling rate, but it is inefficient.
In Linrad a clever user would set the final sampling rate to
6 kHz or so.
If you were really stupid and applied a 2.4 kHz filter
at a sampling rate of 100 MHz you could resample by a factor
of 100000/2.4 ≈ 42000 without losing any information at
all. Just square I and Q and add them for every 42000th sample.
It is (of course) assumed that the frequency is shifted
so the 2.4 kHz baseband is in the range ±1.2 kHz.
And all that fancy processing is equivalent to placing a
2.4 kHz wide filter on the output of the A/D converter. You
just get fewer samples to mess around with.
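The multi-stage filter-and-decimate chain described above can be sketched as follows. The rates are scaled down from the 100 MHz example so the demo runs quickly, and the windowed-sinc stage filter is a stand-in for the halfband/polyphase filters a real SDR would use; the structure (filter to the new Nyquist, keep every Nth sample, repeat) is the point.

```python
import numpy as np

def filter_and_decimate(x, factor, ntaps=129):
    # One stage: windowed-sinc low-pass just below the new Nyquist,
    # then keep every `factor`-th sample.
    cutoff = 0.45 / factor                    # normalized to the input rate
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hamming(ntaps)
    h /= h.sum()                              # unity gain at DC
    return np.convolve(x, h, mode="same")[::factor]

fs = 1_000_000                                # scaled-down stand-in rate
t = np.arange(200_000) / fs
x = np.exp(2j * np.pi * 1_000 * t)            # 1 kHz complex test tone
for factor in (4, 4, 4):                      # 1 MHz -> 250 k -> 62.5 k -> 15.625 kHz
    x = filter_and_decimate(x, factor)
    fs //= factor
mid = np.abs(x[len(x) // 4 : 3 * len(x) // 4])  # skip filter edge transients
print(fs, mid.mean())                         # tone survives at ~unit amplitude
```

Each stage only has to reject what would alias into the next, narrower band, which is why a chain of cheap wide filters plus one final narrow filter at a low rate beats a single narrow filter at the full rate.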

That demands another question. Is there a material
difference between taking a 3 kHz wide set of 10 Hz wide
FFT samples and averaging the bin power levels compared
to filtering the signal to 3 kHz wide and measuring the
10 ns wide I/Q samples for the averaging above?
"the averaging above" ???
Anyway, after resampling to represent the desired frequency
range efficiently one could do an FFT with 3 kHz bandwidth,
then average 128 transforms and use the result for one line
in the waterfall.
Yes, now think in terms of the pathological cases and tell
me what range of answers you get with a 1.5 kHz sine wave
in a 300 Hz to 2700 Hz filter sampled at, say, 10 kHz after
all the decimation. For various averaging times, what is
the peak-to-peak variation in the measurement with time?

Alternatively one could use a 128 times larger fft with
23 Hz bandwidth and take the average over 128 fft bins for
each pixel in the waterfall. The result should be identical.
We can however do much better. Use a 128 times larger
FFT. Average 10 transforms at the full FFT size, then
pick the largest value within each group of 128 averaged
powers and use that for the waterfall. That strategy gives a
major improvement in sensitivity for narrowband signals.
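The two waterfall strategies can be compared numerically. The sketch below uses made-up sizes (a 4096-point "fine" FFT versus a 32-point "coarse" one, ratio 128, and a weak tone parked exactly on a fine bin): averaging with the small FFT dilutes a narrowband signal across a whole wide bin, while taking the max over each group of fine bins keeps it concentrated, so the tone stands far higher above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(1)
nfine, ratio = 4096, 128          # fine FFT is 128x the coarse one
ncoarse = nfine // ratio          # 32-point stand-in for the "3 kHz bin" FFT
nblocks = 10                      # transforms averaged at the fine size
N = nfine * nblocks

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
k0 = 1000                         # weak tone exactly on fine bin 1000
x += np.sqrt(0.125) * np.exp(2j * np.pi * k0 * np.arange(N) / nfine)

def avg_power(sig, nfft):
    # average power spectrum over consecutive transforms
    segs = sig[: len(sig) // nfft * nfft].reshape(-1, nfft)
    return np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2 / nfft, axis=0)

coarse = avg_power(x, ncoarse)                        # small FFT, long average
fine = avg_power(x, nfine).reshape(-1, ratio).max(1)  # big FFT, max per group
print(coarse.max() / np.median(coarse))  # modest bump above the noise
print(fine.max() / np.median(fine))      # the tone towers over the noise
```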

I believe you with the discussion of your RMS
calculations. The calculations are, no doubt, correct.
The presumptions of accuracy depend on what it is you
think you are measuring. Measuring at RF frequencies
at some largish number of samples per cycle of RF
makes it easier to look for the instantaneous peak
which is 3 dB higher than the one cycle long (or
half cycle long) RMS. At AF, it's more awkward. But
some form of averaging over time is needed to get a
useful RMS value, whereas a peak value less 3 dB is
still probably as good a reading as you can get
with a very rapidly changing, not particularly
repetitive waveform.
I think you need to study SDR technology. We use linear
transformations like frequency shift and filtering.
The RF signal is transferred to a baseband signal
with a sampling rate not much bigger than the signal
bandwidth. The transformation is IDEAL (done by digital
means) and it is done without any loss of information
whatsoever.
When you stop tilting at windmills I did not pose and
concentrate on what I did say, it might all be more
obvious. Even if you want to imagine sampling a
single 1.5 kHz signal out of a precision generator
at 100 Msps, what happens if you average over a
variable period of time and plot the output of the average
for several seconds before changing the averaging period?
This is easier to visualize, and probably more accurate,
than working with a low sample rate above the Nyquist
limit. Even an FFT-derived measurement would show some variance
in measured signal level from FFT to FFT on a signal
at random frequencies. And human voices are a particularly
vicious thing on which to try to get a good peak reading
without resorting to the relatively meaningless largest
single sample. And that number does matter with respect
to limiting in equipment. But for things like SSB or
high peak to average digital waveforms the average power
over seconds is what matters more in the information
theory world. So to get back to where we started, what
number do we report to the person who just asked us for
an S-Meter reading? With SDRs we have more practical
answers to that question than with analog equipment.

{^_^}
