Re: Transmitter Linearity

Andy G4JNT
 

I would argue that COFDM is not a way to improve weak-signal performance.  COFDM and all the related modulations have as their prime aim to optimise bandwidth usage in defined channels and make the best use of allocated spectrum.  In data-communications terms they are generally referred to as "bandwidth-limited" modulation, making use of high-order constellations like QAM on parallel stacked carriers.  They are NOT weak-signal modes in their own right: the need to conserve bandwidth automatically stops them having the best weak-signal performance.

The reason for the overwhelming popularity of multicarrier OFDM modes is their resilience to multipath, as the resulting symbol rate of the multiple parallel tones is far lower than the combined data rate of the complete information stream.

In contrast, where bandwidth is not an issue but power or S/N is, we enter the realm of "power-limited" modulation.  And this is where most of amateur radio sits.  But first of all you have to realise this does not just mean a wide signal bandwidth.  "Bandwidth expansion" means a lot more than this in signal-processing terms.

Take as an example repeating a message (in SSB or Morse) five times to get it through.  It has taken five times as long to transmit the same amount of source information; in a way you could say it has been expanded five times.  In data there are other ways of doing it.  You could expand the input data by adding error-correction bits that increase it the same five-fold.  That would give far stronger error correction (getting the message through) than the five-times repeat can offer.
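To put a rough number on why plain repetition is the weakest form of five-fold expansion, here is a small sketch (the 0.1 raw error rate is an assumed illustrative figure) of the residual error rate after sending each bit five times and taking a majority vote.  A properly designed rate-1/5 code spends the same redundancy far more effectively.

```python
# Sketch (assumed numbers): residual bit-error rate of a 5x repetition code
# with majority-vote decoding on a binary symmetric channel.
from math import comb

def repetition_ber(p, n=5):
    """BER after majority-vote decoding of an n-times repetition code,
    given raw channel bit-error probability p (n odd)."""
    # A decoded bit is wrong when more than half of the n copies flip.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

raw = 0.1                      # assumed raw channel error rate
rep = repetition_ber(raw)      # after 5x repeat + majority vote
print(f"raw BER {raw:.3f} -> repetition BER {rep:.5f}")
```

A raw error rate of 1 in 10 drops to under 1 in 100 with the repeat; a real five-fold FEC code at the same rate does considerably better still.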

Back to modulation.  We rarely need more than a few tens of bits per second for normal QSO traffic.  Add very strong error correction which expands it say five times (and that is very strong indeed!) and you will get 100% copy at normalised S/N ratios (Eb/No) of around 2 to 5dB, depending on the actual modulation type - PSK or FSK.
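The normalised figure converts to an ordinary receiver-bandwidth SNR via SNR_dB = (Eb/No)_dB + 10·log10(bit rate / bandwidth).  A quick sketch (the bit rate and bandwidth below are assumed example figures, not from any particular mode):

```python
# Sketch (assumed example figures): relating normalised S/N (Eb/No) to the
# SNR measured in a given receiver bandwidth.
from math import log10

def snr_in_bandwidth(ebno_db, bit_rate_hz, bandwidth_hz):
    """Channel SNR (dB) in bandwidth_hz for a signal carrying bit_rate_hz
    bits per second received at the given Eb/No (dB)."""
    return ebno_db + 10 * log10(bit_rate_hz / bandwidth_hz)

# e.g. 25 bit/s needing Eb/No = 4 dB, measured in a 2500 Hz SSB bandwidth:
print(round(snr_in_bandwidth(4.0, 25, 2500), 1))   # -16.0 dB
```

So a signal that decodes perfectly can still sit 16dB below the noise as measured in an SSB bandwidth.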

But we are still only talking about a modulation that has to carry, say, 1000 symbols per second.  And that will easily fit into a 3kHz spectrum.  There's nothing to be gained in a pure additive-noise environment by expanding further.  In fact there are many purely practical advantages now in keeping the modulation simple and using something like basic FSK to overcome the practicalities of transmitters - like linearity.

Even if the result is wider than it need be.  In spite of our reducing allocations, we still have a huge amount of spectrum in which to transmit a few hundred bits per second.  3kHz arbitrary channelisation is still very wide in an AWGN environment.

Difficulties come when we consider extreme microwave paths including rain scatter and EME scintillation, and that requires a complete rethink.  If we assume our signal is spread out to a few hundred Hz, that automatically limits our permitted symbol rate to a fraction of that, say a few tens of symbols per second.
We cannot POSSIBLY send anything with symbols moving faster than, or even approaching, the random shifts of the spreading.  There's no hope of doing anything coherent.

So we need to choose a modulation that can send that over a scattered path.  It's rather ironic that the simplest modulation of its type, basic ON-OFF keying, is actually quite useful now.  We look for the power change over the entire tone-band of the spread signal and integrate down to the symbol speed.  That way we look over a signal-spreading bandwidth of say 200Hz but compress the result to say 10Hz.
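The band-power idea can be sketched in a few lines.  In this simulation (all parameters - sample rate, band edges, noise level - are illustrative assumptions) the "on" tone wanders randomly anywhere within a 200Hz band from symbol to symbol, mimicking the spreading, yet summing spectral power across the whole band per 10-baud symbol still recovers the keying:

```python
# Sketch of a band-power OOK detector (all parameters are illustrative
# assumptions): the tone wanders within a ~200 Hz band, so we sum spectral
# power over the whole band per symbol and threshold the result.
import numpy as np

rng = np.random.default_rng(1)
fs, sym_len = 2000, 200            # 2 kHz sample rate, 10 baud symbols
band_lo, band_hi = 400, 600        # 200 Hz spreading band
bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Transmit: each "on" symbol is a tone at a random frequency in the band,
# mimicking rain-scatter / scintillation spreading; "off" symbols are silence.
t = np.arange(sym_len) / fs
tx = np.concatenate([
    np.sin(2 * np.pi * rng.uniform(450, 550) * t) if b else np.zeros(sym_len)
    for b in bits])
rx = tx + 0.3 * rng.standard_normal(tx.size)   # additive noise

# Receive: per-symbol FFT, sum power over the whole spreading band,
# then threshold halfway between the strongest and weakest symbols.
freqs = np.fft.rfftfreq(sym_len, 1 / fs)
in_band = (freqs >= band_lo) & (freqs <= band_hi)
powers = [np.sum(np.abs(np.fft.rfft(rx[i:i + sym_len]))[in_band] ** 2)
          for i in range(0, rx.size, sym_len)]
thresh = (max(powers) + min(powers)) / 2
decoded = [int(p > thresh) for p in powers]
print(decoded)
```

Note the detector never needs to know where within the band the tone actually is - only whether the band as a whole contains power during that symbol.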

Pure ON/OFF is wasteful of transmitter energy.  So we alternate between two tones, well separated in frequency so they remain completely distinguishable when the rain-scatter or scintillation spreading occurs.  By comparing one tone's averaged power per symbol against the other's, a 3dB advantage is gained over simple ON/OFF keying when transmitted peak power is compared [1] [2].

By sending one of, say, 64 tones instead of one of two, six information bits can be encoded at a time, so for paths that give only a few Hz of shift (VHF and down) multi-tone FSK is used.
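The bit-packing is straightforward: log2(64) = 6, so each transmitted tone index carries six bits.  A minimal sketch (the helper names are my own, not from any WSJT source):

```python
# Minimal sketch of 64-tone MFSK symbol mapping: log2(64) = 6 information
# bits are carried per transmitted symbol.  Helper names are hypothetical.
def bits_to_tones(bits, n_tones=64):
    """Pack a bit list into tone indices, log2(n_tones) bits per symbol."""
    k = n_tones.bit_length() - 1          # bits per symbol: 6 for 64 tones
    assert len(bits) % k == 0
    return [int("".join(map(str, bits[i:i + k])), 2)
            for i in range(0, len(bits), k)]

def tones_to_bits(tones, n_tones=64):
    """Inverse mapping: recover the bit stream from received tone indices."""
    k = n_tones.bit_length() - 1
    return [int(b) for t in tones for b in format(t, f"0{k}b")]

msg = [1, 0, 1, 1, 0, 1,  0, 0, 0, 1, 1, 1]   # 12 bits -> 2 symbols
tones = bits_to_tones(msg)
print(tones)                                   # [45, 7]
```

Each symbol still occupies just one tone slot at a time, so the constant-envelope, non-linear-PA-friendly property of simple FSK is kept.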

To allow the symbol timing to be regenerated at the receiver, synchronisation signals have to be added to the symbols, which reduces efficiency.  For really good weak-signal performance and recovery this could be as much as another 100% overhead.

But we're still only at 2k symbols per second for standard QSO speed.  For contest and routine DX exchanges with limited information content we can go a lot slower - which means less source information in the same bandwidth - or reduce the signal bandwidth further by still heavier signal processing.

The weak-signal WSJT modes take all this into account and I think really do represent the best we are going to get on the amateur bands for weak-signal DX-type working.  They don't support real-time waffle-type QSOs, where RTTY and PSK31 would be used, but for contest and EME exchanges and pre-formatted messages they can't be beaten.

All the weak-signal ones use multi-tone FSK of 4, 9 or 65 tones, with tone-spacing options that can be chosen for the path.  They all have a large overhead for synchronisation, allowing reliable lock-up based only on a search in time and frequency over a limited range.

Until recently they have all used similar error-correction coding: JT4, JT9 and WSPR all use convolutional coding with about three times expansion of the number of bits.  JT65 does it differently, with Reed-Solomon coding and its resulting six-bit MFSK symbols.  But they all have a similar performance in terms of normalised S/N (Eb/No), needing about 4dB in their respective symbol bandwidths, ranging from 1.5Hz for WSPR to 4.4Hz for JT4.  That puts them within about 4dB (very roughly) of Shannon's theoretical limit for this modulation type.
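For reference, the ultimate Shannon floor on an AWGN channel is easy to compute: the minimum Eb/No at spectral efficiency eta is (2^eta - 1)/eta, which tends to ln 2, about -1.59dB, as eta approaches zero.  A quick sketch (note this is the ultimate coherent limit; the limit for incoherent FSK-type modulation, the one meant above, sits some dB higher):

```python
# Sketch: the ultimate Shannon Eb/No floor on an AWGN channel.
# Minimum Eb/No = (2^eta - 1)/eta at spectral efficiency eta (bit/s/Hz);
# as eta -> 0 this tends to ln 2, i.e. about -1.59 dB.
from math import log, log10

def min_ebno_db(eta):
    """Minimum Eb/No (dB) at spectral efficiency eta (bit/s/Hz), eta > 0."""
    return 10 * log10((2 ** eta - 1) / eta)

ultimate = 10 * log10(log(2))          # eta -> 0 limiting value
print(round(ultimate, 2))              # -1.59
print(round(min_ebno_db(1.0), 2))      # 0.0 dB at 1 bit/s/Hz
```

So "within about 4dB of the limit" for these narrow, low-rate modes is already remarkably close to what any system could ever achieve.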

The latest change is a completely new error-correction code in the latest version of WSJT, called QRA64.  The encoding is simpler than JT65's and it is sent using the same modulation - one of 64 tones - with a reduced synchronisation overhead.  The clever bit comes in the decoding.  It is now believed to be within 0.6dB of the Shannon limit for incoherent (FSK-type) modulation.



[1]  When you consider Tx MEAN power, two-tone FSK and OOK have identical performance.  The only difference being, you need a 200 Watt Tx, half on and half off, to equal a 100 Watt Tx sending FSK at 100% duty cycle.  Same mean, 3dB peak difference.
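The arithmetic behind note [1], using the figures from the text:

```python
# Sketch of the peak/mean power comparison in note [1] (figures from the
# text): OOK at 50% duty needs twice the peak power of continuous 2FSK for
# the same mean power, i.e. a 3 dB peak difference.
from math import log10

ook_peak, ook_duty = 200.0, 0.5      # 200 W Tx, half on / half off
fsk_peak, fsk_duty = 100.0, 1.0      # 100 W Tx, 100% duty cycle

ook_mean = ook_peak * ook_duty       # 100 W
fsk_mean = fsk_peak * fsk_duty       # 100 W
peak_diff_db = 10 * log10(ook_peak / fsk_peak)
print(ook_mean, fsk_mean, round(peak_diff_db, 1))   # 100.0 100.0 3.0
```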

[2]  Coherent modes like PSK and QAM are inherently capable of better S/N performance than incoherent FSK - under ideal conditions, exactly 3dB better, comparing two antipodal voltages for BPSK instead of power in two tone frequencies.  But coherency is complicated by the need for carrier recovery and phase locking, then symbol timing.  The non-linear processing needed for that, in a really weak-signal situation with fading, multipath etc., makes it a worse choice than keeping things simple and using non-coherent FSK.  Hence why all the WSJT modes for weak-signal use employ MFSK.
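Note [2]'s "exactly 3dB" can be checked against the standard textbook ideal-channel BER formulas (these are well-known results, not from any WSJT source): coherent orthogonal 2FSK needs precisely twice the Eb/No of antipodal BPSK for the same error rate, and noncoherent detection costs a little more again.

```python
# Sketch of note [2]'s 3 dB claim: standard textbook ideal-AWGN BER
# formulas for coherent BPSK, coherent orthogonal 2FSK and noncoherent 2FSK.
from math import erfc, exp, sqrt

def Q(z):                      # Gaussian tail probability
    return 0.5 * erfc(z / sqrt(2))

def ber_bpsk(ebno):            # antipodal, coherent: Q(sqrt(2 Eb/No))
    return Q(sqrt(2 * ebno))

def ber_cfsk(ebno):            # orthogonal 2FSK, coherent: Q(sqrt(Eb/No))
    return Q(sqrt(ebno))

def ber_ncfsk(ebno):           # orthogonal 2FSK, noncoherent detection
    return 0.5 * exp(-ebno / 2)

e = 10 ** (8.0 / 10)           # Eb/No = 8 dB as a linear ratio
# Coherent FSK needs exactly twice (3 dB more) the Eb/No of BPSK:
print(abs(ber_bpsk(e) - ber_cfsk(2 * e)) < 1e-12)   # True
```

Of course these formulas assume the carrier and symbol timing are already perfectly recovered - which is exactly the part that falls apart on a badly spread path, as the note says.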

Andy  G4JNT

On 7 September 2016 at 08:19, 'Chris Bartram' cbartram@... [ukmicrowaves] <ukmicrowaves@...> wrote:
 

FWIW, I accept that running an SSB transmitter PA well into compression isn't likely to cause the world to stop turning! However, I prefer to try to run my SSB transmitters linearly, rather than waste energy by spreading it across the band.

The real prize which comes from the use of more linear transmitters is the potential ability to use more effective modulation schemes than SSB, CW, or constant amplitude data. I have heard credible suggestions that the way to improve the weak signal performance of amateur radio (data) systems over real-world microwave paths is to use COFDM with wider bandwidths than the 2.3kHz occupied by SSB. This however requires a much more linear transmitter than most of us currently have. Practically that means the use of linearisation, and a software derived exciter.

Vy 73

Chris
GW4DGU/A



