Re: Conclusion of FSK testing


Dave

Wow..  thank you for all of the explanations.

Even if FSK RTTY won't work well by keying the DTR line, I've learned so much from the past few emails about how the "system" works and thinks: toggling the DTR/RTS pins with some imprecision.



CW
Since FLDIGI 4.1.09 I've successfully been using my Icom 7610 to do CW keying via a USB virtual serial port (DTR pin), and it works beautifully. The rig is in CW mode, so I can use the audio peak filter on the 7610 (only available in CW or RTTY mode). That is, keying the DTR line sounds great to me (monitor audio) at 20-25 wpm, my usual CW speed. And when I key the audio, with the rig in SSB mode, CW also works just fine at 20-25 wpm (Win 7 computer).


RTTY
When I operate RTTY via AFSK, keying the sound card with the rig in USB mode, everything has always been fine, regardless of where I key the audio in the FLDIGI waterfall. The shift remains 170 Hz. But I understand why staying above 1500 Hz on the WF with AFSK RTTY is likely to produce a cleaner RTTY signal, since the 2nd and 3rd harmonics (3.0 kHz, 4.5 kHz) won't get through. Since there are really very few RTTY QSOs any longer on the HF bands, I'm not going to fret over your conclusion to abandon the FSK choice to send RTTY. AFSK works fine, for me. But when I call CQ with RTTY on 20m or 40m, I rarely get a response. Everyone is now on FT8/FT4 or CW. Even PSK31 is rarely heard on HF. I think I finally understand now why keying the audio (AFSK RTTY) with the audio codec's accuracy (sample rate) works just fine, but the rig is always in USB or LSB mode.


Once again, with much appreciation for what FLDIGI allows us to do, especially on the 80m EMCOM NBEMS nets.
The weekend NY, NJ, NH, and Pa NBEMS nets on 80 meters are all using THOR 22 for checkins now, and MFSK32 for traffic (flmsg).
The recent PaNBEMS net had over 70 checkins, from all over the mid-Atlantic, south to FLA, north to ME and west to Wisc.
THOR 22 is robust, fast, and very easy to decode even if the receiving station has mistuned by tens of Hz.

73 and TU

Barry Feierman  k3eui
Philadelphia region
PaNBEMS net manager 




On Jan 20, 2020, at 10:59 AM, Dave <w1hkj@...> wrote:

My efforts in 2020 are no better than they were in 2002 when I maintained gmfsk and started down the path of developing fldigi.  At least two persons on the list pointed out the difficulty of generating precisely timed 22 msec Baudot bits (mark/space intervals).  At 45.45 baud the on/off intervals are 22002 microseconds for the start and data bits, and 33003 microseconds for the 1.5 stop bit.  Generating precisely timed DTR or RTS states would be easy if there were no operating system or other programs / threads also running on the cpu.  In a modern operating system, including MS Windows, Unix, Linux and OS-X, the operating system, system i/o drivers, and user application must share the cpu and the i/o resources.  The operating system kernel provides the necessary context and core switching to keep all of the balls in the air.

Operating systems that try to be Posix compliant provide a system call that allows the various processes to generate timeouts with microsecond precision.  Note that precision and accuracy are not synonymous.  The call takes its parameter in nanoseconds, but the precision actually provided is in microseconds (no OS to my knowledge provides sub-microsecond precision); any parameter with a fractional microsecond is truncated to the microsecond.  What the nanosecond call actually does is cause the operating system kernel to suspend the calling thread (application) for "at least" the specified number of microseconds.

The question is then "how accurate is the thread suspension?"  Attached is a simple single-thread (best of all worlds) test application that measures the accuracy of the nanosecond sleep call for 10, 15, 20, 25, 30, 35, 40, 45 and 50 millisecond parameters (you will need to compile the code).  The combined results of 1000 tests at each of the specified intervals are shown in this graph:

<lpoijdnmehkgkgog.png>

The mean error is +158 microseconds.  The most likely value is +167 microseconds.  The error is caused by the cpu being used by threads of the same or higher priority before the OS kernel forces them to relinquish the cpu.  On a Unix/Linux system the priority of an application is assigned by its "niceness" value.  Nice applications do not hog the CPU.  It is possible to assign a lower niceness value to an application, all the way to the point of causing the OS to fail: neither it nor any other process will have sufficient use of the CPU to keep the system up.  The physical power button may be the only resort if that happens.

The problem with doing that in fldigi is that all of its threads are assigned equal niceness.  fldigi can have as many as 30 concurrent threads of operation, and the OS kernel treats them all with equanimity.  The spread of the curve will be much greater for the same tests performed as a thread inside fldigi.

The accuracy of timing DTR/RTS events is further complicated by the separation of hardware resources from application layer contexts.  The OS kernel insulates the hardware from all application layer requests for service.  The program may send the request to set or clear the DTR/RTS bit, and the OS kernel/driver responds with "Got it".  The application then goes merrily on its way assuming that all is well.  In the meantime the OS kernel/driver will actually change the state of the h/w when it is good and ready to do so ... no guarantees.  All would be OK if the delay between request and service were a fixed value.  If it is not, then our uncertainty in timing is further increased (more spread in the graph).

The result of all of this is jitter in the DTR/RTS timing.  I have tested loops similar to the attached that toggle those signal lines.  The physical signal is then observed on an oscilloscope.  The 1/0 and 0/1 transition jitter is very obvious.

All of this applies equally when the DTR/RTS signal line is being used for CW or RTTY.  The human ear-mind computer can accommodate large variations in code timing, and computer CW decoders are designed with timing uncertainty in mind.  We can live with crappy CW sent via DTR/RTS.  But neither the machine nor the computer decoders for RTTY can tolerate this amount of timing jitter.

Bottom line:  without some computational miracle I see no path to generating FSK keyline output from fldigi.  You need a CW / FSK codec or modem interface, such as a Winkeyer, Mortty, nanoIO, US Navigator, et al.; or a simple converter that turns the precise and accurate right-channel on/off signal into a DTR/RTS keyline signal.  The simple converter diagram and its use are included in the fldigi help documentation.

You may well ask how fldigi generates the audio CW and TTY signals with both precision and accuracy.  The answer is that it does not.  The precision is generated by fldigi, and the accuracy is provided by the audio codec generating a specific number of audio samples.  The audio codec timing is independent of the cpu load and of its clock operation.  The same process provides the frequency and timing accuracy needed for all of the digital modes.  An audio codec operating at 48000 samples per second can maintain timing accuracy to 20.833 microseconds ... no jitter.

73, David
W1HKJ


<sim.cxx>
