My efforts in 2020 are no better than they were in 2002 when I
maintained gmfsk and started down the path of developing fldigi.
At least two people on the list pointed out the difficulty of
generating precisely timed 22 msec baudot bits (mark/space
intervals). At 45.45 baud the on/off intervals are 22002
microseconds for the start and data bits, and 33003 microseconds
for the 1.5 stop bit. Generating precisely timed DTR or RTS
states would be easy if there were no operating system or other
programs / threads also running on the cpu. In a modern operating
system, whether MS Windows, Unix, Linux or OS-X, the kernel, the
system i/o drivers, and the user applications must all share the
cpu and the i/o resources. The operating system kernel provides
the necessary context switching to keep all of the balls in the
air.
Operating systems that try to be Posix compliant provide a system
call that allows the various processes to generate timeouts with
microsecond precision. Note that precision and accuracy are not
synonymous. The system call takes its parameter in nanoseconds,
but the precision provided is in microseconds (no OS to my
knowledge provides sub-microsecond precision); any call with
fractional microsecond parameters is truncated to the microsecond.
What the nanosecond call actually does is cause the operating
system kernel to suspend the calling thread (application) for "at
least" the specified number of microseconds.
The question then is "how accurate is the thread suspension?"
Attached is a simple single-thread (best of all worlds) test
application that measures the timing accuracy for 10, 15, 20,
25, 30, 35, 40, 45 and 50 millisecond parameters (you will need
to compile the code). The combined results for 1000 tests at each
of the specified intervals are shown in this graph:
The mean error is +158 microseconds; the most likely error is
+167 microseconds. The error is caused by the cpu being used by
threads of the same or higher priority before the OS kernel
forces them to relinquish the cpu. On a Unix/Linux system the
priority of an application is set by its "nice" value. Nice
applications do not hog the CPU. It is possible to assign an
application a lower nice value (a higher priority), all the way
to the point of causing the OS to fail: neither it nor any other
process will get enough of the CPU to keep the system up, and the
physical power button may be the only resort if that happens.
The problem with doing that in fldigi is that all of its
threads are assigned an equal niceness. fldigi can have as many
as 30 concurrent threads of operation, and the OS kernel treats
them all with equanimity. The spread on the curve will be much
greater when the same tests are performed as a thread inside
fldigi.
The accuracy of timing DTR/RTS events is further complicated by
the separation of hardware resources from application layer
contexts. The OS kernel insulates the hardware from all
application layer requests for service. The program may send a
request to set or clear the DTR/RTS bit, and the OS kernel/driver
responds with "Got it". The application then goes merrily on its
way assuming that all is well. In the meantime the OS
kernel/driver will actually change the state of the h/w when it
is good and ready to do so ... no guarantees. All would be OK if
the delay between request and service were a fixed value. If it
is not, then our timing uncertainty is further increased (more
spread in the graph).
The result of all of this is jitter in the DTR/RTS timing. I
have tested loops similar to the attached one that toggle those
signal lines, observing the physical signal on an oscilloscope.
The 1/0 and 0/1 transition jitter is very obvious.
All of this applies equally whether the DTR/RTS signal line is
being used for CW or for RTTY. The human ear-mind computer can
accommodate large variations in code timing, and computer CW
decoders are designed with timing uncertainty in mind, so we can
live with crappy CW sent via DTR/RTS. Neither the mechanical nor
the computer decoders for RTTY can tolerate this amount of timing
jitter.
Bottom line: without some computational miracle I see no path to
generating FSK keyline output from fldigi. You need a CW / FSK
codec, or modem interface, such as the Winkeyer, Mortty, nanoIO,
US Navigator, et al.; or a simple converter that turns the
precise and accurate right-channel audio on/off signal into a
DTR/RTS keyline signal. The simple converter diagram and its use
are included in the fldigi documentation.
You may well ask how fldigi generates the audio CW and TTY
signal with both precision and accuracy. The answer is that it
does not. The precision is generated by fldigi and the accuracy
is provided by the audio codec generating a specific number of
audio samples. The audio codec timing is independent of the cpu
load, or its clock operation. The same process provides the
frequency and timing accuracy needed for all of the digital
modes. An audio codec operating at 48000 samples per second can
maintain timing accuracy to 20.833 microseconds ... no jitter.