Re: New to WSPR Daemon
On your recent points:
1. We have not discussed archival; our current offering is access to online, uncompressed data spanning an 11-year sunspot cycle, as described in our 2020 TAPR/ARRL Digital Communications Conference paper at https://files.tapr.org/meetings/DCC_2020/2020DCC_G3ZIL.pdf
2. To support that we have a TimescaleDB Enterprise licence, which allows automatic data tiering between main memory (192 GB), SSD (550 GB) and the 7 TB RAID array. The memory and SSD tiers are already pretty hefty ... See https://docs.timescale.com/latest/using-timescaledb/data-tiering
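To make the tiering concrete, here is a minimal sketch of the `move_chunk()` call that Timescale's data-tiering documentation describes for demoting an aged chunk to a slower tablespace. The chunk and tablespace names below are hypothetical stand-ins, not our actual schema.

```python
# Sketch: demote an older TimescaleDB chunk from the SSD tablespace to the
# RAID tablespace using move_chunk() (a Timescale Enterprise feature).
# Chunk and tablespace names here are hypothetical; adapt to your schema.

def move_chunk_sql(chunk: str, dest_tablespace: str, index_tablespace: str) -> str:
    """Build the move_chunk() statement used by Timescale data tiering."""
    return (
        "SELECT move_chunk("
        f"chunk => '{chunk}', "
        f"destination_tablespace => '{dest_tablespace}', "
        f"index_destination_tablespace => '{index_tablespace}');"
    )

# Example: move a chunk that has aged off the SSD tier down to the RAID.
sql = move_chunk_sql('_timescaledb_internal._hyper_1_42_chunk',
                     'raid_tablespace', 'raid_tablespace')
# Execute with any PostgreSQL driver, e.g. psycopg2: cur.execute(sql)
```

In practice the Enterprise licence automates this movement by policy, so calls like the above are what the tiering machinery issues on your behalf.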
3. We're using 30-day 'chunks' in TimescaleDB jargon. Each chunk covers a fixed time span, so its size on disk varies with the data volume for that month. The current chunk is entirely in main memory.
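For anyone wanting to inspect or reproduce that chunking, the standard TimescaleDB calls are `set_chunk_time_interval()` and `show_chunks()`. A small sketch follows; the hypertable name `wspr.spots` is a hypothetical placeholder for our actual table.

```python
# Sketch of 30-day chunking with standard TimescaleDB functions.
# The hypertable name 'wspr.spots' is a hypothetical stand-in.

def set_chunk_interval_sql(hypertable: str, interval: str) -> str:
    """SQL to set the time span each newly created chunk will cover."""
    return f"SELECT set_chunk_time_interval('{hypertable}', INTERVAL '{interval}');"

def show_chunks_sql(hypertable: str) -> str:
    """SQL to list the chunks currently backing a hypertable."""
    return f"SELECT show_chunks('{hypertable}');"

sql = set_chunk_interval_sql('wspr.spots', '30 days')
```

Note that changing the interval only affects chunks created from then on; existing chunks keep their original span.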
4. There's an outline diagram at the bottom of the page at http://wsprdaemon.org/technical.html
You'll see two Rob-owned servers at independent sites. They take in data independently and so provide resilience. There's also a third machine, a rented Digital Ocean Droplet holding just the latest 7 days of data, to serve immediate 'now' data needs should there be problems with both main servers.
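That two-servers-plus-Droplet arrangement lends itself to simple client-side failover: try each endpoint in order and take the first that answers. A minimal sketch, with placeholder hostnames rather than the real server addresses:

```python
from typing import Callable, Iterable

# Client-side failover across the two main servers and the 7-day Droplet.
# Hostnames below are placeholders, not the real WsprDaemon addresses.
ENDPOINTS = ["server-a.example.net", "server-b.example.net", "droplet.example.net"]

def query_with_failover(endpoints: Iterable[str],
                        fetch: Callable[[str], list]) -> list:
    """Try each endpoint in turn; return the first successful result."""
    last_error = None
    for host in endpoints:
        try:
            return fetch(host)
        except OSError as err:  # connection refused, timeout, etc.
            last_error = err
    raise RuntimeError("all endpoints failed") from last_error

# Stub fetch demonstrating the fallback order: the first host is 'down'.
def fake_fetch(host: str) -> list:
    if host == "server-a.example.net":
        raise OSError("connection refused")
    return [f"spots from {host}"]

result = query_with_failover(ENDPOINTS, fake_fetch)
# result comes from server-b, the first reachable endpoint
```

Putting the Droplet last matches its role here: it only has the latest 7 days, so it is the fallback of last resort for 'now' queries.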
5. Thanks for your comments on Aggregates - I'll post a comment when I have some results to share.
6. As for public APIs: VK7JJ, WSPRWatch and Jim Lill's site at http://jimlill.com:8088/today_int.html already access WsprDaemon using three different methods (node.js, Swift, and bash/psql), and we've had a recent post in this forum on using R. This is how we would like to work - leaving the public-facing interfaces to others.
7. My documentation at
currently provides detailed access instructions for node.js, Python, bash/psql, KNIME and Octave, and links for seven other methods. I'd envisage adding a detailed section on the method you intend to use once it's available, and I'll be adding a detailed section on R this coming week based on Andi's material from this forum.
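As one concrete illustration of the bash/psql route mentioned above, here is a sketch that assembles a `psql` command line for a read-only query from Python. The host, database, user, and column names are placeholders, not the documented credentials or schema - use the details in the access documentation instead.

```python
import shlex

# Build a psql invocation for a read-only query. Host, database, user and
# column names are placeholders; substitute the documented access details.
def psql_command(host: str, db: str, user: str, query: str) -> list:
    return ["psql", "-h", host, "-d", db, "-U", user,
            "--no-align", "-t", "-c", query]

cmd = psql_command("example-host", "wspr_db", "reader",
                   "SELECT time, tx_sign, rx_sign, snr FROM spots "
                   "ORDER BY time DESC LIMIT 10;")
# Run it with: subprocess.run(cmd, capture_output=True, text=True)
print(shlex.join(cmd))
```

The `--no-align -t` flags strip psql's table decoration, which makes the output easy to parse from a script.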