Re: New to WSPR Daemon


Greg Beam
 

Hi Rob, Gwyn,

Apologies for the spam, I somehow sent the message before I was done writing it (too many thumbs, I guess).

In any case, I suspect your storage needs will differ substantially based on use cases. Like I was saying:

  • How much data do you want to provide on your real-time endpoints (hot storage) in the PostgreSQL DBs
  • How much data to keep, where, and in what format for the long-term archives (cold storage): gz, zip, Parquet, Avro, etc. (see the sketch just after this list)
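As a rough illustration of the cold-storage side, here is a minimal sketch that batches spots out to a compressed Parquet file with pyarrow. The column names (time, band, reporter, tx_call, snr, freq) and the file naming are assumptions for the example, not the actual wsprdaemon schema:

# Minimal cold-storage sketch: write one batch of spots to a compressed
# Parquet file. Column names and values are illustrative only.
import datetime as dt
import pyarrow as pa
import pyarrow.parquet as pq

def write_spot_archive(spots, out_path):
    """spots: list of dicts, one per decoded spot."""
    table = pa.table({
        "time":     [s["time"] for s in spots],
        "band":     [s["band"] for s in spots],
        "reporter": [s["reporter"] for s in spots],
        "tx_call":  [s["tx_call"] for s in spots],
        "snr":      [s["snr"] for s in spots],
        "freq":     [s["freq"] for s in spots],
    })
    # zstd gives a reasonable size/speed trade-off for archival files
    pq.write_table(table, out_path, compression="zstd")

write_spot_archive(
    [{"time": dt.datetime(2021, 3, 1, 0, 2), "band": 20,
      "reporter": "KI7MT", "tx_call": "G3ZIL", "snr": -21, "freq": 14.0971}],
    "spots_2021-03-01.parquet",
)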

I've used Timescale some, but only for personal learning/testing, never in a production environment. The Aggregate Functions (continuous or triggered) look to be a really cool feature. The materialized tables would be what I was referring to above as Fact Tables. I would be interested in seeing how that works with a constant in-flow of data, as Materialized Views in PostgreSQL can put a heavy load on servers with large datasets.
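For what it's worth, a continuous aggregate in TimescaleDB is declared in plain SQL. Here's a rough sketch of what an hourly spot-count rollup might look like, driven from Python with psycopg2 and TimescaleDB 2.x syntax; the table and column names (wspr.spots, time, band) are assumptions, not the real wsprdaemon schema, and wspr.spots would need to be a hypertable:

# Sketch of a TimescaleDB continuous aggregate for hourly spot counts.
# Table/column names are assumptions; the source table must be a hypertable.
import psycopg2

conn = psycopg2.connect("dbname=wsprdaemon user=wspr")
conn.autocommit = True  # continuous aggregates can't be created inside a transaction

with conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW spots_hourly
        WITH (timescaledb.continuous) AS
        SELECT time_bucket('1 hour', time) AS bucket,
               band,
               count(*) AS spot_count
        FROM wspr.spots
        GROUP BY time_bucket('1 hour', time), band;
    """)
    # Refresh only the recent window on a schedule instead of rebuilding the view
    cur.execute("""
        SELECT add_continuous_aggregate_policy('spots_hourly',
            start_offset      => INTERVAL '3 hours',
            end_offset        => INTERVAL '1 hour',
            schedule_interval => INTERVAL '1 hour');
    """)

That scheduled-refresh policy is what keeps the rollup from turning into the full REFRESH MATERIALIZED VIEW load I was worried about above.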

I would think that, at some point, you'll need/want an API on the front end rather than having public users go to the database directly (could be wrong). That could help determine which Materialized Views (Aggregates) you want to provide via public APIs and which you document so users can build their own datasets from the cold-storage files. Either way, it would take a hefty PostgreSQL server to handle years of data at the scale you're forecasting here.
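Just to make the API idea concrete, here is a bare-bones sketch of a read-only endpoint sitting in front of the hourly aggregate above, using FastAPI and psycopg2. The endpoint path, parameters, and view name are all made up for the example:

# Bare-bones read-only API over the hourly aggregate; names are illustrative.
import psycopg2
from fastapi import FastAPI

app = FastAPI()

@app.get("/spots/hourly")
def hourly_counts(band: int, hours: int = 24):
    """Return hourly spot counts for one band over the last N hours."""
    conn = psycopg2.connect("dbname=wsprdaemon user=wspr")
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT bucket, spot_count
                FROM spots_hourly
                WHERE band = %s
                  AND bucket > now() - %s * INTERVAL '1 hour'
                ORDER BY bucket;
            """, (band, hours))
            rows = cur.fetchall()
    finally:
        conn.close()
    return [{"bucket": b.isoformat(), "spot_count": c} for b, c in rows]

Something like that would run under uvicorn and answer queries such as /spots/hourly?band=20 without ever exposing the database itself to the public.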

I saw the VK7JJ and WSPR Watch 3rd-party interfaces. I don't know their implementation details (I suspect they query the DB directly), so it's hard to say, but Parquet files would not be a good solution for that type of dynamic, query-on-demand need.

73's
Greg, KI7MT
