
Re: New to WSPR Daemon

Gwyn Griffiths
 

Hello Greg
On your recent points:
1. We have not discussed archival; our current offering is access to online, uncompressed data spanning an 11-year sunspot cycle, as described in our 2020 TAPR/ARRL Digital Communications Conference paper at https://files.tapr.org/meetings/DCC_2020/2020DCC_G3ZIL.pdf

2. To support that we have an Enterprise licence from TimescaleDB allowing automatic data tiering between main memory (192GB), SSD disk (550GB) and the 7TB RAID. Both are already pretty hefty ... See https://docs.timescale.com/latest/using-timescaledb/data-tiering

3. We're using 30-day 'chunks', in TimescaleDB jargon, so their size on disk varies with the spot volume in each 30-day period. The current chunk is entirely in main memory.
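For anyone curious how those chunks look from the Python side, here is a minimal psycopg2 sketch; the connection details and the hypertable name wsprdaemon_spots are placeholders for illustration, not the production values.

import psycopg2

# Placeholder connection details and table name, for illustration only.
conn = psycopg2.connect(host="localhost", dbname="wsprdaemon",
                        user="wdread", password="secret")
with conn, conn.cursor() as cur:
    # TimescaleDB's show_chunks() lists the chunks behind a hypertable.
    cur.execute("SELECT show_chunks('wsprdaemon_spots')")
    for (chunk,) in cur.fetchall():
        print(chunk)
conn.close()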

4. There's an outline diagram at the bottom of the page at http://wsprdaemon.org/technical.html
    You'll see two Rob-owned servers at independent sites. They take in data independently and so provide resilience. There's also a third machine, a rented Digital Ocean Droplet holding just the latest 7 days' data, to serve immediate 'now' data needs should there be problems with both main servers.

5. Thanks for your comments on Aggregates - I'll post a comment when I have some results to share.

6. As for public APIs: VK7JJ, WSPRWatch and Jim Lill's site at http://jimlill.com:8088/today_int.html already access WsprDaemon using three different methods (node.js, Swift, and bash/psql), and we've had a recent post in this forum on using R. This is how we would like to work - leaving the public-facing interfaces to others.

7. My documentation at
http://wsprdaemon.org/ewExternalFiles/Timescale_wsprdaemon_database_queries_and_APIs_V2.pdf
currently provides detailed instructions for access via node.js, Python, bash/psql, KNIME and Octave, with links for seven other methods. I'd envisage adding a detailed section on the method you intend to use once it's available, and I'll be adding a detailed section on R this coming week based on material from Andi on this forum.
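For anyone wanting a flavour of the Python route before opening the PDF, something along these lines works; the host, credentials, table and column names below are placeholders, not the documented ones.

import psycopg2

# Placeholder connection and schema details, for illustration only.
conn = psycopg2.connect(host="example.wsprdaemon.org", dbname="spots_db",
                        user="readonly_user", password="readonly_pass")
query = """
    SELECT time, tx_sign, rx_sign, band, snr
    FROM wsprdaemon_spots
    WHERE time > now() - INTERVAL '1 hour'
    ORDER BY time DESC
    LIMIT 100;
"""
with conn, conn.cursor() as cur:
    cur.execute(query)
    for row in cur.fetchall():
        print(row)
conn.close()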

best wishes
Gwyn G3ZIL


Re: New to WSPR Daemon

Greg Beam
 

Hi Rob, Gwyn,

Apologies for the spam; I somehow sent the message before I was done writing it (too many thumbs, I guess).

In any case, I suspect your storage needs will differ substantially based on use cases. Like I was saying:

  • How much data do you want to provide on your real-time endpoints (hot storage) in the PostgreSQL DBs?
  • How much, where and in what format do you keep long-term archives (cold storage): gz, zip, Parquet, Avro, etc.?

I've used Timescale some, but only for personal learning / testing, never in a production environment. The aggregate functions (continuous or triggered) look to be a really cool feature. The materialized tables would be what I was referring to above as Fact Tables. I would be interested in seeing how that works with a constant in-flow of data, as materialized views in PostgreSQL can put a heavy load on servers with large datasets.

I would think, at some point, you'll need/want an API on the front end for public users rather than having them go to the database directly (could be wrong). That could help determine which materialized views (aggregates) you provide via public APIs and which you leave to users to build as their own datasets from cold-storage files. Either way, it would take a hefty PostgreSQL server to handle years of data at the scale you're forecasting here.
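To make the idea concrete, here is a minimal sketch of the sort of read-only endpoint I have in mind, using Flask; the table, column and connection names are placeholders, not anything WsprDaemon actually exposes.

from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)

@app.route("/api/spot_counts")
def spot_counts():
    # Hours of history to return, capped so a public caller cannot ask for everything.
    hours = min(int(request.args.get("hours", 24)), 168)
    conn = psycopg2.connect(host="localhost", dbname="spots_db",
                            user="readonly_user", password="readonly_pass")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT date_trunc('hour', time) AS hour, count(*) "
            "FROM wsprdaemon_spots "
            "WHERE time > now() - %s * INTERVAL '1 hour' "
            "GROUP BY hour ORDER BY hour",
            (hours,),
        )
        rows = [{"hour": h.isoformat(), "count": c} for h, c in cur.fetchall()]
    conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    app.run()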

I saw the VK7JJ and WSPR Watch 3rd party interfaces. I don't know their implementation details (I suspect DB direct), so it's hard to say, but Parquet files would not be a good solution for that type of dynamic need.

73's
Greg, KI7MT


Re: Example query and analysis using R

Andi Fugard M0INF
 

Hello again,

I have added more examples:
https://inductivestep.github.io/WSPR-analysis/

Best wishes,

Andi


Re: New to WSPR Daemon

Gwyn Griffiths
 

Rob, Greg
Rob - Continuous aggregates are on my to-do list of topics to add to the examples in my TimescaleDB Guide. I'll need to check whether they can be used with aggregates such as percentiles as well as the example aggregates provided by Timescale. Hourly counts and averages spring immediately to mind. One approach would be to work out what a useful Grafana dashboard that only used aggregates would look like.
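For reference, the sort of hourly-count and average continuous aggregate I have in mind would, in TimescaleDB 2.x syntax, look roughly like the sketch below; the table, view and column names are placeholders, not the WsprDaemon schema.

import psycopg2

DDL = """
CREATE MATERIALIZED VIEW spots_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       band,
       count(*) AS spot_count,
       avg(snr) AS mean_snr
FROM wsprdaemon_spots
GROUP BY bucket, band;
"""

conn = psycopg2.connect(host="localhost", dbname="spots_db",
                        user="admin", password="secret")
conn.autocommit = True   # creating a continuous aggregate cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute(DDL)
conn.close()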

Greg - Thank you for more thought-provoking points. I wonder if discussion during the weekly Wednesday WsprDaemon Zoom meetings that Rob holds might be useful? For my part, I have learnt as I've gone along, implementing the WsprDaemon database first with Influx and then with Timescale; not the best way to do it, but it has given us a good (if not the 'best') approach to storing data and serving it to users with a whole host of applications via a growing number of interfaces.

73
Gwyn G3ZIL


Re: New to WSPR Daemon

Rob Robinett
 

Hi,

I am very pleased to see this discussion.  A recent email from Timescale suggests that long time span queries can be accelerated by defining continuous aggregate tables.  I wonder if those would help us as our database grows?


Rob

On Sat, Jan 9, 2021 at 3:53 AM Greg Beam <ki7mt01@...> wrote:




--
Rob Robinett
AI6VN
mobile: +1 650 218 8896


Re: New to WSPR Daemon

Greg Beam
 

Hi Gwyn,

A couple of redress points here.

Regarding On-Disk File Size(s)
I too was looking for a solution for this, which is why I looked toward Parquet / Avro. Both are binary file formats with the schema embedded in the file. From them, you could derive your Fact Tables (the things needed for plot rendering). Typically, you have a Master DataSet containing all rows / columns, then create sub-set fact tables, or in some cases a separate Parquet DataSet that you serve your plots from. This could be any combination of the Master columns and rows. Reducing that set down to only what's needed for a particular plot can yield huge disk savings and read-speed increases.

File Compression
Using Parquet / Avro file formats dramatically saves on long-term disk space usage. This is why I created the Pandas Parquet Compression Test. As you can see, the base file size was about 3.7 GB, and Snappy compression (the Parquet default) comes in at 667 MB, roughly a 5-to-1 reduction. Gzip and Brotli come in a couple of hundred MB smaller (around 440 MB to 470 MB) if one is really crunched for disk space.
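For anyone who wants to reproduce that sort of comparison on their own extract, a small Pandas sketch; the CSV name and column contents are placeholders.

import os
import pandas as pd

# 'spots.csv' is a placeholder for whatever extract you are testing with.
df = pd.read_csv("spots.csv")

for codec in ("snappy", "gzip", "brotli"):
    out = f"spots_{codec}.parquet"
    df.to_parquet(out, compression=codec)   # uses the pyarrow engine by default
    print(f"{codec:7s} {os.path.getsize(out) / 1e6:10.1f} MB")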

Read Speeds
With those high compression levels, I was concerned about read speeds, but that turned out to be a non-issue. During my PyArrow read tests I was able to read 47+ million rows and do a groupby and count in <= 2.01 seconds with both Snappy and Brotli. That's fast considering I was reading all rows and all columns. Read times would be much faster on a limited DataFrame reading only selected columns.
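The read-and-aggregate test itself is only a few lines with PyArrow; something like the sketch below, where the file and column names are placeholders.

import pyarrow.parquet as pq

# Read only the columns needed; the file and column names are placeholders.
table = pq.read_table("spots_snappy.parquet", columns=["band", "snr"])
df = table.to_pandas()

# The 'groupby and count' step described above.
print(df.groupby("band").size())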

In any case, there are lots of ways to clean this fish, but having a good idea of what your output needs will be, at least initially, can help define your back-end source-file strategy. While databases certainly make things easy (initially), they aren't always the best long-term solution for large datasets. I've been breaking my groups up into yearly blocks. The cool thing about Parquet is that you can append to the storage rather easily. If I need multiple years, I just query two year-groups. You could add them all together, but that can get really large, and DataFrames need to fit into memory, as that's where Spark does its processing.
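For the yearly-block layout, PyArrow's partitioned-dataset writer handles the appends; roughly like this, where the paths and columns are placeholders.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Append a new batch of rows to an existing year-partitioned dataset.
new_rows = pd.read_csv("new_spots.csv")
new_rows["year"] = 2020

pq.write_to_dataset(
    pa.Table.from_pandas(new_rows),
    root_path="spots_parquet",      # existing dataset directory; new files are simply added
    partition_cols=["year"],
)

# Pulling two year-groups back is just a filtered read over the same root.
two_years = pq.read_table("spots_parquet", filters=[("year", ">=", 2019)])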

73's
Greg, KI7MT


Re: New to WSPR Daemon

Gwyn Griffiths
 

Hello Greg
Thanks for the additional explanations and details. They give a useful picture of where you are with CSV file data and the approaches you take. It is clear that, once past the first stage of getting the data columns you want, the subsequent steps will be the same whether the source is wsprnet CSV or WsprDaemon TimescaleDB data. As fields such as numerical lat and lon for tx and rx are already in WsprDaemon, this may reduce the load at the analysis steps - that was our hope.

I am in no doubt that multithread (and cluster) approaches are needed with this data. WsprDaemon already has 390 million spots online (from July 2020 onward), and this can, and does, result in slow responses to a number of queries and hence to the Grafana graphics. For now, in the scheme of things, being able to see a plot of spot count per hour of day for each day over six months in a few tens of seconds is still useful, and a marvel.

But these 390 million spots are only taking up 138 GB of the 7 TB disk space Rob has made available - so different approaches, such as those you describe, are going to be needed to look at data over the whole sunspot cycle that WsprDaemon should be able to hold.

Thanks for permission to abstract from your posts on this topic for our TimescaleDB guide.

73
Gwyn G3ZIL


Re: New to WSPR Daemon

Greg Beam
 

Hi Gwyn,

I should probably clarify the use of these types of tools a bit more so as not to confuse folks. What I've added so far targets WSPRnet CSV files. I'll be adding the same or similar for the WSPR Daemon schemas.

The primary purpose of Spark is to map-reduce a given DataFrame / DataSet. Say, for example, you have a year's worth of spot data that you want to plot, compare, or otherwise process. The steps would go something like:

  • Select just the columns you want from the Parquet Partitions (timestamp, field1, field2, field3, etc.)
  • Perform the aggregations (SUM, AVG, Count, STATS, Compare, or whatever you need)
  • Plot or save the results to csv/json or post to a DB Fact Table.
The plot-or-save stage is where the performance increase comes in, as it's all done in parallel on a Spark cluster (standalone or multiple nodes). While this doesn't sound overly impressive, it is. Consider the November 2020 WSPRnet CSV file: it has 70+ million rows of data * 15 columns. When one adds the remainder of the year, you could easily be over 500 million rows. Doing aggregate functions on datasets of that scale can be very expensive time-wise. If one has 20 or so results to process every day of every month, down to the hour level, in a rolling fashion, it becomes impractical to do in a single-threaded call.
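A minimal PySpark sketch of those three steps; the paths and the timestamp / band / snr column names are placeholders for whatever schema your partitions use.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wspr-aggregates").getOrCreate()

# 1. Select just the columns needed from the Parquet partitions.
spots = spark.read.parquet("spots_parquet").select("timestamp", "band", "snr")

# 2. Perform the aggregations - hourly spot counts and mean SNR per band here.
hourly = (spots
          .withColumn("hour", F.date_trunc("hour", "timestamp"))
          .groupBy("hour", "band")
          .agg(F.count("*").alias("spots"), F.avg("snr").alias("mean_snr")))

# 3. Save the results, e.g. as CSV for plotting or loading into a DB fact table.
hourly.orderBy("hour").write.mode("overwrite").csv("hourly_counts", header=True)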

I've not added any streaming functions, but Spark also allows for continuous ingestion of data from file/folder monitoring, UDP ports, channels, and so on. I can see many use cases for WSPR Daemon and Spark stream processing of spot data from multiple Kiwis with multi-channel monitoring on each device. You could use it to process the data, or simply post it to a staging table for downstream analytics. Staging data for downstream activity is commonly used for things like server logs or web-page clicks from millions of users. It doesn't matter what the source data is, only that it's coming in at intervals or continuously.
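And a rough sketch of the folder-monitoring ingestion idea: Spark watches a drop directory for new CSV extracts and appends whatever arrives to a staging Parquet dataset. The folder names and the schema below are placeholders, not a WSPR Daemon format.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, TimestampType, StringType, DoubleType

spark = SparkSession.builder.appName("wspr-stream").getOrCreate()

# Streaming reads need an explicit schema; this one is illustrative only.
schema = StructType([
    StructField("timestamp", TimestampType()),
    StructField("rx_call", StringType()),
    StructField("tx_call", StringType()),
    StructField("band", StringType()),
    StructField("snr", DoubleType()),
])

incoming = spark.readStream.schema(schema).csv("incoming_spots/")

# Continuously append new rows to a staging area for downstream analytics.
query = (incoming.writeStream
         .format("parquet")
         .option("path", "staging_spots/")
         .option("checkpointLocation", "staging_spots_chk/")
         .outputMode("append")
         .start())

query.awaitTermination()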

If you're into Machine Learning and predictive analytics, the Spark ML-Lib provides a powerful set of tools also.

Essentially, Spark provides (in cluster or standalone modes):

- DataSet / Dataframe Map Reduction capabilities
- Stream Processing of data and files
- Machine Learning tests and predictive analytics

73's
Greg, KI7MT


Re: WD config errors

John
 

Thank you Rob.
All working well.

John
TI4JWC


Re: New to WSPR Daemon

Greg Beam
 

Hi Gwyn,

Sure, you can add whatever you think would be helpful. It's a slow process documenting everything, due to work, but I'll slowly get there.

73's
Greg, KI7MT


Re: WD config errors

Rob Robinett
 

Hi John,

I'm sorry, but in the stress of yesterday's insurrection I failed to test the conf file I emailed you, which included a corrupt definition of your RECEIVER_LIST.

That list must include one or more strings, each with 5 space-separated fields, e.g.
declare RECEIVER_LIST=(
        "KIWI_0                  192.168.2.160:8073     TI4JWC         EK70wb    PASSWORD"
        "KIWI_1                  192.168.2.161:8073     TI4JWC         EK70wb    PASSWORD"
        "MERGED_0_1        KIWI_0,KIWI_1          TI4JWC         PASSWORD" 
) 

The conf file I sent you was missing the double quotes around each receiver definition shown above, so WD saw only one multi-line receiver definition of KIWI_0.

There might have been a second problem with the '\r' (Carriage Return) characters added to each line by your Windows text editor.
I stripped those CRs out using the linux utility 'dos2unix' before editing your conf file. 
If you make further edits from Windows and find WD fails to start, it would be safest to edit your conf file from the desktop of the Pi using the text editor supplied as part of that Desktop environment.

Rob

On Thu, Jan 7, 2021 at 2:39 PM John via groups.io <n0ure=yahoo.com@groups.io> wrote:
I made the 00:00 a single line  --- same error
j



--
Rob Robinett
AI6VN
mobile: +1 650 218 8896


Re: WD config errors

John
 

I made the 00:00 a single line  --- same error
j


Re: WD config errors

Rob Robinett
 

I am away from home. I’ll check it later today when I get home

On Thu, Jan 7, 2021 at 12:20 PM Glenn Elmore <n6gn@...> wrote:







--
Rob Robinett
AI6VN
mobile: +1 650 218 8896


Re: WD config errors

Glenn Elmore
 

On 1/7/21 12:40 PM, John via groups.io wrote:
Here is with a comma added. I get the same errors..
The comma was no doubt a problem, but there's another thing; perhaps formatting was changed in posting. I don't know about the line wrap, but your entry for schedule 00:00 looks suspect:

declare WSPR_SCHEDULE_simple=(
         "00:00  KIWI_0,80   KIWI_0,40   KIWI_0,30
                 MERGED_0_1,20   MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10 " )
I see it as two lines and I can't tell if that's what you actually have. I think it needs to be a single line.

WD is a bit fussy about white space and delimiters. Check your .conf very carefully.

Glenn


Re: WD config errors

John
 

And also the same errors when  KIWI_0,KIWI_1   is coded.

J


Re: WD config errors

John
 

Here is with a comma added. I get the same errors..

 # To enable these options, remove the leading '#' and modify SIGNAL_LEVEL_UPLOAD_ID from "AI6VN" to your call sign:
#SIGNAL_LEVEL_UPLOAD="noise"       ### If this variable is defined and not "no", AND SIGNAL_LEVEL_UPLOAD_ID is defined, then upload signal levels to the wsprdaemon cloud database
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="no"    => (Default) Upload spots directly to wsprnet.org
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="noise" => Upload extended spots and noise data.  Upload spots directly to wsprnet.org
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="proxy" => In addition to "noise", don't upload to wsprnet.org from this server.  Regenerate and upload spots to wsprnet.org on the wsprdaemon.org server
SIGNAL_LEVEL_UPLOAD="yes"          ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then upload extended spots and noise levels to the logs.wsprdaemon.org database and graphics file server.
SIGNAL_LEVEL_UPLOAD_ID="TI4JWC"    ### The name put in upload log records, the title bar of the graph, and the name used to view spots and noise at that server.
SIGNAL_LEVEL_UPLOAD_GRAPHS="yes"   ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then FTP graphs of the last 24 hours to http://wsprdaemon.org/graphs/SIGNAL_LEVEL_UPLOAD_ID
#SIGNAL_LEVEL_LOCAL_GRAPHS="no"    ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then make graphs visible at http://localhost/
 
##############################################################
### The RECEIVER_LIST() array defines the physical (KIWI_xxx,AUDIO_xxx,SDR_xxx) and logical (MERG...) receive devices available on this server
### Each element of RECEIVER_LIST is a string with 5 space-separated fields:
###   " ID(no spaces)             IP:PORT or RTL:n    MyCall       MyGrid  KiwiPassword    Optional SIGNAL_LEVEL_ADJUSTMENTS
###                                                                                       [[DEFAULT:ADJUST,]BAND_0:ADJUST[,BAND_N:ADJUST_N]...]
###                                                                                       A comma-separated list of BAND:ADJUST pairs
###                                                                                       BAND is one of 2200..10, while ADJUST is in dBs TO BE ADDED to the raw data 
###                                                                                       So If you have a +10 dB LNA, ADJUST '-10' will LOWER the reported level so that your reports reflect the level at the input of the LNA
###                                                                                       DEFAULT defaults to zero and is applied to all bands not specified with a BAND:ADJUST
 
declare RECEIVER_LIST=(
        "KIWI_0                  192.168.2.160:8073     TI4JWC         EK70wb    6105
         KIWI_1                  192.168.2.161:8073     TI4JWC         EK70wb    6105
         MERGED_0_1              KIWI_0,      KIWI_1     TI4JWC         EK70wb  " )
 
### This table defines a schedule of configurations which will be applied by '-j a,all' and thus by the watchdog daemon when it runs '-j a,all' every odd two minutes
### The first field of each entry is the start time for the configuration defined in the following fields
### Start time is in the format HH:MM (e.g 13:15) and by default is in the time zone of the host server unless ',UDT' is appended, e.g '01:30,UDT'
### Following the time are one or more fields of the format 'RECEIVER,BAND'
### If the time of the first entry is not 00:00, then the latest (not necessarily the last) entry will be applied at time 00:00
### So the form of each line is  "START_HH:MM[,UDT]   RECEIVER,BAND... ".  Here are some examples:
 
declare WSPR_SCHEDULE_simple=(
         "00:00  KIWI_0,80   KIWI_0,40   KIWI_0,30  
                 MERGED_0_1,20   MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10 " )
 
###declare WSPR_SCHEDULE_complex=(
###         "06:00  KIWI_0,80      KIWI_0,40 KIWI_0,30  MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12   MERGED_0_1,10"       ### KIWI_0 => 7 rx channels (80/40/30/20/17/15/../10), KIWI_1 => 7 rx channels (../40/30/20/17/15/12/10)
###         "18:00  MERGED_0_1,80  MERGED_0_1,40 MERGED_0_1,30  MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 KIWI_0,12   KIWI_1,10"           ### KIWI_0 => 7 rx channels (80/40/30/20/17/15/12/..), KIWI_1 => 7 rx channels (80/40/30/20/17/15/../10)
###)
 
### This array WSPR_SCHEDULE defines the running configuration.  Here we make the simple configuration defined above the active one:
declare WSPR_SCHEDULE=( "${WSPR_SCHEDULE_simple[@]}" )
 


Re: WD config errors

Gwyn Griffiths
 

John
I think you need a comma in the line below between KIWI_0 and KIWI_1
regards
Gwyn G3ZIL


On Thu, Jan 7, 2021 at 06:02 PM, John wrote:
MERGED_0_1              KIWI_0      KIWI_1     TI4JWC         EK70wb  " )


WD config errors

John
 

I am still seeing ERRORS when I start WD. What is my config error?
John
TI4JWC
=====================
pi@raspberrypi:~/wsprdaemon $ wd -a -V
wsprdaemon.sh Copyright (C) 2020  Robert S. Robinett
This program comes with ABSOLUTELY NO WARRANTY; for details type './wsprdaemon.sh -h'
This is free software, and you are welcome to redistribute it under certain conditions.  execute'./wsprdaemon.sh -h' for details.
wsprdaemon depends heavily upon the 'wsprd' program and other technologies developed by Joe Taylor K1JT and others, to whom we are grateful.
Goto https://physics.princeton.edu/pulsar/K1JT/wsjtx.html to learn more about WSJT-x
 
ERROR: in WSPR_SCHEDULE line '00:00 KIWI_0,80 KIWI_0,40 KIWI_0,30 MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10', job 'MERGED_0_1,20' specifies receiver 'MERGED_0_1' not found in RECEIVER_LIST
ERROR: in WSPR_SCHEDULE line '00:00 KIWI_0,80 KIWI_0,40 KIWI_0,30 MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10', job 'MERGED_0_1,17' specifies receiver 'MERGED_0_1' not found in RECEIVER_LIST
ERROR: in WSPR_SCHEDULE line '00:00 KIWI_0,80 KIWI_0,40 KIWI_0,30 MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10', job 'MERGED_0_1,15' specifies receiver 'MERGED_0_1' not found in RECEIVER_LIST
ERROR: in WSPR_SCHEDULE line '00:00 KIWI_0,80 KIWI_0,40 KIWI_0,30 MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10', job 'MERGED_0_1,12' specifies receiver 'MERGED_0_1' not found in RECEIVER_LIST
ERROR: in WSPR_SCHEDULE line '00:00 KIWI_0,80 KIWI_0,40 KIWI_0,30 MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10', job 'MERGED_0_1,10' specifies receiver 'MERGED_0_1' not found in RECEIVER_LIST


++++++++++++++++++++++++++++++++++
 # To enable these options, remove the leading '#' and modify SIGNAL_LEVEL_UPLOAD_ID from "AI6VN" to your call sign:
#SIGNAL_LEVEL_UPLOAD="noise"       ### If this variable is defined and not "no", AND SIGNAL_LEVEL_UPLOAD_ID is defined, then upload signal levels to the wsprdaemon cloud database
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="no"    => (Default) Upload spots directly to wsprnet.org
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="noise" => Upload extended spots and noise data.  Upload spots directly to wsprnet.org
                                   ### SIGNAL_LEVEL_UPLOAD_MODE="proxy" => In addition to "noise", don't upload to wsprnet.org from this server.  Regenerate and upload spots to wsprnet.org on the wsprdaemon.org server
SIGNAL_LEVEL_UPLOAD="yes"          ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then upload extended spots and noise levels to the logs.wsprdaemon.org database and graphics file server.
SIGNAL_LEVEL_UPLOAD_ID="TI4JWC"    ### The name put in upload log records, the title bar of the graph, and the name used to view spots and noise at that server.
SIGNAL_LEVEL_UPLOAD_GRAPHS="yes"   ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then FTP graphs of the last 24 hours to http://wsprdaemon.org/graphs/SIGNAL_LEVEL_UPLOAD_ID
#SIGNAL_LEVEL_LOCAL_GRAPHS="no"    ### If this variable is defined as "yes" AND SIGNAL_LEVEL_UPLOAD_ID is defined, then make graphs visible at http://localhost/
 
##############################################################
### The RECEIVER_LIST() array defines the physical (KIWI_xxx,AUDIO_xxx,SDR_xxx) and logical (MERG...) receive devices available on this server
### Each element of RECEIVER_LIST is a string with 5 space-separated fields:
###   " ID(no spaces)             IP:PORT or RTL:n    MyCall       MyGrid  KiwiPassword    Optional SIGNAL_LEVEL_ADJUSTMENTS
###                                                                                       [[DEFAULT:ADJUST,]BAND_0:ADJUST[,BAND_N:ADJUST_N]...]
###                                                                                       A comma-separated list of BAND:ADJUST pairs
###                                                                                       BAND is one of 2200..10, while ADJUST is in dBs TO BE ADDED to the raw data 
###                                                                                       So If you have a +10 dB LNA, ADJUST '-10' will LOWER the reported level so that your reports reflect the level at the input of the LNA
###                                                                                       DEFAULT defaults to zero and is applied to all bands not specified with a BAND:ADJUST
 
declare RECEIVER_LIST=(
        "KIWI_0                  192.168.2.160:8073     TI4JWC         EK70wb    6105
         KIWI_1                  192.168.2.161:8073     TI4JWC         EK70wb    6105
         MERGED_0_1              KIWI_0      KIWI_1     TI4JWC         EK70wb  " )
 
### This table defines a schedule of configurations which will be applied by '-j a,all' and thus by the watchdog daemon when it runs '-j a,all' every odd two minutes
### The first field of each entry is the start time for the configuration defined in the following fields
### Start time is in the format HH:MM (e.g 13:15) and by default is in the time zone of the host server unless ',UDT' is appended, e.g '01:30,UDT'
### Following the time are one or more fields of the format 'RECEIVER,BAND'
### If the time of the first entry is not 00:00, then the latest (not necessarily the last) entry will be applied at time 00:00
### So the form of each line is  "START_HH:MM[,UDT]   RECEIVER,BAND... ".  Here are some examples:
 
declare WSPR_SCHEDULE_simple=(
         "00:00  KIWI_0,80   KIWI_0,40   KIWI_0,30  
                 MERGED_0_1,20   MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12 MERGED_0_1,10 " )
 
###declare WSPR_SCHEDULE_complex=(
###         "06:00  KIWI_0,80      KIWI_0,40 KIWI_0,30  MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 MERGED_0_1,12   MERGED_0_1,10"       ### KIWI_0 => 7 rx channels (80/40/30/20/17/15/../10), KIWI_1 => 7 rx channels (../40/30/20/17/15/12/10)
###         "18:00  MERGED_0_1,80  MERGED_0_1,40 MERGED_0_1,30  MERGED_0_1,20 MERGED_0_1,17 MERGED_0_1,15 KIWI_0,12   KIWI_1,10"           ### KIWI_0 => 7 rx channels (80/40/30/20/17/15/12/..), KIWI_1 => 7 rx channels (80/40/30/20/17/15/../10)
###)
 
### This array WSPR_SCHEDULE defines the running configuration.  Here we make the simple configuration defined above the active one:
declare WSPR_SCHEDULE=( "${WSPR_SCHEDULE_simple[@]}" )
 
 


Re: Local Monitoring

Rob Robinett
 

Do this:

pi@Maui-Pi85:~/wsprdaemon/uploads.d/wsprnet.d/spots.d $ cd ~/wsprdaemon/uploads.d/wsprnet.d/spots.d
pi@Maui-Pi85:~/wsprdaemon/uploads.d/wsprnet.d/spots.d $ ~/wsprdaemon/wsprdaemon.sh -d
pi@Maui-Pi85:~/wsprdaemon/uploads.d/wsprnet.d/spots.d $ tail -f uploads.log
210107 0432  0.36 -22  0.28   7.0401518 WA4DT          EM94   30  0     7    0
210107 0432  0.63  -7  0.19   7.0401760 KJ6WSM         CM98   33  0     1    0
210107 0432  0.17 -27  0.19   7.0401882 KN8DMK         EM89   33  0     1  -32
210107 0432  0.55  -8 -0.19  14.0970398 K6MTU          CM97   23  0     1    0
210107 0432  0.25 -26  1.09  14.0970972 K7CKW          DM42   37  0     4    0
210107 0432  0.46 -16  0.28  14.0971219 W6TOM          CM87   23  0     1    0
210107 0432  0.27 -24  0.19  14.0971600 W0YSE          DN41   40 -2     4    0
210107 0432  0.19 -28  0.41  18.1060830 KC7OGJ         DM33   37  0     1    0
210107 0432  0.24 -24  0.37  21.0960361 N6RQW          DM14   23 -1   385    0
Thu 07 Jan 2021 04:36:37 AM UTC: upload_to_wsprnet_daemon() uploading AI6VN/KH6_BL10rx spots file /tmp/wsprdaemon/uploads.d/wsprnet.d/wspr_spots.txt with 14 spots in it.
Thu 07 Jan 2021 04:36:40 AM UTC: upload_to_wsprnet_daemon() successful curl upload has completed. 14 of these offered 14 spots were accepted by wsprnet.org:
210107 0434  0.23 -26  0.71   3.5701796 N6RY           DM33   37  0     3    0
210107 0434  0.31 -26  0.11   7.0399906 KM4TLA         EM84    0  0     1    0
210107 0434  0.47 -17  0.07   7.0400303 W6LVP          DM04   37  0     1    0
210107 0434  0.28 -22 -0.87   7.0400498 DP0POL         IG06   37  0     1    0
210107 0434  0.56 -14  0.28   7.0400792 W6SCQ          CM98   37  0     1    0
210107 0434  0.24 -21  0.19   7.0401020 9Z4FV          FK90   43  0     2    0
210107 0434  0.44 -19  0.66   7.0401065 AE7YQ          DM41   37  0     5    0
210107 0434  0.33 -20 -0.32   7.0401381 N8VIM          FN42   37  0     1    0
210107 0434  0.18 -30  0.37   7.0401611 W5QJ           EM30   23  0     1    0
210107 0434  0.42 -20  0.15   7.0401958 VA7TZ          DN09   30  0     1    0
210107 0434  0.14 -22  0.37  10.1402037 N6WKZ          CM87   23  0     1    0
210107 0434  0.50 -12 -0.32  14.0970932 AB1K           CM98   23  0     1    0
210107 0434  0.17 -26  1.47  14.0970972 K7CKW          DM42   37  0     1  -24
210107 0434  0.11 -27  0.37  28.1261781 N6RQW          DM14   23  0     1   -8
^C
pi@Maui-Pi85:~/wsprdaemon/uploads.d/wsprnet.d/spots.d $


Re: Second KIWI setup?

Edward Hammond
 

Well, I guess I'm out of touch.  I've been running WD for months.  :-)  (W3ENR)

EH


On 1/5/21 10:12 PM, Rob Robinett wrote:
WD fully supports getting spots from 2 or more Kiwis, selecting the single set of best-received spots, and uploading only one copy of each spot to wsprnet.org under the single receiver call sign.
 
In addition, WD creates a log file listing all spots together with a comparison of their SNRs.
Further, all spots from all receivers can be recorded in the wsprdaemon.org SQL database for further analysis, and for text and graphical output using SQL and Grafana.
Download WD and you will find in the config file a template for MERGED receivers.

Rob
