Topics

rsync

John Hanley
 

I think we had this conversation a few years back but since rebuilding my server, I have forgotten.

I am setting up my backup system using rsync.

What is/are the best option(s) for rsync? I seem to recall using -a previously.

Also, in a previous thread it was suggested to use the cron.daily/weekly/etc. directories rather than crontab. If I create a script to back up/archive all my music, photos, and home directories on a weekly basis, will that be any better/worse than setting it up in crontab to back up music on Mondays, photos on Tuesdays, and /home on Wednesdays, at different times of the day?

Thoughts?

Thanks!
John

Gordon Haverland
 

On Wed, 8 Mar 2017 19:29:00 -0700
"John Hanley" <linux@...> wrote:

> I think we had this conversation a few years back but since
> rebuilding my server, I have forgotten.
>
> I am setting up my backup system using rsync.
>
> What is/are the best option(s) for rsync? I seem to recall using -a
> previously.

If I do a search at Debian for "rsync backup", I see dirvish,
backupninja, luckybackup, rsbackup, and a bunch of other things. I
believe most of these are backup tools that can use rsync. I think
luckybackup is just a GUI front end to rsync; I don't think it does
anything else. I installed it once upon a time and played with it.
It has probably had an update or two since then, and probably
doesn't work much like it did back then. :-) But it might be
interesting to look at.

--

Gord

mward <mward@...>
 

> What is/are the best option(s) for rsync? I seem to recall using -a previously.

I like to use:

rsync -av --stats $SRC $DST >> $LOGFILE

After a run you can see what happened, which is useful if there are errors. '--stats' gives a nice summary of how many files were transferred, how long it took, etc. Give it a test run first with:

rsync -avn --stats $SRC $DST >> $LOGFILE
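
The -n makes it a dry run, so it only reports what it would do without copying anything. Two details worth knowing (the paths below are placeholders, not from this thread): rsync's error messages go to stderr, so add 2>&1 if you want those in the log too, and a trailing slash on the source means "copy the contents" rather than the directory itself:

rsync -av --stats /home/john/music/ /mnt/backup/music >> /var/log/backup.log 2>&1  # contents of music/ land in /mnt/backup/music
rsync -av --stats /home/john/music  /mnt/backup       >> /var/log/backup.log 2>&1  # creates /mnt/backup/music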

> a weekly basis, will that be any better/worse than setting it in crontab to backup music on Mondays, photos on Tuesdays, /home on Wednesdays at different times of the day?

Not really, considered in the long term. It means each individual backup job will take less time, which might be an advantage in certain circumstances, e.g. to avoid a conflict with other services.
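
For illustration, the two setups might look something like this (paths, times, and the log file are placeholders):

# Option 1: one weekly script, dropped into /etc/cron.weekly/
# (on Debian the filename must contain no dot, or run-parts skips it)
#!/bin/sh
rsync -a --stats /home/john/music/  /mnt/backup/music/  >> /var/log/backup.log 2>&1
rsync -a --stats /home/john/photos/ /mnt/backup/photos/ >> /var/log/backup.log 2>&1
rsync -a --stats /home/             /mnt/backup/home/   >> /var/log/backup.log 2>&1

# Option 2: separate crontab entries (crontab -e), staggered across the week
# min hour dom mon dow (1 = Monday)
0 2 * * 1  rsync -a --stats /home/john/music/  /mnt/backup/music/  >> /var/log/backup.log 2>&1
0 2 * * 2  rsync -a --stats /home/john/photos/ /mnt/backup/photos/ >> /var/log/backup.log 2>&1
0 2 * * 3  rsync -a --stats /home/             /mnt/backup/home/   >> /var/log/backup.log 2>&1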

--
**********************************************
 Michael Ward
 http://www.mward.ca
**********************************************

Lewis Gunsch
 

I use a tool called Duply for both servers and my personal laptop; it is a scriptable bash front end for duplicity. Duplicity is built on librsync, and there are other GUI tools for it too. There is even a clone of it for Windows called Duplicati.

Its default config makes a complete full backup once a month, and then does 30 days of incremental snapshots. They are all stored in simple encrypted volumes, and include a metadata file to organize them. The volumes can be stored via FTP, SSH, AWS S3, etc. I used to back up to both a /backup folder locally for convenience and also to a separate remote system, twice daily. Recently I started using AWS S3 instead. It was way cheaper than I expected, and to my surprise it was much quicker than SSH. The storage cost was less than $1 per month.
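
A rough sketch of a profile, in case it helps (the key ID, bucket, and paths here are made up, and the exact TARGET URL syntax depends on your duplicity version):

duply music create                # writes a template to ~/.duply/music/conf
# then edit ~/.duply/music/conf:
GPG_KEY='DEADBEEF'                                # key used to encrypt the volumes
TARGET='s3://s3.amazonaws.com/my-bucket/music'    # or sftp://user@host/path, ftp://...
SOURCE='/home/lewis/music'
MAX_FULLBKP_AGE=1M                                # monthly full backups
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
duply music backup                # incremental, or a new full once 1M has passed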

The backups are formatted to allow you to pull out a single file from any period in the snapshots, on any system with the tool and the GPG keys.
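
For example, to pull one file out as it existed three days ago (the file and profile names are made up):

duply music fetch Beatles/Help.mp3 /tmp/Help.mp3 3D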

As a side note, I also use Snapper to do hourly snapshot backups on my local BTRFS filesystem.

Cheers,
~Lewis

John Hanley
 

AWS looks interesting!




John Hanley
 

Replying to a three-year-old thread as I am setting up my backup utility again.

Is there a reason you don't use --delete in your rsync?

It seems that if you delete a file from your directory and then rsync, the deleted file remains in your backup. I see cautions in the rsync man page about --delete. Is it somehow dangerous?

Thanks!
John



Gordon Haverland
 

On Sun, 12 Jan 2020 16:32:01 -0700
"John Hanley" <linux@...> wrote:

> Replying to a three-year-old thread as I am setting up my backup
> utility again.
>
> Is there a reason you don't use --delete in your rsync?

Courage?

If your site works fine with --delete (and you are the
sysadmin/netadmin), go for it. It takes much longer to accumulate
data than to delete it. Being cautious with delete is typically
useful.
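
One cautious pattern, if you want it (a sketch, with $SRC/$DST as in Michael's commands earlier): preview the deletions with a dry run first, or have --delete move the removed files into a dated directory on the backup side instead of destroying them:

rsync -avn --delete $SRC $DST        # dry run: shows what would be deleted
rsync -av --delete --backup --backup-dir=deleted-$(date +%F) $SRC $DST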

--

Gord

David Laycock
 

No more dangerous than deleting files normally.  I use --delete for my backups.  I'm backing up ZFS volumes onto external drives, so there is no need for me to store out-of-date files on my offsite backup.  That log file mentioned earlier is interesting; I'll start using that as well.