We ran into a situation at work where we needed to move our SVN repo from an old Linux server (running Ubuntu 10.04, in 2017!!) to a shiny new cloud instance.
The standard advice you see on the web is to use svnadmin dump on the old server, transfer the dump file, then use svnadmin load on the new server. That works fine for small repos, where the total transfer time will be negligible no matter how you do it, but for a large repo it’s a disaster. Time estimates I’ve seen around the web say it’ll take roughly 1 hour per GB to dump, then another hour per GB to load… not to mention the fact that the dump file is larger than the raw filesystem data, so the transfer itself is slower too!
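For concreteness, the dump/load route looks roughly like this. The paths and hostname are placeholders, and it assumes svnadmin is installed on both machines, so treat it as a sketch rather than something to paste:

```shell
# On the old server: serialize the entire history into a dump stream.
# Compressing on the fly helps, since dump files are larger than the repo.
svnadmin dump /path/to/svn-repo | gzip > /tmp/repo.dump.gz

# Ship the (still large) dump file over.
scp /tmp/repo.dump.gz username@newServer:/tmp/

# On the new server: create a fresh repo and replay every revision into it.
svnadmin create /destination/path
gunzip -c /tmp/repo.dump.gz | svnadmin load /destination/path
```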
Not wanting to spend ~150 hours on this for our 70 GB repo, I wanted to try moving just the raw filesystem data. This StackOverflow answer indicated such a thing could work using scp, but of course there are two scary things that could happen: someone could commit during the (long) copy, leaving the new copy in an inconsistent, possibly corrupted state; and the newer Subversion on the new server might not understand the old repository’s on-disk format.
The solution, of course, is to use rsync! You run it once to transfer the bulk of the directory; then you obtain a lock on the old repo (so no new commits can land) and run rsync a second time to pick up any changes you missed during the first (long) transfer. Then it’s just a matter of upgrading the repo and preparing it for use.
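As for what “obtain a lock” means in practice: one simple approach is a temporary pre-commit hook on the old server that rejects every commit. The sketch below uses a placeholder repo path; the hooks/ directory and the reject-on-nonzero-exit behavior are standard Subversion, but the wording of the message is mine:

```shell
# Freeze the old repo by installing a pre-commit hook that rejects all
# commits. REPO is a placeholder -- point it at your actual repository.
REPO="${REPO:-/tmp/svn-repo}"
mkdir -p "$REPO/hooks"   # in a real repo, hooks/ already exists

cat > "$REPO/hooks/pre-commit" <<'EOF'
#!/bin/sh
# Exiting nonzero makes Subversion reject the commit.
echo "Repository is frozen for migration; commits are disabled." >&2
exit 1
EOF
chmod +x "$REPO/hooks/pre-commit"
```

Delete the hook (or restore the original one) once the migration is done.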
So, from beginning to end, the complete steps are:
$ rsync -aPz /path/to/svn-repo/ username@newServer:/destination/path/
$ rsync -aPz /path/to/svn-repo/ username@newServer:/destination/path/
(picking up any changes that were made during the first copy)
$ svnadmin upgrade /destination/path/
(bringing the repository’s filesystem format up to the new server’s Subversion version)
$ svnadmin verify /destination/path/
(checking every revision to make sure nothing was corrupted in transit)
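If you’re paranoid (I was), a dry run between the second rsync and the upgrade is a cheap sanity check that nothing is left to transfer. Here’s a minimal local sketch of the pattern; SRC and DEST are placeholder directories standing in for /path/to/svn-repo/ and username@newServer:/destination/path/:

```shell
# -n (dry run) makes rsync only *report* what it would transfer;
# --itemize-changes prints one line per differing file.
# Empty output means the two copies are in sync.
SRC="${SRC:-/tmp/demo-src/}"
DEST="${DEST:-/tmp/demo-dst/}"
mkdir -p "$SRC" "$DEST"

rsync -a "$SRC" "$DEST"                      # the "real" copy
rsync -an --itemize-changes "$SRC" "$DEST"   # dry run: expect no file lines
```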