We ran into a situation at work where we needed to move our SVN repo from an old Linux server (running Ubuntu 10.04, in 2017!!) to a shiny new cloud instance.
Dear God, don’t do this
The standard advice you see on the web is to use svnadmin dump on the old server, transfer the dump file, then use svnadmin load on the new server. That works fine for small repos, where the total transfer time will be negligible no matter how you do it, but for a large repo it's a disaster. Time estimates I've seen around the web say it'll take roughly 1 hour per GB to dump, then another hour per GB to load… not to mention that the dump files are larger than the raw filesystem data, so the transfer itself is slower too!
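For reference, that dump/load workflow looks roughly like this — a sketch, with placeholder paths and hostnames:

```shell
# On the old server: serialize the entire repository history to a dump file.
# (This is the slow step — reportedly about an hour per GB.)
svnadmin dump /path/to/svn-repo | gzip > repo.dump.gz

# Transfer the (larger-than-the-repo) dump file to the new server.
scp repo.dump.gz username@newServer:/tmp/

# On the new server: create a fresh repository and replay the dump into it.
# (Another hour or so per GB.)
svnadmin create /destination/path
gunzip -c /tmp/repo.dump.gz | svnadmin load /destination/path
```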
Not wanting to spend ~150 hours on this for our 70 GB repo, I wanted to try moving just the raw filesystem data. This StackOverflow answer indicated such a thing could work using scp, but of course there are two scary things that could happen:
- What happens if you have a network connection hiccup mid-transfer? (Nobody wants to start over!)
- What happens if somebody adds a new commit while you’re working? (You could, at an organizational level, ask for a “lock” for the hours you need to do the transfer, but it’d be nice not to impede everyone’s work for that long.)
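If you want more than an honor-system lock, Subversion's pre-commit hook can reject every commit on the old server for the duration — a minimal sketch (the message text is my own):

```shell
#!/bin/sh
# Install as /path/to/svn-repo/hooks/pre-commit and make it executable.
# Exiting non-zero from this hook rejects every commit, making the old
# repository effectively read-only while the migration runs.
echo "Repository is migrating to a new server; commits are disabled." >&2
exit 1
```

Remember to remove (or rename) the hook on the old server if the migration has to be rolled back.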
The solution, of course, is to use rsync! You run it once to transfer your directory initially, then obtain a lock; then you run it a second time to pick up any changes you missed during the first (long) transfer. Then it's just a matter of upgrading the repo and preparing it for use.
So, from beginning to end, the complete steps are:
- On your old server:
$ rsync -aPz /path/to/svn-repo/ username@newServer:/destination/path/
- Email your team to get a “lock” (no more committing to the old server!)
- Once more:
$ rsync -aPz /path/to/svn-repo/ username@newServer:/destination/path/
(picking up any changes that were made during the first copy)
- On the new server:
$ svnadmin upgrade /destination/path/
$ svnadmin verify /destination/path/
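Once the new repo verifies cleanly, existing working copies can be repointed at it without a fresh checkout. A sketch, assuming hypothetical repository URLs and clients on Subversion 1.7+ (older clients use svn switch --relocate instead):

```shell
# Run inside an existing working copy to repoint it at the new server.
svn relocate https://oldServer/svn/repo https://newServer/svn/repo

# Confirm the working copy now references the new URL.
svn info
```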