Now that we're getting close to the release date of Yosemite, I've gone ahead and upgraded my production iMac to Public Beta 3 (after some fairly extensive testing on my work machine). Things have gone pretty well, and all my software -- including Aperture, Photoshop CS6, FCPX, and DaVinci Resolve -- is working really well. What seems to have broken, though, is part of my archive and backup system.
I'm a photographer and filmmaker (among other things), and I make a lot of data. I'm not particularly efficient at getting rid of the old stuff either, so it adds up to a pretty hefty 7TB of actively used stuff. Most of this is on 1 or 2TB hard drives in USB 3 enclosures, along with the main drive inside the iMac and a 2TB RAID 0 drive. I don't have any complaints with any of this gear, and things normally move really fast.
What has ground to a halt, however, is my Time Capsule backup.
My backup and archive system is actually pretty straightforward: I have 7TB of drives that mirror the contents of my active storage drives, plus another 2TB in the Time Capsule that backs up the main drive of my iMac with Time Machine. The other drives are connected to the Time Capsule through a USB 2 hub. I use a launchd daemon to run a script that rsyncs the storage drives across the network to sparse bundles living on the external disks connected to the Time Capsule. Although this isn't particularly fast, it has worked really well for the last couple of years. (And if I need to update a ton of information on one of those backup disks, I can just plug it into my iMac and run rsync locally.)
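For anyone curious what the launchd side of a setup like this looks like, here's a minimal sketch of a daemon plist that fires a backup script nightly. The label, script path, and schedule are all hypothetical, not my exact config:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique reverse-DNS label for this job -->
    <key>Label</key>
    <string>com.example.backup-rsync</string>
    <!-- The script that mounts the sparse bundles and runs rsync -->
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/backup-rsync.sh</string>
    </array>
    <!-- Run every night at 3:00 AM -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>3</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```

A file like this goes in /Library/LaunchDaemons and gets loaded with `sudo launchctl load`.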
All these things seem to be working just fine in Yosemite, but the Time Capsule itself is having a problem. I'm getting the error about the Time Machine backup's identity changing, and now I'm pretty sure Time Machine wants to do a full new backup.
I've had this happen before if I poke around too much in the command line in the sparse bundle on the Time Capsule, but I don't think I've touched the backup for quite a while. Usually it... "just works"...
I'm not really concerned about this. Since I don't really store much of anything (other than my iTunes library) on my machine's drive, the old files in the current backup work fine. I'll probably take the essential stuff out of the image and burn it to Blu-ray discs in case I ever need old versions of my stuff. Then I'll just let it run the new backup from scratch and everything ought to be fine.
All that said, I'm thinking about building an all-new backup and archive system, something based on my current scheme of several independent disks containing disk images that back up with rsync. What I'd really like to add, though, is the ability to plug the disk to be backed up into the new backup machine and have it run the rsync there, instead of having to run it across the network. Hopefully I'm going to be shooting another short film soon, and I'll be making a ton of data as I go.
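The local-run part is simple enough to sketch. Something like this, where the volume names and backup root are hypothetical, would build the rsync invocation for whatever disk just got plugged in:

```shell
#!/bin/sh
# Build the rsync command for a storage disk plugged directly into
# the backup machine. Volume names here are hypothetical.
BACKUP_ROOT="/Volumes/Backups"

build_rsync_cmd() {
    src="$1"                    # e.g. /Volumes/Media01
    name=$(basename "$src")
    # -a preserves permissions and timestamps, -E keeps extended
    # attributes and resource forks on OS X, --delete mirrors removals.
    echo "rsync -aE --delete $src/ $BACKUP_ROOT/$name/"
}

build_rsync_cmd "/Volumes/Media01"
```

Running the command this prints (instead of just echoing it) mirrors the plugged-in disk into its own folder under the backup root, same as my current network setup but at local-bus speed.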
I was thinking about building a Unix -- maybe Darwin-based -- server in a chassis that will hold up to 16 drives. The machine would need to be able to read my USB 3 drives, as well as any Thunderbolt drives I add in the future. Since this is a long-term project, I also want to be able to add drives as time goes on and I need more storage. I'm wary of RAID technologies, so even though I've read about stuff like FreeNAS, I think I'm more interested in running these drives independently in the server instead of with any kind of parity or striping. I really believe that filling an array with disks of the same make and size, bought at the same time, is a recipe for simultaneous failure and data loss. And since I'm not going to be burning 7TB worth of Blu-rays to prevent it, I think it may be better to avoid that altogether.
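To make the independent-disks idea concrete, the server-side job could just be a loop over explicit source/destination pairs, so each backup disk stands alone and a failure only ever costs one mirror. The pairs below are hypothetical, and the loop only prints the commands it would run:

```shell
#!/bin/sh
# One independent mirror per disk -- no striping or parity, so losing
# one pair never affects another. Mount points are hypothetical.
PAIRS="Media01:Backup01 Media02:Backup02 Scratch:Backup03"

for pair in $PAIRS; do
    src=${pair%%:*}     # text before the colon
    dst=${pair##*:}     # text after the colon
    echo "rsync -aE --delete /Volumes/$src/ /Volumes/$dst/"
done
```

Adding capacity later just means adding another `src:dst` pair to the list, which fits the grow-as-you-go plan better than rebuilding an array.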
Thanks for reading, and sorry about the long post or if this is the wrong forum to post in. I would appreciate it if any of you guys have ideas to share with me about all this.