Before TM, CCC, or Arq can back up a disk, the disk's password must first be entered so the operating system can unlock it. :p

After activating FileVault, do I have to do anything special before doing a TM, CCC, or Arq backup, or do I just back up as I did before FileVault was activated?
 
Shouldn’t have to do anything special. Just do as you did before.
 
I—Arq budget
Arq allows users to set a budget (in GB) for storing the Arq backup dataset in the cloud. In the past, my Dropbox account was frozen several times because Arq's dataset had exceeded my account's capacity, even though I had set the budget several hundred GB below the account capacity. Each time, I had to delete the entire dataset to unfreeze the account and start the Arq backup from scratch. Apparently, either the budget (check) interval was set too long or the budget function did not work well. Arq does not let the user know when the budget has been exceeded.

Is calculating the size of the dataset in GB computationally intensive? If not, I would like to set the budget interval to one day so that my account will never be frozen again.
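
For a sense of scale: if a budget check amounts to summing file sizes, it is a single directory walk, which is I/O-bound rather than CPU-bound. A minimal sketch in Python, assuming the dataset is a local folder (for a cloud dataset the sizes would come from the provider's object-listing API instead; the path below is illustrative):

Code:
import os

def dataset_size_gb(root: str) -> float:
    """Sum the sizes of all files under root, in GB."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files that vanished or are unreadable
    return total / 1e9

print(f"{dataset_size_gb('/path/to/dataset'):.1f} GB")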
II—Arq backup to two destinations
I hope to get some help on Arq backup to two destinations.

Q1) Does each destination have its own backup schedule?

Q2) I understand that before backing up to the cloud, Arq has to prepare the data on the Mac (segmentation, encryption, etc.). This preparation seems to be CPU intensive.

Since there are now two destinations, does Arq's preparation of the data have to be done for each destination (that is, does the data have to be prepared twice)? There are 3 cases: (1) the two destinations have the same backup time; (2) their backup times are offset (say 1/2 hour) from each other; and (3) their backup times are entirely different.
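
A rough illustration of why preparation tends to be per-destination: each backup set typically chunks and encrypts independently, usually under its own key, so two destinations mean the work happens twice. This is a simplified sketch, not Arq's actual pipeline; the chunk size, Fernet encryption, and destination names are all assumptions for illustration:

Code:
import hashlib
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

CHUNK_SIZE = 1024 * 1024  # 1 MB fixed-size chunks, for illustration only

def prepare(path: Path, key: bytes) -> list[tuple[str, bytes]]:
    """Chunk and encrypt one file for a single destination."""
    cipher = Fernet(key)
    chunks = []
    with open(path, "rb") as f:
        while block := f.read(CHUNK_SIZE):
            chunk_id = hashlib.sha256(block).hexdigest()  # content address
            chunks.append((chunk_id, cipher.encrypt(block)))
    return chunks

# Two destinations with two keys: the chunk-and-encrypt pass runs twice.
keys = {"cloud": Fernet.generate_key(), "nas": Fernet.generate_key()}
for dest, key in keys.items():
    print(dest, len(prepare(Path("example.dat"), key)), "encrypted chunks")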
 
The default budget enforcement interval is 30 days, so yes, by the end of those 30 days you will likely be over the budget amount. You could just set the budget a little lower to start with. I think running budget enforcement every day is overkill.

If you set two destinations, everything is done twice, since those are two separate backup sets.
 
Thanks for answering my questions. More questions below.

Q1) Is the budget calculation CPU intensive?

Q2) If the user has 1TB of data on the Mac, I understand Arq has to prepare the data before uploading to the cloud, and that this preparation is CPU intensive. I also understand Arq uses differential backup. If I modified only one word in a file and then closed it, and it is then time for an Arq backup, does Arq prepare the whole 1TB of data all over again, or has it saved the previous preparation and only needs to prepare the modified file for uploading?
 
1. I do notice the fans ramp up a bit when it is running that budget enforcement, so I do think it is chewing up some CPU cycles.

2. If you mean, say, you had a 400MB video, then edited and saved the video, would Arq upload the whole 400MB again: I think the answer to that is yes, but I have not tested it. You might email the dev and ask.
 
Thanks for your super fast answers. Your answers normally take me a long time to digest!

I—differential backup
I meant changing one word in a Word document when the internal disk has 1TB of user data: an extreme example created to help me understand. In this scenario, it seems unthinkable that Arq would redo segmentation and encryption of the whole 1TB.

II—budget
Just wondering how much data you had backed up when the "fans ramp up a bit"?

On a computer, does each folder have a field that states its size in MB? If the answer is yes, then the budget calculation should not be computationally intensive, as there are not that many folders in the home directory.

III—Two destinations
Does each destination have its own backup scheduler?

Also, what is the "Arq Agent"?
 
1. If you have, say, one 5MB Word doc among your 1TB of data and edit that doc, the next backup would upload that 5MB again and not the whole 1TB, if that is what you mean (see the sketch after this post).

2. I have about 30GB backed up and a 60GB budget so that gives me around six months of versions.

3. Yes... there is a pref setting for each backup set. Arq Agent always runs in the background, so backups can be done without the main app running all the time.
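
The usual mechanism behind answer 1, sketched under the assumption (common to most backup tools) of a local index of path, size, and modification time; only entries whose metadata changed get re-read and re-uploaded. The index filename is made up for the example:

Code:
import json
import os

INDEX = "backup_index.json"  # hypothetical local cache from the last run

def changed_files(root: str) -> list[str]:
    """Return paths that are new or modified since the previous run."""
    old = json.load(open(INDEX)) if os.path.exists(INDEX) else {}
    new, changed = {}, []
    for dirpath, _dirnames, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            new[path] = [st.st_size, st.st_mtime]
            if old.get(path) != new[path]:
                changed.append(path)  # new or modified file
    with open(INDEX, "w") as f:
        json.dump(new, f)
    return changed

# Only the edited 5MB Word doc shows up here; the untouched 1TB does not.
print(changed_files("/path/to/home"))  # path is illustrative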
 
I have 369GB in B2. I don't have a budget; I use Backblaze B2's lifecycle settings as my primary means of pruning, because my data changes a lot (usually a 4-month duplicate deletion policy, set in the Backblaze bucket settings, not in Arq; see the rule format sketched below). It takes a good while to do a "remove unreferenced objects" or a validate (high CPU).

Arq is pretty good about staying in the background and not bothering you (letting you do what you need to do).
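
For reference, a B2 lifecycle rule of the kind described, written out in the lifecycleRules shape from Backblaze's documentation (the 120-day figure approximates the poster's roughly 4-month window, and the empty prefix applies the rule bucket-wide; this is normally entered in the bucket's lifecycle settings):

Code:
# One lifecycle rule in B2's documented lifecycleRules format,
# expressed as a Python structure for readability.
lifecycle_rules = [
    {
        "fileNamePrefix": "",               # apply to every file in the bucket
        "daysFromUploadingToHiding": None,  # never auto-hide current versions
        "daysFromHidingToDeleting": 120,    # purge old versions ~4 months after hiding
    }
]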
 
I have 369GB in B2
[Screenshot: Backblaze B2 billing summary]

Glad they are making money off somebody, because they aren't making anything from me. :D
 
Thanks!
What is an "unreferenced object"? How does it relate to pruning? Can the Arq dataset be pruned by something other than Arq without destroying the dataset or its ability to restore files?

This is smart: instead of having Arq calculate the budget on your Mac, you have B2 do it for you. Also, Arq enforces the budget only at an interval, and B2 presumably does it more often.
I asked Arq support the following questions, and the founder Stefan answered.

Q1) On backing up to two destinations:
If I changed only one word of a 1MB file and the Mac home directory has 1TB of data, does Arq's preparation for cloud backup scan only that 1MB file (that is, taking almost no time to prepare), or does it have to scan the entire 1TB home directory?
Ans: It will scan all the files. We're working on a new version that doesn't.

Q2) Budget calculation
Does a folder have a field that contains its size (in MB)? If yes, it seems that calculation of the budget should be extremely fast, as there are not that many folders on the Mac.

Ans: I don't understand this question, sorry. Are you asking about folders on your Mac's disk? No, those don't have cumulative size information attached. It has to be calculated.

About Q2, the answer seems to be that during budget calculation, the size of each folder is recalculated by summing the sizes of all files in the folder. Just wondering why Arq wouldn't keep a history of such calculations for each folder, so that it only needs to update folders that have changed.
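
The caching idea in that last paragraph, sketched. One honest caveat: on most file systems a directory's mtime changes when entries are added or removed, but not when a file deeper down is edited in place, so a real implementation would track file mtimes as well. Purely illustrative; Arq's internals may differ:

Code:
import os

_cache: dict[str, tuple[float, int]] = {}  # path -> (mtime when computed, size)

def folder_size(path: str) -> int:
    """Recursive folder size with a naive mtime-keyed cache."""
    mtime = os.stat(path).st_mtime
    hit = _cache.get(path)
    if hit and hit[0] == mtime:
        return hit[1]  # reuse the previous sum for a seemingly unchanged folder
    size = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            size += folder_size(entry.path)
        elif entry.is_file(follow_symlinks=False):
            size += entry.stat().st_size
    _cache[path] = (mtime, size)
    return size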
Glad they are making money off somebody, because they aren't making anything from me. :D
Just wondering why you have a negative charge.
 
I believe "remove unreferenced objects" just means it is making sure that nothing in Arq's local cache database is missing. Kind of like a local data validation, though I'm not completely sure.

My Arq bill is pretty tiny too - minus the 2 BackBlaze Unlimited subscriptions. :)

I think the negative just means what was pulled from his credit card.
 

Starting Over with Arq

I was reading the Arq documentation and came across the section "Starting over with Arq: if you want to remove all traces of Arq and start over on macOS" (steps copied below).

I just bought a new Mac and had (an old version of) Arq migrated from an old Mac using Time Machine migration. I am planning to do Arq backups to a new destination. Is it advisable to "remove all traces of Arq and start over on macOS"?
What conditions warrant starting over with Arq?

——————————
  1. Quit the Arq app if it's running.
  2. Delete the folder (home)/Library/Arq (To find the Library folder, go to the Finder, hold down the Option key, and pick "Library" from the "Go" menu).
  3. Open the Keychain Access app (in Applications/Utilities) and remove all entries with names that start with "Arq".
  4. Delete the Arq app. This will cause Arq Agent to quit if it's running.
  5. Open System Preferences. Click on Users & Groups. Then click on Login Items. Delete the "Arq Agent" entry.
  6. Download a fresh Arq app from the Arq web site.
  7. Launch the Arq app.
 
I have two hard drives that contain TM backups of an old Mac. The drives are about 10 years old, and I have not used them for about 4-5 years. Yesterday, I tried to use them, but both had gone bad. So even though one takes the precaution of storing backups on two hard drives, there is a nonzero probability that both could go bad as they age. To guard against old age, perhaps an SSD is more durable, because the silicon chips are sealed and not exposed to the environment.
 
It seems that CCC can create hourly or daily snapshots on an APFS-formatted backup SSD destination. These snapshots can be retained similarly to the backup retention policies of Time Machine and Arq (if thinning is selected). The saved snapshots allow CCC users to recover old files or revert the system to its state at the time the snapshot was taken. (CCC's documentation states that there is no technical difference between the snapshots created by Time Machine and CCC.)

What is the difference between a backup done using Time Machine, Arq, and a snapshot? Will the CCC snapshots consume much more storage space than TM or Arq? I would appreciate your explanation.
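
One checkable fact behind the question: Time Machine and CCC snapshots are the same underlying APFS mechanism, and macOS will list them with tmutil. Since APFS snapshots are copy-on-write, a snapshot initially costs almost nothing and grows only as the live volume diverges from it. A quick way to inspect what exists, via Python for consistency with the other sketches in this thread:

Code:
import subprocess

# Lists APFS snapshots of the root volume; snapshots created by
# Time Machine and by CCC both appear, since both are plain APFS snapshots.
result = subprocess.run(
    ["tmutil", "listlocalsnapshots", "/"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)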
 
I'm a believer in having at least two physically separated backups. One in a good cloud service like Amazon S3 or Wasabi plus one that is local will do the job and protect against disasters affecting one backup or the other.

Previously I posted about my excellent experience using Arq to back up my family's Macs and Windows machines to Wasabi and, for a local backup, to a refurbished, Celeron-based, USB3-equipped Acer C720p Chromebook that I re-provisioned with Linux to run the minio S3-compatible object store. It came with a 16GB internal SSD for its operating system, and my backup disks are simple, cheap USB3 hard disks with their own power supplies, since the Chromebook can't be counted on to power several USB3 hard drives. Of course the Chromebook has its own battery to serve as a UPS, though that won't do anything for the drives.

Getting it working was a fun geeky rainy-day adventure. It's worked brilliantly for close to a year now. Linux may be a geekfest, but once it's running, it runs like a hose!

The refurb Chromebook was attractive for its low electricity consumption, decent performance, 4GB RAM and fast USB3 connectivity to the backup drives. Its cost was attractive, too, considering it included a keyboard and display. I'd considered something like a Raspberry Pi, but until the new v.4B, the 'Pi had severely crimped networking capability, since its network traffic was routed through its USB2 bus. Meanwhile I tried another board with a true high-speed Ethernet port and found its software abysmal and its community pretty useless. So, Chromebook it was, for me.

However, as noted in my previous posts, not all Chromebooks are amenable to a Linux installation. I didn't know this but got lucky in choosing a model that is well-regarded for Linux capability. Your mileage may vary.

But now there's a great alternative: the new Raspberry Pi v.4B is faster and comes equipped with an honest-to-goodness fast Ethernet port as well as USB3 (https://www.raspberrypi.org/products/raspberry-pi-4-model-b/).

Configured with 4GB RAM (very worthwhile), it costs around $55.

Minio is available for it (https://www.thepolyglotdeveloper.com/2017/02/using-raspberry-pi-distributed-object-storage-minio/).

Now, in addition to the 'Pi you'll need a case, ideally with a fan, as the new 'Pi gets hot and will throttle under heavy use. Be sure to get a compatible SD card and power supply, and you'll want a keyboard, mouse and display at least for setup.

Again, the cost of the refurb Chromebook looks decent if you can get a model that can be Linux'd. But you may prefer the more elemental approach of basing everything on a Pi. Once running, you can disconnect the monitor, keyboard and mouse and it'll just run forever.

FYI, then. It's a different approach that might be attractive for your local backup solution.

Meanwhile, Wasabi has been flawless and is considerably cheaper than S3. My very few support questions have typically been answered within a couple hours. Highly recommended.

Either way, the machine running minio can be used for other things too, such as a personal VPN, web server, Homebridge setup, or whatever you want. Let the geeking begin!
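
For anyone following along, the minio side of a setup like this boils down to pointing the server at the backup disk's mount point. A hedged sketch (the mount path is an assumption; on a real machine it would be wherever the USB drive mounts):

Code:
import subprocess

# Serve an S3-compatible endpoint from a local folder. Arq can then be
# pointed at http://<host>:9000 as the S3-compatible/minio destination.
subprocess.run(["minio", "server", "/mnt/backup-disk"], check=True)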
So even though one takes the precaution of storing backups on two hard drives, there is a nonzero probability that both could go bad as they age.

Sorry to hear this. Yes, it's a matter of when, not if, a drive will die. This goes for SSDs too. They are more durable for the reasons you list, but when they fail they tend to fail utterly and irretrievably, whereas a spinning-disk drive may give some warning and be partially recoverable.

It's an argument for having a cloud-based solution as one prong of your backup strategy. Arq's encryption is performed on the machine being backed up, so privacy issues are well addressed.
 
Be sure to get a compatible SD card and power supply, and you'll want a keyboard, mouse and display at least for setup.
Thanks for the great write-up. I was thinking the same thing when I saw the new Pi 4 come out. Perfect for this.

Just a tip though... when you make your SD card with Raspbian on it, just drop a blank text file called ssh into the root directory, and upon first boot the OS will enable SSH so you can log in over the network that way. I set my Pi-Hole up that way and have never used a keyboard or monitor at all.
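
That tip as a one-liner, for the record. The /Volumes/boot path is how the card's boot partition typically mounts on a Mac; adjust it if your card mounts under a different name:

Code:
from pathlib import Path

# An empty file named "ssh" on the card's boot partition tells
# Raspbian to enable the SSH server on the next boot.
Path("/Volumes/boot/ssh").touch()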
 
Wow, that's a great tip. I always waded through config files etc. That's way easier!

I have a 'Pi v.4B that just arrived here for other (non-Arq) purposes, and a case and power supply arriving mid-week. So I'll be trying this! Thanks.
 
I am a big fan of Arq, having used it for several years to back up to S3, Google, and OneDrive as well as locally to my NAS. One thing I have never been sure of, though, is whether the backups would be susceptible to ransomware, or whether there is protection against ransomware since there is no mounted drive. Anyone know?
 
I imagine ransomware has to encrypt all the data, right? So Arq would see this as a file change and re-upload all the encrypted data. One would just have to figure out how to find this change and download all the files from before the ransomware upload date.

This is just my educated guess. Backups would not be affected because they'd be off-site. Now, if they were on an external drive connected to your computer, I could see how they too could be encrypted and affected by ransomware.

So it depends on whether you're using a cloud service or a local LAN/external drive.
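
A toy version of "figure out how to find this change": ransomware shows up as an abnormal spike in files changed per backup, and the restore point is the last backup before the spike. The dates, counts, and threshold below are all invented for the example:

Code:
backups = [
    ("2019-07-01", 120),      # (date, files changed) -- normal churn
    ("2019-07-02", 95),
    ("2019-07-03", 410_000),  # everything re-encrypted at once: the spike
]

def last_clean_backup(history, spike_factor=50):
    """Return the date of the last backup before an abnormal change spike."""
    baseline = history[0][1]
    for i, (_date, changed) in enumerate(history):
        if i > 0 and changed > baseline * spike_factor:
            return history[i - 1][0]  # the backup just before the spike
    return history[-1][0]  # no spike found

print(last_clean_backup(backups))  # -> 2019-07-02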
 
That makes sense. But could the ransomware also reach out and encrypt the Arq file system (in the backups, either on the NAS or on a cloud service), thereby rendering the backup useless?
 
I defer this question to smarter and wiser MacRumors members than myself. Very good question. If you don't get a response here, I would contact the author of Arq.


My attempt to answer:

https://arqbackup.com/features/ --- "Backups are stored in your cloud account or NAS or SFTP server to protect against theft, ransomware, disaster."

I do know I have to connect to the cloud service (Backblaze in my case) inside Arq to get a list of backups, so my guess is the file system is there? I'd just need the keys.
 
The file system in cloud storage is not structured quite like your local disk's file system. But I agree: in principle, you just need the keys. In the case of AWS S3-like storage (e.g. Backblaze B2), you need the application key ID and the secret key itself. So ransomware needs to a) be written to access bucket storage, b) find the key ID, and c) find the secret key.

The key ID and the secret key are stored in your login keychain (on a Mac). So make sure you use a strong login password on your Mac.

I am not aware of any ransomware which has been written to access cloud bucket storage, let alone the specifics of the macOS keychain. It is an unnecessary complication for a ransomware attack, as encrypting local (and/or local network) storage is quite enough to strike terror into anyone's heart.

Edit: NAS storage is vulnerable to a ransomware attack if it is mounted as a file share. Using a cloud service is important for protection against encrypting ransomware.
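
To see for yourself where those credentials live, the macOS security tool can query the login keychain. The service name below is a guess; check Keychain Access for the actual entry names (they start with "Arq"):

Code:
import subprocess

# find-generic-password prints item metadata; with -g it also prints
# the secret (to stderr), after macOS prompts for keychain access.
result = subprocess.run(
    ["security", "find-generic-password", "-s", "Arq", "-g"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)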
 
Error message: “disk erase failed couldn't open device”

I tried many times to erase an old 2TB Seagate Backup Plus hard drive (which contains a Time Machine backup) using Disk Utility on a Mac, but each time I got the above message. I am not a computer person and am at my wit's end.

If you could give me some ideas how I could solve this problem, I would greatly appreciate it.
 
You may need to go deeper than macOS will. Strap your geek hat on and try https://gparted.org ... I've had it resurrect drives that were hosed in the way you describe. But: don't really trust that drive ever again. And be a bit careful that you use it to fiddle with the Seagate drive only. Booting gparted in a virtual machine is the safest way.
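
One thing worth trying before (or alongside) gparted: the command-line diskutil can force-unmount first, which sometimes gets past the GUI's "couldn't open device" failure. The disk identifier below is a placeholder; check diskutil list carefully, because the wrong identifier erases the wrong drive:

Code:
import subprocess

disk = "/dev/disk9"  # placeholder -- confirm with `diskutil list` first!
subprocess.run(["diskutil", "unmountDisk", "force", disk], check=True)
subprocess.run(["diskutil", "eraseDisk", "JHFS+", "Backup", "GPT", disk], check=True)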
 