I'm planning on changing my backup strategy and have been working with a trial version of Arq Backup doing some testing of sending dmg image files to OneDrive, Wasabi, and Backblaze.

I have a very slow internet connection, and uploads run at about 125 KB/s, so backup efficiency is important. I'll be using this backup with archive files that rarely change, but when they do I don't want to re-send the entire file if possible.

One test scenario is: (a) create a dmg image file and upload it, then (b) modify the contents of the dmg and upload it again. I have tried this with various sizes of dmg files, deletes, restores, etc.

OneDrive behavior: After the first full upload of the dmg file, each subsequent upload with changes made to the dmg contents sends only a partial amount of data, not the full file. When I browse the backup bucket in OneDrive, I see many folders under objects, each of a small size, so it looks like the dmg is being uploaded in chunks rather than as a whole file. I also get accurate progress in the Arq app while the upload is occurring.

Wasabi and Backblaze behavior: The first upload and each subsequent upload sends the entire dmg file. When I browse the bucket, there is a single objects folder for each upload, each the full size of the dmg file. I do not get accurate progress information in the Arq app during the upload.

So it appears that OneDrive is handling the uploads in a manner that chunks the data and reduces the amount of data sent after changes are made to the original file. The Arq session logs confirm that only a partial amount of the data is being sent, and when browsing the uploaded contents within the buckets I see the smaller chunks. With larger dmg files in my tests, the bandwidth used for file changes is much less to OneDrive compared to sending the entire file again to the other services.
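For anyone curious how this kind of chunked backup saves bandwidth: the usual technique is content-addressed chunking, where the file is split into chunks, each chunk is hashed, and only chunks whose hash isn't already at the destination get uploaded. Here's a minimal Python sketch with fixed-size chunks; Arq's real chunker is more sophisticated, and the chunk size and helper names here are purely illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB fixed-size chunks (purely illustrative)

def chunk_ids(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and return each chunk's SHA-256 id."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def chunks_to_upload(data: bytes, already_stored: set[str]) -> list[str]:
    """Only chunks whose id isn't already at the destination need uploading."""
    return [h for h in chunk_ids(data) if h not in already_stored]

# First backup: a 1 MiB "dmg" of distinct content -> all 16 chunks go up.
original = b"".join(i.to_bytes(4, "big") for i in range(256 * 1024))
store = set(chunk_ids(original))

# Change 10 bytes inside the file and back up again.
modified = bytearray(original)
modified[100:110] = b"new bytes!"
pending = chunks_to_upload(bytes(modified), store)
print(len(store), len(pending))  # 16 chunks stored, only 1 to re-upload
```

That single re-uploaded chunk is why the second OneDrive upload was so much smaller than the full dmg.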

I actually want to use Wasabi or Backblaze, so if anyone has any idea why this may be happening, I would appreciate it.
 
I now have Backblaze B2 working the same as OneDrive, chunking a large file into many smaller files and only uploading changes.

Yesterday, I initially created the B2 bucket manually using the default settings. I deleted that bucket and created a new bucket via the Arq interface. I noticed that Arq created the bucket using the "Keep only the last version" lifecycle setting, whereas yesterday I had "Keep all versions of the file" set.

Regardless of whether that is the setting that changed the behavior, it's now working great. Through all my testing I can see this is a solid product. I even had my ISP fail during an upload of a large dmg image file, and I resumed it via a Verizon hotspot on my phone. Resuming failed uploads without having to start from the beginning is an absolute must given my ****** internet.

Arq is awesome! I hope there is a sale during the next 30 days.
 
Here's an update on my Arq experience.

I managed to totally hose my main Windows VMware Fusion virtual machine on a long flight yesterday: it utterly wouldn't boot, couldn't be fixed, and I tried everything.

So (back home, with good broadband again) I sighed heavily, deleted the corrupted VM (175 GB!), did a Disk Utility run in Recovery Mode to fix some lost-block issues, and restored the ginormous file from an Arq backup on Wasabi.

Acid test!!

…and it works fine. Very pleased with Arq.

N.B.: Backing up VMs can be problematic. If they're shut down, a full backup should be no problem. But incrementally-backed-up ones can be a mess, especially if incrementally-backed-up while open. Now, I had a full-backup copy of my VM to turn to if needed but decided to give my Arq backup a try, even though it's incremental. I don't know if I just got lucky or if Arq does an especially good job in such situations. In any case, my complicated 175GB Windows VM booted right up after the restore, so kudos to Arq.
I'm planning on changing my backup strategy and have been working with a trial version of Arq Backup doing some testing of sending dmg image files to OneDrive, Wasabi, and Backblaze...

Have you posed the question about OneDrive's & Wasabi's behavior to Arq? They've been very responsive to my email questions.

In any case, once you have an explanation, please post it here!
 

Good to hear your VM recovery worked out. Recoveries, no matter how prepared you are, can be a bit stressful until successfully completed. Backup integrity is definitely of paramount concern, along with security. I fully plan to perform periodic restore tests, to both the same and alternate machines, to give me peace of mind.

Regarding the behavior I experienced during my tests, I think I've sufficiently worked it out and will just go with what I have. I've started uploading data to B2, which I decided to go with after some testing and research.
 
Today, Arq 5.9 was released.

Adds support for two new providers: Backblaze B2 and Wasabi.

Naturally, I asked "Who the hell are Wasabi?"... Here's the long answer by digging around online...

Answer:

The founders of Carbonite (the online backup solution) left Carbonite to start a cloud storage company. They were building the company in secret and unveiled it in May of 2017.

Right now they have 1 data center and 20 employees. And they have $8.5 million in venture funding from 3 investors:

https://******************/organization/wasabi-technologies-inc#/entity

I looked at the investors hoping to see if anyone would give the impression of being able to properly judge the success chances of the company, but none of them give any "we understand cloud storage" impression, so not much to learn there:
https://******************/person/ron-skates#/entity
https://******************/person/howard-cox#/entity
https://******************/person/desh-deshpande#/entity

Beyond the $8.5 million, they also raised another $10.7 million recently, so that's roughly $19 million in total funding: https://www.americaninno.com/boston...m-round-carbonite-founders-raises-for-wasabi/

And the CEO commented (in the link above) that he had to stop the 2nd fundraiser because they had more money than they needed (wtf?) and that he may decide to add $10 million of his own money next year (wtf?).

So at least they're very well-funded... People are throwing money at these ex-Carbonite guys...

The link above also reveals that the $8.5 million would have lasted them "to the end of the year" (May to December = 7 months)... and that they did the 2nd fundraiser just to get $1 million extra for marketing, and that they ended up with a lot more than they needed. This is all very strange!



That's a big "wtf?". So the founder only expected "a couple dozen companies" to start trials in the first few months. But they got over 600 in a few days. How on EARTH did he not foresee the massive interest in such a dirt-cheap S3-compatible cloud storage claiming to be as reliable as Amazon?! It's as if he's out of touch... Weird.

In their press releases, they're talking about being ready for "exabytes" of data, and having "0.99999999999 (eleven 9's)" reliability. I really don't know about that. This whole thing could be snake oil. Nobody knows at this point.

Their company twitter: https://twitter.com/wasabi_cloud (really weird timeline full of random tech articles, as if they are trying to create discussion about random subjects to get seen).

CEO twitter: https://twitter.com/Wasabi_Dave

May 2 CEO tweet: "Holy cow! We already had to stop new signups because we're concerned about running out of capacity." (https://twitter.com/Wasabi_Dave/status/859775051251494912)

Another on May 2: "Will have more storage online in two weeks. Get in line! First come first served."

Interesting May 4 tweet: "Day 2 as the world's most cost effective cloud storage. Seems like we underestimated the demand by quite a bit. Good problem to have." (https://twitter.com/Wasabi_Dave/status/860190321602424833)

Here is a video from someone on the team, speaking about his confidence in the Carbonite founders and that they know how to build online storage: https://twitter.com/wasabi_cloud/status/886646400775270401

The fact that they founded Carbonite and have experience with huge data centers and cloud storage gives me some hope that this is not just a "so cheap and underpriced that it's gonna crash and burn" company.

I also hope that their hardware infrastructure is good and won't crash and burn under the load... But it seems like they're already getting customers faster than they can expand storage capacity?!

Lastly, I worry that they are just sleazily subsidizing storage costs via the fundraised money and that they are just cheap right now to get tons of customers and publicity, and that they will do a price hike later. It's inevitable that they will need to raise this price... But they'll probably still be really cheap if that happens...

Here is a brief company presentation in the words of the CEO:
https://medium.com/@wasabi_cloud/welcome-to-wasabi-hot-storage-4e06d58e377c



Here is an article which reveals some good stuff:
http://www.crn.com/news/storage/300...-market-with-disruptive-price-performance.htm

From that article, we learn that David Friend co-founded Carbonite, was its CEO for 10 years, and left it in January of 2015 to found Wasabi, which was in stealth mode until May of 2017. That's a lot of time out of his life in "stealth mode" building this thing, which makes me think it's serious... Hmm.

And here are some super-interesting quotes from that article:



So that's fascinating. They do not use stock Windows or Linux. They use something custom-written to store data. Perhaps that's how they are able to be "6x faster than Amazon S3"? Perhaps they've programmed some FPGAs to do networking and storage management. I have no idea... they are very secretive about it.

It's actually worrying that they are not using stock Windows or Linux... because that means they've most likely custom-written it, and that means bugs (I'm an exceptionally skilled programmer, and hell yes there are bugs in ALL software, especially while it's fresh and new).

The article also goes on to add some comments from some tech guy in the business of online storage, who is a bit skeptical about how well they'll work out:



But best of all is this article: http://www.pcworld.com/article/3194...new-cloud-service-like-low-priced-wasabi.html

Here are some of the most important quotes from it:

So from those quotes, the most important things are: it can apparently restore data very, very fast (much faster than Amazon S3); they are using some sort of RAID (multiple hard drives in parallel for data safety and speed); and they are planning a second datacenter in the USA and then expansion overseas (Europe!). And any time a new datacenter opens up, they will let customers move their data there instead (for a fee).

So, with all of that backstory out of the way... here's how Backblaze B2 and Wasabi compare against each other:

Backblaze B2:
  • Storage cost per gigabyte: $0.005/GB.
  • Download cost per gigabyte: $0.02/GB. But you get 1 GB of free downloads per day, which is great for doing occasional, minor file restores for free.
  • There is no minimum monthly charge. You only pay for what you use.
  • There is also a very minor API call cost, but it's so small that you won't notice it.
  • Incredibly good storage architecture, which has fully proven its reliability. And they have recently expanded to having 2 data centers.
  • The company is reliable and won't vanish or hike its prices.
  • Perfect storage reliability. They're splitting data across 20 hard disks, each in a separate storage "pod" (cabinet), using 17+3 parity, which means they can lose any 3 of the 20 disks (3 whole cabinets) out of the set and can still recover the data (and they recover data transparently). The whole system is incredibly well-engineered.
Wasabi:
  • Storage cost per gigabyte: $0.0039/GB. (About 22% cheaper than B2 storage, so roughly 4/5ths of the B2 cost.)
  • Download cost per gigabyte: $0.04/GB. (+100% of the B2 cost... 2x as expensive as B2).
  • Edit: I just found out that if you store less than 1 TB (1024 GB), they'll charge you as if you had 1 TB. Meaning their minimum monthly charge is actually $3.99. So some of the comparison numbers below are wrong. But I won't re-calculate them since they're still correct when you disregard the minimum charge.
  • There is no cost for API calls.
  • But there is a PRETTY BIG caveat with their billing system: Every object (file chunk) you upload is pre-billed for 90 days of storage (and that amount is non-refundable even if you delete the file again immediately).
  • We don't know if they'll do a big price hike soon. They are marketing themselves as an Amazon S3 API-compatible competitor to Amazon, which is ~$0.023/GB, so Wasabi has a LONG way that they can raise their $0.0039/GB and still be totally competitive against Amazon. And since they're not competing against B2 (different markets; Wasabi is S3-compatible and B2 isn't), then they may raise their price far above B2's cost. I am almost certain that Wasabi is doing an aggressive company-launch marketing campaign and are planning to get a lot of customers while living off of fundraised money, and then doing a huuuuge price hike (which at that point will still be cheaper than S3), and hoping companies and customers who needed their S3-compatibility stay on their "still cheaper than Amazon" service.
  • Further proof of that theory is that they said that the $8.5 million would last them for 7 months which shows they're running a very expensive, losing operation right now...
  • We don't even know if they'll stay afloat. But it does seem like they're competent enough to stay alive. And of course their goal is to make a successful company to earn them money.
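To make that 90-day pre-billing caveat concrete, here's a quick sketch of how it plays out. The rate is the one quoted in this thread, and `wasabi_storage_cost` is just an illustrative helper of mine, not anything from Wasabi's actual billing system.

```python
# Figures from the thread: $0.0039/GB-month, and every uploaded object is
# billed for at least 90 days of storage even if deleted sooner.
WASABI_PER_GB_MONTH = 0.0039

def wasabi_storage_cost(gb: float, days_stored: float) -> float:
    """Dollar cost of storing `gb` gigabytes, applying the 90-day minimum."""
    billable_days = max(days_stored, 90)
    return gb * WASABI_PER_GB_MONTH * (billable_days / 30)

# 100 GB kept for one week costs the same as 100 GB kept the full 90 days:
print(round(wasabi_storage_cost(100, 7), 2))    # 1.17
print(round(wasabi_storage_cost(100, 90), 2))   # 1.17
```

For a backup tool that constantly deletes and replaces changed chunks, that minimum-duration billing can matter more than the headline per-GB rate.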
Storage price comparisons for 700 GB:
  • B2 Monthly Cost: 700 * 0.005 = $3.5
  • Wasabi Monthly Cost: 700 * 0.0039 = $2.73.
  • B2 Total Yearly: $42.
  • Wasabi Total Yearly: $32.76.
  • B2 Download cost (700 GB): $14.
  • Wasabi Download cost (700 GB): $28.
Storage price comparisons for 1300 GB:
  • B2 Monthly Cost: 1300 * 0.005 = $6.5.
  • Wasabi Monthly Cost: 1300 * 0.0039 = $5.07.
  • B2 Total Yearly: $78.
  • Wasabi Total Yearly: $60.84.
  • B2 Download cost (1300 GB): $26.
  • Wasabi Download cost (1300 GB): $52.
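The numbers above are easy to re-derive; here's the arithmetic in a few lines of Python so you can plug in your own backup size. Rates are the ones quoted in this thread, and this deliberately ignores Wasabi's 1 TB minimum charge and B2's 1 GB/day free download allowance.

```python
# Per-GB rates as quoted in this thread (check current pricing yourself).
B2_STORAGE, B2_DOWNLOAD = 0.005, 0.02          # $/GB-month, $/GB
WASABI_STORAGE, WASABI_DOWNLOAD = 0.0039, 0.04

def yearly_storage(gb: float, per_gb_month: float) -> float:
    """Twelve months of storage for `gb` gigabytes, rounded to cents."""
    return round(gb * per_gb_month * 12, 2)

for gb in (700, 1300):
    print(f"{gb} GB/year: "
          f"B2 ${yearly_storage(gb, B2_STORAGE)}, "
          f"Wasabi ${yearly_storage(gb, WASABI_STORAGE)}; "
          f"full restore: B2 ${gb * B2_DOWNLOAD:.2f}, "
          f"Wasabi ${gb * WASABI_DOWNLOAD:.2f}")
```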
So... on paper, for a normal home user's backup set at such small scales... there's hardly any difference in the yearly storage costs. And as for their difference in download cost, you never really need to download anything via Arq unless your hard drives and other backup systems all die, so the download cost is no huge reason to choose one over the other.

But the thing is... I am almost certain that Wasabi is going to raise their insanely low STORAGE price. $0.0039/GB is not sustainable. That's $11.7 per month to store 3000 gigabytes, which is enough to fill up a 3 TB hard drive, which costs about $83 each... So you'd need to store that data for 7 months for them to make a profit off of the cost of just the hard drive itself. Add in the costs of employees, hardware, electricity, etc, and you're looking at a lot longer time before they make a profit, if ever... Hmm...

Remember that Wasabi offers an Amazon S3 API and competes against Amazon's $0.023/GB cost. So let's say Wasabi raises the price to $0.02. That's still less than Amazon and therefore companies who need S3 API (a lot do) would stay with Wasabi, but suddenly it's 4x the cost of Backblaze B2 (which does not support S3 API).

In short: Who the hell knows what to do... Wasabi could be anything... Their founder has cloud storage experience from Carbonite and has worked in secret since January of 2015 to set up this new company. But they could die in a year or two due to the cut-throat "race to zero dollar" cloud storage market and their unsustainably low prices, or they could thrive and expand globally. They could die under the load of too many customers (see their early tweets a day after launch), or they could be super fast. They could raise the price to far beyond B2's price, and the risk of them doing that is pretty high after they've gained enough customers... Or, they could keep their $0.0039 and forever be 4/5ths the price of B2 storage...

I had signed up with a B2 trial in preparation for Arq's B2 support... and I think I'll still choose B2...

There are so many risks with Wasabi. Sure, I'd pay $60.84 ($17.16 less than B2) per year instead of $78 to store 1300 GB... but, then they'll probably raise the price and suddenly it'll be something like $90 per year for Wasabi. Do I really want the hassle of switching provider AGAIN if Wasabi, an unproven company with a "try to make a splash with low fees" launch marketing campaign, suddenly raises their prices? They're competing with S3 (and their API); NOT with B2. So they are free to raise their price much higher than B2, and it's almost inevitable that they'll do so... In fact, Wasabi could raise their current prices to far above B2's price and they still wouldn't piss off people who come from Amazon S3's massive prices.

So, B2 is a much safer choice. For just $17 more per year, about the cost of a single pizza and coke at a restaurant, I can choose B2 instead and rest safe knowing that Backblaze B2 is a trusted company with a proven track record and a stable price. And an incredibly safe 17+3 hard disk parity storage architecture which will never lose a single byte of my data (https://www.backblaze.com/blog/vault-cloud-storage-architecture/).
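For anyone wondering what "17+3 parity" buys you: Reed-Solomon coding over 20 shards means any 17 of the 20 are enough to reconstruct the data. A full Reed-Solomon demo is overkill for a forum post, but the single-parity special case is just XOR, and this little sketch shows the rebuild-a-dead-shard trick. The shard count and helper names are simplified illustrations of mine, not Backblaze's actual code.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shards(data: bytes, n: int) -> list[bytes]:
    """Split data into n equal data shards plus one XOR parity shard."""
    size = len(data) // n
    shards = [data[i * size:(i + 1) * size] for i in range(n)]
    shards.append(reduce(xor_bytes, shards))  # parity = XOR of all data shards
    return shards

def rebuild(shards: list[bytes], lost: int) -> bytes:
    """Recover any single lost shard by XOR-ing all the survivors."""
    survivors = [s for i, s in enumerate(shards) if i != lost]
    return reduce(xor_bytes, survivors)

data = b"exactly thirty-six bytes of payload!"  # 36 bytes, splits evenly by 4
shards = make_shards(data, 4)                   # 4 data shards + 1 parity
shards[2] = rebuild(shards, 2)                  # pretend shard 2's disk died
assert b"".join(shards[:4]) == data             # data fully reconstructed
```

Backblaze's real scheme generalizes this: with 3 Reed-Solomon parity shards instead of 1 XOR shard, any 3 simultaneous pod failures are survivable.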

You will all have to ask yourselves the same question. I don't know Wasabi's future. Time will tell if Wasabi is trustworthy. They may have a truly badly designed system, and they may be unsustainable and massively hike their prices as soon as they've ensnared enough customers.

I can say for sure: switching providers sucks. It drains time and energy. I had about 1300 GB at Amazon Cloud Drive. It took about 2 weeks to upload all of that data. Then I had to just delete it all, which loses all history records... And I will now have to spend weeks doing the migration to the new provider, during which time I have no online backups... I don't know if it's worth trying Wasabi just because it's currently 4/5ths (~80%) of the B2 storage cost (while being twice as expensive for downloads)...

I am not interested in losing time and energy to be a tester for an unproven company (Wasabi), just to save a measly $17 per year ($1.40 per month) while risking a sudden, massive price hike or a collapse of the company... Heck, I buy candy bars that cost more than that damn $1.40 price difference per month.

So I'm going to choose B2. Affordable, completely proven as a company, and super reliable storage. Let's go! :)

Amazing post. Just started an account with Wasabi. It is cheaper (no download fees), and uploads seem faster.
 
How are you guys finding Wasabi? I tried them and ended up at B2. The Wasabi management interface was messed up to the point I could not use the service. The fonts showed double and it was difficult to find the hyperlinks to various pages. I tried both Firefox and Safari on macOS Mojave. Basically I had no choice but to discontinue my evaluation.
 
Just at the beginning of a 12TB upload. Let you know when done.
 
How are you guys finding Wasabi? I tried them and ended up at B2. The Wasabi management interface was messed up to the point I could not use the service.

How odd. It worked well for me when I set things up and provisioned everything. My backups (and, as I've posted here, a critical restore) have gone totally smoothly. Literally no problems at all. On the few occasions I've had questions, they were answered by email within hours. I trust that whatever UI issues you experienced were temporary.
...I just logged into Wasabi and paged through their configuration screens. No issues evident. Shrug.
 

Yep, the interface is fine for me now as well. At this point I'll continue using B2. I've almost completed uploading all that I need to backup.
 
It's actually worrying that they are not using stock Windows or Linux... because that means they've most likely custom-written

...He didn't say they aren't using FreeBSD or some other flavor of Unix.

Whatever they're using, it works spectacularly well.
 
Question about ARQ backup with Extended Journaled and APFS.

I have been doing ARQ backups of my old Mac running El Capitan, formatted as Extended Journaled. I am about to get a new Mac running Mojave and formatted as APFS. After migrating the contents from the old to the new Mac, can I continue to do ARQ backups of the new Mac to the dataset created by the old Mac? Or should I delete the dataset and start from scratch?
 
I am not a computer person and would appreciate help.

I just bought a new APFS-formatted Mac with Mojave. My old Mac is Journaled-formatted with El Capitan. I would like to migrate everything except the OS from the old to the new Mac.

I have ARQ, CCC and time machine backups of the old Mac. I searched the internet on how to do the migration. There are various ways of doing it, for example using the CCC backup or connecting the two Macs together and using setup or migration assistants.

I was overwhelmed with information and got confused. So I called the Stanford apple store for help. An employee told me that the easier way is to use time machine as connecting two macs together carries some risk. If I don’t know what I am doing, I may ruin the data of both macs.

He recommends I erase everything on my external SSD drive, which is formatted as Journaled, and make a fresh Time Machine backup of the old computer, then connect the SSD to the new machine. When powered up, the new Mac will ask "how would you like to set up this Mac?". I then select "from Time Machine backup" and that is it. All applications along with registration codes will migrate to the new Mac except the passwords of internet applications such as Gmail and Apple ID, which I have to re-enter. This method has the advantage that if the migration fails, it does not affect the contents of the old Mac, and I also still have a TM backup, he said.

I would appreciate it if you could let me know if he is correct.
 
I am not a computer person and would appreciate help.
Using either the TM or CCC backup as the source will have the exact same end result.

If you have an external SSD you can use for this, I would make a CCC backup/clone to the SSD. IMO that will be fastest during the import. TM uses this weird system of hard links that I think slows down imports like this.

The BIG key here, and the Apple Store tech mentioned it, is to do this during the initial system setup. When prompted, say you want to import, point to the CCC disk, and off you go. DO NOT make an account on the new Mac and then try to import with Migration Assistant afterwards. That can cause all sorts of permissions problems.
 
Thank you, much appreciated!

If I use CCC, I would select "restore from CCC backup" and click on the CCC backup drive icon?

So the difference between time machine and carbon copy cloner is just the speed of migration; both produce the same end result?

When I did the CCC backup for the first time, there was an issue about whether to include the startup volume (not sure what it was). I selected to include the startup volume. Will this inclusion mess up the migration?

I do have a spare SSD for use in migration.
 
If I use CCC, I would select "restore from CCC backup" and click on the CCC backup drive icon?

Not exactly... during initial setup it will ask if you want to import data. Say yes and you will get a screen similar to this (this one is from Migration Assistant but it looks similar). Pick the first option then in the next step pick your CCC disk as the source and off you go.

macos-high-sierra-migration-assistant.jpg

So the difference between time machine and carbon copy cloner is just the speed of migration; both produce the same end result?

Yes... exactly. :)

When I did the CCC backup for the first time, there was an issue about whether to include the startup volume (not sure what it was). I selected to include the startup volume. Will this inclusion mess up the migration?

I think you might have seen something about including the recovery partition. That won't matter either way for what we are doing here.

Let us know how you make out and how you like the new Mac!
 
Thank you again; I'm speechless at your kind assistance!

Buying a new Mac because I have an app called Papers 3 (the company was sold, no more updates). On a 2014 Mac, Papers 3 only works with El Capitan, but it will work with later OSes if the Mac model year is newer. El Capitan does not have the drivers for high-resolution monitors such as the 5K LG UltraFine, which I purchased without knowing this. Papers 3 is a very important app in my profession. The Stanford bookstore is having a Mac sale. It costs me $3K to solve the Papers 3 problem.
 
Have some questions about Time Machine local snapshots. Would appreciate your help.

Q1) Does ARQ back up the snapshots? Is a backup of a backup useful?

Q2) Both ARQ and TM snapshots do incremental backups hourly. Is this redundant? Is this redundancy desirable or useful?

Q3) How much space do the snapshots normally take?
I have a 1TB startup disk and 500GB of data. How much space will snapshots take up?

Q4) From Apple:
“How often local snapshots are saved?
Time Machine saves one snapshot of your startup disk approximately every hour, and keeps it for 24 hours. It keeps an additional snapshot of your last successful Time Machine backup until space is needed. And in High Sierra or later, another snapshot is saved before installing any macOS update.”

What does "last successful Time Machine backup" mean? Does this take up 500GB of space?
 
Q1) ARQ does not back up TM snapshots.
Q2) They are different backup products and don't overlap in the way they work. I don't believe that ARQ uses file system snapshots. TM is for local backups to an external disk. ARQ is primarily for backup to the cloud, though it can be used with an external disk or NAS. I would not use both ARQ and TM to an external disk, though you can.
Q3) TM snapshot size depends on how much changes, not on the total size.
Q4) If you have 500 GB of data, then your first TM backup will be 500 GB (less anything excluded). After that it will increase by the size of changed files.

The similarity between ARQ and TM is that ARQ mimics some aspects of TM's functionality. In particular, it can do backups every hour (like TM) and it thins the backups to every day and every week as they age (just like TM).
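That age-based thinning (hourly for a day, daily for a while, weekly after that) is easy to picture in code. This is only a sketch of the general technique; the retention windows below are my own assumptions, and Arq's and Time Machine's real rules differ.

```python
from datetime import datetime, timedelta

def thin(backups, now):
    """Keep all backups from the last 24 h, the newest per day for the
    last 30 days, and the newest per ISO week before that. (Hypothetical
    windows; real products use their own schedules.)"""
    keep = {}
    for ts in sorted(backups):          # oldest first, so newest wins a bucket
        age = now - ts
        if age <= timedelta(hours=24):
            key = ("hour", ts.date(), ts.hour)
        elif age <= timedelta(days=30):
            key = ("day", ts.date())
        else:
            key = ("week",) + tuple(ts.isocalendar())[:2]
        keep[key] = ts
    return sorted(keep.values())

# A year of hourly backups thins down to roughly a hundred kept versions:
now = datetime(2019, 3, 10, 12, 0)
hourly = [now - timedelta(hours=h) for h in range(24 * 365)]
kept = thin(hourly, now)
print(len(hourly), "->", len(kept))
```

The point is that storage growth stays bounded while you still keep fine-grained history for recent mistakes and coarse history for old ones.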
 

Thank you for answering my questions! I'm still trying to understand snapshots and the space they use. I have some elementary questions about snapshots below.

I just bought a new Mac with a 1TB SSD. At time zero, the startup disk has 500GB of data, and Time Machine is set up to back up hourly. In the next 48 hours, I modified 48 files of different sizes: file 1 within the first hour, file 2 within the second hour, and so on. The change in each file's size before and after modification was the same, 1MB.

Q1) Case 1
No external drive was connected to the Mac during these 48 hrs for TM backup. At time t=1 hr, TM tried to make the first snapshot and found that its size would be 500GB. Together with the original 500GB of data, this snapshot would have caused the total storage to exceed the 1TB SSD capacity (or the 80% ceiling). Thus, TM did not make the first snapshot. As a result, at t=48 hrs, there was no snapshot on the SSD. Yes?

Q2) Case 2
An external drive was connected to the Mac at t=0 for TM backup, and the backup was completed at time t=1 min. At t=25 hr, there were 24 snapshots (1 to 24) on the SSD, and the total space used was 500,024MB. Yes?

And at t=49 hr, the snapshots on the SSD were snapshots 25 to 48, and the total space used was 500,024MB. Yes?

Using the above two cases as an example, what does Apple mean when it says "...keeps it for 24 hours. It keeps an additional snapshot of your last successful Time Machine backup until space is needed."? What is the "last successful Time Machine backup", and how much space does it use: about 500GB or just 24MB?
 
Mac comes with five folders: system, library, applications, users and home. From apple:
“System
This folder contains the macOS operating system. .............”

I use ARQ to back up three folders: library, applications and home. The first two folders are relevant to applications I installed, including registration codes, whereas the last folder contains my data.

1) case 1
I lost my Mac and buy a new one with the same version of OS as the lost Mac. I restore these three folders from ARQ to the new Mac. Will the new Mac and the lost Mac perform identically?

2) case 2
I lost my Mac and buy a new one with the newest version of OS which is different than the lost Mac. I restore these three folders from ARQ to the new Mac. Will the new Mac and the lost Mac perform identically?
 

Arq is not really what you want for the purposes you stated. Think of Arq as a means to automate moving selected folders to a target drive, typically an online cloud provider. Its strength is backing up the areas of your Mac drive where you store your documents and files.

Here are three backup methods and how they compare:

Arq:
Used for moving selected folders to another location, typically the cloud. Great for backing up the personal areas of your Mac file system. It is fast, provides flexibility in destination, and encrypts both the file contents and the folder structure, so the contents are not readable where you store them.

It is not a good solution for backing up the system areas of your drive or for any type of full system recovery. Typically, if you lost your system, you would rebuild it and then restore your personal file content. Thus you need to be familiar enough with your file system to ensure you are selectively backing up everything you need.

Carbon Copy Cloner:
Excellent for full system backup. It essentially creates a full copy of the file system, including system areas and drive boot information. The backups it makes are fully bootable, so if you were to make a full backup of your Mac to a thumb drive, for instance, you could boot off that thumb drive and have an exact copy of your drive.

It does this by copying all files in an efficient manner and thus takes advantage of the full bandwidth between the source and destination drives. Destinations include any attached drive, or a disk image, or a remote Mac system drive.

Time Machine:
Full system recovery or selective file recovery to a local, attached, or networked disk. It creates incremental backups, pruning as time goes on. It does not offer much flexibility in scheduling. In my personal experience, and from subjective observation of other people's experiences, it is the most trouble-prone of the three. For me, that meant corruption of the backup archive about once per year.

It has a nice interface and is a great way of selectively restoring files. Again, personally, I would not trust it as my only backup method. Take that with a grain of salt if you will, but I think both Arq and CCC are better solutions.

Which brings me to the point I think is important: if your data is critical, and whose data is not, then it is probably best to use two backup methods. I personally use CCC and Arq: CCC for local full recovery in the event of a system failure, and Arq for offsite storage of critical files.
 
Thank you for your answer, I appreciate it!

So the answer to both case 1 and case 2 is NO. And Arq is only good for backing up the home folder, as Arq suggests?
 
You could back up more than that, really anything on your drive that you want, including the Applications folder. But using Arq to back up your home folder, /Users/<username>, is a good place to start. It includes ~/Library, which holds much of your application-specific data, such as Notes data and application settings. You can back up your Applications folder and others as well. I just wouldn't think of Arq as full system recovery, as there are better products for that purpose.
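If you're deciding which folders to select, a quick way to gauge how much data each would add to the backup is to walk the tree and sum file sizes. A minimal sketch (the folder list is illustrative only, not an Arq recommendation, and it ignores APFS cloning and sparse files, so figures are approximate):

```python
import os

def folder_size_bytes(path):
    """Sum file sizes under path, skipping anything unreadable."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # broken symlink or permission error: skip it
    return total

# Example folders one might consider selecting in Arq (adjust to taste).
for folder in ["~/Library", "~/Documents"]:
    expanded = os.path.expanduser(folder)
    print(folder, folder_size_bytes(expanded) // (1024 * 1024), "MB")
```

Running this before setting up the backup gives a rough idea of upload time on a slow connection, since the first Arq backup has to send everything once.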