
spaceguns

macrumors newbie
Original poster
Aug 13, 2019
I only have partial information, and I don't know the exact macOS version or how full the drive was, but I think it was pretty topped out. There are no recent backups, the data is important, and I have already put the fear of god into them regarding backups so this doesn't happen again. They do have a several-months-old Time Machine backup available, so if there is something useful on it, like a partition table or an encryption key I need, I may be able to get access to it.

The setup, steps taken, and how we got this far:
-Friend's 2015 MacBook Air stopped turning on; a bad shutdown may have been involved. Dead logic board.
-A drive enclosure is obtained. The drive would not boot any Macs at the Apple Store. The Genius Bar may or may not have run some commands to try to recover things; they report that the part of the SSD that managed FileVault is probably damaged.
-Friend's university tech shop takes a look, with no luck.
-They stress how important some of their research data is, so I refer them to DriveSavers out of an abundance of caution.
-DriveSavers tries to hook it into a system (??), says it isn't working, and sends them on their way within 15 minutes or so.
-Bottom line: the usual troubleshooting (booting off of it on a different Mac, etc.) isn't working.
-I take pity on them and take this on as a side project; I believe this is an encrypted APFS drive.
-I manage to capture a seemingly 100% good 500 GB disk image, and mirror it to a 2 TB external drive, directly off the bad drive in the enclosure on an Ubuntu system using ddrescue. For anyone else going through this with a drive that keeps dropping its connection: try a powered USB hub.

ddrescue commands used

Code:
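# Three passes (a note on the flags): -f forces writing to an existing output,
# -c 4096 copies 4096 sectors per cluster, and the mapfile tracks progress so
# each pass resumes where the last one stopped. Pass 1 (-n) skips the slow
# scraping phase for a fast first copy; pass 2 (-d -r3) retries the remaining
# bad areas up to 3 times using direct disc access; pass 3 repeats the rescue
# straight onto a second disk instead of an image file.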
sudo ddrescue -f -n -c 4096 /dev/sdc "/media/spaceguns/TOSHIBA EXT/RescueImage1.dmg" "/media/spaceguns/TOSHIBA EXT/mapfile1.txt"

sudo ddrescue -d -f -r3 -c 4096 /dev/sdc "/media/spaceguns/TOSHIBA EXT/RescueImage1.dmg" "/media/spaceguns/TOSHIBA EXT/mapfile1.txt"

sudo ddrescue -d -f -r3 -c 4096 /dev/sdc /dev/sdb mapfile.txt

No bad sectors or errors on either pass; a quick look at the image in a hex editor and, yep, stuff is in there all right. I have a backup of the dmg just in case.

Feeling a new level of confidence that I won't fry our last hope of getting this data back by hammering the drive with attempts, it's time to jump on my wife's MacBook Pro and see what we can see. Unfortunately, not being a Mac guy, I am hitting some walls, probably caused by my own ignorance of the underlying system and available commands. I am comfortable at a command line, but I am not a Mac user.

Mac Attempts and command results:

DISK UTILITY File>Open Disk Image>RescueImage1.dmg

It hangs; the image very briefly appears in the sidebar on the left, then immediately disappears.

Hooking the cloned drive up

Clicking Mount on disk3s2 does nothing; no feedback.

File->Get Info on disk3s2

Volume type : APFS Physical Store
BSD device node : disk3s2
Connection : USB
Device tree path : IODeviceTree:/PCI0@0/XHC1@14
Writable : No
Is case-sensitive : No
Volume capacity : 500,068,036,608
Owners enabled : No
Is encrypted : No
Can be verified : Yes
Can be repaired : Yes
Bootable : No
Journaled : No
Disk number : 3
Partition number : 2
Media name :
Media type : Generic
Ejectable : Yes
Solid state : No
S.M.A.R.T. status : Not Supported
Parent disks : disk3

File->Get Info on AppleAPFSMedia

Volume type : Uninitialized
BSD device node : disk4
Connection : USB
Device tree path : IODeviceTree:/PCI0@0/XHC1@14
Writable : No
Is case-sensitive : No
Volume capacity : 500,068,036,608
Available space (Purgeable + Free) : 0
Purgeable space : 0
Free space : 0
Used space : 500,068,036,608
Owners enabled : No
Is encrypted : No
Can be verified : No
Can be repaired : No
Bootable : No
Journaled : No
Disk number : 4
Media name : AppleAPFSMedia
Media type : Generic
Ejectable : Yes
Solid state : No
S.M.A.R.T. status : Not Supported

Disk Utility First Aid Results

Code:
Running First Aid on “AppleAPFSMedia” (disk4)

Fixing damaged partition map.
Invalid disk.

Operation failed…


Running First Aid on “” (disk3s2)

Repairing storage system
Performing fsck_apfs -y -x /dev/disk3s2
Checking the container superblock.
Storage system check exit code is 0.

Operation successful.

Still no mount on disk3s2. I think I have exhausted my Disk Utility GUI options, so now it is off to the command line! This will all be on the cloned drive. I am trimming out references to the other system drives for ease of review.

Code:
Janes-MacBook-Pro:~ John$ diskutil list
/dev/disk3 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *2.0 TB     disk3
   1:                        EFI EFI                     209.7 MB   disk3s1
   2:                 Apple_APFS Container disk4         500.1 GB   disk3s2

/dev/disk4 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +ERROR      disk4
                                Physical Store disk3s2


Janes-MacBook-Pro:~ John$ diskutil apfs list
APFS Containers (2 found)
|
+-- Container disk1 (trimmed data)
+-- Container ERROR -69808
   ======================
   APFS Container Reference:     disk4
   Size (Capacity Ceiling):      ERROR -69620
   Capacity In Use By Volumes:   ERROR -69620
   Capacity Not Allocated:       ERROR -69620
   |
   +-< Physical Store disk3s2 0804ED4C-B212-4BF2-B475-6026969AE826
   |   -----------------------------------------------------------
   |   APFS Physical Store Disk:   disk3s2
   |   Size:                       500068036608 B (500.1 GB)
   |
   +-> No Volumes


Janes-MacBook-Pro:~ John$ diskutil mountDisk /dev/disk3s2
One or more volume(s) failed to mount

Janes-MacBook-Pro:Documents John$ diskutil mountDisk disk4
Volume(s) mounted successfully

If that actually did something, I am not aware of it; nothing additional seemed to be listed or mounted anywhere. I ran the gpt commands below, unmounted disk4, ran them again, and checked Disk Utility/Finder for any changes. Meanwhile, I am assuming we are not actually mounting.

Code:
Janes-MacBook-Pro:~ John$ sudo gpt show disk3
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34           6         
         40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
     409640   976695384      2  GPT part - 7C3457EF-0000-11AA-AA11-00306543ECAC
   977105024  2929858399         
  3906963423          32         Sec GPT table
  3906963455           1         Sec GPT header
Janes-MacBook-Pro:~ John$ sudo gpt show disk3s1
   start    size  index  contents
      0       1         MBR
      1  409599         
Janes-MacBook-Pro:~ John$ sudo gpt show disk3s2
     start       size  index  contents
         0  976695384         
Janes-MacBook-Pro:~ John$ sudo gpt show disk4
     start       size  index  contents
         0  122086923       

Janes-MacBook-Pro:~ John$ diskutil apfs unlockVolume /dev/disk4
/dev/disk4 is not an APFS Volume

Same result for all /dev/disk* attempts
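One reassuring detail in the gpt output above: the sizes are internally consistent with the capacity from Get Info, so the clone's partition geometry at least looks intact. Quick arithmetic check:

Code:
# disk3s2: 976,695,384 sectors of 512 bytes
976695384 * 512  = 500068036608
# disk4: 122,086,923 blocks, which only matches if the synthesized
# container uses 4096-byte blocks
122086923 * 4096 = 500068036608
# both equal the 500,068,036,608-byte capacity that Get Info reported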

Code:
Janes-MacBook-Pro:~ John$ diskutil verifyVolume disk4
Started file system verification on disk4
Verifying storage system
Performing fsck_apfs -n -x /dev/disk3s2
Checking the container superblock
Storage system check exit code is 0
Finished file system verification on disk4

This seemed to result in no changes.

Code:
Janes-MacBook-Pro:~ John$ diskutil verifyDisk /dev/disk4
Janes-MacBook-Pro:~ John$ diskutil repairDisk /dev/disk4

Both returned:

Code:
Unable to verify this whole disk: A GUID Partition Table (GPT) partitioning scheme is required (-69773)

Restart, holding the Option key at the chime = clone drive does not appear.

Tried iBoysoft. It sees the volume but does not accept the password, and has no option to use the recovery key (I did try that in addition to the password).

Tried Recovery Studio Pro on Win10; it noted that no APFS keys were found after a full scan, which is why I am wondering whether some item required for decryption has been wiped, and whether it might be recovered from a backup and put back in place.

I am just lost as to what to do from here, or what may be missing. I assume getting the volume to show back up where I can see it in Terminal is the next step, but I don't know how to approach it.

Disclosure: after the above I did try some hail-mary commands, but I don't mind re-cloning a fresh copy of the drive to keep working on this.

I also worked with the image that wouldn't mount using hdiutil, but I am running out of characters here. I can recap if it would be helpful.
 
Hm. So, APFS works by having both a container and actual volumes, as you may have been able to suss out from the digging you've done. Where a "traditional" file system just writes a volume directly to its partition in the partition map, APFS writes a container, sort of like its own partition map inside the partition it's given. From the container, which can span multiple drives, it can create volumes. Thus it can resize the volumes within the container without any issue or messing with the partitioning; very similar to ZFS in many ways.

It seems like whilst you've extracted a valid container, and perhaps the data, the whole volume group inside the container is messed up, with ERROR all over the shop. The information needed to recreate the volume details is likely present in the old backups, though whether the file-system index is affected by the errors is unknown. You may be able to create a new, empty volume with the same parameters (size, drive number and such) and see in your hex editor whether there's sufficient similarity between it and the beginning sectors of the clone that you can mirror it and get somewhere; I don't know if this part of the drive information is also under any form of encryption.
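Something like this is how I'd attempt that comparison; a rough sketch, assuming hdiutil on High Sierra or later can create an APFS-formatted image (the identifiers for the new image are placeholders, check diskutil list after attaching):

Code:
# Create an empty, same-size APFS reference image (sparse, so it won't eat 500 GB)
hdiutil create -size 500g -type SPARSE -fs APFS -volname Reference reference.sparseimage
# Attach without mounting so the raw container bytes can be read
hdiutil attach -nomount reference.sparseimage
# Dump the first blocks of the reference's APFS partition (diskNsM is a
# placeholder for whatever identifier it attaches as) and of the clone
sudo dd if=/dev/diskNsM bs=4096 count=8 2>/dev/null | xxd > reference-head.txt
sudo dd if=/dev/disk3s2 bs=4096 count=8 2>/dev/null | xxd > clone-head.txt
diff reference-head.txt clone-head.txt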

I can't think of any command that you haven't already tried that could help you out, I'm afraid.
diskutil ap
gets you a list of all APFS-related commands (those meant to be publicly visible).

If I were to guess on the encryption front: since the volume header info seems lost, I think there's a good chance the volume doesn't actually know it's encrypted with a password. Even given the correct password to decrypt the data, it wouldn't be able to verify that password. It might still be able to decrypt the data on the drive, but it wouldn't know the result was valid, and couldn't discern decrypted from encrypted data. This issue is perhaps what you ran into with the recovery software you mentioned.

I have no good ideas I'm afraid, but it's an interesting problem and I'd love to hear it if you find a solution.
APFS is still a relatively new file system, meaning that the field of data recovery has a lot less experience with it than if the lost data were on, say, FAT, EXT4 or even NTFS. Hell, or JHFS+.
Oh, and can I just say... TARDIS is an excellent volume name, especially if it hosts Time Machine backups. I've just watched a boatload of Who myself; finished rewatching the 2005-era show and went right back to William Hartnell. Lovely indeed.
Good luck with the data recovery!
 
An old Time Machine backup is very unlikely to contain a partition table or an encryption key. TM backups are made by walking files and directories using the file system; both partition tables and encryption keys live at a level far lower than that.

Depending on your programming skills, it's possible that poking around in the innards of the encrypted data will give more details. As a result of a completely separate discussion in another thread:
https://forums.macrumors.com/threads/how-exactly-does-filevault-work.2191792/

I learned that the researchers who dug into FileVault2 have published their software, which allows decoding FV encryption on non-Mac systems.

Here's the PDF paper that discusses the details:
https://eprint.iacr.org/2012/374.pdf

Their library is listed at:
http://code.google.com/p/libfvde/

which appears to be a dead link.

Googling finds this:
https://github.com/libyal/libfvde

The lib may be able to help directly, if it lets you determine exactly which parts of the FV decryption process have been damaged. If not, it may take some programming to turn it into a tool for poking around at a non-working FV disk.
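The lib also ships command-line tools (fvdeinfo, fvdemount) that might save you writing any code at all. Roughly, going by the libfvde docs (untested by me against your image, and they assume a CoreStorage-based FV2 volume):

Code:
# offset = byte position of the FileVault partition (sector 409640 in your gpt output)
fvdeinfo -o $((409640 * 512)) RescueImage1.dmg
# mounting needs the EncryptedRoot.plist.wipekey plus the password (-p) or
# recovery key (-r); the mount point here is just an example
fvdemount -e EncryptedRoot.plist.wipekey -p 'the-password' \
    -o $((409640 * 512)) RescueImage1.dmg /mnt/fvde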


I should note there are several layers here. One is the GPT partitioning scheme. The next is FileVault. After that is the CoreStorage container, whose contents are encrypted, and finally APFS. If an earlier layer is damaged, then later ones will be unobtainable, especially if FV's key storage management has been lost or damaged.

FV is designed so that losing the key makes all the encrypted data inaccessible. If this weren't so, then it would be weak protection.
 
What @chown33 said, but with the caveat that since this is APFS, there won't be a CoreStorage component in play.

But from what little I've seen re: APFS, FileVault will work in a similar way, in that there probably is (was?) a key saved in the header of the APFS container, which then drives a similar decryption chain: decrypt keychain, decrypt secondary key, decrypt volume key, decrypt drive.

One thing I've seen mention of, but nothing concrete: there might be another layer of encryption at the file level, similar to how iOS encrypts files and the filesystem. That would make some sense, as APFS went to iOS first, and that filesystem was already APFS-ish.

Some details on how things work in iOS found here: https://www.apple.com/business/docs/site/iOS_Security_Guide.pdf
 
First off, thank you all for taking the time to give such thorough feedback. I will be reading/reviewing everything in the links, and I appreciate the help on my journey of learning Apple's unique filesystem and encryption methods.

What I am tracking as possible hail-mary leads:
1. Research whether volume data may be in a Time Machine backup, though as chown33 stated, this is unlikely. I haven't dug into Time Machine at all (not a Mac user), but I will look at the documentation and at my wife's Time Machine to see if there is any data, and any pathway, to essentially rebuild a volume from data backed up to Time Machine. If Time Machine just does a directory walk and copies files, and doesn't copy any kind of system/partition info, then that's a dead end. Honestly, while I am generally comfortable screwing around with low-level stuff, I have never had to really dig into how volume data works at that level.

2. If I can rebuild the volume and have all the keys exactly where they need to be, then find where the volume data ends (and any other critical parts), maybe I can successfully copy the encrypted data over the remainder, splicing them together; a rough sketch follows.
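A sketch of what I mean by splicing, with a placeholder boundary since I haven't found the real one yet:

Code:
# Hypothetical splice: overlay everything past the first META_BLOCKS blocks of
# the encrypted clone onto a freshly rebuilt volume image of identical geometry.
# META_BLOCKS is a placeholder; the real metadata/data boundary has to be found
# first. conv=notrunc stops dd from truncating the rebuilt image.
META_BLOCKS=16
dd if=RescueImage1.dmg of=rebuilt.dmg bs=4096 \
    skip=$META_BLOCKS seek=$META_BLOCKS conv=notrunc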

If I am tracking right, to have any chance I would need the FileVault and APFS keys. I know that if the keys are gone, it's a dead end, unless some underlying flaw in the encryption implementation is discovered later.

Next step is to read and digest the documentation for https://github.com/libyal/libfvde to get a better handle on how the underlying pieces work and see if there is anything I can extrapolate from it, then build from source and see if it gets any further.
https://github.com/libyal/libfvde/blob/master/documentation/FileVault Drive Encryption (FVDE).asciidoc
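For my own notes, the libyal projects generally build along these lines (I'll verify against the repo's wiki before leaning on it):

Code:
git clone https://github.com/libyal/libfvde.git && cd libfvde
./synclibs.sh    # pull in the shared libyal dependency libraries
./autogen.sh
./configure && make
# the fvdeinfo and fvdemount tools end up under fvdetools/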

Also, thank you for the compliments on TARDIS. Yes, it is my wife's Time Machine drive, and I thought it was pretty clever. For an additional laugh, she made a vinyl pattern and decorated it: https://imgur.com/XdjbQxm

For anyone else following along: stress the 3-2-1 backup rule to everyone (yes, including offsite; local backups don't help if your house burns down with everything in it), along with testing those backups, and hammer in how extra critical this is for anyone using full-disk encryption, as that can greatly frustrate recovery efforts.
 
I'll make it brief:
If "Drivesavers" says they can't "get at" the data, the chances aren't good at getting it back.

I suggest you give your wife a tutorial in the concept of "backing up".
I STRONGLY SUGGEST that you DO NOT use Time Machine for this.

Use either CarbonCopyCloner or SuperDuper instead.
And... keep at least ONE of the backups "in the clear" (NO encryption).
If necessary, keep it locked in a safe.

As you have discovered, when things go wrong, getting data back is a chore.
But when it's strongly encrypted, it can go from worse to... impossible.
 
What I am tracking as possible hail-mary leads:
1. Research whether volume data may be in a Time Machine backup […] If Time Machine just does a directory walk and copies files, and doesn't copy any kind of system/partition info, then that's a dead end.

Time Machine has its own backup structure and, AFAIK, currently doesn't even support being on an APFS volume; Time Machine is still HFS+-only, as far as my knowledge goes. Thus, it seems like it probably is just a file walk, but one based on deltas.

2. If I can rebuild the volume and have all the keys exactly where they need to be […] maybe I can successfully copy the encrypted data over the remainder, splicing them together.

I think it's a long shot, but it's the best thing I can think of, yeah.

Also, thank you for the compliments on TARDIS. Yes, it is my wife's Time Machine drive […] she made a vinyl pattern and decorated it.

Absolutely brilliant. Or as Christopher Eccleston, the 9th Doctor, would say: Fantastic.

For anyone else following along: stress the 3-2-1 backup rule to everyone […]

... I used to have my data on 3 different drives, some things in iCloud too, but I don't count that... Over the summer I've had 2 drives die almost at the same time. Got one replaced under warranty.
 
I suggest you give your wife a tutorial in the concept of "backing up".
The failed drive is a friend's; the OP has been working off his wife's Mac.
 
I suggest you give your wife a tutorial in the concept of "backing up".
I STRONGLY SUGGEST that you DO NOT use Time Machine for this.


"Carbon Copy" is actually a quite bad idea. Why? Here is the scenario where CCC fails. You are editing this novel you are writing and you stop and back it up using CCC. Now you have a good copy. Then you accidentally delete Chapter Three while writing Chapter Five but don't notice your mistake. But it is OK you have a backup. Now after finishing Chapter five you do another backup and overwrite your previous backup. Chapter three is now gone forever.

If you had used Time Machine, you would have EVERY version of your novel automatically saved, with no data overwritten.

Yes, incremental backups are hard to understand, and CCC is conceptually simple, so people like it. But you REALLY need to do incremental backups, and TM makes this easy.

There are many other ways CCC can fail, but in every case it has the same form: you do a backup, and everything is OK; the data gets corrupted, and you do another backup, killing your only good backup by writing corrupted data over the top of it.

The lesson learned is to NEVER overwrite a backup. Keep it "forever", meaning for YEARS. TM does this automatically.

Some people think TM saves data in some obscure format. No. It simply copies the files using the file system. The saved files are just as if you had copied them using the Finder, but with one difference: it only copies the files that changed.

The first time you run TM, it copies every file. The next time, it only copies files that have changed. That's it. Over time, if you edit a lot of files, the multiple copies start taking up a lot of room, so TM thins out the redundant copies by keeping only daily or weekly versions of the oldest files rather than hourly ones.

When buying a disk for TM, buy one that is at least twice as large as the amount of data you want to back up.

TM should be your first-level backup; just let it run and do its hourly backups.
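If you want to see that history for yourself, Apple ships a command-line tool for TM:

Code:
tmutil listbackups    # every snapshot kept on the backup disk
tmutil latestbackup   # path of the newest snapshot
tmutil compare        # diff the newest snapshot against the live system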

But overall, the rule of thumb is that you always need to be sure these rules are followed:
1) the data exists on at least three physical devices, even while a backup operation is in progress;
2) the data exists in at least two geographic locations, even while a backup operation is in progress.
The above is the dead minimum. If the data is really important, then add more copies in more locations.

A simple way to do this is to use TM and a cloud backup service, and run both continuously. I use Backblaze for cloud backup; it runs continuously and sends data off to a secure location within minutes of saving a file, and costs only $6 per month for unlimited storage. Then, in addition to TM and the cloud, you might periodically use something like CCC to make a copy, and rotate it with another copy you keep off-site.


Never overwrite backup data.
 
"Carbon Copy" is actually a quite bad idea. Why? Here is the scenario where CCC fails. You are editing this novel you are writing and you stop and back it up using CCC. Now you have a good copy. Then you accidentally delete Chapter Three while writing Chapter Five but don't notice your mistake. But it is OK you have a backup. Now after finishing Chapter five you do another backup and overwrite your previous backup. Chapter three is now gone forever.

If you had used Time Machine you have EVERY version of your novel automatically saved with no data overwritten.

Yes, incremental backups are hard to understand and CCC is conceptually simple so people like it. But you REALY need to do incremental backups and TM makes this easy.

There are many other ways CCC can fail. But in every case it has the same form: You do a backup, everything is OK. Data gets corrupted and you do another backup and kill you only good backup by writing corrupted data over op of it.

The lesson learned is to NEVER overwrite a backup. Keep it "forever", this means for YEARS. TM does this automatically.

Some people thing TM saves data is some obscure format. No. It simply copies the files using the file system. The saved files are just like if you copied them using the finder. But with one difference: It only copies the files that changed.

The first time you run TM it copies every file. The next time it only copies files that have changed. That's it. Over time if you edit a lot of files the multiple copies start taking a lot of room. So TM will thin out the number of redundant files by keeping only daily or weekly copies of the oldest files rather than hourly copies.

When buying a disk for TM, buy one that is at least twice as large as the amount of data you want to back up.

TM should be you first level backup and just let it run and do its hourly backup.

But overall the rule of thumb is that you always need to be sure these rules are followed:
1) the data exists in at least three physical devices even when a backup operation is in progress.
2) the data exits in at least two geographic locations even when a backup operation is in progress.
The above is dead-minimum. If the data is really important then add more copies in more locations

A simple way to do this is to use TM and a cloud backup service. Run both of these continuously. I used Backblaze for cloud backup. It runs continuously and sends data off to a secure location within some minutes of saving the file. The cost is only $6 per month for unlimited storage. Then in addition to TM and the cloud. You might periodically use something like CCC to make a copy and rotate this with another copy you keep off site.


Never overwrite backup data.
Isn't this what CCC SafetyNet does?
https://bombich.com/kb/ccc5/protect...stination-volume-carbon-copy-cloner-safetynet
 
A minor update while I have the fun of exploring the ins and outs of the file system, before I throw in the towel once I confirm the encryption keys I need are well and truly gone.

I was able to extract what I believe to be the full encrypted EncryptedRoot.plist.wipekey file via hex editor. There's enough in the XML that matches up with what I would expect to see in the file, with PassphraseWrappedKEKStruct (x2) and KEKWrappedVolumeKeyStruct being gobs of encrypted text. EFILoginGraphics is also a mess of data, but it was not mentioned in the paper as important to the encryption.

Also, using SleuthKit on the dmg:
Code:
GUID Partition Table (EFI)
Offset Sector: 0
Units are in 512-byte sectors

      Slot      Start        End          Length       Description
000:  Meta      0000000000   0000000000   0000000001   Safety Table
001:  -------   0000000000   0000000039   0000000040   Unallocated
002:  Meta      0000000001   0000000001   0000000001   GPT Header
003:  Meta      0000000002   0000000033   0000000032   Partition Table
004:  000       0000000040   0000409639   0000409600   EFI System Partition
005:  001       0000409640   0977105023   0976695384   Macintosh HD
006:  -------   0977105024   0977105059   0000000036   Unallocated

Next step from the paper would be "The EncryptedRoot.plist.wipekey file is encrypted using AES-XTS (details later) with an all-zeros tweak key, but the encryption key is easily available in the header (first block) of the CoreStorage volume (see table 2 in Appendix)"

Does anyone have any recommendations for finding my way to the first block of the CoreStorage volume (or nearby), or for how to grep/regex it out of the image?
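In the meantime, the offset math from the SleuthKit table above seems straightforward, so once I'm back at a machine with the image, my first stop is:

Code:
# Macintosh HD starts at sector 409640 (per the table above), so its first
# block should sit at byte 409640 * 512 = 209,735,680 into the image:
dd if=RescueImage1.dmg bs=512 skip=409640 count=8 | xxd | less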

I am traveling, so I only have a Windows machine and minimal time, and I don't want to mess with mounting into VMs at the moment, but I figured I would update those who looked to help, and see if anyone had tips for getting to the first block of the CoreStorage volume while I continue to research. Meanwhile, I am running full-image searches for some obvious-ish terms that might dial me in, since my google-fu on this has failed me, or I have failed it.

After I have exhausted my abilities I will shoot the researchers an email to see if there is anything I am missing before calling it.

Also, yeah, thanks for pointing out that this is a friend's drive. My household is already on cloud backup with some versioning (tested lightly, although I should test more extensively), plus periodic offline local backups. It could always be better, though (you can never have enough).
 
@spaceguns In case you haven't given up on this yet / still have the backup image, I answered your StackExchange post here: https://apple.stackexchange.com/questions/366693

Wow, I was just about to give everything back to them with notes, in case a tool emerged. I have been tied up this week, but I will try taking a swing at this tomorrow or later this week, when my wife doesn't need her Mac for a few hours.

I hadn't totally written this drive off, but it had definitely gone into the "check tools again in a few years" pile.
 
@spaceguns In case you haven't given up on this yet / still have the backup image, I answered your StackExchange post here: https://apple.stackexchange.com/questions/366693

Holy Pancake, you've done epic work, man! This is absolutely incredible, and making it open source is even better. I don't know much about APFS at a low level yet, so I'll read through your code and the documentation when I have some spare time. The lecture you linked to on Stack seems very interesting, too! I don't know if it's worth the hassle, but if you set up a donation thing on GitHub, I'll gladly chuck you $10 for the effort, mate ;)
 
@spaceguns In case you haven't given up on this yet / still have the backup image, I answered your StackExchange post here: https://apple.stackexchange.com/questions/366693

I pulled your code from Git and tried to build it.
In src/apfs/string/object.h line 248, you define what your comment says should be an empty result string, but you set it to:
result_string = '\0';
which is null.
On compile this gives the error:
expression which evaluates to zero treated as a null pointer constant of type 'char *' [-Werror,-Wnon-literal-null-conversion]
with both Clang and GNU's GCC.
I've never written C code, though I have had a very brief affair with C++. Is this intended to indicate null termination? Does it still compile fine for you? Should this just be '' instead of '\0'?

Just to see what'd happen, I set it to '' instead of '\0'. The compiler also does not like empty character constants.

Changing it from '\0' to "\0", however, does make the compiler stop complaining about that, but it moves on to complain about something in j.h, with SF_DATALESS being an undeclared identifier.

For full disclosure, I'm trying to compile it on Mojave, which may influence this.

PS: When I say it works with neither Clang nor GCC: the way I tried switching to the Homebrew-installed GCC was just to set gcc as an alias for gcc-9 in bash, since gcc defaults to Xcode's Clang compiler. I assumed the make utility would just call gcc in the shell and thus use gcc-9 instead, but as I mentioned, I don't really work much with the C languages; I mostly write Java, Swift, Scala and a wee bit of Kotlin, so I don't muck about a lot with gcc and makefiles yet :) Gradle handles it all anyway :p

Addendum:
Realised make probably doesn't just call gcc from the current shell session, for security reasons, so I tried running it with GNU's make instead, since that would probably be set to use Homebrew's GNU gcc, but got the same result. Though when I set it to "\0" instead of '\0', it did actually create the bin for apfs-read, though it complained about j.h and didn't create apfs-inspect.
 
@casperes1996, thanks for that! That explains why the format string for "Unknown subtype" wasn't being displayed properly when I was doing my own data recovery a few weeks ago; the line should read *result_string = '\0'; — I've corrected it upstream. result_string is a pointer to the first character of the string, obtained via a call to malloc() in get_o_type_string(). The erroneous line replaces the malloc'd pointer with a NULL pointer, when the actual intent is to set the first character of the string to ASCII NUL, indicating end of string.

Please report any other issues you find directly on GitHub.

Setting a shell alias won't affect make; use a symlink instead, i.e. cd /usr/local/bin; ln -s gcc-9 gcc (that's my setup, though it's inadvisable if you also use Xcode, which expects gcc to be Apple Clang). Alternatively, you can edit the CC= line of the Makefile appropriately (e.g. CC=/usr/local/bin/gcc-9), which won't affect Xcode. I guess Apple's version of Clang is less tolerant, or Clang doesn't support some of the -W flags I'm using; I've never used Clang, though my uni lecturer was adamant about doing so 😅 I should try compiling with -pedantic and see what crops up...

A donation would be much appreciated :) Here's my PayPal: https://paypal.me/jivanpal

EDIT: Clang may not be suitable, since some of the datatype definitions rely on __attribute__(( packed(n) )), which I believe is GCC-specific. At least, there's no mention of packed support in the Clang documentation. One could use #pragma instead, but I was going directly off of the APFS spec.
 
@casperes1996, thanks for that! […] The erroneous line replaces the malloc'd pointer with a NULL pointer, when the actual intent is to set the first character of the string to ASCII NUL, indicating end of string.

Makes a lot of sense. One of the big language designers, I forget whether it was Bjarne Stroustrup, Dennis Ritchie, Brian Kernighan or one of the Java folks, once said that pointers were a mistake: very powerful, but the cause of so many bugs and millions, if not billions, of dollars in broken code ;).
Pointers are great, but can easily lead to issues.

Please report any other issues you find directly on GitHub.

Right, of course. I'll follow the proper channels if I find anything going forward.

Setting a shell alias won't affect make; use a symlink instead […] Alternatively, you can edit the CC= line of the Makefile appropriately (e.g. CC=/usr/local/bin/gcc-9), which won't affect Xcode.

Right, I figured out later yesterday that it probably wouldn't work like that. I do use Xcode, so I'll just edit the Makefile. Thanks for replying to me, and for making the tool :).
 
Well, I am definitely swimming out of my league on this one. I finally got time, while dealing with a major move, to jump into this (I'm in a hotel currently).

apfs-inspect found a superblock, and the output seemed to coincide with the expected output until:

Code:
Reading the Ephemeral objects used by this checkpoint ... OK.
Validating the Ephemeral objects ... FAILED.
An Ephemeral object used by this checkpoint is malformed. Going back to look at the previous checkpoint instead.
END: Handling of this case has not yet been implemented.

This level is way over my head, so I am not sure how, or if, I can currently proceed from here. Either way, this appears to be an incredible pro-level tool that has given some hope to this problem, even if I can't fully work out where to go from here yet, or whether we need to put this drive aside for now and wait for a future implementation/fix.
 
@spaceguns Ugh, annoying. I've created a spaceguns branch in the Git repo with a slightly altered version of apfs-inspect that should allow you to proceed further. Use git checkout spaceguns in your local version of the repo, re-compile, then give it a shot again! :)
 
@JivanPal holy hell that was a quick response.

Code:
...snip

Reading the Ephemeral objects used by this checkpoint ... OK.
Validating the Ephemeral objects ... FAILED.
An Ephemeral object used by this checkpoint is malformed. Proceeding anyway.
OK.

Details of the Ephemeral objects:

...snip around 38 chunks of object data

The container superblock states that the container object map has Physical OID 0x1a01ae.
Loading the container object map ... OK.
Validating the container object map ... FAILED.
This container object map is malformed. Going back to look at the previous checkpoint instead.
END: Handling of this case has not yet been implemented.

Trying apfs-read on the container object map left us with:

Code:
 sudo ./apfs-read /dev/disk3 0x10101ae

Opening file at `/dev/disk3` in read-only mode ... OK.

Reading block 0x10101ae ... validating ... FAILED.
The specified block may contain file-system data or be free space.

Details of block 0x10101ae:
--------------------------------------------------------------------------------
Stored checksum:    0x91dfc482c4acd4d1
OID:                0x7584f0047ca14457
XID:                0x10d50910126142ef
Storage type:       Virtual
Type flags:         No-header, Encrypted, Non-persistent (should never appear on disk --- if it does, file a bug against the APFS implementation that created this object)
Type:               Unknown type (0x%08x) --- perhaps this type was introduced in a later version of APFS than that published on 2019-02-27.
Subtype:            Unknown subtype (0x00009876) --- perhaps this subtype was introduced in a later version of APFS than that published on 2019-02-27.

--- This tool cannot currently display more details about this type of block ---
--------------------------------------------------------------------------------

END: All done.

If any of the detailed output may be useful, I can dump it on Pastebin. Sorry, I am not at the level to fully parse this output solo.
 
I have email notifications turned on and just so happened not to be doing anything else, haha. Shame that the object map appears to be corrupted. Good thinking to use apfs-read; however, you mistyped 0x1a01ae from the apfs-inspect output as 0x10101ae. Let's see what's actually there; I don't expect anything useful, but it's worth checking anyway. If you see all zeroes for the checksum, it likely means the entire 4096-byte block at that address was erroneously zeroed out. To be sure, you can get a hexdump of that particular block with sudo dd if=backup.img bs=4096 count=1 skip=$((0x1a01ae)) | xxd, where backup.img is the same path you've been passing to apfs-inspect.

Sorry I am not at the level to fully parse this output solo.

Most of the output from apfs-inspect is usually unhelpful, but you never know when some info will end up being useful. I appreciate the snipping, though; we don't really need to look at anything other than the end result for now.

When I get the time to continue working on the tools, I'll make it a priority to implement going back through the APFS checkpoints, as that should make your investigation easier. I might also write something that looks at all of the available checkpoints to find all valid container superblocks that have valid object maps. That should give us a good number of references to the locations of the encryption keys, and also give us multiple APFS volume superblocks to work with, in case any of those don't have valid volume object maps. Getting the encryption keys is the first task, though; if we don't have those, we can't decrypt any volume superblocks, so my second priority will be implementing APFS encryption support.
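In the meantime, here's a crude shell version of that superblock hunt; a sketch assuming, per the APFS spec, that every container superblock carries the magic NXSB at byte 32 of its 4096-byte block, and that the checkpoint descriptor area sits near the start of the container:

Code:
# Scan the first 2048 blocks of the container for the NXSB magic at byte 32.
# Using the partition device, so block 0 here is the container's block 0.
for i in $(seq 0 2047); do
    magic=$(sudo dd if=/dev/disk3s2 bs=4096 skip=$i count=1 2>/dev/null \
            | tail -c +33 | head -c 4)
    [ "$magic" = "NXSB" ] && echo "candidate container superblock at block $i"
done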

EDIT: Corrected skip=$((0x0x1a01ae)) to skip=$((0x1a01ae)).
 
Horrible attention to detail on my part; I had used an earlier address from playing around with it and didn't completely adjust it on the command line.

Code:
 sudo ./apfs-read /dev/disk3 0x1a01ae
Password:

Opening file at `/dev/disk3` in read-only mode ... OK.

Reading block 0x1a01ae ... validating ... FAILED.
The specified block may contain file-system data or be free space.

Details of block 0x1a01ae:
--------------------------------------------------------------------------------
Stored checksum:    0x0000000000000000
OID:                0x0
XID:                0x0
Storage type:       Virtual
Type flags:         (none)
Type:               (none/invalid)
Subtype:            (none/invalid)

--- This tool cannot currently display more details about this type of block ---
--------------------------------------------------------------------------------

END: All done.

As expected, all zeros, but checking with hexdump anyway. I put in what you typed, but also corrected the command to what I think you meant:
skip=$((0x0x1a01ae)) to skip=$((0x1a01ae))
Output for both:

Code:
$ sudo dd if=/dev/disk3 bs=4096 count=1 skip=$((0x0x1a01ae)) | xxd
-bash: 0x0x1a01ae: value too great for base (error token is "0x0x1a01ae")
00000000: 5361 7669 6e67 2073 6573 7369 6f6e 2e2e  Saving session..
00000010: 2e0a 2e2e 2e63 6f70 7969 6e67 2073 6861  .....copying sha
00000020: 7265 6420 6869 7374 6f72 792e 2e2e 0a2e  red history.....
00000030: 2e2e 7361 7669 6e67 2068 6973 746f 7279  ..saving history
00000040: 2e2e 2e74 7275 6e63 6174 696e 6720 6869  ...truncating hi
00000050: 7374 6f72 7920 6669 6c65 732e 2e2e 0a2e  story files.....
00000060: 2e2e 636f 6d70 6c65 7465 642e 0a44 656c  ..completed..Del
00000070: 6574 696e 6720 6578 7069 7265 6420 7365  eting expired se
00000080: 7373 696f 6e73 2e2e 2e33 3220 636f 6d70  ssions...32 comp
00000090: 6c65 7465 642e 0a                        leted..


---trying this adjusted to 0x1a01ae---

$ sudo dd if=/dev/disk3 bs=4096 count=1 skip=$((0x1a01ae)) | xxd
Password:
1+0 records in
1+0 records out
4096 bytes transferred in 0.000371 secs (11041047 bytes/sec)
00000000: 0000 0000 0000 0000 0000 0000 0000 0000  ................

and a bunch more zero lines

I will also kick some $ your way for doing this for the community. I will be giving the drives back to the friend I am helping next week, but I will keep a dmg copy with their permission. We all understand this is totally a volunteer timeline and not at all guaranteed to work. I just want to pass on how great your work has been; they think it's awesome that we made it this far, and that a friend doing them a solid and the internet tech community have put so much work into this already.
 
Yes, I did indeed mean $((0x1a01ae)), good spot!

Thanks for the kind words :) If you are going to send some cash my way, I now have a GitHub Sponsors page set up (as @casperes1996 suggested — thanks for that, I didn't realise GitHub had a sponsorship programme!), so consider doing so via that, as GitHub will match your contribution to me at no extra cost to you: https://github.com/sponsors/jivanpal

I may have some spare time over the weekend or during next week to Skype, if that'd be convenient, so I could guide you through the process. You already follow me on Twitter (@jivan_pal), so feel free to DM me there and we can sort out how best to communicate.
 
Thanks for the kind words :) […] GitHub will match your contribution to me at no extra cost to you: https://github.com/sponsors/

I didn't know GitHub did donation matching. That's crazy cool!

Your work on these tools has been absolutely extraordinary, man. It's amazing we have tools like this, open source no less, to get better insight into the workings of APFS. And you're really very helpful towards @spaceguns, which I really admire and appreciate, even though it's not me :p.

Totally unrelated: since the tools are written in C, what IDE/editor do you use for writing C? I like Xcode, but I don't like Xcode for C/C++. It's nice for developing apps with a UI and such, but for CLI tools it's slow and cumbersome without enough positives to outweigh that. For Java I like IntelliJ a lot. I wanna try out more C/C++ stuff, but it's really not a nice experience using Nano for it, like I have in my past experimentation :p
 
@casperes1996 I've been using Visual Studio Code for basically everything I do for about 12–18 months now. The extension library is what really makes it. I've been doing WordPress theme/plugin development and other web dev stuff, including some SQL (MySQL/MariaDB flavour), and also the APFS tools with it. I don't actually use the Live Server functionality of VSCode for web dev projects, instead opting for Docker containers so that e.g. I can mess with a WordPress database or web server locally when necessary.

Before that, since around the start of 2015, Adobe Brackets was my main text editor; I used Eclipse for my uni Java projects, and was exposed to Atom for a short while at the start of 2018 because I was taking an Agda class (Emacs and Atom are the only editors with Agda support, and I wouldn't dare touch Emacs). I tried switching from Brackets to Atom, but didn't really like it, then came across VSCode shortly after.

Stepping back to 2007, when I started learning MS-DOS and HTML and only had a Windows Me machine, I used MS Notepad to write HTML for a while, then stumbled upon MS FrontPage, which came with the computer's MS Office 2000 package, and I used that from then on. I also recall installing Apache and PHP servers on it at one point, with no understanding of what they were, and got rid of them shortly after. I also took a stab at C++, following the infamous C++ in 21 Days course for like the first 4 days of content, using a Borland C++ compiler/linker since that's what the course mentions. I think I downloaded a programming-oriented GUI text editor for writing C++... definitely not an IDE and I definitely wasn't using Notepad, but I may have just been using FrontPage despite its lack of C++ syntax highlighting.

In the days before I had a MacBook (so, 2010–2013), it was Notepad++ or Gedit depending on whether I was using Windows or Linux, but I didn't do much programming back then, more just editing config files. I did learn C and used Notepad++ for that, but I couldn't get my head around pointers back then. At the same time, whilst taking an IT class in secondary school, we were introduced to Adobe Dreamweaver, which was pretty nice compared to FrontPage, and I had a laptop by then which came with Vista but then was multi-booting Windows 7 and a few Linux distros, so I used Dreamweaver for a while at home for those assignments, too.

I only tend to use proper IDEs for the types of projects they're intended for, e.g. IntelliJ IDEA for large Java projects (though I haven't actually used Java for anything since 2016, other than helping others with small assignments, and one average-size assignment I had last year, for which I used VSCode!), Android Studio for Android apps, and though I've never actually used Xcode since I've not developed any iOS apps, I likely will in the future for that reason alone. I've been exposed to Xcode and Qt Creator before to compile other people's projects, but not to make anything of my own.
 