I also submitted a bug report with this content: "Using the Photos app in iOS 17, record a Live Photo. Try to apply it as a wallpaper and it works. Now edit the same Live Photo, change to another frame, press Make Key Photo, then save [tap Done]. Try to apply this edited Live Photo as a wallpaper and you get the annoying bug message: "Motion Not Available". This bug can also be reproduced in Swift with the Photos framework: open a PHLivePhotoEditingContext(livePhotoEditingInput:), apply a filter in context.frameProcessor, and save with context.saveLivePhoto(to: output). The same "Motion Not Available" message will appear. I think this is a major bug." The culprit seems to be inside the Photos framework itself. On my iPad, for example, no recorded Live Photo can be applied as a wallpaper.
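For reference, a minimal sketch of that reproduction - asset is assumed to be a PHAsset for a Live Photo fetched elsewhere, and the filter and adjustment-data identifiers are placeholders:

Swift:
import Photos
import CoreImage

// Sketch: edit a Live Photo's frames and save it back to the library.
// After this, applying the edited Live Photo as a wallpaper reportedly
// shows "Motion Not Available".
func reproduceMotionNotAvailable(asset: PHAsset) {
    asset.requestContentEditingInput(with: nil) { input, _ in
        guard let input = input,
              let context = PHLivePhotoEditingContext(livePhotoEditingInput: input) else { return }

        // Any per-frame filter will do; CISepiaTone is a placeholder.
        context.frameProcessor = { frame, _ in
            let filter = CIFilter(name: "CISepiaTone")!
            filter.setValue(frame.image, forKey: kCIInputImageKey)
            return filter.outputImage
        }

        let output = PHContentEditingOutput(contentEditingInput: input)
        // The adjustment-data identifiers are arbitrary placeholders.
        output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.livephoto.edit",
                                                 formatVersion: "1.0",
                                                 data: Data("sepia".utf8))

        context.saveLivePhoto(to: output, options: nil) { success, error in
            guard success else { print("Render failed: \(String(describing: error))"); return }
            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest(for: asset).contentEditingOutput = output
            }, completionHandler: { _, error in
                if let error = error { print("Save failed: \(error)") }
            })
        }
    }
}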
Yes, it seems super buggy. With too little motion there isn't any animation at all; with too much motion it seems to smear, and the allowed length is also too short. Even plain Live Photos don't work properly.
 
I've also been trying to figure out how to run live wallpapers on iOS 17, and recently came across this repository with a working example of wallpaper generation for iOS 17. I haven't gone into the implementation details yet, just checked out the example.

Excellent find! I recognized the attached photo as being from Wallpapers Now, the app that has working iOS 17 live wallpapers. Sure enough, its publisher is REAPPS, so you found the developer's GitHub.
 
Maybe the software used to generate their MOVs is able to insert some coded sections and other elements into the MOV to fake movement.
 
In short, how they did it:
1. Take one camera shot and save the image and video {or, for testing, use one working Live Photo}.
2. Open a local MOV {it must be HEVC-encoded; an old-format MOV, like MP4, will not work at all}. You can use your old videos, but generate a new one with an HEVC encoder.
3. Mix the metadata tracks from the video saved in step 1 {I think they use the same metadata in all videos, but it is nice to have your own} with the video content from step 2, and add a unique identifier to this new video. Keep in mind the mixed-in track must have the same duration as the video track saved in step 1. As a bonus, you will figure out how many frames per second your video from step 2 must have to animate at the same speed as the one from step 1.
4. Write an HEIC file with multiple frames {it seems 1 frame will not work at all} and add the same unique identifier.
5. Save both the video from step 3 and the image from step 4 to Photos, pairing them.
I will not go into the details of how those steps must be done, but programmers will figure it out.
For all of this to work on the same device even if you download them multiple times, maybe write the unique identifier again to both files and generate new files {different name, creation date; the metadata is simple to write}.
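As a sketch of the "unique identifier" part of steps 3 and 4 - assuming it is the QuickTime content identifier that the two halves of a Live Photo are known to share; the function name and URLs are placeholders:

Swift:
import AVFoundation

// Sketch: stamp a pairing identifier into a MOV's top-level metadata
// via a passthrough export (tracks are copied, metadata is replaced).
func addPairingIdentifier(to src: URL, output dst: URL, id: String = UUID().uuidString) {
    let item = AVMutableMetadataItem()
    item.key = "com.apple.quicktime.content.identifier" as NSString
    item.keySpace = .quickTimeMetadata
    item.value = id as NSString
    item.dataType = "com.apple.metadata.datatype.UTF-8"

    let asset = AVURLAsset(url: src)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = dst
    export.outputFileType = .mov
    export.metadata = [item]
    export.exportAsynchronously {
        print("Export finished with status \(export.status.rawValue)")
    }
}
// For step 4, the image is known to carry the same identifier in its
// Apple MakerNote dictionary under key "17" (kCGImagePropertyMakerAppleDictionary).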
Enjoy :)
 
Thanks for the high-level overview. What do you mean by "mix metadata tracks"? Do you mean overlapping the two videos?
 
Do you have any code, workflow, encoding preset, or just anything to support this? I've been trying to figure this out since iOS 17 db1 was released; I feel I have tried everything and can't get it to work.

Used the code mentioned above, and my observations are as follows:
  • The code works for saving Live Photos that do work as live wallpapers.
  • The provided MOV is encoded with HEVC, the image is a standard JPEG with an .HEIC file extension, and it works.
  • I have also tried downloading more live wallpapers from the mentioned app, unpacking them, and running them through that code. They are all HEVC + JPEG, and when combined as they're supposed to be, they do work.
  • However, when I tried to pair any other image with identical metadata with this video, it did not work; I get the generic error PHPhotosErrorDomain error -1, which doesn't say anything at all.
  • I also tried recreating the original photo and combining it with the original video - I exported the exact frame that was used for the photo from the video, in the original resolution, with the same chroma subsampling, metadata, and everything, yet it still didn't work.
  • I even tried combining the image from one working wallpaper with the video from another, and that didn't work either.
  • I always made sure that the metadata and encoding were identical, and at this point I am completely frustrated and have run out of ideas as to why this is not working.
I just can't find any combination of photo + video that works other than the combinations provided by other apps.
 
@astorphobis Does it work when you 1) take a pair of working HEIC & MOV, 2) re-encode the MOV, 3) copy over the metadata from the original video to the new video?
 
I forgot to mention that I've also tried to create my own custom live wallpaper, with the same result.

I have, however, little to no experience with video formats, encoding, etc., so I tried my best yet still failed. I haven't tried re-encoding the MOV; I have, however, tried copying the metadata over from the video in Pair 1 to the video in Pair 2 and then combining the new video with the photo from Pair 1, but that didn't work either.

In the meantime, I managed to get at least something working - I combined the code that was mentioned before with the QuickTimeHelper to copy over the metadata, and this is now able to create a custom Live Photo; it does, however, say that motion is not supported. A small step towards the goal.

PS: The code is a mess - I was experimenting (hence the name) and just trying to figure it out, and didn't bother tidying it up or anything.

PPS: "Lights" is one of the working, tested pairs of photo + video, "Wave" is some random stock footage encoded in HEVC, and the code combines the "Lights.HEIC" image with the "Wave.MOV" video. It should also copy over the metadata from "Lights.MOV", but I haven't had time to confirm that yet; will look into it over the weekend.

Swift:
import Photos

func doMagicStuff() {
    Log(string: "Loading ...")
    // Bundled test assets: "Lights" is a known-working Live Photo pair,
    // "Wave" is the custom video to graft the identifier onto.
    guard let imageFileURL = Bundle.main.url(forResource: "Lights", withExtension: "HEIC"),
          let videoFileURL = Bundle.main.url(forResource: "Lights", withExtension: "MOV"),
          let test1FileURL = Bundle.main.url(forResource: "Wave", withExtension: "MOV")
    else {
        Log(string: "Error: File is nil")
        return
    }

    Log(string: "Loaded image \(imageFileURL)")
    Log(string: "Loaded video \(videoFileURL)")

    // QuickTimeHelper and Log are helpers from the linked repository.
    let twilightHelper = QuickTimeHelper(path: videoFileURL.path)
    let test1Helper = QuickTimeHelper(path: test1FileURL.path)

    // Read the pairing identifier from the working video...
    guard let assetIdentifier = twilightHelper.readAssetIdentifier() else {
        Log(string: "Error: Unable to read asset identifier")
        return
    }

    let documentDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let destURL = documentDirectory.appendingPathComponent("Test1_Updated.MOV")

    // ...and write it into the custom video.
    test1Helper.write(destURL.path, assetIdentifier: assetIdentifier)

    Log(string: "Performing changes...")

    // Save the image and the rewritten video as a paired Live Photo.
    PHPhotoLibrary.shared().performChanges({
        let creationRequest = PHAssetCreationRequest.forAsset()
        creationRequest.addResource(with: .photo, fileURL: imageFileURL, options: nil)
        creationRequest.addResource(with: .pairedVideo, fileURL: destURL, options: nil)
    }, completionHandler: { success, error in
        if success {
            Log(string: "Live Photo saved successfully!")
        } else if let error = error {
            Log(string: "Error saving Live Photo to the library: \(error.localizedDescription)")
        }
    })
}
 
You will need the mebx metadata tracks from one video recorded with the camera [at the resolution you need, let's say 1920x1080]; you mix those tracks with your video [which must have the same resolution, and your video must be HEVC, 60 fps, 1" duration] so the movement and whatever info is in the mebx tracks will match your frames. "Mix with video" means adding your video track to a composition and the metadata as its 2nd, 3rd, etc. tracks - which also answers another question above. From there I think it is easy to write the code; I did it and it works. I have other projects to do; it was fun anyway to discover new things.
 
@area7 What are you using to copy over the metadata? I've been using AVAssetWriter with AVMutableMetadataItem, but I've been struggling.
 
Because the format is boxed data, you need to use an AVComposition: one track for your video and others for the metadata tracks, which you add from the other MOV (captured with the camera, for example; inspect the AVAsset tracks of a captured Live Photo video file and you'll see it has metadata tracks added). Use addMutableTrack(withMediaType: .metadata, ...), and also normal metadata like the unique ID via AVMutableMetadataItem. Make sure you use the track duration of the initial source video {the one with the metadata tracks}.
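A minimal sketch of that composition, assuming donorURL is a working Live Photo video (the one carrying the mebx metadata tracks) and yourURL is your own HEVC clip of the same duration; the passthrough export at the end is my assumption, not something confirmed above:

Swift:
import AVFoundation

// Sketch: graft the metadata (mebx) tracks from a working Live Photo video
// onto your own video via an AVMutableComposition, then export passthrough.
func composePairedVideo(yourURL: URL, donorURL: URL, outputURL: URL) throws {
    let yourAsset = AVURLAsset(url: yourURL)
    let donorAsset = AVURLAsset(url: donorURL)

    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: yourAsset.duration)

    // Track 1: your video content.
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)
    try videoTrack?.insertTimeRange(range,
                                    of: yourAsset.tracks(withMediaType: .video)[0],
                                    at: .zero)

    // Tracks 2, 3, ...: the donor's metadata tracks, over the same duration.
    for donorMeta in donorAsset.tracks(withMediaType: .metadata) {
        let metaTrack = composition.addMutableTrack(withMediaType: .metadata,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid)
        try metaTrack?.insertTimeRange(range, of: donorMeta, at: .zero)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    // Top-level metadata (e.g. the content identifier) can be set here too.
    export.exportAsynchronously {
        print("Export status: \(export.status.rawValue)")
    }
}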
 
Using Live Photos on iOS 15 was so simple: take a Live Photo and use it on the Lock Screen. I hope Apple fixes this issue soon; I really miss this option.
 
If you want some fun, ask ChatGPT this:
Explain the format of com.apple.quicktime.live-photo-info, because it contains a sample time, sample buffer, and values like 3 0.0145119996741414 1844164067 128 86.7509384155273 14.2610721588135 0.396545708179474 -0.0815095826983452 1.92900002002716 4 4 0 -1 0 0 0 0 0 0 0 0 0 9.80908925027372e-45 0.264391481876373 3209850598 10779 50282

For sure it is a binary format :)
 
Still not working for me. It doesn't seem to make a difference whether I use AVAssetWriter or AVAssetExportSession.
 
How are you getting the images for the heic? AVAssetImageGenerator?
 
AVAssetImageGenerator for frame extraction, CGImageDestinationCreateWithURL for the iterations of extracted images {use CGImageDestinationAddImage}. You must have one video of 1 second, 60 fps, 1080x1920, HEVC-compressed {this format worked for me with hundreds of automatically tested videos}. I made my own generator and it works flawlessly :) To have a smooth animation on the Lock Screen, I used animation from longer videos {10", 20"} and edited the speed etc. to match the full animation in 1". If you are using HEVC in MP4 videos, those work too if you pass them through let videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: [AVVideoCodecKey: AVVideoCodecType.hevc, AVVideoWidthKey: 1080, AVVideoHeightKey: 1920]) {maybe when you add the unique UUID meta}, and after that you go compose those.
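A minimal sketch of the HEIC step, assuming the frames were already extracted with AVAssetImageGenerator and that the pairing identifier belongs in the Apple MakerNote dictionary under key "17", as discussed earlier in the thread; all names are placeholders:

Swift:
import Foundation
import CoreGraphics
import ImageIO

// Sketch: write extracted frames into one multi-frame HEIC, tagging the
// frame properties with the Live Photo pairing identifier.
func writeHEIC(frames: [CGImage], to url: URL, pairingID: String) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     "public.heic" as CFString,
                                                     frames.count, nil) else { return false }
    let properties: [CFString: Any] = [
        kCGImagePropertyMakerAppleDictionary: ["17": pairingID]  // Apple MakerNote key "17"
    ]
    for frame in frames {
        CGImageDestinationAddImage(dest, frame, properties as CFDictionary)
    }
    return CGImageDestinationFinalize(dest)
}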
 
AVAssetImageGenerator for frame extraction, CGImageDestinationCreateWithURL for the iterations of extracted images {use CGImageDestinationAddImage}
I am using exactly that to create the heic. How many images, at what timestamps, do you extract and add to the heic?
 
You can generate 60 images from a 1" movie using:
let frameRate = 60
imageGenerator.requestedTimeToleranceBefore = CMTime(value: 1, timescale: Int32(frameRate))
imageGenerator.requestedTimeToleranceAfter = CMTime(value: 1, timescale: Int32(frameRate))
let durationInSeconds = CMTimeGetSeconds(videoAsset.duration) // 1 second
let totalFrames = Int(durationInSeconds * Double(frameRate))
for frameIndex in 0..<totalFrames {
    let frameTime = CMTimeMake(value: Int64(frameIndex), timescale: Int32(frameRate))
    let cgImage = try imageGenerator.copyCGImage(at: frameTime, actualTime: nil)
    // collect cgImage for the HEIC
}

This will give you an array of 60 images for a 1" video with 60 frames. You can also skip this whole process; in the end, 1 image is still usable if it matches, for example, frame 0 and the still image time.
 
So adding just 1 image to the heic is enough?
 
Yes, but make sure it is the same image {or frame} you add at the still image time. Anyway, work on the tracks, because I think you are not adding the metadata tracks {from a working Live Photo video} with 1" length to your 1" video at all. I said previously you can use the metadata tracks of any Live Photo recorded with the camera, but there is supplementary work to do {like saving patterns for every kind of format - not worth the work, since it is private API in the end}. Use an existing working one. You need tracks(withMediaType: .metadata)[0] and .metadata)[1] from a working live wallpaper. With those in mind, you will soon be there:

let thirdCompositionTrack = mixComposition.addMutableTrack(withMediaType: .metadata, preferredTrackID: kCMPersistentTrackID_Invalid)
do {
    try thirdCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, end: videoAssets1.tracks(withMediaType: .video)[0].asset!.duration),
                                               of: videoAssets2.tracks(withMediaType: .metadata)[0],
                                               at: .zero)
} catch {
    print("Error = \(error.localizedDescription)")
}
// videoAssets1 is your personal video, videoAssets2 is a working one

Hope it helps; in the end you will succeed. Ignore my bad coding there - I have really important projects to do and haven't even checked "third" or whether the object names are proper English at all :) I've been working in the AI sphere for 6 months now :)
 
I really appreciate your help!

It copies all the metadata now and I got it to work!

Yes, but make sure it is the same image {or frame} you add at still image time
Still Image Time's sample time is set to 0, but I had to use the image from 0.5 s to get the Live Photo to work on the Lock Screen. I am using the video with the trees and waterfalls from the Wallpapers Now app (one copy is processed so the Live Photo-related metadata is stripped, and the original is kept for copying the metadata). What am I missing? I will test with my own working Live Photo.

Sample Time : 0 s
Sample Duration : 0.00 s
Still Image Time : -1
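For reference, a minimal sketch of how a still-image-time sample can be written into the paired video with AVAssetWriterInputMetadataAdaptor - the surrounding AVAssetWriter setup is omitted, and the mdta/com.apple.quicktime.still-image-time identifier and int8 data type are my assumptions based on what such dumps usually show:

Swift:
import AVFoundation

// Sketch: a metadata input carrying com.apple.quicktime.still-image-time,
// which marks the moment of the video the still image corresponds to.
func makeStillImageTimeInput() -> (AVAssetWriterInput, AVAssetWriterInputMetadataAdaptor)? {
    let spec: [CFString: Any] = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier:
            "mdta/com.apple.quicktime.still-image-time",
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType:
            "com.apple.metadata.datatype.int8"
    ]
    var desc: CMFormatDescription?
    CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
        allocator: kCFAllocatorDefault,
        metadataType: kCMMetadataFormatType_Boxed,
        metadataSpecifications: [spec] as CFArray,
        formatDescriptionOut: &desc)
    guard let desc = desc else { return nil }
    let input = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: desc)
    return (input, AVAssetWriterInputMetadataAdaptor(assetWriterInput: input))
}

// Usage (writer configuration omitted): append one sample at the chosen moment,
// e.g. the 0.5 s mark that worked above.
func appendStillImageTime(_ adaptor: AVAssetWriterInputMetadataAdaptor, at seconds: Double) {
    let item = AVMutableMetadataItem()
    item.key = "com.apple.quicktime.still-image-time" as NSString
    item.keySpace = .quickTimeMetadata
    item.value = 0 as NSNumber
    item.dataType = "com.apple.metadata.datatype.int8"
    let range = CMTimeRange(start: CMTime(seconds: seconds, preferredTimescale: 600),
                            duration: CMTime(value: 1, timescale: 600))
    adaptor.append(AVTimedMetadataGroup(items: [item], timeRange: range))
}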

I made my own generator and it works flawlessly :) To have a smooth animation on the Lock Screen, I used animation from longer videos {10", 20"} and edited the speed etc. to match the full animation in 1".
Can you explain a little more about this?
 
Would you mind sharing your code here? I guess I'm doing something wrong and ChatGPT is just refusing to help ¯\_(ツ)_/¯
 
ChatGPT wrote this, but I agree that it doesn't work well with Swift and the Apple libraries.

Here is my code so far. It's not detecting the generated Live Photo as supported. I'm out of my depth here working with Swift, so I'm looking for feedback. @area7 @Bostonbanana
 