Kind of misleading if true, especially since ProRAW is not even available on the entry Pro model (128 GB), if I am not mistaken
ProRAW photos are available at all storage sizes on the supported models (iPhone 12/13/14 Pro).

Try printing a 12MP photo on a large canvas; you need at least 32MP for a good print! Advertising the iPhone 14 Pro and its 48MP while, in reality, the megapixel count for everyday photos remains 12 is shameless.
Possibly because people don't print everyday photos in sizes large enough for it to be a problem. You can print a 12MP photo at a bearable 100ppi on a ~40"x30" canvas (100x76cm), or on a 13.4"x10" canvas (34x25cm) at the typical ppi of most commercial color printers (300ppi).

Professional and amateur photographers are likely to want to print at about 600ppi, but those people already know how to handle RAW photos so they can get the full 48MP if they want. 48MP photo files are HUGE for everyday photos.
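To put numbers on that, here is a quick sketch of the print-size arithmetic (a 4:3 image is assumed; real-world quality also depends on viewing distance):

```swift
import Foundation

// Back-of-envelope print math for a 4:3 image at a given megapixel count.
func printSize(megapixels: Double, ppi: Double) -> (width: Double, height: Double) {
    let pixels = megapixels * 1_000_000
    let heightPx = (pixels * 3.0 / 4.0).squareRoot()   // short side, in pixels
    return (heightPx * 4.0 / 3.0 / ppi, heightPx / ppi)
}

for ppi in [100.0, 300.0, 600.0] {
    let s = printSize(megapixels: 12, ppi: ppi)
    print(String(format: "12MP @ %.0f ppi -> %.1f\" x %.1f\"", ppi, s.width, s.height))
}
// 12MP @ 100 ppi -> 40.0" x 30.0"
// 12MP @ 300 ppi -> 13.3" x 10.0"
// 12MP @ 600 ppi -> 6.7" x 5.0"
```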
 
Digital photography generally falls into one of two categories: online workflows and professional workflows. The latter basically means the ability to make large prints, which is the example Apple used in touting the 48MP RAW workflow.

Mixing the two workflows usually doesn’t yield the best results.

We don’t need 48MP JPEGs for several excellent reasons:
- the JPEG compression and noise reduction will destroy fine detail, so the 48MP file size is really moot.
- when you post online, most sites will resize your image down to 50KB or so anyway, so there’s no point in uploading a large file
- a rule of thumb is that at normal viewing distances a JPEG image will make a quite decent print with a short side (in inches) roughly equal to its megapixel count. IOW an 8MP image will make a good 8x10, a 12MP will make a good 11x14 or 13x19, and so on. The only reason you’d need a 48MP JPEG is to make a 48x60 print, and people who do that are going to use RAW and a professional post-processing workflow (see the sketch below).
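Checking what pixel density that rule of thumb implies (again assuming a 4:3 image; note how the implied ppi relaxes as prints, and viewing distances, grow):

```swift
// Rule of thumb: a decent print's short side in inches roughly equals the megapixel count.
func impliedPPI(megapixels: Double) -> Double {
    let shortSidePx = (megapixels * 1_000_000 * 3.0 / 4.0).squareRoot()
    return shortSidePx / megapixels   // short side printed at `megapixels` inches
}

print(impliedPPI(megapixels: 8))    // ~306 ppi on an 8" short side
print(impliedPPI(megapixels: 12))   // 250 ppi on a 12" short side
print(impliedPPI(megapixels: 48))   // 125 ppi on a 48" short side
```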

A side note is that if you hang many matted and framed 48x60 prints, you’re going to need a bigger house. They’re huge! And you really need to be using a tripod if your print output is this large.

Most people don’t have the skills to process a RAW image: out of the camera a RAW image will look muddy, noisy, and low in contrast because it hasn’t been tone mapped, noise reduced, etc. If you don’t know what these terms mean, you do not need a RAW file. Most people can’t do this, and almost no one would do it for every image even if they could.

Another disadvantage of 48MP files is that they will use up your storage space 4x as fast as 12MP images. Then we can complain about Apple’s prices for storage. So if your phone has 128GB of 12MP images, you will now need 512GB for the same number of 48MP images.

A 48MP RAW image can certainly produce superior results, but the workflow will be above and beyond what most casual users can or want to do. Apple’s pixel binning is a perfect solution for casual use, with the option of a 48MP RAW for critical printing. Just make sure that you’re not just using the file sizes to brag about your phone’s spec sheet.
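For scale, a back-of-envelope storage sketch (the file sizes are assumptions: ~75MB matches the ProRAW DNG mentioned later in this thread, and ~3MB is a typical 12MP HEIC):

```swift
let heicMB = 3.0      // assumed typical 12MP HEIC size
let proRawMB = 75.0   // 48MP ProRAW DNG, per the ~75MB figure in this thread

func shotsPerGB(_ fileSizeMB: Double) -> Int { Int(1000.0 / fileSizeMB) }

print(shotsPerGB(heicMB))    // ~333 HEIC shots per GB
print(shotsPerGB(proRawMB))  // ~13 ProRAW shots per GB
```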
 
Am I right in saying that you will be able to shoot 48MP ProRAW photos on the 128GB iPhone 14 Pro?

It’s just the video that’s restricted?
Yes. ProRAW is available on all iPhone 14 Pro and 14 Pro Max models. ProRes (4K30) is also available on all models except the 128GB variants of the 14 Pro and 14 Pro Max, where ProRes is restricted to 1080p30.
 
- the JPEG compression and noise reduction will destroy fine detail, so the 48MP file size is really moot.
This is an excellent point. The extra resolution may be useful for the computational image processing that is done before the final image is created (Deep Fusion, Smart HDR...), but after the processing + noise reduction, details fine enough to require the 48MP resolution would be destroyed by the extremely heavy noise reduction algorithm Apple applies to its JPEG/HEIC files.
 
It’s very easy to convert RAW to HEIC. There are many times when I accidentally activate RAW. What I do is use Shortcuts, and within seconds I get the compressed HEIC version.
I would be very interested how I can do this if you wouldn’t mind explaining or linking to a guide.
 
Please see the attached screenshot. You can select multiple photos.
[attached screenshot of the RAW-to-HEIC Shortcut]
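For anyone who wants the same conversion in code, here is a minimal Swift/Core Image sketch (iOS 15+/macOS 12+; the file paths are hypothetical, and the Shortcut in the screenshot uses the built-in Convert Image action instead):

```swift
import Foundation
import CoreImage
import ImageIO

let input = URL(fileURLWithPath: "IMG_0001.DNG")    // hypothetical paths
let output = URL(fileURLWithPath: "IMG_0001.HEIC")

// CIRAWFilter decodes the DNG and applies a default development pass.
guard let rawFilter = CIRAWFilter(imageURL: input),
      let developed = rawFilter.outputImage else {
    fatalError("Could not decode the DNG")
}

// Write the developed image back out as a lossy HEIC.
let context = CIContext()
let quality = CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String)
try context.writeHEIFRepresentation(of: developed,
                                    to: output,
                                    format: .RGBA8,
                                    colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
                                    options: [quality: 0.8])
```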
 
Just used the above to compress a 74.6MB RAW iPhone 14 Pro file to HEIF.

Only 5.2MB! A simple workflow to get the full 48MP resolution in those important scenarios.
 

It’s only 48MP when using ProRAW in the stock app.

As the Halide devs have pointed out in their latest blog post, nobody knows whether that decision is “a product one or a technical one.” In other words, devs might be able to enable 48MP HEIC photos with all the Apple processing in their own third-party apps.

Even if it’s a technical limitation, I’m betting someone will make a workaround that takes a ProRAW photo and immediately converts it to the much smaller HEIC format before saving to the library (similar to the Shortcut shown above).

Within the next 2 weeks we’ll know for sure.
 
Kind of misleading if true, especially since ProRAW is not even available on the entry Pro model (128 GB), if I am not mistaken
You’re confusing ProRAW with ProRes. There is a limitation in using ProRes on the 128GB model: instead of 4K/30fps, it is 1080p/30fps. There is no limitation on ProRAW, other than the fact that a ProRAW 48MP file will be 80+MB in size, so shooting in this format will eat up storage quicker.
 
The quad-pixel sensor is 48MP, with four pixels in a group (48/4 = 12) for normal shots. ProRAW gets you 48MP, as others here have mentioned.
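In code terms, that binning just averages each 2x2 group into one output pixel; a toy sketch (on the real sensor this happens per color channel within each quad group):

```swift
// 2x2 binning: four sensor pixels average into one output pixel,
// the same 4:1 ratio that takes a 48MP readout to a 12MP image.
func bin2x2(_ px: [[Double]]) -> [[Double]] {
    let h = px.count / 2, w = px[0].count / 2
    var out = [[Double]](repeating: [Double](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            out[y][x] = (px[2*y][2*x] + px[2*y][2*x+1]
                       + px[2*y+1][2*x] + px[2*y+1][2*x+1]) / 4
        }
    }
    return out
}

let crop: [[Double]] = [[1, 1, 5, 5],
                        [1, 1, 5, 5],
                        [2, 2, 8, 8],
                        [2, 2, 8, 8]]
print(bin2x2(crop))  // [[1.0, 5.0], [2.0, 8.0]] -- 16 pixels binned down to 4
```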
 
Austin Mann’s iPhone 14 Pro camera review. Amazing! In his review, he has a Dropbox link where you can download a 48MP ProRAW DNG file to edit and play with. Amazing resolution!

 

Yes. I downloaded it and tested converting that 74MB file into a HEIF file. It became 5.2MB!

There is a slight lag editing that file on the 13 Pro Max. Hopefully the 14 Pro Max handles it better.
 
Yeah, hopefully so with the new chip. Austin did say in his review that shooting ProRAW at 48 megapixels is a little slow. He said it’s not terribly slow, but slow enough to miss action shots, perhaps.
 
If it’s true I’m happy, since I can save money 😉
The advantage here is that the new sensor is MUCH bigger. And it’s a big bonus that the photos are binned down to 12MP for JPEGs. This is the best of both worlds: better-quality images with no significant increase in storage space.

EDIT:

@JPack beat me to it.
 
It’s only 48MP when using ProRAW in the stock app.

Basically, it's a PRO feature, which means normal consumers who are not using a decent (often paid) third-party camera app are not going to get auto-tuned, ready-for-posting 48MP photos.

Sure, the photos will benefit from the 65% larger sensor (but a slower aperture, f/1.78 vs f/1.5, on the 14 Pro compared to the 13 Pro). But looking back at the main sensors on the 13 Pro and the 12 Pro Max (the same sensor as the regular 13), how did those compare? (1/1.9" vs 1/2.55", a 34% larger sensor.)
 
I'm a bit curious how this will work. Normally these sensors with sub-pixel groups work as they do here, with the 12 MP images being created. Instead of having all those additional pixels in the typical color matrix (RGGB Bayer), each of the RGGB sub-groups is separated into four pixels. This has some solid perks, but it's not really meant to create 48MP images. So if Apple's putting them into ProRAW, it implies they're doing some kind of sub-pixel sampling and fancy debayering behind the scenes with all the data the camera actually captures (as opposed to a single exposure), which does afford some practical upsampling beyond the 12 MP use case. An actual RAW sensor readout (just a single frame) would include those sub-groups as-is, and the only really sensible thing to do with a file like that would be to debayer it to a 12 MP image.

It does have practical, useful applications. But yeah, it's a bit odd to call it a 48 MP sensor. On one hand, that could actually be meaningful in some ways with the behind-the-scenes magic being done here, but on the other hand, it's not so helpful as an explanation for individual exposures. Olympus, for example, uses a similar technology in their new OM-1, but they represent it as a 20 MP sensor, matching the final output produced when those sub-pixel groups are debayered. On the other hand, their high-resolution modes, which use multiple exposures and sub-pixel sampling, are represented in terms of the MP they produce. And that's reasonable to do in their case as well.
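To make the layout difference concrete, here is a toy sketch (purely illustrative) of a standard Bayer mosaic next to a quad Bayer one; in the quad layout each color covers a 2x2 block, which is why a single frame naturally demosaics to quarter resolution:

```swift
enum Filter: String { case r = "R", g = "G", b = "B" }

let tile: [[Filter]] = [[.r, .g],
                        [.g, .b]]   // the RGGB tile

func standardBayer(x: Int, y: Int) -> Filter { tile[y % 2][x % 2] }         // alternates every pixel
func quadBayer(x: Int, y: Int) -> Filter { tile[(y / 2) % 2][(x / 2) % 2] } // alternates every 2x2 block

for y in 0..<4 {
    let std  = (0..<4).map { standardBayer(x: $0, y: y).rawValue }.joined()
    let quad = (0..<4).map { quadBayer(x: $0, y: y).rawValue }.joined()
    print("\(std)   \(quad)")
}
// RGRG   RRGG
// GBGB   RRGG
// RGRG   GGBB
// GBGB   GGBB
```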
 
I thought this too, and it looked like that was the case. But as you can see from many examples today, the 48-megapixel RAW photos have vastly more detail and are more ideal for cropping than the 12MP shots. So I don't know what Apple is doing, but they're getting a heck of a lot of detail out of it. Not quite an actual 48MP image like you would get out of a full-frame camera, but a lot more than 12 megapixels.
 
I haven't played with any of this yet, so I'm just spit-balling off the top of my head.

Assuming no sub-pixel sampling or whatnot is taking place in the *real* RAW files (not necessarily a given, but perhaps), it may be reasonable that detail could improve simply because that sub-pixel array (there's got to be a smarter term for that) still allows for some finer collection of detail. It would have some negative impacts on low-light performance, but a fair amount of that could be mitigated under better lighting conditions and through technologies like the image stabilization (or manual exposure).

If you're getting a 48 MP *color* photo from an actual single-frame RAW file... it would be interesting to see what is taking place behind the scenes. Because that would have to be debayered from the 48MP readout, but then *upsampled* or otherwise interpolated to retain the 48 MP resolution. Proper debayering of a real RAW exposure from the 48 MP sensor would debayer to 12 MP. With only one exposure, it would also be a relatively small improvement compared to the overhead of the actual resolution baked into the resulting file. The only way for it to not be underutilized would be for multiple exposures to exist, captured in a manner that would allow for sub-pixel sampling. I don't know what is going on behind the scenes, here. Is the camera actually generating a real RAW single exposure? Or is some multi-exposure magic being baked into those as well? We know the magic is being baked into ProRAW, but those are "RAW" files with frosting and sprinkles.
 
It’s Apple magic :)

ProRAW definitely takes multiple exposures into account so they could be doing something like you suggest as well.

Debayering is fancy math to get color from inherently b/w sensors anyway. So they could be doing a reverse pass on each pixel. I read an explanation somewhere else that goes into far more detail than I could ever explain, but there is a way to do it mathematically.

As long as you have physical pixels, and a known color filter pattern above them for color, you can mathematically derive the color for each pixel to get the full pixel count.

Because the pixels are so small, though, this doesn’t work well in low light, so that’s where the quad Bayer comes in.

Quad Bayer arrays also allow for lower noise and more DR. This is why the Sony A7S III, which is a full-frame 12MP camera mostly used for video, is also a quad Bayer chip. Sony just doesn’t expose the full resolution for stills there because (a) they have dedicated chips and cameras for higher resolution, and (b) presumably to gain more DR out of that sensor while still maintaining high-speed output for video, allowing for high-speed 4K/120 and low rolling shutter.

Apple did a nice compromise here to get daytime shots with a ton of real detail. It’s impressive.
 
Well... there definitely needs to be some "Apple magic" baked in for this scenario. It's a bit curious to me, because this is the sort of thing that goes against the integrity of genuine RAW files, but so be it in this case. For mobile phone cameras I think there's a great argument for it. And if it's getting usable "RAW" files in a case like this, outside the ProRAW use case, that could also be a good thing. And one I'll appreciate.

Anything debayering from a single exposure here is going to have to make some considerable assumptions regarding what neighboring pixels represent. 12 MP? Easy. 48 MP? A whole heck of a lot is being extrapolated relative to the surface area captured for each color. You can certainly extrapolate data and upsample, but not resolve reasonably to 48 MP. I just wonder, though, because Apple is already capturing a heap of data with each exposure. So if they're not interested in outputting a genuine "RAW" single exposure, they could be creating a RAW file which includes a lot of data beyond what would be representative of a single exposure. I guess I'll be able to check that for myself, to satisfy curiosity.

So far the only proper camera I've had any experience with that employs this sort of sensor technology is the Olympus OM-1. It didn't help much with noise performance. But it *did* work wonders in preserving color fidelity and some detail within noise, which functionally made the sensor usable a full stop or two beyond what was acceptable in the previous generation. But this makes sense where the sensor is being debayered as it was designed to be debayered. Pulling that up into the full raw resolution is some off-the-books dark arts. Heck, I just realized I have another camera which uses this sort of sensor: an astronomy camera. It is monochrome and can employ the full resolution of the sensor, but that's because it doesn't have the Bayer matrix; each of those quad Bayer sub-pixel groups is reading simple luminance data. The color version of the same camera follows the same standard of combining the RGGB groups of sub-pixels (i.e. the equivalent of the iPhone outputting a 12 MP RAW photo from the 48 MP sensor).

I've finally got my iPhone Pro up and running. It will be fun to experiment for myself. I'll be utterly delighted if this stuff amounts to a noticeable improvement in images.
 