That sounds interesting. I'm not being sarcastic; I used film for many years and have studied early colour schemes for film quite extensively, but never really thought about digital color sensors. So I looked it up and couldn't find anything. Do you have a link?
See this is where you make yourself sound silly. What does the fact that I'm using a DSLR have to do with the fact that iOS software simulations for DSLR optics look like crap?
Um...what do desaturation filters or software lens correction for SLR lenses have to do with iOS software imitating what DSLRs do optically? Did you even read the question you're trying to answer? It was to name something iOS can simulate in software that an SLR does in optics that doesn't look like cheap, gimmicky crap on the iPhone.
Ok, let me address just this part of your post, first. You say you are asking an honest question. I will give you an honest answer. Because I have no clue where you stand on this, understand that I am not trying to be snarky, but I am starting at "first the earth cooled" so to speak.
The 'D' in DSLR stands for 'Digital': it is powered by a computer chip that makes a discrete digital signal out of our analog world. So let's pretend we're going to make a VERY basic sensor. Instead of megapixels, ours has just 25 pixels in a 5*5 array, and they are only black OR white (1 bit color).
OOOOO OOOOO
OOOOO OXXXO
OOOOO OXXXO
OOOOO OXXXO
OOOOO OOOOO
So the image on the left is blank, and the one on the right is a square, right? Depending on your font spacing, anyway. That said, our 25-pixel sensor does not have the resolution to display a circle. We could pull some tricks, but it would really end up looking like 1970s video games (for the same reason, really). We can add more pixels to make it resemble a circle more and more, but it would never really be a circle. For example, the screen you are looking at is composed of pixels. If you zoom in enough, you will see the jaggies in a pure black OR white image.
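Here is a quick toy version of that in Python, just to make it concrete (my own illustration, not how any real sensor works): it draws a circle onto a slightly bigger 9*9 one-bit grid, the same way our pretend sensor would see it.

# Toy example: rasterize a circle onto a tiny 1-bit "sensor" to show jaggies.
# The grid size (9*9) and radius are arbitrary choices for illustration.
SIZE = 9                  # 9*9 "pixels" instead of megapixels
CX = CY = SIZE / 2        # centre of the circle
R = 3.2                   # radius, in pixel units

for y in range(SIZE):
    row = ""
    for x in range(SIZE):
        # Sample the centre of each pixel: inside the circle -> X, outside -> O
        dx, dy = (x + 0.5) - CX, (y + 0.5) - CY
        row += "X" if dx * dx + dy * dy <= R * R else "O"
    print(row)

What comes out is a stair-stepped blob, not a smooth circle. Those steps are the jaggies; more pixels make them smaller, but they never go away entirely.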
This is one example of a digital artifact. Your 36MP Nikon has 36 million pixels compared to your 1080p display's 2.1 million, but there is still a very specific number of them. The jaggies are much harder to see at that scale, but they are still there.
Now, let's talk about color. Instead of restricting ourselves to black OR white, let's go from black to white on a scale of 0-7, where 0 is black and 7 is white. Now, wherever there are jaggies, we can put a pixel next to them that is a 3 or a 4. This is anti-aliasing, and it fools the eye into thinking the edge is more like a real curve and less like a stair-stepped approximation. It is still a digital trick, though.
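Same toy grid as before, but using the 0-7 scale (again, just my own illustration, not a real anti-aliasing algorithm): each pixel gets a grey level based on roughly how much of it the circle covers, estimated with a few sub-samples per pixel.

# Toy anti-aliasing: grey level 0-7 based on how much of each pixel the circle covers.
SIZE, R = 9, 3.2
CX = CY = SIZE / 2

for y in range(SIZE):
    row = []
    for x in range(SIZE):
        hits = 0
        for sy in range(4):                     # 4*4 = 16 sub-samples per pixel
            for sx in range(4):
                dx = (x + (sx + 0.5) / 4) - CX
                dy = (y + (sy + 0.5) / 4) - CY
                hits += dx * dx + dy * dy <= R * R
        row.append(str(round(7 * hits / 16)))   # coverage mapped to 0-7
    print(" ".join(row))

The edge pixels come out as 3s and 4s, which is exactly the half-way value described above, and the eye reads the result as a smoother curve.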
So we have to spend a moment talking about bits. A computer counts in base 2, so everything is either 0 or 1. Instead of a 10's column and a 1's column, you have a 2's column and a 1's column. Binary 10 = decimal 2 (1*2 + 0*1). It takes TWO bits to do that (one for each column), and with them you can count from 0-3.
With 8 bits you would have a 1's column, 2's, 4's, 8's, 16's, 32's, 64's, and 128's column. If they were all set to one, you would get 255 (128+64+32+16+8+4+2+1 = 255). You've seen that 0-255 in Photoshop or similar applications.
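If you want to check the arithmetic yourself, it is a couple of lines of Python (nothing camera-specific, just the column math from above):

columns = [128, 64, 32, 16, 8, 4, 2, 1]   # the eight column values
print(sum(columns))                       # 255, all bits set
print(0b10)                               # binary 10 is decimal 2
print(2 ** 8 - 1)                         # the same 255, written as 2^8 - 1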
Now, we make all the colors we see on a computer by mixing red, green, and blue light. We adjust the brightness of each light from 0-255 (8 bits per channel * 3 channels = 24-bit color). This yields 16.7 million different colors. The AVERAGE human eye can distinguish about 4 million non-adjacent colors. This means that if I showed the AVERAGE person a grey card with 128 red, 128 green, 128 blue, and then showed them 128, 128, 127, they would think it was the same color. They could only tell them apart if they were right next to each other. People are different, and some folks can better tell colors apart. It stands to reason that those people might be better at photography, so you might well be one of them.
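Again, just to make the arithmetic concrete (my numbers, nothing more):

levels = 2 ** 8                  # 256 brightness steps each for R, G and B
print(levels ** 3)               # 16,777,216 -- the "16.7 million" colors

grey_a = (128, 128, 128)
grey_b = (128, 128, 127)         # one step darker in blue only
print(1 / 255)                   # ~0.4% of a channel's range -- a difference most
                                 # people only notice with the two side by side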
The Leaf system I mentioned used a "black and white" sensor and simply dropped a different filter in front of it for each of three exposures, then recombined them. Obviously, the camera and subject could not be moving. It was 'good enough' for product shots, but useless for children.
That said, 24 bit color, with 16.7 million flavors, is largely considered 'good enough.' Other solutions offering thousands of colors were not. This is why video cards raced to 24 bits and then pretty much stopped.
Now it may LOOK like every color, but out in the real, analog world red isn't limited to 256 values. It could be 122.37. Your eye may or may not be good enough to tell the difference between 122 and 122.37, but red COULD be that number. This is the difference between digital and analog. With analog, it can be any real number (unless you're having dinner with Stephen Hawking on the bleeding edge of physics, but we will ignore that for now as it is WAY beyond human perception).
So a camera that shows 64,000 colors would look kinda cheap and gimmicky, but one that shows 16.7 million colors would be 'good enough' most of the time. Now, the sensors in cameras go a bit further, because 'good enough' is too low a bar in our circles. I believe current sensors actually capture at 14 bits per channel, for a range that the vast majority of people cannot distinguish from real-world color, but it is STILL an approximation. It is just a better one than people can see. Like if the jaggies were too small to see because the pixels were microscopic.
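To make the 8-bit vs 14-bit point concrete, here is a little quantization sketch (a made-up example on the 0-255 scale from earlier, not any sensor's actual math):

analog = 122.37                          # "real world" brightness: any real number

# 8 bits: 256 steps across the range, so the step size is 1.0 on this scale
eight_bit = round(analog)                # stored as 122 -- the .37 is simply gone

# 14 bits: 16,384 steps across the same range, step size roughly 0.016
code = round(analog / 255 * 16383)       # the integer the sensor would store
fourteen_bit = code / 16383 * 255        # what that integer represents
print(eight_bit, fourteen_bit)           # 122 vs ~122.37 -- a much finer approximation

Both are still approximations; the 14-bit one is just finer than your eye can resolve.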
Of course, the computer graphics card you are using to display it on your monitor probably doesn't have that range so you may still end up not seeing it all.
Hopefully, this covers enough of how digital SLRs are digital approximations of an analog world.
Now, more to the argument we have been having:
When DXO does a lens correction, it is a digital change. A 'move this pixel by this much to make a fisheye look normal' sort of thing (the actual algorithm is much more complicated, of course). But the real lens on your camera isn't EXACTLY like the one on mine. So it is a 'best guess' to cover both your lens and mine.
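To give a feel for what 'move this pixel by this much' looks like in code, here is a bare-bones radial sketch (the k1 number is made up for illustration; a real profile like DXO's is measured per lens and far more detailed):

import numpy as np

def undistort(img, k1=-0.15):
    """For each output pixel, pull from a radially shifted spot in the source image."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    norm = max(cx, cy)                           # normalise radius to roughly 0..1
    for y in range(h):
        for x in range(w):
            dx, dy = (x - cx) / norm, (y - cy) / norm
            scale = 1 + k1 * (dx * dx + dy * dy)   # simple one-term radial model
            sx = int(round(cx + dx * scale * norm))
            sy = int(round(cy + dy * scale * norm))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

The 'best guess' part is exactly that k1 (and its friends in the full model): it is fitted to an average copy of the lens, not to yours or mine specifically.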
In the end, it is the same idea, though. Instead of moving pixels, a digital DOF effect might apply Gaussian blur based on distance from the lens (which could be read from some sci-fi autofocus system or something).
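In rough sketch form, something like this (my own toy version, not anybody's actual pipeline; it assumes you already have a per-pixel depth map from somewhere):

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_dof(img, depth, focus, strength=5.0):
    """Blend a sharp and a blurred copy based on distance from the focus plane."""
    blurred = gaussian_filter(img, sigma=strength)          # one heavily blurred copy
    away = np.abs(depth - focus)                            # how far each pixel is from focus
    weight = np.clip(away / (away.max() + 1e-6), 0.0, 1.0)  # 0 = keep sharp, 1 = full blur
    return img * (1.0 - weight) + blurred * weight

# Fake single-channel image and a depth map running 1m (left edge) to 5m (right edge)
img = np.random.rand(240, 320)
depth = np.tile(np.linspace(1.0, 5.0, 320), (240, 1))
result = fake_dof(img, depth, focus=2.0)

Crude, but it is the same 'apply a digital change per pixel' idea as the lens correction.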
Yes the early versions would probably be crappy, but they would improve, just like numbers of pixels and numbers of colors and light sensitivity. Eventually, it gets so good you cannot tell the difference even in a still photograph (like 14 bit color). Then they shrink that down until it fits in a cell phone.
Just like the first CCD was 100 pixels by 100 pixels by 8 bits (monochrome) and grew into the sensor you love in your Nikon, other features grow the same way. There are still artifacts from that 36-million-pixel sensor, so they keep refining how the pixels are arranged, making them scan faster, and so on.
All of this high-end stuff will eventually trickle down to smaller cameras, though. Just like the original DSLR's stats wouldn't compare well to a cell phone today (a 1.3 megapixel sensor in 1987).
Here is an image from one
http://eocamera.jemcgarvey.com/img/HET2.jpg
No, not as good as an iPhone 6.
So I maintain that phone cameras will continue to improve until human eyes can no longer tell the difference. If folks want shallow DOF and bokeh to go along with that, someone will find a way to provide it. This, too, will eventually get too good to tell apart.