It's a matter of opinion which software does the best conversion.
Not really. You can see which software does: you can argue between "best" and "good enough," but "best" is easy to see and objectively measurable in terms of sharpness, tonal range, and other IQ-based metrics. If we were talking about converters with a relatively similar basis for conversion it would be a different discussion, but as it stands the differences are objectively measurable.
Also, those people who argue about it are looking at 100% enlargements, which is like making 30-inch-wide prints. It's a totally unrealistic test. We
It's only unrealistic if you're not concerned about making large prints or producing the best possible image you can. For most people, "good enough" is, well, good enough. For others, especially if you're selling large prints for lots of money, or you're looking downstream at upcoming reproduction technologies, or selling significant crops at relatively large sizes, that pixel peeping is a totally valid test. One of the galleries I'm trying to get into will sell fine art prints that are 8'+ on a side, and you can see the differences in quality of output at sizes like that. When you're hanging beside other photographers, that tiny IQ difference makes your work look better.
If you look at the flying Eagle in my online gallery- the image will go *almost* to Super A3+ before it starts to fall apart when you put your nose up to the glass. It's a heavy crop, and Imagekind won't print it as large as I will- but even then, I'd hesitate to sell too large a print if you couldn't see the feather details in the tail on close inspection- even though you have to get inside the normal viewing distance to see that level of detail. Could I sell prints that don't meet my quality standards? Sure. Would I? Most likely not.
call them "pixel peepers" in real life, and any of the raw converters can produce the image you want, but each takes a different route to get to that image.
If you lose a step of detail, you'll never gain it back. So, from my testing, I'd say no: you can't really get the same results from all the converters. That's a shame, and actually pretty puzzling, since ACR should be using Nikon's own API, yet it doesn't even convert the same pixels. Sure, it's very minute detail, but that will show up as apparent sharpness in an image, and it will show up in extreme enlargements too. How important that is depends a lot on what you're trying to achieve, which is why I said "see if Aperture lives up to your expectations."
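The "lost detail never comes back" point can be sketched in a few lines. This is illustrative only, not any converter's actual pipeline: once a lossy step averages fine structure away, a later step can only interpolate, not restore it.

```python
# Fake 1-D "image row" with fine alternating detail.
original = [10, 200, 10, 200, 10, 200, 10, 200]

# A lossy step: 2x downsample by averaging neighboring pairs.
downsampled = [(original[i] + original[i + 1]) / 2
               for i in range(0, len(original), 2)]

# Try to "undo" it: upsample back by repeating each sample.
restored = [v for v in downsampled for _ in range(2)]

print(original)   # [10, 200, 10, 200, 10, 200, 10, 200]
print(restored)   # [105.0, 105.0, ...] -- flat; the detail is gone for good
```

No amount of sharpening downstream recreates the alternating pattern; sharpening can only exaggerate whatever edges survived the conversion.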
When you shoot raw, all those settings you make for "sharpness" and white balance have no effect. Those are used to control the raw-to-JPEG conversion process, and if you don't do a JPEG conversion they are moot. But Nikon's software can put these settings to use inside CaptureNX and have NX duplicate what the camera would have done had it been set to record in JPEG format. But what's the point? If that's what you want, shoot JPEG.
The point there is for the times you don't want to wade through a long shoot re-setting WB, but you do want to provide the client with files that are editable multiple times without a loss of quality. Personally, I think you're better off shooting a white or gray card, but if the light's changing a lot and you're following action that can be difficult.
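The gray-card idea boils down to simple per-channel math: sample the card under each lighting setup, derive gains that make it neutral, and apply the same gains to every frame shot under that light. A minimal sketch follows; this is illustrative arithmetic on 8-bit RGB, not how a raw converter actually handles sensor data.

```python
def wb_gains(card_rgb):
    """Per-channel multipliers that neutralize a sampled gray card
    (red and blue are normalized to the green channel)."""
    r, g, b = card_rgb
    return (g / r, 1.0, g / b)

def apply_wb(pixel, gains):
    """Apply the white-balance gains to one RGB pixel."""
    return tuple(round(c * k) for c, k in zip(pixel, gains))

card = (180, 200, 220)        # gray card reads bluish under this light
gains = wb_gains(card)
print(apply_wb(card, gains))  # (200, 200, 200) -- the card is now neutral
```

Any other pixel shot under the same light gets the same `gains`, which is why one card shot per lighting change can correct a whole batch.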
NX has a poor user interface and it is slow. Aperture is designed for processing large numbers of photos and is set up well for this.
Yes, but you can batch convert in NX without really spending a lot of time in its interface and still have the best-quality NEF conversion possible.
One way to decide whether to use Adobe Camera Raw, Aperture, Lightroom, or NX is how many photos you shoot. Do you come home with 1,500 images or 50? And then, what do you do with them? Do you make prints, or are they mostly going on the web?
All of them will batch process, so for me that's not the major decision point. I own PS CS3, Bibble Pro, Aperture, and CaptureNX. I tend to use all of them for something, but no single one of them every time. If you're only looking to do processing in a single tool, then you'll have to choose which one works best for your workflow, but that's not the only option on the table.
Why shoot at 12-bit if all your output devices are 8-bit and the difference for a 12-bit printer isn't really that visible? It's futureproofing more than anything because sooner or later we'll get better output devices and the files will print even better than they did originally. Is that worth it for everyone? Not really- but to dismiss it totally is rather silly.
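The bit-depth argument is just counting levels. A rough sketch of the arithmetic, purely illustrative: a 12-bit raw file carries 16 times more tonal levels than an 8-bit output file, so an 8-bit device throws most of them away, while a future higher-bit-depth device could use them.

```python
def levels(bits):
    """Number of distinct tonal levels at a given bit depth."""
    return 2 ** bits

print(levels(12))  # 4096 levels captured in the raw file
print(levels(8))   # 256 levels an 8-bit device can reproduce

def to_8bit(v12):
    """Quantize a 12-bit value to 8 bits by dropping the low 4 bits."""
    return v12 >> 4

# Blocks of 16 distinct raw tones collapse into one output tone:
print(to_8bit(1000), to_8bit(1007))  # 62 62
```

On today's 8-bit devices the collapsed tones look identical, which is the "not really visible" half of the argument; the extra levels only pay off when the output device stops being the bottleneck.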
The converters won't all produce the same image result; in fact, in my testing, none of them produced identical results. The differences are subtle, but back in the film days the differences between some developers and film types were subtle too, and some of us still made our choices based upon those subtleties.