I bet that disk benchmark disables the cache intentionally in order to measure pure disk performance. However, that is totally unrealistic.
e.g. When Apple designed APFS, it was meant to be used with the cache enabled (that's the normal situation). Therefore, the 4K uncached write performance isn't that important. They should focus on improving real-world whole-system performance, not pure 4K uncached disk write performance.
I will say that maybe you really have discovered one of the performance disadvantages of APFS (vs HFS+). However, if it doesn't matter in the real world, is it really still so important?
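For anyone curious what "disabling the cache" actually means in a benchmark: here is a minimal sketch (not the tool discussed in this thread) that times 4 KiB writes with the filesystem cache in play vs. forced to disk. On macOS, benchmarks typically flip the `F_NOCACHE` fcntl flag; the `fsync()`-per-write fallback below is a cruder, portable stand-in, and the block/count sizes are arbitrary choices for a quick demo.

```python
# Sketch: compare cached vs. cache-bypassed 4 KiB write timings.
# Assumptions: F_NOCACHE exists only on macOS; elsewhere we approximate
# the effect by calling fsync() after every block.
import fcntl
import os
import tempfile
import time

BLOCK = b"\x00" * 4096   # one 4 KiB block, like a "4k write" benchmark
COUNT = 256              # 1 MiB total; small on purpose for a quick demo

def time_4k_writes(bypass_cache: bool) -> float:
    """Write COUNT 4 KiB blocks, return elapsed seconds."""
    fd, path = tempfile.mkstemp()
    try:
        if bypass_cache and hasattr(fcntl, "F_NOCACHE"):
            # macOS: tell the kernel not to cache this file's data
            fcntl.fcntl(fd, fcntl.F_NOCACHE, 1)
        start = time.perf_counter()
        for _ in range(COUNT):
            os.write(fd, BLOCK)
            if bypass_cache:
                os.fsync(fd)  # force each block down to the disk
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed

cached = time_4k_writes(bypass_cache=False)
uncached = time_4k_writes(bypass_cache=True)
print(f"cached:   {COUNT * 4 / 1024 / cached:.1f} MiB/s")
print(f"uncached: {COUNT * 4 / 1024 / uncached:.1f} MiB/s")
```

The gap between the two numbers is exactly why an uncached 4K result can look alarming while everyday use feels fine: normal application I/O goes through the cache path.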
Importance depends on the individual. In your case, the answer appears to be no. All I can do is bring forward the facts...
Now.. those willing to spend/invest $200-$400 on a PCIe 3.0 SSD adapter and another pile of cash on multiple SSDs probably place a higher level of interest/importance on achieving top performance for their workflow. These are also the types of users who probably run 3 sticks of memory per CPU in a 2009-2012 cMP to improve memory bandwidth, a tweak many claim has no real-world performance gain.
[doublepost=1548985583][/doublepost]
Yes, I’ve noticed this as well. Subjectively, I’ll add that HFS+ feels snappier when clicking on a target and waiting for a disk response. I won’t try to prove it to anyone; I have no need. It just feels that way to me.
I know, I’ll be the next person on the receiving end of contrary comments, but I do notice the same behaviors reported in your observations.
Thx for sharing. I've been trying to identify a slowdown in PCIe SSD performance for over a year. While I thought it was an effect of the firmware patches that have been seen on Windows, it's somewhat comforting to know the source of the big drop in small-file performance.