EDIT: currency conversion fix.
Probably because the paradigm is being misapplied.
The "all in one case" is not the capability -- it is merely the implementation.
The capability itself is an effective means of providing a flexible architecture to suit different work environments & needs ... and a key word here is "effective". That means the 'cost' of affording an XYZ capability change/enhancement should be commensurate with the overall system ... and this encompasses not just fiscal cost ($ or £), but also other factors, such as size, noise level, interfaces, environmental compatibility ... a whole bunch of stuff.
Yes, it did seem that way ... until one actually sharpened one's pencil and looked at the ramifications of the 'future' that it represented.
The 2013 nMP lacked a 10Gbit Ethernet port, so that protocol as a means of implementing effective high-performance data storage wasn't an option (per se ... today, it's a $300/node option on the desktop side).
Similarly, it lacked any TB ports on its front, so even the concept of "Sneaker Net" transport between systems failed the most basic User Interface (UI) test.
Furthermore, because of the relatively high expense of TB, the cost of external storage was higher too.
M.2 is merely the implementation of a capability (fast storage) ... so where is the implementation to enhance fast storage, such as open M.2 expansion slots? When none are present, just what is the alternative to provide said capability? Right: it is once again those expensive & hard-to-reach TB ports on the back of the machine.
When you do the math, yes.
Because that £120 (= $175) expense is above and beyond the cost of the "spinning rust" drives themselves ... plus that's before you've also paid for the TB-eSATA adaptor ($73 = £50). For example, simplistically assume four internal drives at $150 each versus the same four drives put into that enclosure: the cost of the capability grows from (4 * $150) to (4 * $150 + $175 + $73) --> $600 vs. $848 --> 1:1.41 ...
that's a ~41% cost growth just to maintain the same level of capability!
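For anyone who wants to sanity-check that arithmetic, here's a quick sketch; the prices are the assumed figures from this post (the $150/drive, $175 enclosure, and $73 adaptor estimates), not quotes:

```python
# Cost-growth check for internal drives vs. the same drives in a TB enclosure.
# All prices are the assumed figures from the post above, in USD.
drive_cost = 150           # per internal drive (assumed)
num_drives = 4
enclosure = 175            # ~£120 TB enclosure (assumed conversion)
adaptor = 73               # TB-to-eSATA adaptor (~£50)

internal = num_drives * drive_cost                # bare-drives baseline
external = internal + enclosure + adaptor         # same drives, external route
growth = external / internal                      # cost ratio for same capability

print(f"internal: ${internal}, external: ${external}, ratio: {growth:.2f}")
# prints: internal: $600, external: $848, ratio: 1.41
```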
And we've had this conversation before: the hitch is still that 10Gbit Ethernet isn't cheap yet, as neither the cMP nor nMP shipped with it.
With the same basic capability costing substantially more when implemented through TB.
The performance bar will always be on the move (and the bleeding edge will be expensive). I am a bit concerned that claiming 5GBps+ may be misleading, because that's faster than the nMP's internal blade (approx. 1.5GBps sequential), which would then become the system-level bottleneck.
And while the ~1.5GBps of the nMP's internal blade is vastly better than what the cMP did out-of-the-box, much of that gap was due to neglect: the cMP never got its 0.3GBps SATA-II interface upgraded to at least SATA-III. But that observation on its own side-steps this discussion's primary point, namely that the cMP's architectural design was an enabler for flexibility. Case in point: using post-OEM solutions, the cMP was able to double its 0.3GBps to 0.6-0.7GBps way back in 2012, and there was another doubling in 2015, to where a cMP can now match/beat the nMP's 1.5GBps:
http://barefeats.com/hard200.html
BTW, also do take note that the deployment of this capability has been constrained ... commercially, not technologically.
-hh