Different folks have different expected workloads they want or need to run: for instance, running old software faster versus running new (or future) software faster. In a year, what excuse will your heavy data-crunching app vendors have for not shipping a version that can easily go 8-way when the cores are available?
Similarly, is your daily workload's natural working set above 12GB of RAM? It is cheaper to get there with 2GB modules than with 4GB ones.
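A quick sketch of the slot math behind that, assuming the quad-core machine's four DIMM slots versus the 8-core's eight:

```python
# Hypothetical slot math behind the module-size remark.
# Assumption: the quad-core chassis has 4 DIMM slots, the 8-core has 8.
SLOT_COUNTS = {"quad (4 slots)": 4, "octo (8 slots)": 8}

for name, slots in SLOT_COUNTS.items():
    for module_gb in (2, 4):
        total = slots * module_gb
        note = "  <- clears 12GB" if total > 12 else ""
        print(f"{name}: {slots} x {module_gb}GB = {total}GB{note}")

# Only the 8-slot machine gets past 12GB on the cheaper 2GB DIMMs;
# the 4-slot machine has to step up to 4GB DIMMs to do the same.
```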
8 cores means 16 virtual threads. SMT (Hyper-Threading) works better with I/O bottlenecks; if there were no I/O bottlenecks it would not be effective. Pragmatically, an 8x2.93GHz machine has more problems than an 8x2.26GHz one if there is no matching increase in bandwidth. Even though the dual-package machine has 2x the memory channels of the single quad package, as you increase the GHz you start to negate that advantage, because the cores can't pull data through the bottleneck fast enough. For 4x2.93GHz vs. 8x2.26GHz, you get roughly a 30% increase in per-core speed but a 50% decrease in max I/O throughput. For example, it wouldn't make much sense for most top-of-the-line 8x2.93GHz buyers to pick the standard 6GB configuration and not upgrade the RAM; that machine needs more RAM to take pressure off the I/O bottleneck.
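A back-of-the-envelope sketch of that trade-off, assuming each package carries a triple-channel DDR3-1066 memory controller (~25.6 GB/s peak) and using cores x GHz as a crude proxy for data appetite:

```python
# Back-of-the-envelope bandwidth-per-core comparison for the two configs above.
# Assumption: each package has a triple-channel DDR3-1066 memory controller,
# good for roughly 25.6 GB/s peak; the 8-core machine has two packages.
PKG_BW_GBS = 25.6  # assumed peak memory bandwidth per package, GB/s

configs = {
    "4 x 2.93GHz (1 package)": {"cores": 4, "ghz": 2.93, "packages": 1},
    "8 x 2.26GHz (2 packages)": {"cores": 8, "ghz": 2.26, "packages": 2},
}

for name, c in configs.items():
    total_bw = c["packages"] * PKG_BW_GBS
    demand = c["cores"] * c["ghz"]  # crude proxy for aggregate data appetite
    print(f"{name}: {total_bw:.1f} GB/s peak, "
          f"{total_bw / demand:.2f} GB/s per core-GHz")

# 2.93 / 2.26 ~= 1.30 is the ~30% per-core speed bump; one package instead
# of two is the 50% cut in peak memory throughput mentioned above.
```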
Different user workloads have different bottlenecks. For a user who is heavily bottlenecked on disk access, it might make sense to get a 2.66GHz quad and two SSDs. That is similar money and perhaps faster throughput if you aren't maxed out on computational work (i.e., you need to stream lots of data in and out but don't need to do much mutation of that data).
If one SSD is better, why not two? [Maybe not, if it just moves the bottleneck somewhere else.]
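A toy model of that caveat, with hypothetical disk and CPU rates picked purely for illustration:

```python
# Toy streaming-pipeline model: sustained throughput is capped by the
# slowest stage. All numbers are illustrative, not measurements.
def pipeline_throughput(disk_mbs: float, cpu_mbs: float) -> float:
    """Effective MB/s when streaming from disk through a CPU-bound stage."""
    return min(disk_mbs, cpu_mbs)

CPU_LIMIT_MBS = 400.0  # hypothetical rate the CPU can process data at
PER_SSD_MBS = 250.0    # hypothetical sustained read rate per SSD

for n_ssds in (1, 2):
    effective = pipeline_throughput(n_ssds * PER_SSD_MBS, CPU_LIMIT_MBS)
    print(f"{n_ssds} SSD(s): {effective:.0f} MB/s effective")

# 1 SSD -> 250 MB/s (disk-bound); 2 SSDs -> 400 MB/s, not 500:
# the second drive still helps, but the bottleneck has moved to the CPU.
```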
That's why there are six different basic models of the Mac Pro: different users with slightly different problems, not one set of users with extremely similar problems.