Even though I understood none of what you said.... (giggles) You are so right. But we live and learn, and I have learned a great deal about Macs from this experience (and about myself). I may very well one day buy that Mac 4,1 or 5,1, but for now I like this old machine, as we share a lot in common. (smiles).
Cores and speeds, explained simply:
Think of the cores as trucks that transport something, and let's use the analogy that calculation/performance is measured in how much gravel each truck can transport in an hour.
You've got 8 trucks that together can transport 10 tonnes of gravel in an hour; each truck transports 1.25 tonnes per hour.
An i7 2600, on the other hand, is like having 4 trucks that can still transport 10 tonnes per hour, but at 2.5 tonnes per truck.
Some applications benefit more from the performance of a single core than from all cores combined (say a single customer has 5 tonnes to transport but refuses to pay for/use more than 1 truck; the job takes longer on the 8-truck system than on the 4-truck system, because each truck on the 4-truck system has twice the capacity).
On the 8-truck system the customer would have to wait 4 hours (5/1.25) to get the job done, while on the 4-truck system it would only take 2 hours (5/2.5). Some applications use multiple cores though, and those benefit from a system with many cores: if a customer paid for all available trucks, the job would get done in 30 minutes no matter which system they used.
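If you like numbers, here's the same arithmetic as a tiny Python sketch. The core counts, per-core "capacities" and job size are just the made-up figures from the truck example above, not real benchmarks:

```python
# Toy version of the truck analogy: total capacity is split across cores.
# All numbers are the made-up figures from the example, not real benchmarks.
def job_hours(job_tonnes, trucks_used, tonnes_per_truck_per_hour):
    return job_tonnes / (trucks_used * tonnes_per_truck_per_hour)

# Both systems move 10 tonnes/hour in total; they just split it differently.
systems = {"8-core system": 1.25, "4-core system (i7 2600-style)": 2.5}

for name, per_core in systems.items():
    single = job_hours(5, 1, per_core)         # customer only uses 1 truck/core
    all_cores = int(10 / per_core)
    multi = job_hours(5, all_cores, per_core)  # customer uses every truck/core
    print(f"{name}: 1 core -> {single:g} h, all {all_cores} cores -> {multi:g} h")
```

Running it prints 4 hours vs 2 hours for the single-core job, and half an hour on either system when every core is used, just like in the story above.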
PCIe:
PCIe (the expansion slot you put your graphics card into) is built from lanes.
A PCIe 1.1 bus can transport roughly 200 MB/s over each lane, a PCIe 2.0 bus about 400 MB/s per lane, and a PCIe 3.0 bus about 800 MB/s per lane. The maximum number of lanes is 16 on any of them (but not every slot in the computer can use 16 lanes at the same time).
The system needs to ship data to the graphics card so that the graphics card can process it and show it on the screen. Let's use the truck analogy again (but this time a truck isn't a core, it's a chunk of data).
If we instead say that each lane represents a lane on a road, and each lane can take 1 truck per second on PCIe 1.1, 2 per second on PCIe 2.0 and 4 per second on PCIe 3.0, then shipping 16 trucks on PCIe 1.1 would take up all the available lanes on an x16 bus but only half of the lanes on PCIe 2.0. Your graphics card might need 24 trucks of data to move between the CPU and graphics card every second, and if you have PCIe 1.1 that is 8 trucks too many, so you will take a hit on performance.
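Here's the same lane arithmetic as a rough Python sketch. The per-lane speeds are the rounded figures from above, and treating one "truck" as 200 MB of data is just an assumption made to match the analogy:

```python
# Rough PCIe bandwidth check, using the rounded per-lane speeds from above.
# Assumption for illustration: one "truck" in the analogy = 200 MB of data,
# so 24 trucks per second = 4800 MB/s of traffic to/from the graphics card.
PER_LANE_MB_S = {"PCIe 1.1": 200, "PCIe 2.0": 400, "PCIe 3.0": 800}

lanes = 16                # a full x16 slot
needed_mb_s = 24 * 200    # "24 trucks per second"

for gen, per_lane in PER_LANE_MB_S.items():
    available = lanes * per_lane
    verdict = "fine" if available >= needed_mb_s else "bottleneck"
    print(f"{gen} x{lanes}: {available} MB/s available vs {needed_mb_s} MB/s needed -> {verdict}")
```

With these numbers, PCIe 1.1 x16 tops out at 3200 MB/s and becomes the bottleneck, while PCIe 2.0 and 3.0 have room to spare.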
RAM bandwidth:
RAM provides the CPU with data to process; think of it as the bulldozers digging up the gravel for the trucks in the first example to transport. So the faster your RAM can communicate with your CPU, the faster the trucks in the CPU example are loaded with gravel and can be sent off to the customer.
None of this is 100% accurate, but it's close enough to give you an idea of why each of these affects your performance. The faster the RAM can ship data to the CPU to be processed, the faster the system; and the faster the system can move data to the GPU, the sooner the graphics card can start processing it and the better graphics performance will be (as long as the GPU on the card can keep up).
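If it helps, here's the whole chain as a toy "weakest link" sketch in Python: the overall pace is set by whichever stage (RAM, CPU, PCIe link, or GPU) is slowest. The rates are invented for illustration, and real systems overlap these stages, so treat it as a cartoon rather than a model:

```python
# Toy "weakest link" picture of the whole chain: RAM feeds the CPU, the CPU
# feeds the GPU over PCIe, and the GPU draws the frame. The overall pace is
# set by the slowest stage. All rates are invented, in made-up "units/second".
def effective_rate(ram_feed, cpu_process, pcie_transfer, gpu_process):
    return min(ram_feed, cpu_process, pcie_transfer, gpu_process)

# Fast CPU and GPU, decent PCIe, but slow RAM: the RAM becomes the bottleneck.
print(effective_rate(ram_feed=5, cpu_process=10, pcie_transfer=8, gpu_process=12))  # prints 5
```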