It will be funny when, because of energy limits, companies like HP and Dell start developing workstations like the Mac Pro 6,1 ;).

What will most naysayers say then? :D
What energy limits? You're like Trump, saying something and expecting us to believe it, and then creating absurd scenarios based on your initial false claim.

Can you show us proof (as in links) that Dell and HP mini-towers use more energy than an MP6,1 doing similar work? No, you can't. You want us to accept the "big lie" without asking questions.

I use Watts Up? recording watt-meters to check on desktop system power usage (no need to do this for servers, the iLO provides real-time and historical power consumption). This is a serious issue for us, because our offices are wired for 16 amps per six to nine cubicles.

For our typical lower end Dell Precision Xeon quad core workstations with Quadro GPUs, the idle consumption is less than 40 watts. Throw a CPU+CUDA "power virus" load at it, and you can hit 300 watts - but you'll seldom even come close to that with real workflows.
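
To make the wiring constraint concrete, a rough sketch (assuming 120 V circuits - the actual voltage isn't stated):

```python
# Per-cubicle power budget sketch. Assumptions: 120 V circuits, and that
# the full 16 A is usable by the cubicles on that run.
CIRCUIT_AMPS = 16
VOLTS = 120

circuit_watts = CIRCUIT_AMPS * VOLTS  # 1920 W per run of cubicles

for cubicles in (6, 9):
    print(f"{cubicles} cubicles: {circuit_watts / cubicles:.0f} W each")

# 6 cubicles: 320 W each
# 9 cubicles: 213 W each
# So a ~40 W idle / ~300 W worst-case workstation fits the budget, as long
# as everyone isn't running a "power virus" load simultaneously.
```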

Really, what are these "power limits"? Another "koyoot hallucination"?
They'd just have to lock it down to each individual machine...
Very easy to do with the TPM chip on the motherboard. Trivial. No hardware changes needed - just a check in the Apple OSX boot code to validate against the TPM chip.

TPM chips aren't required (or even present) on most commodity PC motherboards, so low-end Hackintoshes couldn't exist - no TPM, no boot.

Jobs would embrace TPM - it fits his authoritarian style.
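
A minimal sketch of such a gate, purely for illustration - this is not Apple's actual boot code, and every name below (the key list, the TPM-reading stub) is invented:

```python
import hashlib

# Hypothetical list of endorsement-key hashes Apple would have provisioned.
APPLE_TRUSTED_EK_HASHES = {"placeholder-hash"}

def read_tpm_endorsement_key():
    """Stub standing in for real TPM access; returns None when no TPM exists."""
    return None  # a TPM-less Hackintosh lands here

def boot_gate():
    ek = read_tpm_endorsement_key()
    if ek is None:
        raise SystemExit("No TPM found - refusing to boot")  # no TPM, no boot
    if hashlib.sha256(ek).hexdigest() not in APPLE_TRUSTED_EK_HASHES:
        raise SystemExit("TPM not provisioned by Apple - refusing to boot")
    print("TPM validated; continuing boot")

boot_gate()
```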
 
Evidently no one remembers the almighty DTK ADP2,1, the mother of all Hackintoshes... Pentium 4? Check. **** iGPU? Check. Had a PC BIOS? Check. Had a TPM chip? Check... and this came from Apple. What I'm trying to say here is that it would not be hard for people to crack the need for a TPM and then let regular, non-Mac-ified HP Z840s or whatever run OS X, heh. (TBH I don't care how Apple does it - just give us the decent full-featured tower back, please!)
 
Really, what are these "power limits"? Another "koyoot hallucination"?
Have you read what I was writing, or did you, as is typical for you, quote it out of context and try to disprove me?

https://www.techpowerup.com/225808/...egulation-threatens-pre-built-gaming-desktops

What will you do when, because of government-imposed power limits on computers, you are forced to use external expansion hardware and trash-can-like designs - the very designs that you and others were so displeased with? What will you say when Dell and HP, for that very reason, have to offer similar computer designs and phase out upgradeable workstations?

I have been writing about this on this forum for a very long time. Nobody believed me. And the funniest part is that in the European Union this idea of power-limiting computers was discussed in both 2014 and 2015. So far nothing has come of it, but knowing our politicians it's more than guaranteed that we will see power limits on our computers.

I have also written that 95% of the future market will be BGA-type: NUCs, all-in-ones, laptops, tablets. What does Intel say it will focus on right now? Devices. Why is AMD saying that, because of costs and the need to increase efficiency, scalability is more important than anything? Do you think the companies that build chips for your beloved upgradeable computers are unaware of the possibility that at some point they will have to limit CPU TDP, and power draw, in their BIOSes?

I have been warning about this on this forum. You still refuse to see the reality, even as it slowly comes true.

To wrap this up, here is an analogy for what could happen in the future with power limits:
NUC: 85W power limit.
Small desktop computers: 150W power limit.
All-in-one: 250-300W power limit.
Workstation/gaming desktop: 500W power limit.

Makes sense? Again, this is only an analogy. The irony here is that tablets and laptops could be excluded because they have built-in batteries; at least that was the reasoning behind excluding them from power limits in the EU discussion.
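
To make the analogy concrete, a hypothetical compliance check could look like this (the numbers are my speculation above, not any actual regulation):

```python
# Hypothetical per-class power caps (the analogy above; not a real regulation).
POWER_CAPS_W = {
    "nuc": 85,
    "small_desktop": 150,
    "all_in_one": 300,       # top of the 250-300W range
    "workstation": 500,
}

def is_compliant(device_class, max_draw_w):
    return max_draw_w <= POWER_CAPS_W[device_class]

print(is_compliant("workstation", 450))  # True  - an nMP-class 450W machine fits
print(is_compliant("workstation", 900))  # False - a dual-GPU tower would not
```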
 
...

I still believe that Apple will not put a 7,1 MP on the market until they have the right combination of components and newer tech to offer a dramatic upgrade from the 6,1. Robust TB3 support for starters.

Does a 3x improvement in GPU compute capacity count?!

In the old days Apple would bump up the specs of Mac Pros whenever Intel released anything with 5-10% clock rate increase. The fact that they threw two GPUs into the nMP made me think they actually understood that compute advances are happening on the GPU side now, but the fact that they used a nonstandard form factor and haven't released any upgrades makes me doubt it.
 
The relevant point - there is no power limit.
But it will be, whether you like it or not. That is the most relevant point here. The fact that people in governments are already discussing this should tell you that it is becoming a reality.
 
So, a power limit on computers but not on ovens, heaters, hair dryers, toasters, mixers, boilers, water heaters, tumble dryers, washing machines, electric tools (drills, hammers, saws, etc.), electric cars, air conditioners, freezers, laser printers, copiers, air compressors, elevators, or welding machines, not to mention industry, etc.

What's the next step? To disassemble all the supercomputer clusters from universities, research centers, climate labs, etc. and replace them with a million SuperMario iPads?

And all this because Apple decided to present a sealed machine like the nMP?

OK, the nMP is a nice machine, capable for many tasks (not all), but I think we're pushing the subject unnecessarily beyond logic and common sense just to prove a point. Is this a VR world? :)
 
But it will be, whether you like it or not. That is the most relevant point here. The fact that people in governments are already discussing this should tell you that it is becoming a reality.

In Europe, not everywhere...
 
FWIW, Apple has a history of introducing new models/major upgrades when several key technologies reach inflection points. I have also speculated that they are waiting for Kaby Lake as it appears to offer a more direct link from TB3 to the CPU,

History is right. Hooking the TB controller to the CPU has already been done ... 3 years ago.
[Image: Mac Pro (Late 2013) system architecture diagram - MPsystemarch_north.png]
http://www.anandtech.com/show/7603/mac-pro-review-late-2013/8

The TBv3 controllers are x4 PCIe v3. You can take two of them and couple them directly to a Xeon E5 v4 (available now), without needing the above diagram's PEX switch as a PCIe v3-to-v2 bandwidth distributor anymore. Removing the PEX switch is available right now. There is nothing new coming with Kaby Lake on this front in the Xeon E5 class space. There is no magic pixie dust coming in two years that will improve this in the Xeon E5 1600 series (or whatever Intel names the equivalent in two years).

What may come with Skylake-W (the next workstation version, likely arriving Q2-Q3 next year) is more PCIe lanes coming out of the CPU package. That would mean you could hook up three, as opposed to two, TBv3 controllers. It is a matter of how many, not whether, Apple can roll out a product with TBv3 in it. Four TBv3 ports is reasonable. Apple can throw two HDMI 2.0 ports on the back if it wants six display-capable connectors. [It is extremely doubtful that many folks are running six actual TB devices directly hooked to the back of deployed MP 2013 models. Six ports were necessary because folks do tend to have a number of displays to hook up (3-4 screens probably isn't the Mac Pro norm, but probably not extremely rare either). HDMI (or DisplayPort) would work just fine in that role.]
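
To make the lane math explicit, a quick sketch (assumptions: 40 lanes is the commonly cited Xeon E5 v4 count, 48 merely stands in for Skylake-W's rumored "more lanes", and two x16 GPUs are assumed):

```python
# How many x4 TBv3 (Alpine Ridge) controllers fit next to two x16 GPUs?
def tb3_controllers(cpu_lanes, gpus=2, gpu_lanes=16, tb3_lanes=4):
    leftover = cpu_lanes - gpus * gpu_lanes
    return leftover // tb3_lanes

print(tb3_controllers(40))  # 2 -> Xeon E5 v4: two controllers, no PEX switch
print(tb3_controllers(48))  # 4 in raw lanes; reserve x4 for NVMe storage
                            #   and three controllers remain
```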


which I would expect to be a better topology for eGPU support.

eGPU support is primarily a problem of the OS graphics driver stack, not wires.

"... While Thunderbolt has in theory always been able of supporting external graphics (it’s just a PCIe bus), the biggest hold-up has always been handling what to do about GPU hot-plugging and the so-called “surprise removal” scenario. Intel tells us that they have since solved that problem, and are now able to move forward with external graphics. ... "
http://www.anandtech.com/show/9331/intel-announces-thunderbolt-3

I think the implication of Intel (and only Intel) solving this problem isn't quite accurate. I think Intel now has a standard general approach to this, with a specification and a testing suite, which the major OS vendors have more or less signed off on (i.e., the OS vendors say "Yeah, we can probably do that and are willing to add it at some point").

It is largely a software problem (there are probably some minor firmware/EFI additions and GPIO handling issues on the hardware side). The "unplug" is not an electrical (hardware) thing... it is what to do when the event gets signaled. TB could always tell you something got unplugged; what happens after that is the primary problem.

Apple's graphics stack being months (sometimes years) behind the Windows graphics stack... not really new there either. And certainly nothing tied to specific hardware capabilities of Kaby Lake.
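
To see why the hard part is the "what happens after" rather than the wire, here is a toy sketch of the decision the driver stack faces on a surprise removal (hypothetical pseudologic, not any real OS API):

```python
# Toy sketch of the software problem behind eGPU "surprise removal".
# None of this is a real OS API; it only shows that the hot-unplug event
# itself is the easy part and the cleanup policy is the hard part.

class Surface:
    def __init__(self, name, gpu):
        self.name, self.gpu = name, gpu

def on_tb_unplug(gpu, surfaces, fallback_gpu):
    # The easy part: TB has always been able to deliver this event.
    print(f"hot-unplug signaled for {gpu}")
    # The hard part: every in-flight resource on that GPU must go somewhere.
    for s in surfaces:
        if s.gpu == gpu:
            s.gpu = fallback_gpu          # migrate, or destroy-and-notify?
            print(f"migrating {s.name} to {fallback_gpu}")
    # ...plus pending command buffers, fences, display reassignment, and
    # telling every app with a context on that GPU - that's the driver work.

surfaces = [Surface("window_backing", "eGPU"), Surface("compositor", "iGPU")]
on_tb_unplug("eGPU", surfaces, fallback_gpu="iGPU")
```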


Does that suggest we are still a year or more away from a 7,1 MP? Perhaps. Will there be any pro users left on the platform in 2018? A few diehards, sure - but enough to make it commercially successful...




Many on these forums have noted the fundamental problem of creating a 450 watt envelope that can support "pro" use cases. Process shrink might eventually make that easy peasy -

Process shrinks aren't going to make that "easy peasy". This has echoes of the same lame excuse trotted out back when Apple skipped the Xeon E5 v3 (Haswell): "Oh, the process shrink of Broadwell (v4) is going to solve the problem and then Apple will move on to v4."

The TDP of the Xeon E5 1600 has been relatively flat since the beginning and will likely continue that way into the future. [Power regulation moved on-package with v3-v4, but that is a balloon squeeze, moving something off the board and into the package. Overall system TDP didn't go up.] The TDP budget is instead used to roll out more cores at higher clocks with each process shrink.

If Apple cut the top end core count to 4 cores then they could make a substantive shift in TDP. That is not process shrink driven and it really doesn't widen the customer market much.



So the 6,1 form factor is too limiting and Apple seems highly unlikely to go back to big towers.

The form factor and exactly the same current dimensions are two different things. Apple could grow the form factor an inch or so and get to a better solution. The 2013 design is by no means the perfect optimal configuration. There is no reason why the footprint of the Mac Pro has to be smaller than the Mac mini's. Honestly, that is kind of silly. 7.6" x 10.9" would still be substantially smaller than the old form factor. There would be no horror in having a footprint 0.2" larger than the mini's. It is still the same new form factor.

The 6.6" and 9.9" of the current Mac Pro smack more of some weirdo numerology fixation (multiples of 3) and/or an attempt to mimic the monolith from '2001: A Space Odyssey' than of any sound engineering design.
(Make the Mac Pro 1/3 its former size, so the dimensions should be multiples of 3... that's a conceptual-blockbuster exercise, but then you need to get down to the realities of what is actually required. If it only fits one and only one fan... it is too small.)


In my dream world, Apple would stick with the cylindrical shape and central cooling - but make it taller (in part to accommodate full length PCIe cards internally) with better cooling and a 650w PSU.

Full-length cards aren't the core problem. Nor are they particularly necessary (with current tech). The cards are custom, so there is no need to comply exactly with legacy slot dimensions.

The bigger problem is that there probably need to be more cards, not longer ones. If the main and I/O board design iteration is going to be 2-3 years long, then they need more cards spread over that interval.
(IMHO it is obvious that they also have a headcount/resource cap here; many cards in parallel probably isn't going to happen, but they could keep iterating instead of going into Rip Van Winkle mode and pragmatically deallocating the team for periods of time.)
 
So, a power limit on computers but not on ovens, heaters, hair dryers, toasters, mixers... not to mention industry, etc.
It is actually quite funny, but the EU discussed some time ago how much water a toilet should use, as one iteration of the efficiency idea :)

Everything will be touched by power limits. Currently the most important part of the discussion is solar panels on every house by 2020, with a maximum output of 40 kWh, and the financial possibilities and costs (because the EU would have to fund at least 50% of installation costs). There are ideas of building micro power plants in every house from windmills, solar panels, and even fuel cells. It is a mess so far, but maybe they will come up with something good.
 
I believe any government limits on computer power will be a situation where most people go along with it, and then there will be the rest of us that disregard the rules and build / work around the limits with reckless abandon.

It won't be enforceable until all appliances are low power, because as mentioned, every hair/clothes dryer, oven, water heater, garage door opener, blender, microwave, garbage disposal, trash compactor, and all the power tools in the garage will use more power than a computer, and those devices have far more development to go through before their energy use comes down, don't you think? If Big Brother were monitoring household usage, we'd all just run our gear through some smart battery system (similar to a UPS) to flatten out the usage, and it would be undetectable.

Fear of power limits is a waste of stress, in my opinion.
 
Everything will be touched by power limits.

Again, this isn't the case everywhere...
 
Again, this isn't the case everywhere...
I am discussing this from the perspective of the EU. But if there is even one place on earth with power limits, companies will design their computers with this in mind. What will be left then? Especially when other places see that this idea actually works?

Thankfully for some people, the power limits would not apply to supercomputers and the HPC market, which will be slightly different from the consumer/pro market - above it in performance.
Fear of power limits is a waste of stress, in my opinion.
I'm not fearing the power limits. I actually would welcome them.
 
I am discussing this from the perspective of the EU. But if there is even one place on earth with power limits, companies will design their computers with this in mind.

It's been ages since any tech has been designed with the European market in mind. The biggest tech markets are the USA and China, and neither of those is overly concerned with EU regulations.
 
LOL

That worked out really well the last time

The reason it didn't work out so well was because the clone manufacturers were building better (i.e. faster & more capable) clones than Apple was, along with a quicker refresh cycle.

That wouldn't be an issue today - Apple has abandoned that segment of the market.
HP & Dell already promote Energy Star certification on tons of models. There's a market for it. They are not as sexy as the nMP. You'll also notice they aren't the largest, most powerful, most expandable, or most expensive models.

The base spec nMP is overkill for 80% (made up numbers, indulge me) of office workers - but too expensive!

A loaded nMP is probably more than enough for 10% of the remaining 20%.

But for the final 10% who need a true beast... they aren't worried about Energy Star certification. They demand the power to get their work done, Bambi be damned.

We may get to the point again where most people are basically running thin clients, but in the foreseeable future, there will be users who demand HEAVY, LOCAL processing power - and they won't give a damn about electricity costs - those are passed on to the client.

But it is more than office workers using the nMP. Hobbyists use the nMP also. I know a number of folks who have them for doing 3D art. I am one of them. We aren't doing it professionally; the software is inexpensive, but it uses as many cores and as much RAM as you can throw at it.

And no, we don't care about the energy costs - that only kicks in when rendering.
 
The reason it didn't work out so well was because the clone manufacturers were building better clones than Apple was, along with a quicker refresh cycle.

With the Mac clones, the motivating factor was low price, not expandability or hardware. A lot of people buy a Mac for the operating system as much as the hardware; you can't get that on IBM compatibles. If you start building a premium-brand PC with macOS priced close to Apple's offerings, would it really sell, rather than people just buying the real thing?
 
Appreciate the detailed notes.

Removing the PEX switch is available right now.

In addition to removing the PEX switch, I'm looking for enough PCIe3 (or 4) lanes to support multiple high throughput clients in external chassis. I take your note that hardware is not the biggest obstacle to eGPU performance, but I'd still like the bandwidth and transactional speeds on tap to support software/firmware advancements.

Apple's graphics stack being months (sometimes years) behind the Windows graphics stack... not really new there either. And certainly nothing tied to specific hardware capabilities of Kaby Lake.

As noted above, I don't dispute your point about the real culprit NOT being hardware. That said, if the hardware could support higher performance applications - perhaps that would spur greater effort to improve the rest of the equation.

Process shrinks aren't going to make that "easy peasy". This has echoes of the same lame excuse trotted out back when Apple skipped the Xeon E5 v3 (Haswell): "Oh, the process shrink of Broadwell (v4) is going to solve the problem and then Apple will move on to v4."

Moving to smaller nodes is clearly part of how lower energy usage AND higher performance will manifest. That said, I deserve a smack down for making it sound like it's simple and close at hand.

The form factor and exactly the same current dimensions are two different things. Apple could grow the form factor an inch or so and get to a better solution. The 2013 design is by no means the perfect optimal configuration.

While we may have somewhat different visions, I absolutely agree that a modest re-working of the current cylinder could retain many of the positive aspects while mitigating some of the most problematic constraints.

For context, I'm advocating for a 7,1 offering that could support substantial external resources. I envision an external chassis that hosts 3 full length PCIe slots and an 8-bay SSD RAID connected to the MP with a 4x TBv3 snake so each slot has 5GB/s available. I'm imagining multiple eGPUs, M.2 SSDs, specialty hardware cards, etc.
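
Checking my own math on that snake (a sketch: 40 Gb/s is the raw line rate per TBv3 link, and ~22 Gb/s is the commonly cited usable PCIe payload per link - treat both as assumptions):

```python
# Bandwidth sketch for the hypothetical 4-link TBv3 chassis above.
# Assumptions: 40 Gb/s raw per TB3 link; ~22 Gb/s usable PCIe payload
# per link after display/protocol overhead.
LINKS = 4
RAW_GBPS, PCIE_GBPS = 40, 22

raw_gbytes  = LINKS * RAW_GBPS  / 8   # 20.0 GB/s raw across the snake
pcie_gbytes = LINKS * PCIE_GBPS / 8   # 11.0 GB/s of actual PCIe payload

print(f"raw:  {raw_gbytes:.1f} GB/s ({RAW_GBPS / 8:.2f} GB/s per slot)")
print(f"PCIe: {pcie_gbytes:.1f} GB/s ({PCIE_GBPS / 8:.2f} GB/s per slot)")
# Per-slot PCIe payload lands closer to ~2.75 GB/s than the 5 GB/s line rate.
```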
 
Just throwing this out there - we just added a new maxed out iMac to our fleet for one of our freelancers, and it's unable to render the comps out of AE. It keeps hitting us with a memory error that stops the render. Definitely can't replace the Mac Pro with iMacs. Full disclosure, it's a very heavy comp with lots of 3D and DOF.
 
So sorry you found out the hard way. Now what are you going to do? Sell it? Or use it for lighter tasks?
 
So sorry you found out the hard way. Now what are you going to do? Sell it? Or use it for lighter tasks?

It was originally supposed to be for the lighter tasks, yes, but I thought it would at least be able to handle this. This proved my point to the ones who handle the money @ the company for upgrades.
 
Just throwing this out there - we just added a new maxed out iMac to our fleet for one of our freelancers, and it's unable to render the comps out of AE.

How much RAM do you have in it? The 27" Retina iMac can take up to 64GB of RAM, which should be more than enough for a quad-core system (AE loves more RAM the more cores you add): https://eshop.macsales.com/item/Other World Computing/1867DDR3S64S/

After Effects can also be finicky and may need some finessing. Break up heavy sequences into separate comps and bring them back in as rendered footage in your main comp; try turning off ray-traced 3D if it's on; you may also have to go into the AE preferences and adjust the RAM that AE reserves for itself, etc. (this is a big one many people miss!).

Lots of things to try unless you've gone through those already. In which case that's one beast of a comp.
 
In addition to removing the PEX switch, I'm looking for enough PCIe3 (or 4) lanes to support multiple high throughput clients in external chassis. I take your note that hardware is not the biggest obstacle to eGPU performance, but I'd still like the bandwidth and transactional speeds on tap to support software/firmware advancements.

For an eGPU, multiple TB ports aren't going to make a difference. Thunderbolt isn't "external PCIe"; that isn't the sole primary objective of the design.

Faster individual links are a looming issue for Thunderbolt if they don't crack the "affordable fiber" problem. They are out in front of USB, but if they have to sit under a copper-wire ceiling, that lead will evaporate over time.

In the period between TB first getting to market in 2011 and the current Mac Pro's introduction, exactly zero computer systems were introduced with more than one TB controller. Since then, another zero systems have been introduced with more than one.

Throwing 2-4 TB controllers at the issue is dubious. 3+ controllers in a system is not what TB was principally designed for.

That said, if the hardware could support higher performance applications - perhaps that would spur greater effort to improve the rest of the equation.

Higher performance at what volume of additional users? If you try to design something to be everything for everybody, that leads to problems too. TB isn't going to be an "everything for everybody" port; USB is much better positioned for that.


For context, I'm advocating for a 7,1 offering that could support substantial external resources. I envision an external chassis that hosts 3 full length PCIe slots and an 8-bay SSD RAID connected to the MP with a 4x TBv3 snake so each slot has 5GB/s available. I'm imagining multiple eGPUs, M.2 SSDs, specialty hardware cards, etc.

I believe that some limited amount of clustering of functionality is useful for TB externals (something that does X and Y, not just one single thing; e.g., display, USB, Ethernet, or USB plus embedded drives). However, if you pile it too high and too deep, TB isn't a good fit (e.g., piling everything and the kitchen sink into one box).

Something like 2-3 TB controllers in the chassis (one per pair of slots, with one of those pairs having a hardwired RAID chip)? The problem is that if you require the host computer to have more than 2-3 controllers, you are down to one and only one system that can drive it. That isn't likely to pass muster with Intel as interesting or viable.

Even with a minimum requirement of two TB controllers in the host system, you are still at one and only one system that can drive it.

A product like that is not going to drive broad TB adoption at all. In fact, it would probably only dig deeper the "for Apple only" perception hole that TB has only recently made some progress climbing out of.

An 8-port TB computer (4 controllers) is way, way out in the kool-aid swamp. If your bisection-bandwidth needs are really that high, at that large a volume, then TB is the wrong tool.

The better fit with the Mac Pro is connectivity to the "computation box" fast enough that you can view the answers the remote box computes. That is more in line with the Mac Pro design than one where everything has to be colocated in a single large-volume box.

Time will take some of the pressure off if single-cord TB bandwidth goes up (and they crack affordable fiber). That isn't coming any time soon. TB v3.1 (or whatever the increment gets called) will probably just cover DP 1.3-1.4, and probably isn't due until late 2017 or early 2018. I don't think TB is getting another major speed bump until they crack the adoption rate, base volume, and affordable fiber problems. If TB changes too fast, it will scare off widespread, cost-sensitive adopters.

There is nothing on Intel's roadmaps that supports 4x TB controller bandwidth with a single CPU package (and Apple is extremely unlikely to go to dual packages). We may get to three TB controllers plus a dual 10GbE setup in the next iteration with Skylake (and Kaby Lake, if it uses the same socket), but nothing like four TB controllers plus the other likely contenders for CPU PCIe bandwidth. PCH implementations have no concurrent bandwidth like that even remotely on the table, and won't any time in the foreseeable future.
 