I remember that two years ago I calculated a possible thermal core for a hypothetical 750 W tcMP (Xeon E5 v4 + dual Polaris GPUs): just switching to copper and a faster fan was enough. Of course, a copper thermal core would be really expensive; a more conservative solution would be heat pipes plus aluminum fins. So Apple's arguments about the tcMP being painted into a thermal corner were BS. I believe they simply decided to wait until AMD had Epyc CPUs ready, or there were really big issues with desktop development oversight.
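
Just to give a rough feel for the numbers (a toy 1-D conduction sketch with made-up geometry, not the tcMP's real thermal core), swapping aluminum for copper roughly halves the conduction temperature drop, since copper's thermal conductivity (~400 W/m·K) is nearly double aluminum's (~205 W/m·K):

```python
# Rough 1-D conduction sketch: dT = Q * L / (k * A), i.e. Fourier's law.
# All dimensions below are illustrative assumptions, not real tcMP geometry.

def conduction_delta_t(q_watts, thickness_m, k_w_mk, area_m2):
    """Steady-state temperature drop across a slab."""
    return q_watts * thickness_m / (k_w_mk * area_m2)

Q = 750.0           # total heat load, W (the hypothetical upgraded tcMP)
L = 0.005           # effective core wall thickness, m (assumed)
A = 0.06            # effective conduction cross-section, m^2 (assumed)

K_ALUMINUM = 205.0  # W/(m*K)
K_COPPER   = 400.0  # W/(m*K)

dt_al = conduction_delta_t(Q, L, K_ALUMINUM, A)
dt_cu = conduction_delta_t(Q, L, K_COPPER, A)

print(f"aluminum core wall: dT = {dt_al:.3f} K")
print(f"copper core wall:   dT = {dt_cu:.3f} K")
print(f"copper cuts the conduction drop by {(1 - dt_cu/dt_al)*100:.0f}%")
```

In practice convection off the fins dominates the total resistance, which is why the faster fan matters at least as much as the material swap.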

There are also alternatives to copper, such as aluminum/carbon-nanotube composites: exotic, but they seem a sound alternative. Nevertheless, I believe the mMP will use standard two-phase heat-pipe cooling.

I never looked at the heat-saturation characteristics, but I agree with you on many of the details. One thing I wanted to add is that manufacturing scale has a large impact on price. For example, a $2 item at the factory is valued at $200 once assembled into a product, and $250-$300 in a white box on its own. Another example: HP makes optical fiber routers for about $12 with a lifetime warranty, and the installer wants $500 for the same item on a payment agreement.

My point is that the cost of any specific material is not so special; we are talking about a few dollars in reality. Where companies "lose money" is that they sometimes have to include specific parts in a build at cost just to make the retail budget work. That is why I called the Mac Pro the "Veyron of computing": the build cost is possibly too close to the actual retail price.

Overall, my other comment is that the lack of air turbulence in the central thermal core and the lack of heat-transfer surface area are the main issues. Possibly the dimensions are too tight to allow for the necessary thermal mass. But then you pointed out a good solution, and it makes sense. Why not.

Even the current Mac Pro can be "helped" with an air spreader directly inside the empty core. You can't get cooling from air travelling down the middle of the channel.

A design option for the team is also to run a good GPU at a lower speed but include three of them. Underclocking is not the end of the world, and I can already hear people punching their stress balls just reading this. Underclocking is one of the forgotten tricks in modern computing; it is the reason CPUs include speed throttling, so the chip can control its own clock.
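
The arithmetic behind trading clocks for unit count is worth spelling out. Dynamic power goes roughly as C·V²·f, and since voltage tends to scale with frequency, power falls roughly with the cube of the clock. A sketch with illustrative numbers (real GPUs also have static leakage, so the cube law is only an approximation):

```python
# Why underclock-and-multiply can win: dynamic power ~ C * V^2 * f,
# and V itself scales roughly with f, so P ~ f^3.
# Illustrative sketch only; real silicon deviates from the cube law.

def relative_power(clock_fraction):
    """Dynamic power relative to stock clocks, assuming V scales with f."""
    return clock_fraction ** 3

def config(n_gpus, clock_fraction):
    """Return (throughput, power), both relative to one stock GPU."""
    throughput = n_gpus * clock_fraction
    power = n_gpus * relative_power(clock_fraction)
    return throughput, power

t1, p1 = config(1, 1.0)   # one GPU at stock clocks
t3, p3 = config(3, 0.7)   # three GPUs underclocked to 70%

print(f"1 GPU @ 100%: throughput {t1:.2f}x, power {p1:.2f}x")
print(f"3 GPU @ 70%:  throughput {t3:.2f}x, power {p3:.2f}x")
```

Under these assumptions, three GPUs at 70% of stock clocks deliver roughly twice the throughput of one stock GPU for about the same power, which is exactly why "pro" parts often ship underclocked.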

As we push slowly towards shared CPU/GPU processing, the number of possible CPUs and GPUs will only increase.
 
A design option for the team is also to run a good GPU at a lower speed but include three of them. Underclocking is not the end of the world, and I can already hear people punching their stress balls just reading this. Underclocking is one of the forgotten tricks in modern computing; it is the reason CPUs include speed throttling, so the chip can control its own clock.

As we push slowly towards shared CPU/GPU processing, the number of possible CPUs and GPUs will only increase.

Most "PRO" hardware is actually an underclocked version with tantalum capacitors and better cooling, so I'm familiar with your concept.

As I see it, the only way to revive the tcMP design is with AMD's PRO APUs, but these barely showed in early benchmarks (8 Zen cores with 24 GPU cores, aimed at HPC in dual-socket configurations). It could be the only way to make the trash-can form factor competitive, or at least somewhat useful as a real workstation (for those not requiring dual full GPUs or more than 16 cores).

But I don't believe Apple will re-launch the tcMP; at most they will provide an upscaled Mac mini and a modular Mac Pro, though not modular in the sense many hope. I think they should at least allow one or two 8x PCIe slots for hardware other than GPUs, and sell easy-to-install new GPUs as soon as updated ones are available (the classic minimal redesign of reference designs, not fully custom, since that requires much more tweaking).

An mMP with derated hardware makes little sense; if anything, the design should allow a higher TDP and more PSU power for follow-up updates/upgrades.
 
If Apple intends for the new MP to sell in any kind of volume, it needs to model it after the cheesegrater design. Shrink the case, sure, but provide PCI-e slots, user-configurable memory, storage, and GPUs, etc.

In other words, yes, sell a Mac version of the HP Z6/Z8.

If Apple doesn't do that, countless comments strongly suggest it shouldn't bother. Those few power users still on Macs need Apple to extract its head from its ass or they too will migrate away.
 
HP makes optical fiber routers for about $12 with a lifetime warranty, and the installer wants $500 for the same item on a payment agreement.
Please explain: what is this device, an “optical fiber router”? Does it route light? BGP or OSPF?
 
Please explain: what is this device, an “optical fiber router”? Does it route light? BGP or OSPF?

In Australia, Singapore, the UK, the USA, Japan, and so on, there are internet connections terminated with optical cable to the home. The two routers work together to deliver optical-to-copper connections.
If Apple intends for the new MP to sell in any kind of volume, it needs to model it after the cheesegrater design. Shrink the case, sure, but provide PCI-e slots, user-configurable memory, storage, and GPUs, etc.

In other words, yes, sell a Mac version of the HP Z6/Z8.

If Apple doesn't do that, countless comments strongly suggest it shouldn't bother. Those few power users still on Macs need Apple to extract its head from its ass or they too will migrate away.

The big pitch in the draft spec was that the "jet engine" Mac Pro had great control over EMF/EMR for compliance standards. In many respects the design is actually a good, solid concept: a heavy inner cooling core with no interference leakage and a solid metal outer shell. This type of sandwich gives you excellent design control over "radio" interference.

The concept was not a lightweight idea. But Apple changed the core from four sides to three, and I think we can all see that was one of the reasons for the main limitations in thermal mass and internal storage. When someone comes up with a very carefully made design and someone else makes alterations: oops.
Most "PRO" hardware is actually an underclocked version with tantalum capacitors and better cooling, so I'm familiar with your concept.

As I see it, the only way to revive the tcMP design is with AMD's PRO APUs, but these barely showed in early benchmarks (8 Zen cores with 24 GPU cores, aimed at HPC in dual-socket configurations). It could be the only way to make the trash-can form factor competitive, or at least somewhat useful as a real workstation (for those not requiring dual full GPUs or more than 16 cores).

But I don't believe Apple will re-launch the tcMP; at most they will provide an upscaled Mac mini and a modular Mac Pro, though not modular in the sense many hope. I think they should at least allow one or two 8x PCIe slots for hardware other than GPUs, and sell easy-to-install new GPUs as soon as updated ones are available (the classic minimal redesign of reference designs, not fully custom, since that requires much more tweaking).

An mMP with derated hardware makes little sense; if anything, the design should allow a higher TDP and more PSU power for follow-up updates/upgrades.

I agree with you, but I also want to share another thought on this topic. Clock speed can be pushed higher and higher, but what we really need is optimally matched hardware.

The logic and reasoning go like this: you can obtain vastly superior performance from a system if you match the hardware carefully to the functions intended. This already happened once in the last few years: a very old model of AMD graphics card was the "king" of Bitcoin mining. Even now people are interested in buying them in quantities of 100 or even 1,000.

When a data process and a hardware system are perfectly matched, the speed can be more than impressive; it can be stunning. This is why some servers are crazy fast and others are not, for the same application. However, I believe engineering no longer moves slowly enough to allow for this type of well-planned design. One reason is that chip makers no longer care if you miss the boat with a chip: if your design takes too long to make it to market, you simply won't have enough parts to build it. You miss out. Start again.

Moore's law has confused itself. Real-world performance does not rely on clock speed; it requires help from data buffering and clock synchronisation. For example, you would be 10x better off installing your RAM in banks where only 50% or 33% is addressable by the user (you install 512 GB of RAM but only get 256 GB at run time). The simple reason is that opening up more data channels (DDR4 versus a "pseudo DDR16") is faster than making the lanes run wider (8-bit versus 32-bit).
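
The channels-versus-width arithmetic is easy to sketch. Peak bandwidth is channels × bus width × transfer rate, but extra channels also add independent banks that can serve requests in parallel, which wider lanes alone do not (the figures below are ordinary DDR4-2666 numbers; the comparison is illustrative):

```python
# Peak memory bandwidth = channels * bus_width * transfer_rate.
# DDR4-2666-class numbers; each 64-bit channel moves 8 bytes per transfer.

def peak_bandwidth_gbs(channels, bus_width_bits, megatransfers):
    """Peak bandwidth in GB/s for a given channel count and DIMM speed."""
    bytes_per_transfer = bus_width_bits / 8
    return channels * bytes_per_transfer * megatransfers / 1000  # GB/s

# Dual-channel desktop: 2 x 64-bit @ 2666 MT/s
dual = peak_bandwidth_gbs(2, 64, 2666)
# Six-channel server platform, same DIMM speed
hexa = peak_bandwidth_gbs(6, 64, 2666)

print(f"dual-channel: {dual:.1f} GB/s")   # ~42.7 GB/s
print(f"six-channel:  {hexa:.1f} GB/s")   # ~128 GB/s
```

Same DIMMs, triple the channels, triple the peak bandwidth, plus three times as many independent channels to interleave requests across.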

But these ideas died a long time ago, except in file servers. When you can build a machine with "standby RAM", "spare RAM", and "error correction for the error correction", you can achieve the scaling effectiveness seen in many mid-range servers of being up to 197x faster than a single 4-core server.


Don't get me wrong, I am not married to my DL980. It is just one of those strange machines with crazy specifications. I booted this machine with the wrong RAM, faulty boards, a faulty CPU, and 3 of its 8 power supplies broken, and it still ran.
 
In Australia, Singapore, the UK, the USA, Japan, and so on, there are internet connections terminated with optical cable to the home. The two routers work together to deliver optical-to-copper connections.

Stop digging a deeper hole... you have no idea what you are talking about... o_O
 
An ONT is not an optical router; it is the device in your home. An OLT is what switches and routes. I have two in my lab, a "real" one and PON blades in a CMTS; if you want, I can give you access to have a look. They are a bit more expensive than the 15 USD claimed.

An optical router does not yet exist in any meaningful way, same as the quantum computer.
All switching and routing is done at the electrical level; it doesn't matter what SFP you plug into your router.

There is a big difference between the optical splitter used in PON solutions and an "optical fiber router".
 
An ONT is not an optical router; it is the device in your home. An OLT is what switches and routes. I have two in my lab, a "real" one and PON blades in a CMTS; if you want, I can give you access to have a look. They are a bit more expensive than the 15 USD claimed.

An optical router does not yet exist in any meaningful way, same as the quantum computer.
All switching and routing is done at the electrical level; it doesn't matter what SFP you plug into your router.

There is a big difference between the optical splitter used in PON solutions and an "optical fiber router".

It's a bit like the guys at CERN laughing at me because I don't reference hydrogen spins and just call it hydrogen. To 99.99% of the world, every physical-layer or software-layer mapping is referred to as the same object. Just as a car is a car, while to you there are dozens of things that are not a car, because one is an SUV and the other is a sedan.
 
What I wrote in this thread

Clock speed can be pushed higher and higher, but what we really need is optimally matched hardware.... When a data process and a hardware system are perfectly matched, the speed can be more than impressive; it can be stunning.

When people say they want something new, it does not mean it has to be a Xeon Gold or Xeon Platinum CPU architecture. It can be a really good implementation of a very solid and scalable architecture. However, nobody makes those chipsets now, and buying into older hardware is a production nightmare. So who is at fault here? For me, Apple no longer decides how computers are constructed (chipset, CPU, bus controller). Those choices are thrust upon it.

Apple insiders have leaked

The initiative, code named Kalamata, is still in the early developmental stages, but comes as part of a larger strategy to make all of Apple’s devices -- including Macs, iPhones, and iPads -- work more similarly and seamlessly together

Moving to its own chips inside Macs would let Apple release new models on its own timelines, instead of relying on Intel’s processor roadmap.


You don't need a crystal ball to see Apple is running away from having its hands tied. Apple has responded and shown it wants to control its own supply and product optimisation. I know nothing, but I am optimistic.

Stop digging a deeper hole... you have no idea what you are talking about... o_O

You are right that I don't know; this is speculation. Anyone who thinks I was giving out secret information is in the wrong place.
 
Apple refers to its current Mac Pros and Minis as modular, so I read it as just meaning a separate display, not an all-in-one. Whether it now also means separate eGPU and drive enclosures via cables is anyone's guess, but if so, I'm sure they'll allow third-party enclosures. I'm wishful but not hopeful for at least 7 internal PCI slots, though.

Whatever it will be, I predict it will have glass. Jony loves himself some glass.

From the first eMate and iMac and the 17" CRT display, he's always liked the transparency aspect, so he should be looking at the recent high-end tempered-glass PC enclosures and thinking he could do better/classier and take back the design lead.
 
You don't need a crystal ball to see Apple is running away from having its hands tied. Apple has responded and shown it wants to control its own supply and product optimisation. I know nothing, but I am optimistic.


In the meantime, part of Apple's customer base is running away from having their hands tied.

Every major company wants to control its parts supply; the trouble is, the benefits of doing so go to profit margins.
Not to mention that proprietary technology in mass-produced items tends to be inferior, more expensive to buy and maintain, and less compatible.
 
In the meantime, part of Apple's customer base is running away from having their hands tied.

Every major company wants to control its parts supply; the trouble is, the benefits of doing so go to profit margins.
Not to mention that proprietary technology in mass-produced items tends to be inferior, more expensive to buy and maintain, and less compatible.

You don't need to win every round of a boxing match to win the fight. Apple has signalled very sharply that it will lock down its processor technology to prevent the rootkit attacks often used against Intel hardware. What they lose at the elite computing end they will gain by having their processor inside a car, TV, phone, tablet, HomePod, computer, and laptop. For the first time they are indicating a unified computing environment: a fully optimal lifestyle-based technology company for every *computing* aspect of your life.

In effect, the Apple business is going back to traditional values. I admire Tim Cook for staying so close to the history of Apple design while expanding the resources to allow further development of other lifestyle products. This will be an interesting period, and I wouldn't be surprised if Tesla and Apple start to merge, starting with a stake in the automotive and solar-panel business.

#elonmuskagain

[Attached: Forbes magazine screenshot]

Whatever it will be, I predict it will have glass. Jony loves himself some glass.

I bet he got the idea while having a pint of Guinness.
 
I wanted to add to this thread a YouTube video made by Jon Rettinger.


The first few seconds of this video .... hmmmm.
 
^^ nobody wants that stacking crap.

I don't know; done right, it could be good, if the interconnect is sufficient (think of the cMP CPU board connector).

By done right, I mean "so as to eliminate having to pay to replace parts you don't need to upgrade, in order to get parts you do".

It all comes down to granularity: if you can buy 1, 2, 4, 6, etc. slot PCI modules and just put off-the-shelf cards in; if you can buy a storage module and then put off-the-shelf storage in; if the CPU module doesn't have the external I/O built in, and so on.

If it's an exercise in stuff-bundling and hardware bloatware (the nMP's dual GPUs), or in force-upgrading, the way your Mac and iOS device ecosystem is a constant merry-go-round where each software update on each device requires an update on all the others until they're aged out, then it's not so good.
 
^^ nobody wants that stacking crap.

It's also never, ever going to happen.

Cost, complexity, failure points, environmental factors, supply chain blowouts, poor aesthetics...

The mind boggles that people are entertaining this, doesn't it? Why on earth would there be a bunch of boxes?

These people haven't looked at Apple's product lineup apparently.
 
These people haven't looked at Apple's product lineup apparently.



But in all seriousness... people are looking at it on the assumption that part of the "mea culpa" is that Apple might be making a proper change of direction with the Mac Pro; that dysfunctional minimalism might not be the primary aesthetic, and that there might be a theme to this that's not "is this a computer, or a trophy to Jony Ive, by Jony Ive, for being so great at being Jony Ive?".

All your criticisms of the idea have opposing arguments in their favour that are no less valid.
 
Modular AND looks - done a long time ago already. This is what I expect Apple will do:

Modules accessible on the back:
o2-05.jpg


Front view:
o2-02.jpg


Enjoy those custom form-factor parts. Priced like unobtainium. ;)
 
Worked on an Octane2 with Autodesk software for several years. I'll switch to Windows before going back to that. It's hard to be creative when you're troubleshooting and managing the system 60% of the time. Service contracts were almost a requirement. At least Windows 10 only needs major intervention about once a month.
 
Cluster computing and supercomputing architectures are both shrinking in size; we agree on that!

Wut? Have you ever looked at what the DoE (or big unis, or biotech, or the intel community, etc.) builds? Hell, they just announced an exascale machine under construction at ANL...

Modular AND looks - done a long time ago already. This is what I expect Apple will do:

I will say, I miss those SGI boxes :). They were great little machines, and they made very pretty doorstops in the lab when you didn't need them anymore.
 
I remember my Macintosh LC had internal modules. You could disassemble it in minutes using no tools because everything just clipped in. I think the idea was quick, easy repairs, but as I remember, the concept was dropped in later models, probably because the cost of the parts outweighed the convenience and reduced shop time.
 
I will say, I miss those SGI boxes :). They were great little machines, and they made very pretty doorstops in the lab when you didn't need them anymore.

Exactly: very reliable (and easy to maintain), in my experience in a lab. The one shown was pretty much outperformed the day it was released, though. We called it the Slow2, and students avoided it in favour of the el-cheapo PCs placed elsewhere on campus.
 
It's also never, ever going to happen.
Cost, complexity, failure points, environmental factors, supply chain blowouts, poor aesthetics...
The mind boggles that people are entertaining this, doesn't it? Why on earth would there be a bunch of boxes?
These people haven't looked at Apple's product lineup apparently.

The answer is very simple: this path offers scale. From 1 GPU to 8 GPUs; from one CPU to a massive CPU cluster. No computer builder with any reasonable budget would risk a fixed architecture; to do so would be a massive step backwards.

To get that cosmetic appearance, all you need is an external cover panel. I predict 2019-2028 will be the era of scaled upgrades and "assemble to order"; the discrete modules will not be user-configurable. You buy the math modules you need (GPU or CPU) and keep the same storage module. Math architecture will take a massive leap forward.
"is this a computer, or a trophy to Jony Ive, by Jony Ive, for being so great at being Jony Ive?".

Just one small detail: Apple didn't create the 2013 Mac Pro concept. They imported it from the outside world.
 