I very much doubt Apple had anything to do with the actual design (the way it looks, that is) of the LG UltraFine displays.
Come on, look at them, then look at everything Apple makes. Any resemblance? Really?
I bet they controlled the hardware and firmware implementation (panel choice, TB3 requirement, power delivery, controls, ...) but as for the way it looks, Jony didn't lose any sleep over it.
Too bad, because I was hoping for a gorgeous, Apple-designed 5K 27" 10-bit HDR display, even at 2000-2500EUR, which is already a stretch anyway.
I was also hoping to get a 16-core, Vega II, 512GB SSD and 64GB RAM mMP for a lot less than what they'll probably be asking for it, but hey, surprise!! :)
The missing 512GB SSD option still really bugs me...
Right, that's more than enough for an event.
Although, like dec said, all that's really missing for the mMP and Pro Display XDR is a price list. But they can still show it off with FCP X or something.
 
Apple will likely mention Catalina at the iPhone event on 10 September so also noting the Mac Pro order/ship date at that event would not be out of place.

I do not believe Apple has anything else in the Mac pipeline other than the 16-inch MacBook Pro, so I don't see a separate Mac-centric event in October. So the MBP might also be announced at the iPhone event just because it is the last live event Apple has planned for 2019.

The other option is to announce the Mac Pro order/ship date and the 16-inch MBP via Press Release (with select hands-on reviews of the MBP).


As for tariffs, as majus noted, Apple will absorb those costs themselves, at least in the short-term. If the trade battle lasts for an extended period of time (more than six months), then Apple might have to start passing that cost on to consumers (their margins are so high they could absorb them indefinitely, but it will be a drag on profits and that will eventually start to depress the stock price and force Apple to act).
In regards to the MacBook Pro, I think they'll announce it and try to alleviate/address the outright bans on air travel... just my two cents.

I mean, seriously... has Apple really had an event this year (WWDC aside)? Apple hasn't had anything worth showing off, and hasn't had to do much... okay, Apple Card and Apple TV+... c'mon. Go ahead and skip an October event (you all just announce things that aren't ready or shipping anyway).
 
Right, but Nvidia gives you the 2080 Ti, Titan (X), etc. for radically less money than Apple's Vega 2 Duos will likely cost, and probably with radically higher performance. The undeniable problem with AMD is that they're simply not good at generating high-end real-time 3D environments. Look at 4K high-frame-rate gaming scores, which are the best indicator of VR performance: AMD is nowhere to be seen; they've put all their eggs in 1080p / 1440p.

AMD isn't just slower on macOS, it's slower on Windows as well. You literally cannot achieve the VR performance with AMD that you can get with Nvidia. Advancements in Metal will not cure hardware incapacity.
Indeed, just check HardOCP's VR reviews and forum. Nvidia is far ahead in VR performance.
 
Firmware/drivers are a factor; they are the principal reason why there are no Nvidia cards. No software to go with the hardware means no cards, regardless of what vintage physical form factor they fit into. That software is going to be influenced by allocation of resources, alignment of goals, and relative priorities with other paths (e.g., Windows releases, Linux releases, etc.).

I generally agree with the sentiment you are sharing with this post, but there is an interesting wrinkle in this area that's maybe worth mentioning.

Apple is spending effort making eGPUs work during boot, using the existing EFI firmware on the GPU. Case in point: Catalina includes EFI updates for the 2018 Mac mini that allow you to access the boot screen through an eGPU. At least with Beta 5/6, only HDMI and DisplayPort outputs worked; USB-C/TB3/USB-C->DP Alt Mode still waited until boot was complete. Unfortunately, I didn't have a 5K UltraFine to test with, but I do have an older 21" 4K that I picked up used to play with (my thought was it'd make a great portrait monitor next to a 27" using the VESA mount).

It seems like at least in this scenario, Apple's a little more interested in the more common cases of eGPUs than strictly supporting TB3 monitors. It's also beta, so they may be trying to solve that too in time for public release.

It is interesting that Apple is putting effort into being able to bring up a display during boot with off-the-shelf GPUs that are running PC-centric firmware. So while I agree that Nvidia is still off the table due to drivers, I suspect that eGPUs are leading us to a world where 3rd party AMD GPUs are more likely to work out of the box once the drivers are included in the OS (i.e. we are still waiting for Navi drivers if we want 5700 support).

I half wonder if some of this work is because of the Mac Pro as a way to avoid having to roll a custom Mac-only EFI firmware, and T2-based Macs (apparently those are the ones that have a working eGPU boot screen) are reaping the benefit? Now that EFI is more common on PCs, and with eGPUs being something Apple seemingly takes seriously, I can't imagine they want to stay in the business of custom GPU firmware.
 
I think arguing about a killer VR workstation and cost is talking in two different directions. Do you want something cheap for games, or do you want a content creation workstation? One is cheap and the other isn't necessarily.

I think the thing you’re missing is that content creation for VR, is done *in* VR. It doesn’t matter how good AMD’s cards are for compute, if they can’t drive the headset at full resolution, and full frame rate as well as the “cheap gaming computer”, then they’re not as good as a “cheap gaming computer” at being a “content creation workstation” for VR.

Driving the headset as fast, and at as high a level of detail as the best end-user machine will produce, is table stakes.
 
You meant "running industry-standard UEFI firmware", right? ;)

Eh, I can’t be too harsh on Apple in this case. They needed EFI before the rest of the market, so there were no UEFI cards to be compatible with for years. By the time UEFI started ramping up on the Windows/Linux side in earnest, the cMP was being replaced.

I’m just surprised they seem to care enough to address it now.
 
I think the thing you’re missing is that content creation for VR, is done *in* VR. It doesn’t matter how good AMD’s cards are for compute, if they can’t drive the headset at full resolution, and full frame rate as well as the “cheap gaming computer”, then they’re not as good as a “cheap gaming computer” at being a “content creation workstation” for VR.

Driving the headset as fast, and at as high a level of detail as the best end-user machine will produce, is table stakes.

I understand VR isn’t compute. But I’m also skeptical that a 2080 Ti is going to be faster than a Vega 2 Duo.

You can start stacking multiple GPUs on the Nvidia side as well... but my guess is that Apple is going to pitch the Vega 2 Duo as the workstation competitor to the 2080 Ti. Vega 2 Duo x 2 would be the SLI 2080 Ti equivalent.

That’s why they’re stacking multiple GPUs on a single board.
Eh, I can’t be too harsh on Apple in this case. They needed EFI before the rest of the market, so there were no UEFI cards to be compatible with for years. By the time UEFI started ramping up on the Windows/Linux side in earnest, the cMP was being replaced.

I’m just surprised they seem to care enough to address it now.

New Macs have boot screens on standard PC GPUs over Thunderbolt. It seems like they’ve addressed it now.
 
Why are people constantly talking about Nvidia GPUs & AMD CPUs? Those two things will never show up in an Apple tower. It's like lamenting that the new Ford doesn't have a GM engine in it. Never going to happen. It's dead.
 
If Apple has already started production on the Mac Pros, then they'll ship most of those before the deadline and warehouse them (and eat the inventory cost). They'll ship those because they weren't taxed, but the short term will probably end pretty soon after those run out if there is no movement.

Wasn't there an official statement from Tim Cook that the 2019 new Mac Pro's final assembly location would continue to be located at the same Texas facility that produced the 2013 cylinder Mac Pro? And wasn't it specifically contradicting certain reports that the 2019 Mac Pro's final assembly would shift to an Asian location?
 
Wasn't there an official statement from Tim Cook that the 2019 new Mac Pro's final assembly location would continue to be located at the same Texas facility that produced the 2013 cylinder Mac Pro?

On the earnings call he stated:

"We've been making the Mac Pro in the United States and we want to continue doing that. We're working and investing currently in the capacity to do so. We want to continue to be there."

The WSJ said Quanta would manufacture it, but perhaps Quanta will be doing the initial production run until the US production facility is ready, at which point it would shift to the US?

Or Quanta will be producing it, but at a US assembly facility like they do for the iMac?
 
I understand VR isn’t compute. But I’m also skeptical that a 2080 Ti is going to be faster than a Vega 2 Duo.

There were plenty of claims that the Vega 64 was going to be equivalent to the 1080 / 1080ti, yet when it came to high resolution real-time 3D environments, it wasn’t remotely in the ballpark. Again it was the same story, a lot of hype about AMD’s manufacturing processes and memory bandwidth, yet in the real world, they simply couldn’t do the job, once you got to higher resolutions.

I see no reason to believe AMD will do anything differently in this generation.
 
That’s what I was saying...

Ah, sorry, I thought you said you didn't see why they still hadn't addressed it.
There were plenty of claims that the Vega 64 was going to be equivalent to the 1080 / 1080ti, yet when it came to high resolution real-time 3D environments, it wasn’t remotely in the ballpark. Again it was the same story, a lot of hype about AMD’s manufacturing processes and memory bandwidth, yet in the real world, they simply couldn’t do the job, once you got to higher resolutions.

I see no reason to believe AMD will do anything differently in this generation.

I just don't see how a 14 TFLOPS Nvidia card beats a 28.4 TFLOPS AMD card, assuming that vendors use the new Metal Infinity Fabric APIs. It just doesn't seem at all likely.

Again, if we're talking dual 2080Ti, then the Vega 2 Duo might have a problem. But I just don't see that performance gap being made up by a single 2080Ti.
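
Just to put those numbers in context, the TFLOPS figures being thrown around fall out of a simple back-of-the-envelope formula (FP32 throughput ≈ shader cores × 2 FLOPs per clock × clock speed). A quick sketch, where the core counts and boost clocks are my own assumptions rather than official specs:

```swift
// Rough FP32 throughput: cores × 2 FLOPs/clock (FMA) × clock speed.
// Core counts and clocks below are assumptions, not vendor-published specs.
func fp32TeraFLOPS(cores: Double, clockGHz: Double) -> Double {
    return cores * 2.0 * clockGHz / 1000.0
}

let rtx2080Ti = fp32TeraFLOPS(cores: 4352, clockGHz: 1.545) // ≈ 13.4 TFLOPS
let vegaII    = fp32TeraFLOPS(cores: 4096, clockGHz: 1.74)  // ≈ 14.3 TFLOPS
let vegaIIDuo = vegaII * 2                                  // ≈ 28.5 TFLOPS, two dies on one card

print(rtx2080Ti, vegaII, vegaIIDuo)
```

Of course, as the replies below point out, peak TFLOPS says nothing about how well either card actually drives a headset.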
 
I just don't see how a 14 TFLOPS Nvidia card beats a 28.4 TFLOPS AMD card, assuming that vendors use the new Metal Infinity Fabric APIs. It just doesn't seem at all likely.

What I think is unlikely is that AMD is capable of (or culturally interested in) producing any GPU that will offer higher 3D immersive performance than Nvidia will offer at that time, let alone higher for a given price point.

Talk about all the TFLOPS you like, I’m still betting that when the rubber hits the road, it won’t offer the sort of VR performance its theoretical numbers suggest. I’m happy to be proven wrong, but I’m not going to start wearing hats made of delicious bacon, in expectation of having to eat one, at AMD suddenly having higher 4K gaming scores (the best current VR proxy) than Nvidia.
 
...
The WSJ said Quanta would manufacture it, but perhaps Quanta will be doing the initial production run until the US production facility is ready, at which point it would shift to the US?

Mac Pro volume is way too low to set up production twice. More likely, Quanta would make most of the parts and ship those to the USA. "Make" here more likely consists of the final assembly from the completed parts. Perhaps it includes the case, but the electronics are probably all made elsewhere. The tariffs just aren't on completely finished goods.

Quanta moved its data center motherboard business off to Taiwan

https://www.datacenterdynamics.com/...move-data-center-server-production-out-china/

"... while Quanta Computer has production lines in the U.S. ... "
https://www.networkworld.com/articl...are-makers-shift-production-out-of-china.html


Google helped push for that. There is a more than decent chance that the relatively very low-volume Apple Mac Pro motherboard may be made in Taiwan also. That gets that component around the tariffs.

Apple will ship the parts (without the Apple tax) to the USA and then assemble those and apply the Apple tax. That again gets them mostly around the painful part of the tariffs.


Or Quanta will be producing it, but at a US assembly facility like they do for the iMac?

If it is just final assembly, they may or may not do that. Making homely server assemblies for Facebook, Google, etc. may not overlap enough with what Apple is doing. But there could be another assembly building around that Apple could buy/lease and rent back to the contractor for the work.


So there are tap dances Apple can do here. But I still think the markup on the Mac Pro is something Apple won't want to 'give' much on, because one of the main motivators, the higher-than-average markup (a 'low volume' tax), is a big part of why they are even bothering with this.
 
I just don't see how a 14 TFLOPS Nvidia card beats a 28.4 TFLOPS AMD card, assuming that vendors use the new Metal Infinity Fabric APIs. It just doesn't seem at all likely. ...

It isn't about brute force computational horsepower. One of the core problems with VR is most of the classic approaches (including coming at it from a gaming angle) waste gobs of resources. Computing some high fidelity object rendering for what the eye can't see is a waste of time. Computing stuff that people can't see isn't really an advantage.

This is more about raster ops (screen filling/refresh) for high-res screens, consistent high frame rates, and most of all dropping detail on what folks can't see. So something like Variable Rate Shading, coupled to sensors that can track what the eye is looking at, can lead to foveated rendering. (Only the main focus point of the retina has high-density sensors; outside that center, a lower-resolution rendering makes substantially less difference.)
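
To make that concrete, here's a toy sketch of the foveated rendering idea in Swift. The thresholds and rates are illustrative assumptions; it isn't tied to any real eye-tracking or Variable Rate Shading API.

```swift
import Foundation

// Shade at full rate only near the tracked gaze point, coarser further out.
enum ShadingRate { case full, half, quarter }

func shadingRate(tileX: Double, tileY: Double,
                 gazeX: Double, gazeY: Double) -> ShadingRate {
    // Distance from the tile centre to the gaze point, in normalized screen units (0...1).
    let distance = hypot(tileX - gazeX, tileY - gazeY)
    switch distance {
    case ..<0.15: return .full      // fovea: full detail
    case ..<0.40: return .half      // near periphery: half the shading work
    default:      return .quarter   // far periphery: quarter rate
    }
}

// A tile in the corner of the screen while the user looks at the centre gets quarter rate.
print(shadingRate(tileX: 0.95, tileY: 0.90, gazeX: 0.5, gazeY: 0.5))
```

The point being that peripheral tiles cost a fraction of the shading work, which is exactly the kind of saving raw TFLOPS comparisons ignore.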

Treating VR as basically the same as gaming goes overboard into the "if all I have is a hammer, everything looks like a nail" zone. To do it right means being smarter about rendering, not some never-ending cycle of "more" (bigger hammers).

But correct in the sense of the folks who are knee deep in the "bigger hammers make for better VR" camp... a Vega 2 Duo properly optimized on a shared workload should be quite capable as long as you don't overtax the raster ops capability of the GPU that is hooked to the displays. (A 2080 and lower wouldn't have much of an edge, and the 2080 Ti's raster ops gap only gets bigger as the screen resolution goes up to the bleeding edge.)

AMD isn't hopelessly lost here though.

"... Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demands for these, and hardware support for ray tracing is in their roadmap for RDNA 2 (the architecture formally known as “Next Gen”). But none of that is present here. ..."
https://www.anandtech.com/show/14528/amd-announces-radeon-rx-5700-xt-rx-5700-series/2

Whether that makes it to the 'big compute' Vega 20 successor or not isn't as clear (if AMD goes down two more-separated tracks: gaming and 'big compute'). Going to 7nm makes the max die size go down. That means the "everything and the kitchen sink" approach of making one GPU do everything isn't going to scale up so well. (Graphics compute + tensor, or graphics compute + ray tracing, will scale better than all three on a single die.)
What I think is unlikely is that AMD is capable of (or culturally interested in) producing any GPU that will offer higher 3D immersive performance than Nvidia will offer at that time, let alone higher for a given price point.

Set aside the cultural notion that VR is all about max 'grunt' graphics power, and what AMD is doing does line up. It's only moving the goal posts to say to switch gears on that now that pure grunt is quite high (if you have a highly elastic budget to pay for it).

Apple adding Infinity Fabric support to their driver stack is probably going to be quite substantive, as they have basically ignored CrossFire and SLI in past iterations. Pairing up is going to be decently linear, and swapping frame buffer material in real time substantially more tractable at reasonable resolution scales.
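
As a rough illustration of what that pairing could look like, here's a toy alternate-frame-rendering sketch. The types and names are hypothetical; this is not Apple's actual Metal or Infinity Fabric API.

```swift
// Round-robin frames across the two dies of a dual-GPU card (alternate frame rendering).
struct GPU { let name: String }

func renderFrames(count: Int, gpus: [GPU], render: (GPU, Int) -> Void) {
    for frame in 0..<count {
        let gpu = gpus[frame % gpus.count]  // even frames on one die, odd frames on the other
        render(gpu, frame)                  // finished frames get handed back over the GPU-to-GPU link
    }
}

let duo = [GPU(name: "Vega II #1"), GPU(name: "Vega II #2")]
renderFrames(count: 4, gpus: duo) { gpu, frame in
    print("frame \(frame) rendered on \(gpu.name)")
}
```

Whether the scaling is actually 'decently linear' depends on how cheap that frame hand-off is, which is exactly what the Infinity Fabric link is supposed to help with.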
 
Most of the VR coders I know rely on CUDA. Most of their tools are optimized for CUDA. Until that changes, nVidia owns that space. If Metal2 and/or other developments provide an attractive alternative authoring environment that could change, but until that actually happens...
 
Treating VR as basically the same as gaming goes overboard into the "if all I have is a hammer, everything looks like a nail" zone. To do it right means being smarter about rendering, not some never-ending cycle of "more" (bigger hammers).

So pretty much every VR app out there is written using game development environments (Unity / Unreal), and VR experiences are then more or less universally run within game engines. I'm yet to see, or hear of, anyone doing VR with AMD GPUs, and AMD is nowhere to be seen for high-performance, high-frame-rate, high-resolution, game-engine-dependent apps. But sure, Apple can just magically fix that, because although Microsoft and Nvidia have been super-focussed on extracting every last drop of performance from game engines since forever, there's still this huge inefficiency on the table that Apple can just "solve" by doing things differently.

That's up there with "the megahertz myth" in the world of comforting fantasies.
 
The cgmMP is not for ML/HPC unless Apple at least reinstates OpenCL/ROCm (HIP) and/or enables CUDA's ecosystem, or at least grows the Metal API to the feature level of CUDA.
 
On the earnings call he stated:

"We've been making the Mac Pro in the United States and we want to continue doing that. We're working and investing currently in the capacity to do so. We want to continue to be there."

The WSJ said Quanta would manufacture it, but perhaps Quanta will be doing the initial production run until the US production facility is ready, at which point it would shift to the US?

Or Quanta will be producing it, but at a US assembly facility like they do for the iMac?

"Wanting" to do something is different than doing it. Cook was speaking to/at you-know-who.

It's pretty clear by now that the mMPs are not being made in the U.S., even partly. We'll hear a lot more definitive P.R. fanfare if and when assembly in the U.S. becomes a reality again.
 
Does anyone know why the Modular Mac Pro is quoted as having slower SSD speeds compared to iMacPro?

From Apple's tech specs page
Modular Mac Pro - 2.6GB/s read — 2.7GB/s write
iMacPro - 2.8GB/s read — 3.3 GB/s write


I'm currently testing out a 10-core iMac Pro to see if it would work in place of a Modular Mac Pro. It's quite nice, but I don't like the glossy screen; I keep being distracted by seeing myself in the screen!

The multicore rendering of the 10-core iMac Pro is slower than the non-overclocked 8-core i9-9900K in my PC, which I wasn't expecting. Cinebench R20: iMac Pro 4520, PC 4950. So the 10-core iMac Pro is 91% of the speed of the i9-9900K. Weird, no?
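
For what it's worth, that 91% figure checks out against the scores quoted:

```swift
let iMacProR20  = 4520.0  // Cinebench R20 multi-core score reported above
let i9_9900kR20 = 4950.0
print(iMacProR20 / i9_9900kR20)  // ≈ 0.913, i.e. roughly 91%
```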

Can't wait for Modular Mac Pro benchmarks. Currently thinking of returning the iMacPro.
 
I had also noticed that lower SSD speed. Not a good sign.
Would it be the 256GB, single-module config? Not likely; Apple wouldn't advertise the worst case.

Fall is upon us, let's see when in the Fall we'll have availability info, and BTO options.
 
Does anyone know why the Modular Mac Pro is quoted as having slower SSD speeds compared to iMacPro?

From Apple's tech specs page
Modular Mac Pro - 2.6GB/s read — 2.7GB/s write
iMacPro - 2.8GB/s read — 3.3 GB/s write


I'm currently testing out a 10-core iMac Pro to see if it would work in place of a Modular Mac Pro. It's quite nice, but I don't like the glossy screen; I keep being distracted by seeing myself in the screen!

The multicore rendering of the 10-core iMac Pro is slower than the non-overclocked 8-core i9-9900K in my PC, which I wasn't expecting. Cinebench R20: iMac Pro 4520, PC 4950. So the 10-core iMac Pro is 91% of the speed of the i9-9900K. Weird, no?

Can't wait for Modular Mac Pro benchmarks. Currently thinking of returning the iMacPro.
Non-RAID 0 in the base config??

Stacked off of the PCH vs. CPU PCIe?
 
The multicore rendering of the 10-core iMac Pro is slower than the non-overclocked 8-core i9-9900K in my PC, which I wasn't expecting. Cinebench R20: iMac Pro 4520, PC 4950. So the 10-core iMac Pro is 91% of the speed of the i9-9900K. Weird, no?

It is a little weird, but it depends a lot on whether boost clocks are being reached. It sounds like the iMac Pro may not be able to boost as fully as your i9 build can, and the i9 does have a clock speed advantage that helps negate the extra cores. There's also the Coffee Lake vs. Skylake architecture at work there.

That said, synthetic benchmarks suggest a Hackintosh i9-9900K should be about as fast as the 10-core iMac Pro for multi-core. So it’s not completely off the rails that there are specific workloads that it could beat the iMac Pro in.

It’ll be a different comparison with newer Cascade Lake parts and hopefully better thermals, but the i9 still has a clock speed advantage vs the base 8-core Mac Pro.

It’s one of the problems with consumer vs workstation chips right now. The non-workstation models overlap with the low end workstation parts in performance. And the workstation parts start at hundreds more for the CPU to get extra PCIe lanes and ECC memory support.

For the folks that mostly just need PCIe for a GPU (or two) and a couple SSDs, it’s not a great bang for the buck. Especially if ECC memory isn’t a need. The consumer side has scaled up into the low end workstation space in terms of performance, making it more “do I need ECC? Do I need lots of PCIe?” than performance unless you are buying a monster setup.

I still think the tcMP with TB3, a 9900K and a Vega 56 / 5700 XT would make a nice “Mini Pro” machine. I’d buy one in a heartbeat right now.
 