Does it? Does a spec for Thunderbolt 4 / next-gen I/O even exist (honest question)? I wouldn't be surprised if the "Pro" display is limited to the Mac Pro, but it's still anyone's guess.
Maybe PCIe 4.0 based? They really should up the lanes to PCIe x8, or have a 4.0 link that can use eight PCIe 3.0 lanes.

PCIe 4.0 may not show up in AMD or Intel chips until 2020.
 
Skylake-X in the Mac Pro only goes up to 10 cores. Marketing-wise this wouldn't be great for Apple, so it makes sense for Apple to go for the EP variant.

If Skylake-X went over 10 cores then it could make sense.
 
Skylake-X in the Mac Pro only goes up to 10 cores. Marketing-wise this wouldn't be great for Apple, so it makes sense for Apple to go for the EP variant.

If Skylake-X went over 10 cores then it could make sense.

From the way Apple is talking, I would be really surprised if Apple stepped back to the enthusiast parts. The E5 series sounds like where they will stay (unless they do something like Ryzen, which would surprise me).
Wishful thinking that Apple is going to adopt UEFI in its entirety, so that Apple has the exact same boot environment as Windows and you can buy general-market Windows GPU cards for Macs. In other words, that Apple is going to bend over backwards to greatly enable the Hackintosh market. ..... Don't hold your breath.

All modern Macs already have it. They just don't have PCIe slots.
 
Apple dropped the word "computer" from their name a few years ago as they became the world's dominant smartphone maker. They have kept making and selling computers, but their focus has clearly been where the volume and margins are greatest. Sounds like a major conglomerate run by capitalists; no surprise there. What is galling is that these highly paid execs have so little foresight.

Now that the smartphone market has matured, Apple is at least making noises about re-engaging with the computer marketplace. Many of us on this forum would argue they waited way too long. Did none of these highly paid "experts" see that the growth rate in smartphones would inevitably flatten out? News flash: while many consumers are thrilled with the tablet/phone/laptop options we have today, there are still plenty of people who use computers for things that iDevices are not well suited for. Why just hand that market to HP/Dell/etc. on a silver platter? Poor margins, perhaps? That's an issue, no doubt, but Apple has proven many times over the last three decades that people will pay a premium for a better user experience. I don't believe that has changed.

I'd happily pay a couple grand over the cost of the box of components used in a 7,1 mMP for OS X, hardware/software optimization, and efficient design. That's Apple's value add in my book, and I'm happy to pay for it. Over the years I have been more productive with less downtime than my Windows comrades, enough so to easily wash out the higher initial cost.

Make a kick-ass computer with an intuitive UX, give me a rock-solid 3-year warranty, and charge me what it takes to make it a reliable professional tool. Make a reasonable profit and provide robust support, including enough coding resources to exploit the latest GPUs and graphics stacks.
 
The lackluster performance of the Mac Pascal driver does make it seem like something thrown together at the last minute. Could mean they're talking with Apple, or could mean nothing at this point.
 
Skylake-X in the Mac Pro only goes up to 10 cores. Marketing-wise this wouldn't be great for Apple, so it makes sense for Apple to go for the EP variant.

It wouldn't be Skylake-X, it would be Skylake-W, but the core counts there probably won't be much different.

What has historically been the EP variant is being split into two different sockets (and essentially two products). This has been on the Intel roadmap for years at this point. Circa 2015 or so:


[Image: Intel Purley server platform roadmap positioning slide]


http://www.cpu-world.com/news_2015/2015052701_Details_of_Intel_Purley_server_platform_leaked.html

Note that in 2016-2017 the 1- and 2-processor products are implemented on the same platform: Grantley-EP 1S and 2S (one/two socket). As of late 2017 (which will be Q3-Q4 '17) that splits into two separate platforms: Purley (primarily aimed at 2+ sockets) and Basin Falls, aimed squarely at 1-socket workstation solutions. Two separate sockets. Two separate PCH chipsets.

Purley has a bunch of stuff that doesn't make much sense on the vast majority of workstations: OmniPath, quad 10GbE support in the PCH. Intel is going to charge serious dollars for that stuff. If you are not using most of it in your system, then you'll be wasting substantial money, because Intel is going to make you pay for it.


Socket R (https://en.wikipedia.org/wiki/LGA_2066) is really far, far closer to what Apple has been using all along. Before the split there were typically 2-3 dies that comprised the EP products: one "low core count" (LCC) and, of late, a "medium core count" (MCC) and a "high core count" (HCC). All three have pictures here (http://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/2), but this is the medium and the low.

[Image: Xeon E5 v4 medium (MCC) and low (LCC) core count die shots]



There is a "layer cake" stack of 5 core rows bound together with some token-ring-like networking loops. The LCC die has been the typical Xeon E5 1620-1680 range (with direct mappings of these dies back to the -E Core i7 HEDT implementations, using the exact same die with different functionality turned off/on).


If Skylake-X went over 10 cores then it could make sense.

There is a very good chance there are not going to be any reasonably priced 4-6 core variants in the E5 2600 v5 range. (That x6xx may change because of the new socket, perhaps to 2700, but in that case there won't be a 2600.) The medium and high core count dies will be the only options they build. The LCC die would be separate and have a substantively different uncore (memory, PCIe, etc.) implementation, since it targets a very different socket (no OmniPath, no very high number of DIMM memory banks, etc.).


Having nothing at the smaller end makes even less sense marketing-wise for Apple than missing some tech-porn-lust high core count at the top. 10 cores at a 10% higher clock rate is better than 12 cores at a clock rate 20% lower than that higher one, for the vast majority of the Mac Pro user base.
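To put rough numbers on that (illustrative figures from the sentence above, assuming perfect multi-core scaling, which is the best case for the 12-core part):

Code:
10 cores x 1.10 relative clock                 = 11.00 aggregate throughput units
12 cores x (1.10 x 0.80) = 12 cores x 0.88     = 10.56 aggregate throughput units

Even in that best case the higher-clocked 10-core comes out ahead, and real Mac Pro workloads scale across cores far less than perfectly.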


One reason why Intel will probably stick with a 5-layer cake for the LCC versions, instead of moving to a 6-layer one (maxing out at 12 cores), is that they'll want to get to 10nm faster with a smaller die.

Intel is on track to split the MCC and HCC designs into smaller chunks and use an external ASIC/FPGA/custom die to glue those back together again: Embedded Multi-Die Interconnect Bridge (EMIB), detailed at the end of this article (http://www.anandtech.com/show/11115...n-core-on-14nm-data-center-first-to-new-nodes). Intel really doesn't need 6-layer designs going forward. Stacks of 5 (up to 10 cores) bolted together to form 20, 30, 40 is enough for the higher core counts. That is what they were primarily doing internally anyway, until they started to hit the limits of large die sizes at smaller geometries.

Second, even on the 14nm++ process that v5 (Skylake) will use, the 5-layer die is going to be cheaper to produce than a 6-layer one. AMD's Zen implementation is a credible threat; that puts heat from cost pressure on their workstation product offering. A bigger die isn't going to help with that. 14nm++ gets some moderately high 3D packing of the transistors, but there is no overall huge shrink to pack 2 more cores and their associated 1.5MB worth of cache into the same amount of space.

Intel does the 6-layer stacks on the HCC die because they charge "nosebleed" prices for those products ($2000+). It is a humongous die with likely lower yields.


So HP and Dell will probably have 2P large workstations that they sell with only one socket filled. Some of those will be in the 15-core range, but that is highly likely larger than Apple is aiming at. However, their Basin Falls 1P workstation Xeon E5 1xxx v5 products are going to have the same core count cap as the Apple product will. There is no "sky is falling" marketing issue there at all.
From the way Apple is talking, I would be really surprised if Apple stepped back to the enthusiast parts. The E5 series sounds like where they will stay (unless they do something like Ryzen, which would surprise me).

Xeon E5 16xx and Core i7 HEDT are essentially the same die, with some parts turned off/on depending upon the market segment. The core counts are a fundamental property of the die. They aren't going to radically change.

In the past, the E5 1600 and 2600 parts shared the same socket. That is history, not the future. Intel has been laying the roadmap for that split for years at this point. I highly doubt Apple is going to be blindsided by it, as their NDA briefings are much better than the stuff that leaks out onto the Internet (materials in which the split is clearly outlined).


All modern Macs already have it. They just don't have PCIe slots.

"... If you have an Intel-based Apple system, it has a UEFI firmware, though there are special considerations involved in installing and booting Fedora on Apple systems due to the special Apple boot interface built on top of UEFI. ... "
https://fedoraproject.org/wiki/Unified_Extensible_Firmware_Interface

There is Apple stuff on top of the base layer. That's what I meant by implementing it in its entirety. But yes, between the early boot stages and the transition into handing things off to a bootstrapping macOS, you do start to leave EFI directly and get into OS-specific loaders. That's where the missing boot screens show up.

Apple isn't really proactively blocking folks with their UEFI/EFI (e.g., they could have turned on Secure Boot to throw up active roadblocks), but I doubt they are going to bend over backwards to make the OS bootstrapping process more Windows-like to make the cards easier to use.
 
Intel launching Basin Falls sooner than expected:
https://videocardz.com/68219/intel-x299-basin-falls-platform-to-launch-in-june
Maybe SKL-W will come sooner too.

SKL-W sooner.... doubtful. From the article:

"...
The new date for Skylake-X processor production is 25th week, which is 4th week of June (19th – 25th).

The date for qualification samples production has not changed and it’s still 22nd week (May 29th – June 4th), ..."

So the sample production didn't change at all and the go-live date moved up. This smells like sales smoke and a bit of Ryzen fire: "ship it; figure out the details later" mode. The Xeon options, though, will probably play it safer and stick with the original testing schedule and release target dates. In most real pro contexts, "working correctly" is better than earlier "got it first" bragging rights.

That gap shrinkage is going to draw in more speculators too, so the initial demand bubble on the Core i7 variants is probably going to be on the high side. Again, keeping the original schedule for the Xeon E5 version puts some space between that initial demand bubble and the Xeon variant's one (i.e., fewer shortages on the second round of bubble).




Won't help the new nMP though, since it's still 1+ years away anyway.

Indirectly. If the Skylake-X (and -W) rollout goes smoothly, perhaps the early engineering samples for Cannonlake-W will roll out early (and it should be socket compatible). Apple needs a working prototype board, and if there are stable SKL-W boards to work with then they can make more progress without working around the hiccups and defects of early-access boards/CPUs. After all, part of the failure last time was accurately accounting for thermal workloads under real-world working conditions with real parts. Apple does need real working parts.
 
There is an even deeper problem. One reason why the dual-GPU solution found a limited audience is that the implementation Apple did was a "copy, then work, then copy back" GPGPU solution. For real "general purpose" use (the GP in GPGPU) you need a flatter, more uniform memory access solution. Apple pragmatically stopped at OpenCL 1.2, which stops way short of that. If you want to broaden the use cases, then you need more of an architecture where the local GPU RAM is used more as a cache (or general access store) than the model where "copy, work, and perhaps copy again" dominates.

I agree. There are some interesting things Apple could do to make for some really well-integrated hardware. They could utilize AMD Vega's ability to extend VRAM with an SSD. They could use Intel's Optane to blend traditional storage with RAM. They could also use either AMD's or Nvidia's proprietary solutions for communicating directly between GPUs (if Apple decides to include multiple GPUs). They also need to balance this integration with future upgradeability, though, so something that ties them to, say, a specific graphics vendor would be a problem.
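To make the "copy, then work, then copy back" point above concrete, here is a minimal OpenCL 1.2 host-side sketch in C (illustrative kernel and variable names; error handling omitted). The kernel can only touch the GPU-side buffer, so every dispatch pays two PCIe trips; OpenCL 2.0's shared virtual memory (clSVMAlloc) is the kind of flatter model being described, and Apple stopped short of it.

Code:
#include <stdio.h>
#include <OpenCL/opencl.h>   /* macOS header; <CL/cl.h> elsewhere */

int main(void) {
    const char *src =
        "__kernel void twice(__global float *v) {"
        "  v[get_global_id(0)] *= 2.0f;"
        "}";
    enum { N = 1024 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "twice", NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof(data), NULL, NULL);

    /* 1. copy: host RAM -> GPU VRAM across PCIe */
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    /* 2. work: the kernel only ever sees the VRAM copy */
    size_t global = N;
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* 3. copy back: GPU VRAM -> host RAM across PCIe again */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[3] = %f\n", data[3]);  /* 6.0 */
    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}

On macOS this builds with: cc demo.c -framework OpenCL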
It wouldn't be Skylake-X, it would be Skylake-W, but the core counts there probably won't be much different.

Wasn't there a rumor a while back that said Skylake-W would have roughly the same core counts as Broadwell-EP? Given that Intel's strategy is to offer every possible configuration to every possible market, I would think there would be a case for a high core count (>10 cores) single-socket workstation/server.
 
...
Wasn't there a rumor a while back that said Skylake-W would have roughly the same core counts as Broadwell-EP? Given that Intel's strategy is to offer every possible configuration to every possible market, I would think there would be a case for a high core count (>10 cores) single-socket workstation/server.

I strongly suspect there was some implicit qualification there that got lost in the rumor: Skylake-W would have roughly the same core counts as the (low core count die) Broadwell-EP. There are really sub-products (dies) inside the EP designation. Some of those have always been designated as single-socket solutions. Skylake-W is a single-socket solution. Matching Skylake-W to the E5 1600 v4 series would be quite normal.

There are more than just x86 cores in the "CPU" dies. Major differences in I/O mean major differences in the "uncore" portion of the die.

There is a small chance that Intel takes the medium core count die, downshifts the uncore to a new die, and stuffs that in the Skylake-W package too. I'm doubtful of that, because I don't think the volume would be there for the Skylake-W product to support two different die designs. The large and medium core count dies are primarily targeted at 2P solutions. With the version 1-4 products sharing the same socket, some "extras" could be used in 1P boards, but that was probably just "extra gravy" for Intel. It is doubtful because Intel didn't do that in the v2 series: the Mac Pro's 12-core is an E5 2697 v2, and Intel didn't bother to repackage it as an E5 1697 v2.


There is almost no chance Intel was going to do three different dies for the Skylake-W lineup. The OmniPath and the extra pins for the higher I/O needed to keep the max core counts fed with data buy you less once you cut those pins off, even if you could stuff the die inside the smaller package. The major thrust of Purley is memory and bandwidth increases. At some point there are so many cores that you hit choke points; the cranked-up socket pin count is in part to address that. The -W socket doesn't have it. So why do a mismatch?

As I said before, maybe a 6-layer cake design so it could max out at 12, but that would be lower margins for Intel. Best case it would be a 2-core increase... not the whole EP lineup delta increase.


The competitive market is different now. If mainstream desktop Core i5-i7 moves up to six cores (the GPUs stop sucking up most of the new transistor budget), then the workstation class is going to need to move faster. AMD getting their act together only increases that more.

Also, it doesn't make as much sense to couple the PCH chipsets together with data-center-targeted systems. All-network storage is extremely far from "evil" in a modern data center for most systems, and in the others it is more like "way more drives than will fit on a whole desk" syndrome. A 10GbE baseline and all-you-can-afford InfiniBand for storage place very different I/O demands on the 2+ socket data center solutions.
 
I strongly suspect there was some implicit qualification there that got lost in the rumor: Skylake-W would have roughly the same core counts as the (low core count die) Broadwell-EP. ...

This is the rumor I was referring to. It alleges that Skylake-W will include HCC dies. While I doubt it goes as high as the rumored Skylake E5-2XXX series' 32 cores, it doesn't seem unreasonable that Skylake-W could see the medium-cut die and maybe even the biggest die, especially if AMD is going to be competitive with its Ryzen workstation variant.
 
This is the rumor I was referring to. It alleges that Skylake-W will include HCC dies. While I doubt it goes as high as the rumored Skylake E5-2XXX series' 32 cores, it doesn't seem unreasonable that Skylake-W could see the medium-cut die and maybe even the biggest die, especially if AMD is going to be competitive with its Ryzen workstation variant.

There may not be an HCC die. OmniPath to couple two 16-core MCC dies together in a multi-chip module would be an easier way for Intel to drop their 32-core monster version. AMD is on the same path with theirs, from the accounts I've seen: AMD's Naples probably isn't a monolithic die. I also don't really buy Naples as primarily being their "workstation" offering. Ryzen 7 is more so aimed at the Xeon E5 1600. Bandwidth-wise it comes up short, but that seems to be why Naples is being thrown in the mix. Naples is targeted at 2P just like the E5 2600 v5 will be.

If that is the case, then the shrinking number of dies is suggestive of exactly why there may not be many variations.

Perhaps they will, but if so I suspect the $/core will be in the painful range. If you simply need 20+ cores there will be cheaper ways to get there. I don't think the Mac Pro is going to follow that path, nor does it have to. Computational server-farm boxes can be leveraged via a workstation Mac for a large number of users and use cases. And Intel's data center marketing data probably points to the same thing.
 
goMac was saying that fingers at Apple are pointed at AMD for very hot GPUs.

You want to know why? AMD has not given Apple any engineering samples of Vega GPUs (apart from Raven Ridge APU engineering samples), and Polaris 10XTX/20XTX or whatever they are called are ~200W GPUs. Some versions of the GPUs based on those chips even have 8+6 pin connectors and a 250W TDP.

That is outrageous for mainstream chips.
 
It appears that the latest OpenCL software implementations are as fast as CUDA.

The problem? A LOT of work required by the developer to optimize it.

Here is an example.
https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL
Don't get your hopes up too high. The amount of work software devs have to put into it is proportionally bigger than just implementing CUDA. It is worth it, but it also costs time and money.
Quote from the site, in context:
AMD on OSX (outdated section, to be updated)
AMD team who's working on OSX drivers for El Capitan (OS X 10.11) did really nice work on improving the driver which is now capable of compiling and running OpenCL megakernel. The following features are supported:

  • Hard and rough surface BSDF
  • Transparent shadows
  • Motion blur (camera, object, deformation)
  • Hair
Nothing special is needed for using OpenCL on OSX now, just go to the user preferences and enable OpenCL compute device.

The following features are to be investigated for inclusion into next Blender release:

  • Correlated multi jitter noise pattern
  • Volume scatter/absorption
Other features requires a bit bigger changes and will happen in one of the later releases.
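Worth noting why so much of this burden lands on the driver team: OpenCL kernels ship as source and are compiled by the GPU driver at run time, so features like the megakernel mentioned above only work if the driver's compiler can handle them. A minimal host-side sketch in C of that runtime-compile step (illustrative kernel and names; a real renderer's kernel is vastly larger):

Code:
#include <stdio.h>
#include <OpenCL/opencl.h>   /* macOS header; <CL/cl.h> elsewhere */

int main(void) {
    const char *src =
        "__kernel void scale(__global float *v, const float k) {"
        "  v[get_global_id(0)] *= k;"
        "}";
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

    /* The driver's compiler runs here; a weak compiler chokes on big kernels. */
    if (clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) != CL_SUCCESS) {
        char log[4096];
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        fprintf(stderr, "build failed:\n%s\n", log);
    }
    clReleaseProgram(prog); clReleaseContext(ctx);
    return 0;
}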
 
I have never understood why these concepts always include TB 2 along with TB 3.

Because they are mostly composed by folks who are non-engineers, and far more "conceptual artists" than folks who really know something about the real constraints (thermal, RF (FCC compliance), power, etc.).

Almost every concept I've seen over the last 3 days has more than a few major reality problems with it.

Having TB 2 ports just takes away real estate. There are $10 dongles that can be used in TB 3 ports for TB2 devices. Or you can simply daisy chain a TB 2 from the TB 3 port.

Like this. There are no $10 TB2 to TB3 converters, and there aren't going to be any time soon either, except perhaps used. You need a chip from Intel (kind of a specialized TB controller) and more electronics than just a simple cable to serve as an adapter. If you use a decent-quality bill of materials, you aren't going to get there at $10 and also pay for the other manufacturing, sales/distribution, R&D, etc. costs.

You are thinking of a mini DisplayPort to Type-C DisplayPort cable/adapter. That may cost $10, but it also isn't TB.
 
I have never understood why these concepts always include TB 2 along with TB 3.

Having TB 2 ports just takes away real estate. There are $10 dongles that can be used in TB 3 ports for TB2 devices. Or you can simply daisy chain a TB 2 from the TB 3 port.

I am looking for a dock with Thunderbolt 2 or Mini DisplayPort for the LED Cinema Display. From all I have read, the Apple adapter won't allow you to run an LED Cinema Display from Thunderbolt 3. Even the spare port on the Thunderbolt Display won't work with an LED Cinema Display. Right now my workaround (for running off a MacBook Pro with Thunderbolt 1) is to run the Thunderbolt Display to the MacBook Pro, a CalDigit T3 RAID to the spare port on the back of the Thunderbolt Display, and the LED Cinema Display to the RAID... way more mish-mash than I'd like, but it works... I've yet to find any evidence/experience of anyone running an LED Cinema Display from a TB3 connection, at least not without several adapters down the chain...

I snatched the LED off eBay this last fall when they departed the display business. It came down to cost and cosmetics of the devices; I was looking at both Thunderbolt and LED Cinema Displays, since they are the same apart from the logic board and I/O. Sitting on the desk, you wouldn't know which was which.
 
I have never understood why these concepts always include TB 2 along with TB 3.

Having TB 2 ports just takes away real estate. There are $10 dongles that can be used in TB 3 ports for TB2 devices. Or you can simply daisy chain a TB 2 from the TB 3 port.

The adapters aren't quite that cheap, but another problem is that once you've moved off of TB2 you're just going to have a bunch of wasted PCIe lanes.
 
Like this. There are no $10 TB2 to TB3 converters, and there aren't going to be any time soon either, except perhaps used. ...

Ok, I was under by a small amount.

:p

$73 from Amazon.

I have one and it works fine.


https://www.amazon.com/StarTech-com...lt+3+to+Thunderbolt+Adapter+-+Windows+and+Mac
 
goMac was saying that fingers at Apple are pointed at AMD for very hot GPUs.

You want to know why? AMD has not given Apple any engineering samples of Vega GPUs (apart from Raven Ridge APU engineering samples), and Polaris 10XTX/20XTX or whatever they are called are ~200W GPUs. Some versions of the GPUs based on those chips even have 8+6 pin connectors and a 250W TDP.


Vega is not a "savior" low-TDP GPU. The primary reason the "Vega" subset in Raven Ridge has a lower TDP is that many of the aspects AMD crows about in the Vega dog-and-pony show are gone, and so are the clock rates: HBM, the memory cache, etc. It is a much smaller and substantially different implementation of the same underlying microarchitecture, but it isn't in the same class as the Polaris 10 or 20 implementations.

As for Polaris 20 XTX... that "XTX" may be as much bluster and puffery as the "20".

As I posted in another thread, though, it is kind of puzzling, if Apple was trying to make a Polaris 10/20 work in the Mac Pro, why they would bury it. Something in the RX 580-570 range would fit thermally and not be a thermal mismatch to the E5 v2. It probably would be more stable under high workloads than the D500 and D700 they are using now. They could get off the OpenCL 1.2 constraints in the hardware implementation.

It would not solve all of the problems they discussed at their roundtable, but at least they would have done something. Even the 2012 Mac Pro was a firmware bump and some more CPU options. What they have done now is a whole lot of nothing, plus announcing they are going back down into the rabbit hole again for another year or so. They could at least demonstrate that they can execute on a "Plan B". [There had to be some kind of entry and/or mid-level card they were working on. No way none of those fit the current configuration either. Maybe the mid-range, and definitely the top end... sure. But at least one of the options they were working on fit to a decent extent.] What Apple needed for the last 6-9 months was something to limp along into the future on while they figured out what they really wanted to do. "We got nothing, and back to the rabbit hole" only really magnifies the problem.
 