Happy with any tuning in Metal and ML for macOS.

AR is going forward, but nothing for Mac atm. Maybe next year... :rolleyes:

Very few new features for Mac this year. I hope it means macOS Mojave is more stable than the previous ones.
I think WWDC 2018 made the topic of this thread more relevant than it was before.
"macOS Mojave will be available this fall as a free software update for Macs introduced in mid-2012 or later, plus 2010 and 2012 Mac Pro models with recommended Metal-capable graphics cards."
--
"Deprecation of OpenGL and OpenCL
Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders."

https://developer.apple.com/macos/whats-new/
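(As a quick illustration of what "Metal-capable" boils down to in code, here is a minimal Swift sketch of my own, not Apple's actual compatibility check: just ask the system for a default Metal device.)

import Metal

// Minimal sketch: ask macOS for the default Metal device.
// If this returns nil, the machine falls outside Mojave's
// "Metal-capable graphics" requirement quoted above.
if let device = MTLCreateSystemDefaultDevice() {
    print("Metal-capable GPU found: \(device.name)")
} else {
    print("No Metal device available on this Mac.")
}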
 
I was counting the DMI, but my point was that the iMac Pro's PCIe lanes are underutilized, and if it is using only 28 instead of 32 lanes, even more so. There just isn't enough room inside the iMac case for anything more.

It is highly likely that this has relatively little to do with room and more to do with a "fall back" (plan B) option Apple kept in mind. The Core X (i7/i9) variants range in PCI-e output from 28 to 40 lanes. For example:

https://ark.intel.com/products/1237...X-X-series-Processor-11M-Cache-up-to-4_30-GHz

Intel kneecaps those at 28 lanes. In case Apple couldn't get Intel to do the somewhat custom Xeon W variants at the price points they wanted, they may have designed the iMac Pro so it could fall back to "off the shelf" Core X. Apple doesn't particularly want unlocked CPUs, but they also don't want to pay more than they have to.

There is room, with a slight expansion making the main logic board taller, to put two M.2 SSDs in there (instead of Apple's split single logical SSD implementation) and just skip the T2 chip (two SSDs would drive up the lane count). It isn't a space thing. Getting a 2nd GPU in there to soak up an x16 would be more of a space thing (no room for appropriate cooling and layout).

I don't think that Apple will leave room for a second GPU, based on comments from them that dual GPU in the nMP was a mistake and that the market demands a single powerful GPU. You can look at my argument earlier in this thread for further explanation.

Apple really didn't say that. Apple basically said that the thermal core was a "mistake" due to the mismatch between the CPU TDP and the higher-end GPU TDPs; the three loads no longer approximately match. That is implicitly something they were not going to repeat. Also, making dual GPUs mandatory is something that didn't pan out. That is fundamentally different from saying dual GPUs aren't useful and that they don't want to enable them in the next version. From the transcript, to back those up: https://techcrunch.com/2017/04/06/t...-john-ternus-on-the-state-of-apples-pro-macs/

Duals as the default config not being a good match (the user base is broader than that):
"... I think one of the foundations of that system was the dual GPU architecture. And for certain workflows, certain classes of pro customers, that’s a great solution. But, to Phil’s point, ‘Pro’ is so broad that it doesn’t necessarily fit all the needs of all the pros. ..."

Standard config as a single GPU, but leaving room for thermal capacity growth:
"... Being able to put larger single GPUs required a different system architecture and more thermal capacity than that system was designed to accommodate. So it became fairly difficult to adjust. ..."

The thermal core and equal wattage (and reinforcement that the default config was dual):
".. The triangle you mentioned, the thermal core, is designed to have three fairly similar loads – similarly balanced in power. And so the overall size of the product and the fan, that defines the overall thermal capacity for the enclosure. And we didn’t see as much take up in dual GPUs and we would have expected. ..."

Software limitations of some packages to a single GPU:
"... Those can be in VR, those can be in certain kinds of high-end cinema production tasks where most of the software out there that’s been written to target those doesn’t know how to balance itself well across multiple GPUs but can scale across a single large GPU. ..."
[More than a few of the machine learning packages can, though. ML is a hot topic with Apple, and it would be more than loopy for Apple to partially shoot the next Mac Pro in the head by blocking the option (not standard, just an option) of a second GPU. Similarly, Apple just demoed at WWDC that DaVinci Resolve scales almost linearly with additional GPUs using eGPUs. What if it could scale to two with no additional hardware? Wouldn't that be a very good idea? They are making the point that multiple GPUs are useful to some.]
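(For what it's worth, discovering extra GPUs, eGPUs included, is already straightforward from Metal on macOS. A minimal Swift sketch of the enumeration, my own illustration rather than anything from Resolve or Apple's sample code:)

import Metal

// List every Metal device the system exposes; this is the kind of call an
// app uses to find a second internal GPU or an attached eGPU.
for device in MTLCopyAllDevices() {
    // isRemovable is true for eGPUs; isLowPower flags integrated GPUs.
    print("\(device.name)  removable: \(device.isRemovable)  lowPower: \(device.isLowPower)")
}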


There is nothing there to exclude dual GPUs; there are some customers who want/need them. It is just that dual GPUs for 'everyone' didn't work with the MP 2013. Notice that the iMac Pro has a similar fixed GPU count and similar thermal limits to the MP 2013. To differentiate, the new Mac Pro would likely remove that "painted into a corner" constraint in a substantive way.

IMHO, whether Apple enables a 2nd GPU has more to do with whether they are sticking with the strict constraint that the next Mac Pro be a purely desktop machine or not (a small, approximately Mac Mini-sized desktop footprint).


Apple may do a similar thing in the Mac Pro to save on PCIe lanes, or perhaps hang it off the PCH as you suggested.
That x4 PCI-e v3.0 ---> two x4 PCI-e v2.0 is more of a "switch down shift", similar to what they did in the Mac Pro 2013, where they took x8 PCI-e v3.0 and switched it down into four x4 PCI-e v2.0 leads (and used three of them for Thunderbolt). It isn't so much about saving lanes as it is about matching PCI-e version bandwidth appropriately: there is v3.0 coming off the CPU and the things attaching to it are v2.0.
[Technically the Aquantia AQC107 can run in either v3 or v2 mode, but v3 is essentially overkill: x4 PCI-e v2.0 is 20 Gb/s raw, which is enough to run two 10GbE ports, while x4 PCI-e v3.0 is 32 Gb/s, so overkill.]
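(Back-of-the-envelope, using raw signalling rates and ignoring 8b/10b vs. 128b/130b encoding overhead; a quick Swift sketch of my own arithmetic, not anything from a spec sheet:)

// Raw PCIe signalling rate in GT/s per lane, by version.
let gtPerLane: [String: Double] = ["v2.0": 5.0, "v3.0": 8.0]

// Raw link rate in Gb/s for a given version and lane count.
func rawGbps(version: String, lanes: Int) -> Double {
    (gtPerLane[version] ?? 0) * Double(lanes)
}

let dual10GbE = 20.0                                   // two 10GbE ports, Gb/s
print(rawGbps(version: "v2.0", lanes: 4))              // 20.0, just enough
print(rawGbps(version: "v3.0", lanes: 4))              // 32.0, comfortable headroom
print(rawGbps(version: "v3.0", lanes: 4) >= dual10GbE) // true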


There are strong marketing reasons for Apple to include 6 USB-C ports: it is 2 more than the iMac Pro, which is two more than the standard iMac. Also, 6 ports would match the Late 2013 Mac Pro, which had 6 TB2 ports. The Mac Pro 2013 had 6 TB ports because 1-2 ports were expected to be used by a monitor, but Apple is pushing USB-C for monitors as well. Sure, 6 ports may be poor bandwidth usage, but it is good marketing. HDMI may very well be on there, but Mini-DP surely won't, because Apple combines USB-C, TB3, and DisplayPort all together over USB-C.

No, there aren't strong marketing reasons.

Apple isn't pushing USB Type-C monitors. Thunderbolt v3 monitors, perhaps, but USB ones? No. And if they are detaching the monitor because the customers for these systems put a high premium on monitor choice... it doesn't make any sense to then backslide and limit the choices. Marketing would be looking at which monitors are likely to be hooked to these new systems. Marketing is not the 'tail wagging the dog' of trying to make the Mac Pro drive Thunderbolt Display docking-station sales.

Marketing, real marketing, is identifying what folks' needs are and producing products and services that match those needs. Marketing as a euphemism for sales-pitch spin is more about creating smoke. Six TB ports did not save the Mac Pro 2013. Saying that the next Mac Pro needs 6 TB ports because the last Mac Pro 2013 had 6 ports is pure hooey (even more so when a substantive portion of the target audience has moaned and groaned about the MP 2013 design choices for years). That isn't real marketing. The Mac Pro 2013 having six TB ports was dubious too. [I outlined earlier the arm flapping about monitor needs that appeared shortly after the MP 2013 debut. Just as dubious now as it was then.]

One of the factors is folks' commitment to the sunk-cost equipment they already have. If they have monitors with DisplayPort/mDP connectors and they already have miniDP-DP or miniDP-miniDP cables, then it is a marketing miss to send them out to buy new cables when your competitors don't. I know Apple wants to pull folks into the Type-C future, but one of the reasons to have a Mac Pro (in the context of also having an iMac Pro) is that there is a substantive group of people who "don't want to".

If Apple wants to bump the number of Type-C ports on the Mac Pro, they could put two USB-only Type-C ports on the front [or one Type-A and one Type-C on the front]. That would make them easily distinguishable from the four Type-C Thunderbolt ones on the back [the four Type-A ports, presumably on the back, are visually distinguishable]. The Mac Pro would 'win' the relatively lame Type-C port-count war with the iMac Pro, since that machine has no front ports (it could win both the Type-C and Type-A counts if it split the mix on the front). Frankly, there are going to be folks grumping about front ports anyway if there are none. However, it is highly doubtful that some Type-C port count is going to win a tech-porn port contest in any substantive numbers for the Mac Pro if it is as limited and closed as the iMac Pro.

If Apple adds a single standard x16 PCI-e slot to the Mac Pro, it will be far more differentiated from the iMac Pro than by any port-count pissing contest. That's what will separate the two.

Six TB ports don't make sense, even less so with TBv3. There is no other system vendor out there with more than two, let alone six. The majority of the folks who are holding out for a revised Mac Pro are clamoring for more stuff inside the system, not outside. So an M.2 slot or a standard x4 slot would fill more of a marketing need than continuing to agitate those folks with ports they didn't ask for. Honestly, who is actually using 6 TB ports as Thunderbolt ports? What is their relative size in the remaining Mac Pro market?


That is sensible, and I am not completely versed in how Apple distributes all their PCIe lanes. What I do know is that the previous two Mac Pros used all available PCIe lanes and then some, while the iMac Pro doesn't even use half, and that is because its form factor doesn't allow any more to be used. We already saw a RAID 0 storage solution used in the iMac Pro, and it seems like Apple may expand on that idea in future Macs, including the Mac Pro 7,1. Perhaps Apple may come up with some other novel storage solution that will take up a few PCIe lanes.

Again, the iMac Pro form factor is likely not the reason behind stopping at exactly 28.

It isn't PCI-e lanes that are the major brouhaha in the Mac Pro product space; it is allowing at least some standard user PCI-e card usage (which in some cases could optionally be a second GPU card; for other folks, other cards).

There is no RAID 0 solution in the iMac Pro. There is one and only one SSD there. For whatever reason, Apple spread the physical implementation of that single logical SSD across "dumb" NAND cards. If you look at the internal schematic, it is "STG PCIe", not PCI-e, that connects those dumb cards. The cards have no SSD controller on them, so they are not distinct drives. Other SSDs have separate feeds to separate groups of NAND chips while using just one logic board; technically those are not RAID 0 either. All modern SSDs use some variation of RAID-like striping (multiple access streams) when reads and writes land in the same range. What Apple did in the iMac Pro is just laid out physically differently; it is not different in a conceptual sense.
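(To make the "multiple access streams" point concrete, here is a toy Swift sketch of one controller striping logical pages across NAND channels. Purely illustrative on my part, not how Apple's or anyone else's controller actually works:)

// Toy model: a single logical SSD whose controller stripes pages across
// several NAND channels; the iMac Pro merely puts some of those channels
// on separate daughter cards.
struct ToySSD {
    let channels: Int
    func channel(forPage page: Int) -> Int { page % channels }
}

let ssd = ToySSD(channels: 4)
for page in 0..<8 {
    print("logical page \(page) -> NAND channel \(ssd.channel(forPage: page))")
}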

Apple could use the T2 (or an incremental T3 bump) as the boot solution, but they aren't buying anything by ignoring standards and M.2 for a secondary storage slot on a machine where users can take off the cover and get inside. Same thing with a second PCI-e slot: even if Apple needs a custom GPU slot for the main GPU card, that second x16 PCI-e slot doesn't need to be tightly coupled to it. Unnecessary coupling was the core problem with the Mac Pro 2013. The 1st and 2nd GPUs don't have to be tightly coupled to one another any more than they need to be to the CPU.
Happy with any tuning in Metal and ML for macOS.

AR is going forward, but nothing for Mac atm. Maybe next year... :rolleyes:

Metal doesn't run on Macs? The only thing Macs lack in the standard configuration is non-FaceTime cameras to drive AR.

ML doesn't work on macOS? I'd wait until all the WWDC sessions are done before proclaiming that. Again, there is even less reason why it wouldn't also run on the Mac (you don't even need a camera, and Siri (natural language) is more than present on the Mac).
 

Sorry, there must have been something lost in translation. Metal sure does run on macOS, and so does Core ML, and I was happy that there are some new tweaks coming with Mojave to improve both. Why would Apple deprecate OpenGL if Metal were not working? And Core ML 2 was mentioned among other things, so ML is definitely working on the Mac platform.

"Create ML
Create ML is a new technology for creating and training custom machine learning models on your Mac. Create ML works with familiar tools like Swift and macOS playgrounds to make it easier to train your own models.

For information about getting started with Create ML, see the Create ML developer documentation."
https://developer.apple.com/macos/whats-new/

https://developer.apple.com/documentation/coreml
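(A very rough sketch of what that playground workflow can look like, assuming a folder of labeled images; the paths, labels, and model name here are my own placeholders rather than Apple sample code:)

import CreateML
import Foundation

// Hypothetical folder whose subfolders ("rose", "tulip", ...) are the labels.
let trainingDir = URL(fileURLWithPath: "/Users/me/TrainingImages")

do {
    // Train an image classifier on the Mac; Create ML does transfer learning
    // on top of a built-in feature extractor rather than training from scratch.
    let classifier = try MLImageClassifier(
        trainingData: .labeledDirectories(at: trainingDir))

    // Write out a Core ML model file for use in an app.
    try classifier.write(to: URL(fileURLWithPath: "/Users/me/Flowers.mlmodel"))
} catch {
    print("Training failed: \(error)")
}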

And I was expecting some kind of VR/AR/UI framework for macOS, but its time has not yet arrived. Maybe I am a few years too early with my expectation.
 
The Mac Pro needs 6 TB ports because one channel is eaten by display output. On the iMac Pro, the built-in screen does not eat up a channel.

Create ML is the biggest joke of the day. Training a NN on an iMac Pro alone? Without Python?

FYI, ML training is insanely compute intensive; without rigs with dozens of GPUs, countless TB of RAM, and hyper-fast interconnects, your ML model will look like an amateur job.

Create ML may be good for teaching ML training basics, but it is unfit for serious development efforts, as it doesn't have access to compute farms (either public or private). So the best you have to train an ML model is an 18-core Xeon plus a derated Vega 64 GPU. Meanwhile, Android/Google developers have access to actual GPU farms as well as multi-GPU hardware (Nvidia-based, too), the latest TensorFlow development, plus tons of Python/Java/Kotlin/GoLang to support and extend model training and deployment.

And I was expecting some kind of VR/AR/UI framework for macOS, but its time has not yet arrived. Maybe I am a few years too early with my expectation.
Apple is getting efficient at letting trains like this leave the station without them.

Why would Apple deprecate OpenGL if Metal were not working? And Core ML 2 was mentioned among other things, so ML is definitely working on the Mac platform.

Apple's ML 1/ML 2 can't be trained on a Mac; while you can export models to ML 1 or 2, no one is training ML models on Macs.
 
Sorry, there must have been something lost in translation. Metal sure does run on macOS, and so does Core ML, and I was happy that there are some new tweaks coming with Mojave to improve both. Why would Apple deprecate OpenGL if Metal were not working? And Core ML 2 was mentioned among other things, so ML is definitely working on the Mac platform.

I didn't get a chance to see the "State of the Platforms" talk on the video feed, but there are web pages for macOS 10.14:
https://developer.apple.com/macos/

On "Learn about Metal" link from there (https://developer.apple.com/metal/ )
".. Further evolving its support for GPU controlled pipelines, Metal 2 in iOS 12, macOS Mojave, and tvOS 12 enables the GPU to construct its own rendering commands. Now complete scenes can be built and scheduled with little to no CPU interaction, freeing the GPU deliver maximum performance and minimizing interaction with the CPU. ... "

On the eGPU tab link from there (https://developer.apple.com/egpu/):
" Metal 2 provides powerful and specialized support for Virtual Reality (VR) rendering and external GPUs. "


I don't think Apple is particularly leaving out AR on the Mac any more than VR. They are allowing 3rd-party frameworks to build on top of Metal to deliver solutions. There is no AR-oriented camera on any Mac ... yet. If you are going to be using 3rd-party cameras, then 3rd-party frameworks are probably better integrated with those cameras, and also with whatever other platforms that vendor wants to take their "reality" infrastructure to (in other words, those folks are primarily asking Apple for low-level foundational tools like Metal, and they bring their own stack). For iOS it is Apple's cameras only. That is a substantive difference.

If Macs pick up Face ID, then perhaps there'll be an Apple ARKit for that camera (since it is somewhat locked down), but the orientation of that camera is limited. If the camera (and the whole system) can't move, then where is the AR?


And I was expecting some kind of VR/AR/UI framework for macOS, but its time has not yet arrived. Maybe I am a few years too early with my expectation.

If Apple built their own headgear that was required to be tethered by a wire to a Mac, then that would be a more reasonable expectation. It goes back to: what would their VR/AR framework be hooked to on a Mac?

When Apple does some headgear, it won't be very surprising if it is running some new iOS derivative, perhaps named vrOS (or something like that). My guess is that they would be looking for something that isn't tethered to a Mac any more than the Watch is tethered to an iPhone. AR works on an iPhone ... why would you necessarily need a Mac/PC to tether to after another couple of iterations of iPhone-sized SoC implementations?


[edit addition ]

P.S. I think Apple is putting some substantive effort into what happens when they nuke OpenGL and OpenCL, since they announced deprecating them. That's probably a higher priority than AR/VR when put in the perspective of the entire macOS software ecosystem. There is a ton of entrenched OpenGL code out there. They may just leave the OpenGL code sitting there in "happens to work" mode for a long time, but that's a big deal. That's probably why OpenGL -> Metal gets a whole session later in the week (an intro to Metal for OpenGL developers) and there is a porting lab.

P.P.S. there is a 'Metal for VR' session on Friday.
Create ML is the biggest joke of the day. Training a NN on an iMac Pro alone? Without Python?

Python is necessary for ML training? Chuckle... ML techniques existed before Python did. It is hardly absolutely necessary. That's up there with the 6-core MacBook Pro that just had to drop at the WWDC keynote.

They are implementing some learning networks, not the same ones that other folks are doing. That doesn't mean these new ones can't be effective.
 
ML techniques existed before Python did
Absolutely, but Python is the ML king for a reason: productivity in Python for ML is superior. There are developments around JavaScript, Java (Kotlin), and GoLang, but it will take a long time to catch Python in ML, and that is due to its community. There is no ML community around Swift; worse, you would need this new Swift-ML community to have access to the Python communities to borrow what they did, a very rare species. If any language could take the ML flag from Python it is GoLang (some people would say R), with Go's coroutines and binary speed being the key factors; an official TF library in Go is actively developed. Meanwhile, Swift seems hostile to such a concept (coroutines).

Today you can develop TF in PyCharm on a Mac and run/debug it in the cloud (private or public); you can even remotely debug TF-CUDA from a Mac. Create ML is years behind, DOA IMHO.
 
Absolutely, but Python is the ML king for a reason: productivity in Python for ML is superior ...

You're talking about coding to a variety of tools/networks. Create ML's wheelhouse is in augmenting an existing classifier to do something a bit more specialized. There will be a deeper session next week, but if you look at about the 56-minute mark (more depth around the 1-hour mark, but some context before is useful) in the "State of the Platforms" presentation, things get down to more of the substance than what the stratospherically high-level keynote covered.


https://developer.apple.com/videos/play/wwdc2018/102/


You're right in that if you don't have a large baseline model done, then their "easy augment" isn't enabled; but if you can't put that cloud inferencer onto your device, then what use is it (hitting the same privacy parameters)? Apple isn't trying to take over the whole ML space. Their focus is on ML on device. (And yes, you can do substantive things on bleeding-edge modern "personal computers". Not super-leap advances in ML, but incremental, personally useful stuff is possible.)
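(On the inference side, the on-device story is just Core ML; a rough Swift sketch with hypothetical paths and feature names, only to show the local flow:)

import CoreML
import Foundation

do {
    // Compile a .mlmodel at runtime into an .mlmodelc bundle, then load it.
    let modelURL = URL(fileURLWithPath: "/Users/me/ActivityClassifier.mlmodel")
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL)

    // Wrap the inputs in a feature provider and run one prediction locally,
    // with no cloud round trip involved.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["stepCount": 42.0, "heartRate": 71.0])
    let output = try model.prediction(from: input)
    print(output.featureNames)
} catch {
    print("Core ML error: \(error)")
}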
 
Apple isn't trying to take over the whole ML space. Their focus is on ML on device.
If you were aware of what ML is, you'd concede Apple is just providing a learning tool, useless for developing any commercial product based on ML.

ML training (build) is 4-5 orders of magnitude more compute intensive than ML inference (execution). An iMac Pro, even fully loaded, isn't up to the task; Apple should enable at least some cloud-based ML training feature before Create ML can be considered viable to run on an iMac Pro.

When I say ML in the cloud, I also mean private GPU server clusters (the usual path for ML developers handling sensitive data is to build their own GPU farms); there is nothing in Create ML enabling such a feature even on a cluster of macOS computers.

For example, today (now, not in the coming months) an Android developer looking to train an ML model in TF 1.8 not only has Linux workstations with 4-8 Nvidia GPUs available (each 2-4 times more productive than a Vega 64); he can also code on a Mac, train on a rented or owned GPU cluster, and have his ML-enhanced Android app ready to deliver while an iOS developer is still at 1% of training (and Create ML is still not released today).

I feel very sorry about Apple's tech leadership; it's doomed unless a deep change of mind happens at the top.
 
If you were aware of what ML is, you'd concede Apple is just providing a learning tool, useless for developing any commercial product based on ML.

I don't think Apple ever advertised it as anything other than that.
 