Meanwhile, the future of the PC as the main compute device is also in question. The proposed Heterogeneous System Architecture could mean that in the future you have a mobile smart device (an iPad, a phablet, or a VR headset with relatively low-power processors) linked to an external box or cluster that provides extra processing power when required for heavy tasks such as AI, rendering, or transcoding. I think Apple is also quietly betting in this direction (right now you can buy a bunch of Mac minis and stack them to delegate transcoding or compile work from a MacBook or iMac; google Xgrid).


That's already been going on for quite a while, just not in the consumer market.
Even though it's been "the future" for 10 years or so.
Little things like infrastructure, security concerns, and antitrust laws got in the way (of Apple, Adobe, and others), I think.

It's like jet packs for everyone and robots doing your job while you're still getting paid - it just didn't turn out to be that way.
 
But, Intel has provided pretty solid graphics with their Iris solution for Macbooks, I can't see them ditch that overnight.

Iris is fairly solid, but who's got the best integrated graphics in the game? Apple.

The A series could provide a really nice boost in machines that have an integrated GPU. It's also very likely that ARM Macs will not get OpenGL, so Apple wouldn't have to write desktop OpenGL drivers for A series Macs.

The graphics on the A series might be the best argument for A series in laptops. Even when you are not using the discrete GPU, the integrated GPU would be more capable, and unlike Iris Pro, use a real HSA style architecture.

I also wonder if for something like the MacBook Pros, Apple could just put their own GPU on Infinity Fabric or something. Use an AMD CPU with an Apple GPU. Or an AMD CPU with a lower power Apple GPU and a higher powered Vega.
 
You are definitely right, I am sorry; I rushed my answer :oops:. It made sense to me, since they plan to implement PCIe 4 on their new motherboards and since the VII and the MI60 use basically the same chip. After all, price differentiation is a good reason to cut down the consumer card.

Apparently AMD fed AnandTech and several other tech outlets bad info on the ROP count (it is 64, not 128, which makes sense for a GPU targeted at data centers with no displays hooked up). The issue with PCIe v4 on new motherboards is that the number of those boards deployed right now in the PC market is about zero. By the end of 2019 it will still be relatively close to zero if you look at all of the active/running systems out there where the card could physically fit (if you narrow it to systems with suitable power it might pop up into decent single-digit percentages). They could try to spin it as a "future proof" card, but it is doubtful that in the gamer space it would make any significant "game changing" difference. (Most games and apps preload data into VRAM and then run. In most active states of a game, the data rate drops low enough for x8 PCIe v3 to handle it more than reasonably well.)
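For rough context, the raw link numbers are easy to sketch. This is a back-of-the-envelope model of peak PCIe link bandwidth; real throughput comes in lower after protocol overhead:

```python
# Back-of-the-envelope PCIe link bandwidth (GB/s).
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane.
# Both use 128b/130b line coding.

def pcie_bandwidth_gbs(gt_per_s, lanes):
    encoding = 128 / 130                     # 128b/130b coding overhead
    return gt_per_s * encoding * lanes / 8   # bits -> bytes

v3_x8 = pcie_bandwidth_gbs(8, 8)     # ~7.9 GB/s
v4_x16 = pcie_bandwidth_gbs(16, 16)  # ~31.5 GB/s
print(round(v3_x8, 1), round(v4_x16, 1))
```

Even the ~7.9 GB/s of a v3 x8 link dwarfs the steady-state streaming demands of most games once assets are resident in VRAM, which is the point above.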


The other issue is that since they have it working on "Vega 20", there is a high probability it will be working on the upcoming Navi solution. Vega VII doesn't have to be their sole GPU answer for 2019. When there are far more affordable PCIe v4 cards to sell, that may help move new boards where mainstream folks are chasing the latest tech porn.

AMD probably intends the Epyc "8000" series to pair up with the MI50/MI60, not the Vega VII. More likely there are workloads there where PCIe v4 makes a difference (more and/or bigger datasets being moved back and forth).




I wouldn't be sure about that. Apple said and denied many things and changed its mind in a few years.

While not impossible, it is unlikely. Apple didn't so much say that they would not do desktops as they said they are highly focused on what they are doing. Making the best SoC possible for the iPhone is job number 1 (and 2 and 3). Screw that up and the company would tank significantly. So the question is how likely Apple is to take their eye off the ball to do a desktop. That is unlikely. Even more so now, when the iPhone is beginning to stall and the SoC is one of the only major differentiators they have left.

The iPad is taking iPhone "left overs". Again, that points to where the "eye on the ball" is. The iPad Pro does have a derivative chip, but it isn't iterating every year like the main iPhone chip does. Again, that points to where the "eye on the ball" is.

The T series is a "left over" (a pruned-down iPhone chip with some customized power management and some additions to run fans). Even still, there is one and only one version they work on (on track to go into every Mac to reach acceptable volume).

The S series is tuned down from iPhone levels. It is at 64 bits now, so it is on track to follow tweaks to the "small core" in the iPhone (and a bit vice versa).

The W series... is that really even Apple's baseline design? Certainly it has been customized to Apple's specs, but the ARM inside a Wi-Fi / Bluetooth controller... there are a couple they could license.



For sure the next 2-3 years will see many CISC chips being installed alongside ARM RISC chips, but I am pretty sure Apple will move to proprietary APUs, at least on their notebook line. ARM chips are less power hungry,

x86 isn't really CISC. Lots of internet forums have spun it that way, but the folks who invented the term say it isn't really what they were talking about.

Many ARM chips are less power hungry because they are heavily tuned that way. Being incrementally smaller implementations allows many of them to jump onto new fab processes quicker and more economically, but as new fab processes arrive less frequently and cost more money, that advantage will narrow over time, not grow or stay constant.


offer higher IPC gain year-to-year,

There is about zero evidence that comes from the ARM instruction set or some magical ARM-specific implementation property. From the article you link later:

" ... “The ARM core significantly improves processor performance by optimizing branch prediction algorithms, increasing the number of OP units, and improving the memory subsystem architecture.” .."

Branch prediction is not a property of ARM instruction set design. Number of function units is not a property of ARM instruction set. The memory subsystem is not a property of the ARM instruction set.

What has been happening is that Apple's ARM has been adding the stuff that the Intel (and in many cases the AMD) implementations already had. Those aren't gains driven by the instruction set; it is just putting the functionality into the implementation. As they add this stuff to reach the "last 10-20%" of performance, it is largely the same double-edged sword for them as it is for Intel/AMD.

If it was so "roll out of bed" easy the iPad Pro processor would still be on a yearly track. It isn't.

There is only so much IPC you can get out of von Neumann style code. At some point there is a branch, and that will limit your parallelism (unless you want to start opening up security holes).
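As a rough illustration of that ceiling, here is a textbook-style model of how branch mispredictions cap effective IPC. All numbers below are illustrative assumptions, not measurements of any particular core:

```python
# Textbook-style model: effective CPI is the base pipeline CPI plus
# the stall cycles contributed by mispredicted branches.
# CPI_eff = 1/base_ipc + branch_freq * mispredict_rate * penalty_cycles

def effective_ipc(base_ipc, branch_freq, miss_rate, penalty):
    cpi = 1 / base_ipc + branch_freq * miss_rate * penalty
    return 1 / cpi

# Hypothetical 6-wide core: 20% branches, 3% mispredicts, 14-cycle penalty.
print(effective_ipc(6.0, 0.20, 0.03, 14))  # well under the 6.0 peak
```

Even with a very good predictor, the achieved IPC sits far below the issue width, which is why "more IPC every year" gets harder regardless of instruction set.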

give Apple full control on the instruction set to use

Where has Apple significantly forked off of the ARM architecture reference? They are implementing their own versions of the ARM architecture, but they have not drifted off into a rogue implementation. There is close to zero benefit for Apple to go rogue. Apple has input into what new stuff goes in, but "forked"... got proof? [A different cache coherency fabric is far more an implementation difference than a significant change in instruction set.]

The vast bulk of the ARM instruction set support and optimization going into the LLVM compiler for ARM is also applicable to Apple's implementation. There isn't a good reason for Apple to throw that away at all, and that isn't what they have been doing. What they have been doing is saving money by not throwing it away.



and they just make sense, since Apple has been investing a lot to develop the Ax-Tx-Sx-Wx line, and from a company perspective it would make sense to leverage that investment and percolate it onto the Mac line.

Again... huge investment in Tx... not really. Reusing the investment in Ax to redeploy it as a non-app-processor solution for Macs... yes. But is that really huge? And Wx, is that really even Apple work at the core? Adding some tweaks to Bluetooth pairing, some tweaks to Wi-Fi Direct, and perhaps deploying on a not-quite-cheapest, older-process fab probably isn't a huge investment.

Besides, ARM chips are aggressively making their way even into the server market. It's a siege! :D

https://www.nextplatform.com/2019/01/08/huawei-jumps-into-the-arm-server-chip-fray/

Two points you seem to have glossed over in the article. First, the Ascend chips go from 12 nm, 16 TOPS INT8, 8 W (2 TOPS/W) to 7 nm, 512 TOPS INT8, 350 W (~1.46 TOPS/W). Cranking performance to substantially higher levels isn't always linear. Some of that has to do with the physical and transistor implementation; the effects have very little to do with instruction set variance. (Ascend isn't ARM, so it didn't get magical low-power pixie dust... errr, probably not.) Bigger I/O (bandwidth), more data to cache, etc. lead to influences outside the instruction set implementation.
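Checking the performance-per-watt arithmetic on the figures quoted above:

```python
# Perf/W for the two Ascend parts as quoted in the article (INT8 TOPS / W).
small = 16 / 8     # 12 nm part: 2.0 TOPS/W
big = 512 / 350    # 7 nm part: ~1.46 TOPS/W
print(small, round(big, 2))
```

Despite a full node shrink, the 350 W part is less efficient per watt than the 8 W part, which is exactly the non-linearity being pointed out.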

Second is the generally lower clock rate they are shooting for: "..Huawei reckons that the 64-core Kunpeng 920 running at 2.6 GHz ...". This thing is "hot rodding" when it gets to 2.6 GHz. The 2018 MBA turbos to 3.6 GHz (38% higher); the MBP to 4.8 GHz (roughly 85% higher).

If you look at the chart, this chip is a big winner on Hadoop (pulling large amounts of data off of spinning disks) and "distributed fusion storage"... again, probably a very significant amount of data off of spinning disks. Apple's SoC isn't optimized for that at all. If you threw an A-series at that workload, it would probably do far worse than the Intel/AMD implementations being compared there. Same ARM instruction set, but a differently tuned implementation.

Throw an iPad Pro A12X Bionic into a Retina MacBook, sure. I'm not sure why they wouldn't also just throw iOS (with iPad Pro optimizations) at it too. Call it an iBook. If it has an ARM processor, put the largest deployed OS available for ARM on it. That would fall into the exact same "reuse in a different product" pattern that Apple has largely been following the last 2-3 years. That would be Apple keeping their "eye on the ball". It is at least as likely as some two-headed (ARM + x86_64 app processor), more expensive (no shared R&D from the x86 market), bloated-binary (return of FAT binaries on lower-capacity SSD drives) strategy.


P.S. The other thing about that Kunpeng 920 is that they talked about better "performance / watt" but avoided "performance / $". I don't expect it to be more expensive than the Intel/AMD offerings, but it probably won't be "dirt cheap" either (higher price at lower volumes than the phone chips). That is the other issue: will Apple bring down the cost of the Mac bill of materials with a shift to ARM at all? If there are zero consumer-facing system cost savings, what do they actually get?
 
Where has Apple significantly forked off of the ARM architecture reference? They are implementing their own versions of the ARM architecture, but they have not drifted off into a rogue implementation. There is close to zero benefit for Apple to go rogue. Apple has input into what new stuff goes in, but "forked"... got proof? [A different cache coherency fabric is far more an implementation difference than a significant change in instruction set.]

Apple has stuck with the ARM architecture 99.9% verbatim. There are only a few extremely minor differences, usually just in scheduling, register usage, or performance characteristics. I don't think there are even any unique instructions on Apple chipsets.

That said - with their control of the compiler - they COULD modify ARM. And they don't have a huge reason not to in the future, unless they want to maintain compatibility with stuff like ARM for Windows. I don't think they care much about keeping compatibility with assembly with other ARM architectures.

Throw an iPad Pro A12X Bionic into a Retina MacBook, sure. I'm not sure why they wouldn't also just throw iOS (with iPad Pro optimizations) at it too. Call it an iBook. If it has an ARM processor, put the largest deployed OS available for ARM on it.

On macOS on ARM - I don't see why they wouldn't want to use macOS ARM. Marzipan is clearly priming the pump to have a huge amount of ARM apps running on day 1. iOS and macOS even use the same core components, so treating iOS as a completely different OS doesn't seem like a great idea. And iOS's mouse and keyboard support, along with external device support, is simply horrible. People don't understand just how bad iOS is for input. Clearly there is no mouse support, but the keyboard support is just bad.

If you want an iOS laptop, get an iPad. There is no reason for Apple to double up with two iOS laptops, when macOS is the whole reason to get a MacBook over an iPad Pro with keyboard case.
 
All this AMD talk. Meanwhile, I think the most relevant info is that Intel isn't putting out Xeons until probably spring.
I find Apple switching to AMD more believable than them switching to Nvidia for default graphics.

But, Intel has provided pretty solid graphics with their Iris solution for Macbooks, I can't see them ditch that overnight.

AMD is increasing cores/clocks and shrinking dies much quicker than Intel is, but, Intel has better single thread performance. Time will tell if this is a strategy that Intel can keep up with.

I'm just now starting to use an eGPU with my 2013 (Rx 580/8GB) and so far, so good... I can see eGPU options (via "sidecars") with TB3+ become more real world going forward.

Problem here is that Intel isn't making the exact chips Apple would make. Witness the slight regression in graphics potential between the 2015 and 2016 MacBook Pros because of the change in Iris graphics in the same chips and loss of eDRAM. The Mac mini, meanwhile, took two steps forward switching to desktop processors but that sacrificed the GPU (on models without high amounts of RAM) because there aren't desktop parts with Iris Plus graphics.

Apple doesn't really want to make gaming-optimized machines but I'd bet they'd prefer better graphics in their MacBook Airs and the like than what they get from Intel's offerings.
 
Two big problems with Apple going ARM in the desktop (and even laptop) Mac space...

1.) Porting software - it is very likely that there is quite a bit of work involved in getting a big application moved over from x86 to ARM. An iPad application will run almost without modification, since iOS and macOS libraries are quite similar. On the other hand, the full version of Photoshop is probably a bear to move. Even if you get moving Mac applications down to a recompile (even for the big stuff, Microsoft applications and other things that are trickier to port), you still have the problem of Windows applications people are running in Boot Camp or Parallels. Some pros are running Access, or ArcGIS, or QuickBooks (the first two don't have Mac versions, while QuickBooks isn't feature-equivalent).

2.) Let's say you magically port all the software including Windows software so it's all ARM-native, that would almost certainly mean that a lot of ARM Windows machines appeared for some reason. They couldn't just be tiny notebooks, either - without fire-breathing desktops and workstation laptops, ESRI won't port ArcGIS (for example). Even then, ARM's performance profile would have to change... Right now, the fastest ARM cores are about half the speed of the fastest x86 cores. Since the ARM cores can use less power, you can have more of them - but that depends on good multithreading. If Photoshop and Lightroom can't be trusted to fully use 4 cores, how are they going to like a 32-core machine where no single core can outrun a MacBook Air?
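A quick Amdahl's-law sketch of that trade-off. The parallel fraction and core counts below are illustrative assumptions, not benchmarks of any real chip:

```python
# Amdahl's law: compare 4 fast cores vs 32 cores at half the speed.
# core_speed is relative to one fast core; p is the parallel fraction.

def speedup(p, n_cores, core_speed):
    serial = (1 - p) / core_speed
    parallel = p / (n_cores * core_speed)
    return 1 / (serial + parallel)

p = 0.8                        # assume the app is 80% parallelizable
fast4 = speedup(p, 4, 1.0)     # 4 fast cores
slow32 = speedup(p, 32, 0.5)   # 32 cores, each half as fast
print(fast4, slow32)           # the 4 fast cores win
```

With a serial fraction of just 20%, the 32 slow cores lose to 4 fast ones; only at very high parallel fractions does the many-slow-cores design pull ahead.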
 
Iris is fairly solid, but who's got the best integrated graphics in the game? Apple.
AMD ;).

Radeon Vega 10 and Vega 11 are on the same performance level as Nvidia's MX150/GT 1030. Now think about what 7 nm APUs will do if they have rumored on-package HBM2 at their disposal and 1280 (20 CUs) GCN cores ;). We are looking at integrated GPUs with performance around a GTX 1050 Ti.

Oh, and BTW, it appears that the 7 nm APUs will be monolithic and Vega based, not Navi.

https://twitter.com/KOMACHI_ENSAKA/status/1083437519356612608 ;)
 
Two big problems with Apple going ARM in the desktop (and even laptop) Mac space...

1.) Porting software - it is very likely that there is quite a bit of work involved in getting a big application moved over from x86 to ARM.

Porting software won't be a problem, because many products would simply be abandoned rather than ported (Mac marketshare being around 7%). For most end-users it would be easier to switch over to Windows (and probably a good bit cheaper) than to buy all-new Mac hardware and software.

We saw this during the transition from PowerPC to Intel. It would also take YEARS to get software ported. This would be an unplanned changeover, which would seriously disrupt every development cycle. How many people here would be all-in on ALL of their software being version 1?

In the PowerPC to Intel transition, it took nearly 5 years for my mission-critical software to reach feature compatibility with its Windows brethren. That doesn't address the products that never transitioned.

Macs going ARM will push a lot of Mac users out the door and over to Windows.
 
Porting software won't be a problem, because many products would simply be abandoned rather than ported (Mac marketshare being around 7%). For most end-users it would be easier to switch over to Windows (and probably a good bit cheaper) than to buy all-new Mac hardware and software.

We saw this during the transition from PowerPC to Intel. It would also take YEARS to get software ported. This would be an unplanned changeover, which would seriously disrupt every development cycle. How many people here would be all-in on ALL of their software being version 1?

In the PowerPC to Intel transition, it took nearly 5 years for my mission-critical software to reach feature compatibility with its Windows brethren. That doesn't address the products that never transitioned.

Macs going ARM will push a lot of Mac users out the door and over to Windows.

The biggest hurdle to Macs going ARM is that ARM can't emulate x64 very well; the two previous transitions required 68k/PPC emulation at acceptable speeds.
 
People are waiting for a decent computer, and a forced transition to ARM would be a disaster. Even Mac guys know what's up with hardware these days.
Intel is struggling to get decent die yields (high core counts are very expensive as a result), while AMD is scalable in CPU and GPU (future proof). I think it would be the logical choice.
Personally, I like the clock speeds of the i7-8700K and Nvidia cards, so I built a PC for C4D. I wanted Thunderbolt, otherwise I would likely have chosen AMD.
 
And it's MacOS...sadly
My pedantic side can't resist correcting this error:

[attached image: "macOS"]
;)
How so? I see what you mean, but it's pretty easy to replace touch with a mouse...
How do you "pinch" and "unpinch" with a mouse? You can define a bunch of ad hoc chords, but in fact "pinch" and "unpinch" are very natural movements that have been ingrained by using our phones - and "hold this key down, then press this other, and then move the pointer thingy" are rather artificial and arbitrary.

I've had Windows touch-screen laptops for many years (the current one is a T480s); there's a wonderful amalgam of mouse, keyboard, and touch that just works. While reading a web page on the laptop, the "muscle memory" from the phone for scrolling with your finger, touching a link to change pages, or unpinching to enlarge is simply natural. (Also very convenient: when reading documents on an airplane, touching the screen is much handier than using the mouse or touchpad.)

iOS apps forced into a mouse metaphor sound like something from the '60s - I can't imagine iOS on an Apple OSX laptop without multi-touch support.
 
How do you "pinch" and "unpinch" with a mouse?

to be fair, gestures like pinch and swipe are pretty much universal on macOS... if you have a trackpad or a magic mouse. my setup is a mouse on one side (of a graphics tablet) and a trackpad on the other, and it's pretty much seamless.
 
to be fair, gestures like pinch and swipe are pretty much universal on macOS... if you have a trackpad or a magic mouse. my setup is a mouse on one side (of a graphics tablet) and a trackpad on the other, and it's pretty much seamless.

Aiden knows that he's just kinda trollin'. I get it. Aiden doesn't like anything new. Only OS 7 or DOS 6.22 for him!
:D
 
Iris is fairly solid, but who's got the best integrated graphics in the game? Apple.

The A series could provide a really nice boost in machines that have an integrated GPU. It's also very likely that ARM Macs will not get OpenGL, so Apple wouldn't have to write desktop OpenGL drivers for A series Macs.

'Best integrated graphics'. It won't help Apple overall in the pro space, where they cherry-pick "best" to be some narrow match to their criteria. If Apple has accelerated throwing OpenGL out the window due in part to easing their porting task, that is yet another move that will probably turn out to be a bozo move longer term.

I think the 'best' there is mostly performance/watt and not really best across a broad spectrum of merits (driving multiple non-mirrored displays, computation levels, memory bandwidth, etc.). Saddle all the GPU options with plain DDR4 RAM and it would probably do pretty well. They've optimized heavily for their context, but in the next year or two that may not be as competitive.


The graphics on the A series might be the best argument for A series in laptops. Even when you are not using the discrete GPU, the integrated GPU would be more capable, and unlike Iris Pro, use a real HSA style architecture.

Iris Pro was largely HSA. The shared eDRAM was available to the x86 memory system also (pragmatically an L4 cache).

".. Rather than acting as a pseudo-L4 cache, the eDRAM becomes a DRAM buffer and automatically transparent to any software (CPU or IGP) that requires DRAM access. ... "
https://www.anandtech.com/show/9582/intel-skylake-mobile-desktop-launch-architecture-analysis/5

I think the bigger buzz saw that Iris Pro ran into was expense (or bang for the buck), given Intel's pricing for the eDRAM and the bigger iGPU block.


I also wonder if for something like the MacBook Pros, Apple could just put their own GPU on Infinity Fabric or something. Use an AMD CPU with an Apple GPU. Or an AMD CPU with a lower power Apple GPU and a higher powered Vega.

Remote memory through that pipe to that many math cores would probably run into scaling problems. We're talking 1 to 3 orders of magnitude bigger than the 8-core chiplets AMD's design targets.

Again, it would not be hard to see something 'better' if the criteria mainly centered around lower power.
...
On macOS on ARM - I don't see why they wouldn't want to use macOS ARM. Marzipan is clearly priming the pump to have a huge amount of ARM apps running on day 1.

"Marzipan" is a double-edged sword. Apps aren't just going to move to macOS from iOS. Some things will probably move in the other direction also (e.g., the Photoshop core).

Apple's Pages, Numbers, Keynote, etc. will be on both sides (and they were Mac apps before they were iOS ones). The "ease of port" library will crank up the number of apps ported both ways.



iOS and macOS even use the same core components, so treating iOS as a completely different OS doesn't seem like a great idea.


And iOS's mouse and keyboard support, along with external device support, is simply horrible. People don't understand just how bad iOS is for input. Clearly there is no mouse support, but the keyboard support is just bad.

There is a difference between keyboard support and folks bringing over their favorite key-combo habits.

" Ethernet ... hardware keyboards ... "
https://9to5mac.com/2018/11/07/ipad-pro-usb-c-accessories/

The problem with "bad" keyboard support is that Apple wants people to pay $179-199 for a keyboard. $200 for a "bad" keyboard. It isn't a bad keyboard; it is more that there is software they need to do but are slacking on. With a Type-C port on the iPad Pro, they are going to do incrementally more in this space.

An iBook (iOS) would have an Apple-selected keyboard in it. Whatever USB drivers were needed for that keyboard could easily be done. That some $12.99, "race to the bottom" USB keyboards have glitches isn't really a material issue.

No Mac laptops come with a mouse or trackball, so they don't have to make a mouse "work". They could instead work on some overlay gestures for a trackpad (a touch device for a touch interface is not wildly disconnected).




If you want an iOS laptop, get an iPad. There is no reason for Apple to double up with two iOS laptops, when macOS is the whole reason to get a MacBook over an iPad Pro with keyboard case.

An iPad Pro + $200 keyboard + $100 Pencil (the $1,000 range) versus Chromebooks and a host of 2-in-1s (360 hinge and detachable) in the $400-800 range... Apple can sit and watch what happens over time. It won't be much different from the whipping they are taking at the hands of Chromebooks.

If Intel's stuff manages to hit these levels on the next iteration:

"... As a result, combined with the new ‘1W’ display technologies the company introduced at Computex in June, we’ve been told that optimized Intel devices should now be able to achieve 25+ hours of battery life. ...'

Apple ARM because they "have to" rings a bit hollow. AMD will be iterating lower also. Apple could switch to A-series for Mac laptops if "but we are even thinner and lighter" is the only thing that matters. Then you don't just get bending iPads but bending Macs. Apple could dive deep down that rabbit hole... but it will probably bite them in the butt in a couple of years.
 
'Best integrated graphics'. It won't help Apple overall in the pro space, where they cherry-pick "best" to be some narrow match to their criteria. If Apple has accelerated throwing OpenGL out the window due in part to easing their porting task, that is yet another move that will probably turn out to be a bozo move longer term.

OpenGL is dead. I mean, it's deprecated, but it's basically dead. So whether or not it's a bozo move, it's done. Looking forward from there it allows Apple to use their existing GPU design.

I'm fairly certain if Apple chose to, their GPUs could support desktop OpenGL, even though there is currently no driver. But again, OpenGL is dead on Apple platforms so it's irrelevant. And honestly, Apple was going to kill it no matter what hardware choices they made.

As far as accelerated, I don't think anything would be accelerated. It's been unofficially dead for the last several years. Now it's officially dead. And now Apple can say that if you're porting to ARM you need to port to Metal at the same time, as all deprecated API is likely to be removed on ARM.

Is that going to be a problem for pro apps already having to deal with porting to ARM? Sure. But Apple already has pulled this sort of nonsense before with Carbon 64 so it's not like this behavior is new from them.

Doesn't Apple's refusal to support touch screens on Apple OSX make that rather difficult?

It has already happened. Home/Stocks/News/Voice Memos in Mojave are already iOS apps. They get reskinned on the fly by the virtualized iOS environment, but they're the same iOS apps you'd find on your iPad. They basically get recompiled for x86 and then run in a WINE-like environment that skins the UI. When you run these apps, a lightweight virtualized iOS environment actually gets fired up in the background.

And because they're the same build of the same app you have on your iPad, they already have a working ARM version too. And because it's a skinning environment, in theory it would work for every iPad app in existence.

I'm not endorsing running iOS apps in a WINE-like thing or saying it's a good idea. Just that Apple has a big bucket of ARM apps that work on the Mac now, and they haven't even launched ARM Macs yet.

It's also why ARM laptops running iOS is probably not a thing that's going to happen. Apple already has a virtual iOS environment running inside macOS. They've already put iOS in your Mac.
 
I'm interested in the 7,1 but the cost of the iMac Pro has really put me off. I shudder to think what the 7,1 will be.

I'm just about to upgrade my 2010 5,1 with 128 GB of RAM and the Highpoint card with 4x Samsung 970 Pros, and I think that will keep me going for many more years at a small fraction of the cost.

This 5,1 was by far the best computing purchase I've made.
 
to be fair, gestures like pinch and swipe is something that's pretty much universal on macOS... if you have a trackpad or a magic mouse. my setup is a mouse on one side (of graphics tablet) and trackpad on the other, and it's pretty much seamless.
:oops: Thanks, I forgot about pad gestures....
 
Whatever they might be concocting at Apple by reinventing established workflows or patenting some magic hardware for the new MacPro, I hope they haven't forgotten the lessons learnt from the UNIX workstation market that boomed in the '90s.

That market essentially died because of the commoditization of cheap, plentiful, and ever-improving electronics with the advent of the PC era. I still have an old SGI Indy in my basement; what a great piece of kit it was!

HP, Sun Microsystems, SGI, IBM RS/6000 and to a certain extent NeXT were the kings in a very specialized Pro market kingdom that died because of the proprietary nature of the OSs and the architectures they ran on as the ubiquity of the PC swept through shrinking corporate budgets after the .com bust.

As this thread evolves through speculation about what Apple might or might not do, its competitors, their market dominance, and the existing open nature of their technology are not going to disappear.
I hope the new MacPro will not be a dead man walking on the same day it gets finally released...
 
The MI50/60 numbers are in fact peak INT8 performance (approximately, that is). I guess they are; not that AMD said so anyway.
Again, another one of those silly naming schemes, since for the MI8/25 they use FP16.
Go figure...
 
My pedantic side can't resist correcting this error:

;)

It will always be OSX for me, even though Google wants me to call it OS X .

How do you "pinch" and "unpinch" with a mouse? You can define a bunch of ad hoc chords, but in fact "pinch" and "unpinch" are very natural movements that have been ingrained by using our phones - and "hold this key down, then press this other, and then move the pointer thingy" are rather artificial and arbitrary.


And what about thumb typing?
For tens of thousands of years mankind used this magnificent digit only for basic support functions, until smartphones taught us what thumbs are really meant to do!
The times we live in ... ;)
 