Right. Whatever makes you happy.

That is not why Apple moved to Intel. The PPC Consortium was unable to develop a faster processor that met Apple's timetable. When Intel showed a line of processors that could meet Apple's goals, they got the gig. Also important was that the new processors did not require liquid cooling. The G5 had major heat problems, as any of us who owned one knows full well, and Apple was having major issues with the liquid coolant leaking in the last versions.

Google it if you like. Disagree if you want, but I have dinner now and then with the person in charge of the transition to Intel (now retired) and know others who were involved in the project.

What are you eating for dinner ...

crow?!?!
 
The ability to add "hardcoded" CPU/GPU features to support an OS is a very strong advantage, and that is likely the driving force behind an ARM switch.

I think it makes more sense to build a desktop-class iOS device. Sure, iOS needs some new features like a more open file system, access to external drives, multiple instances of an app, etc., but that should be doable within iOS. However, I do not think iOS is too inefficient to actually run desktop-class apps.

It could be a 21-inch iPad-like thing that does not necessarily run on battery. Wireless keyboards and trackpads are natural options, and for some there are always the software keyboard, perhaps a system-wide two-finger-activated software cursor, and the Pencil. Pity it is so difficult to use several CPUs/GPUs at the same time: 20 A12X chips in parallel, at roughly 7.5 W each, would draw about 150 W.
 
Right. Whatever makes you happy.

That is not why Apple moved to Intel. The PPC Consortium was unable to develop a faster processor that met Apple's timetable. When Intel showed a line of processors that could meet Apple's goals, they got the gig. Also important was that the new processors did not require liquid cooling. The G5 had major heat problems, as any of us who owned one knows full well, and Apple was having major issues with the liquid coolant leaking in the last versions.

Google it if you like. Disagree if you want, but I have dinner now and then with the person in charge of the transition to Intel (now retired) and know others who were involved in the project.

I actually know that, and I agree that was the reason they moved. But when they moved to Intel, they were also moving to the platform used by Windows.
 
...Marzipan is not dependent on having ARM-based Macs. ARM-based Macs are not dependent on Marzipan. Sure, the two ideas play nicely together, but either one can be done without the other.

As shown in other articles, the prospective move of Macs to ARM may be closely related to Marzipan:

https://www.forbes.com/sites/marcoc...ors-in-future-macintosh-systems/#1ecf31601d30

"Switching to an ARM-based SoC before Project Marzipan starts bearing fruit seems backwards...It makes more sense for Apple to wait until there is a healthy selection of native applications before making the switch"

No, you don't absolutely need Marzipan. E.g., Apple could do a transition of the Mac CPU and software ecosystem in a slow, labor-intensive, desktop-centric manner which doesn't leverage mobile app development and further marginalizes the Mac as an application development platform. However, I don't think they will do that.

Past Apple CPU transitions were done in an era when apps were compiled to object code for a specific processor. Moving to a different CPU and OS framework often required significant code changes, testing and ongoing support.

During those CPU transitions some app developers just dropped out. E.g., during the PPC/Intel transition, DeLorme decided not to port their mapping software to OS X. They said it would have been an enormous investment and would not return sufficient revenue.

Today many people use Google Maps or other web-based products which are inherently cross-platform. Web-based apps are increasingly performant and often used instead of compiled object code. So -- unlike the past -- that ever-increasing market segment is inherently insulated from a CPU transition.

For existing complex apps which must be ported, the question is what method? Re-write, test and maintain separate apps for the years-long transition period? Or use a new development framework which leverages the massive mobile app development community?

Why would you want a junky little iOS app on your Mac? Because going forward iOS apps won't be limited -- they will be increasingly complex. They will use huge app development resources which could also target a Mac ARM platform. Adobe has already demonstrated full-featured Photoshop on iOS. This was likely developed in a labor-intensive fashion without Marzipan.

Imagine you are Adobe, facing moving full-featured Photoshop to an ARM Mac, plus maintaining the x86 Mac version for five years, plus developing and supporting an iOS version. That is three separate versions. Or you could use Marzipan and move it one time, retain maybe 80% code commonality, and sell to three separate market segments using a single development effort. It is unknown how well that can be realized, and valid concerns have been raised about achievable UI and code commonality. But it appears Apple is planning something like that. This in turn impacts the success and timing of a Mac move to ARM.

So there is more going on than a Mac CPU transition. There is also the need to facilitate and streamline iOS/MacOS cross-platform development. Taking this broader view, it's quite possible Marzipan will play a significant role in any upcoming transition to ARM-based Macs.
 
No, you don't absolutely need Marzipan. E.g., Apple could do a transition of the Mac CPU and software ecosystem in a slow, labor-intensive, desktop-centric manner which doesn't leverage mobile app development and further marginalizes the Mac as an application development platform.

Those are not alternatives. Marzipan will not help anybody port huge, legacy-ridden apps with x86-specific code to MacOS-on-ARM - and that's where the sticking point will be. Recently-written (last 10 years?) MacOS applications (apart from systems tools that need to grub around under the hood) are far less likely to be problematical - some will be 'tick-the-ARM-box', others have already done most of the hard work by porting the engine/backend to iOS.

Conversely, if you do write your new App to be Marzipan-compliant then it will - by definition - be processor-independent, and adding x86 support will just be a case of ticking the box in Xcode. Building multi-processor binaries is pretty much a solved problem in iOS/MacOS - it's the application framework, UI design, iOS rules on file handling etc. that divide iOS code from MacOS code. Marzipan is primarily about source-level compatibility and is pretty much orthogonal to the whole ARM/Intel thing. (Fun fact: the iOS 'simulator' in Xcode isn't an ARM emulator - it "simulates" a mythical Intel-based iDevice and requires the App to have been compiled for x86.)
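For illustration, here's a minimal Swift sketch (hypothetical example, not from any real app) of how the rare architecture-specific cases get handled - conditional compilation that Xcode resolves per slice when it builds a multi-architecture binary:

```swift
import Foundation

// The overwhelming majority of app code compiles unchanged for every slice.
func describeSlice() -> String {
    #if arch(arm64)
    return "Running natively on ARM64"
    #elseif arch(x86_64)
    return "Running natively on x86_64"
    #else
    return "Running on some other architecture"
    #endif
}

// Platform (not processor) is where the real divergence lives - note the
// simulator check: the Xcode simulator is an x86 build of the iOS world,
// not an ARM emulator.
#if targetEnvironment(simulator)
let environment = "Simulator"
#else
let environment = "Device"
#endif

print(describeSlice(), "-", environment)
```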

Where Marzipan might help is where developers are developing new, or substantially re-written, Apps and want to support one of iOS or MacOS but are borderline about supporting both. Writing a Marzipan app is probably always going to be more work than just writing for either iOS or MacOS, and definitely more work if you want it to take full advantage of both operating systems. Developers will need a business case for supporting both to justify the effort. I'd suspect that it will be mainly used for the sort of simple, limited-functionality tool that really earns the "App" abbreviation, and only needs a simple point-and-click/tap UI that doesn't differ much between iOS and MacOS. Which is good, if it means that your e-banking client or the player for your preferred streaming platform makes it onto the Mac.

For more complex applications, I suspect there will come a point where catering for two very different UI paradigms becomes as much work as, if not more than, maintaining two sets of 'front end' code. A well-organised codebase aims to keep the UI/app framework code separate from the cross-platform 'engine' as far as possible, anyway.
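As a minimal sketch of that separation (hypothetical names throughout): a shared, platform-neutral 'engine' plus thin per-platform front ends selected at compile time:

```swift
import Foundation

// Shared, platform-neutral "engine" - identical source on iOS and macOS.
struct DocumentEngine {
    private(set) var text = ""
    mutating func append(_ s: String) { text += s }
    var wordCount: Int {
        text.split { $0.isWhitespace }.count
    }
}

// Only the thin front-end layer diverges, hidden behind canImport checks.
#if canImport(UIKit)
import UIKit
typealias PlatformColor = UIColor   // touch-oriented UI code goes here
#elseif canImport(AppKit)
import AppKit
typealias PlatformColor = NSColor   // mouse/keyboard UI code goes here
#endif

var engine = DocumentEngine()
engine.append("Hello cross-platform world")
print(engine.wordCount)             // 3 - same answer on either platform
```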

NB: just because there are a gazillion iOS users doesn't mean that there are a gazillion customers who want to run 'pro' applications on their iDevices. Last time I looked, iPad sales were dwarfed by iPhone sales, let alone sales of the top-end iPads.
Or you could use Marzipan and move it one time, retain maybe 80% code commonality, and sell to three separate market segments using a single development effort.

There's no reason that you can't have 80% code commonality between separate Mac and iOS versions, unless it's very badly written code that jumbles up functionality and user interface. It's likely that a major part of the effort in porting Photoshop to iOS is making it use Metal etc. and making it play nicely in iOS's sandbox. If large tracts of that code can't be shared between iOS and a future Mac version then Adobe are holding it wrong.

Marzipan is likely to have the biggest impact on simpler Apps where the App basically is the user interface (how many apps are just skins on the browser or media player framework, or are otherwise 'thin clients' for cloud services?)
 
...It's likely that a major part of the effort in porting Photoshop to iOS is making it use Metal etc. and making it play nicely in iOS's sandbox...

A major problem is the UI issue of re-hosting a complex app with scores of desktop-specific UI widgets and menus designed for high-density mouse/keyboard use to a simpler touch-oriented mobile UI. The mobile UI constructs simply do not exist to accept that. Every one of those complex desktop menus and UI widgets is tied to functionality that people use. Unlike a port between two similar desktop windowing systems like macOS and Windows, to cram that into a mobile device requires total redesign. It might not even be possible with current mobile UI limitations, or the result might be too cumbersome to use.

This is a key reason that nobody -- not Microsoft, not Adobe, not Apple -- has ever ported a fully functional complex desktop app like Photoshop, Premiere Pro, or Excel to a mobile device.

Adobe's demo of (allegedly) fully-functional Photoshop on iOS would be the first ever. But so little of it was shown that we don't know how successful that would be if they ever complete and ship it.
 
I retired my 2008 MacBook Pro last summer when the display died. I have been hearing the ARM Mac rumors since around 2008 or 2009. I would ignore the noise until there's something concrete. There will be demand for Intel Macs for a long time to come for compatibility with other x86 Operating Systems. I'm looking at a 21 inch iMac myself. The 27 inch model is interesting but it would be too big on my desk with two other 25 inch monitors.
 
Thanks, everybody, for the useful tips and suggestions!
I’m going to wait for WWDC and see what happens... If they announce a Xeon Mac Pro I’ll buy an iMac without a doubt, but if there’s an ARM Mac I think I’ll go back to Windows after 15 years of MacOS. It would be crazy for Apple to release an Intel Pro machine and not support it for at least 6-8 years...

So... I hope for a Mac Pro announcement in June! My mid-2010 MacBook Pro is sluggish for my workflow, even though I installed a Samsung SSD, and I don’t want to be either the last unsupported, forgotten Intel user or one of the ARM debuggers...
 
This is a key reason that nobody -- not Microsoft, not Adobe, not Apple -- has ever ported a fully functional complex desktop app like Photoshop, Premiere Pro, or Excel to a mobile device.

Another key reason might be that, at least until the latest generation of iPad Pros appeared, there haven't been any mobile devices that were viable platforms for a "fully functional complex desktop app" in terms of CPU, GPU, RAM, screen size, pointing device etc. The only other "pro" tablets around tend to run full desktop Windows - like the MS Surface.

One suspects that the Photoshop port is being encouraged by Apple as a "flagship" for the recent iPad Pros to try and establish a market for "serious" iPad apps. Here's me betting it's not gonna run on an iPhone, though...

That said, it's worth a look at Affinity Designer/Photo for iPad: https://affinity.serif.com/en-gb/designer/ipad/ - I'll concede that they're not really "like Photoshop, Premiere Pro, or Excel" in terms of price or target market, but I'd say that they are "fully functional complex apps" (take a look at the screenshots) with comparable features to the desktop versions for Windows/Mac.

Speaking of ARM and cross platform development, here's a fun read from the man himself, Linus Torvalds, on what he thinks about ARM in the cloud and ARM laptops.

https://www.realworldtech.com/forum/?threadid=183440&curpostid=183486

Yeah, seen that before. Ironically, the main reason I disagree with him is because of this little OS called Linux, for which he may bear some small responsibility. Thanks to that, and the Unix principles of hardware-independence it engenders, it is possible to develop and test your web application on (say) Ubuntu 18.04 for x86 on your home machine and then deploy it to a server running Ubuntu 18.04 for ARM64 with all the same server software - Apache/NGINX/Node.js/MySQL/MongoDB/whatever - and the same filesystem structure etc. And if your web application includes any assembly code or processor-specific data structures then you are definitely holding it wrong. Companies chose x86 servers in the 90s/early 00s because (a) they were the cheapest and/or (b) they were often targeting Microsoft IIS or Microsoft SQL Server. Today, (a) companies who choose ARM will do so mainly because they spend more on electricity than they do on hardware and (b) it's 2019 and the success of Linux is such that Microsoft are now building Linux emulation into Windows.
 
I suspect Linus is coming at this from a much lower level - the angle of kernels, device drivers, DMA controllers, etc.

There is no question in my mind that the people writing the x86 kernel for Linux need an x86 machine to work on. OTOH half of the web is hacked together in PHP, MySQL and Wordpress... I suspect Linus - a guy known for zero tolerance of sloppy coding - would have to invent whole new classes of profanity to express his opinions on those :)

What he talks about is true even now. There's no way I'd have my QA department give their stamp of approval having tested only on the Simulator.

...and that doesn't really change if the development machines are running processors with the "same" instruction set as the target machine - because there are 1001 other differences in hardware and environment that could cause glitches. The ability to run your code right on your dev box is hugely useful, but it's not a replacement for testing on diverse hardware (and with diverse users who will do crazy things that wouldn't occur to programmers).

The most recent one was an integer overflow on 32-bit ARM iOS devices which took us a long time to discover.

Well, going forward, iOS and MacOS will be 64-bit across the board, which should put the lid on that particular can of worms.
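For illustration, here's a minimal Swift sketch of that class of bug (the numbers are made up). Swift's Int is word-sized - 64 bits on modern devices, 32 bits on the older ARM hardware mentioned above - so arithmetic that is safe on one can trap on the other:

```swift
// ~1.26 billion: fits comfortably in a 32-bit Int.
let seconds = 40 * 365 * 24 * 60 * 60

// let millis = seconds * 1000   // fine on 64-bit, traps at runtime on 32-bit

// Defensive version that behaves the same everywhere:
let (millis, overflowed) = seconds.multipliedReportingOverflow(by: 1_000)
if overflowed {
    // Fall back to a type that is 64 bits wide on every platform.
    print(Int64(seconds) * 1_000)
} else {
    print(millis)
}
```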

P.S. I knew that Ubuntu can run on ARM. I wasn't aware they had all those applications working as well!

Source-code level compatibility has long been part of the Unix/Linux ethos and running on multiple architectures is par for the course for most of the big open-source projects. I'm not saying that people haven't sunk lots of work into getting those applications running on ARM, but the changes required are usually modest.

(As for ARM web servers... Back in the early-mid 90s I set up a web server for my university department - on an Acorn R260 ARM3-based Unix workstation that was knocking around. ISTR it only took a minor kludge to get the HTTPD source from CERN to compile - and that wasn't processor-related - it ran for years... )
 
They certainly have announced that everything will transition away from Intel to ARM. No timetable or product details have emerged. It’s reasonable to think that the upcoming Mac Pro will be Intel seeing that they’ve been working on it and field testing over at Lucasfilm, Pixar and other studios for over two years now.

Like most, I expect that it will show up in a laptop first. Maybe... makes sense... ok, we’re talking about Apple.
so many lies in one post
Right. Whatever makes you happy.

That is not why Apple moved to Intel. The PPC Consortium was unable to develop a faster processor that met Apple's timetable. When Intel showed a line of processors that could meet Apple's goals, they got the gig. Also important was that the new processors did not require liquid cooling. The G5 had major heat problems, as any of us who owned one knows full well, and Apple was having major issues with the liquid coolant leaking in the last versions.

Google it if you like. Disagree if you want, but I have dinner now and then with the person in charge of the transition to Intel (now retired) and know others who were involved in the project.
You make crazy claims here and then you don't back them up, and when people call you on it you deflect. Sometimes I wonder how people justify these things in their heads and don't see the ridiculousness of their statements.
 
Can someone please explain to me what exactly ARM is and why it is supposedly so much better?

Currently, Macs use Intel processors based on the "x86-64" architecture/instruction set, as used by most PCs and servers. ARM is an alternative processor architecture/instruction set, used by most mobile devices, set-top-boxes etc., including iPad, iPhone and Apple TV. For more details on what that means I respectfully direct you towards Google and Wikipedia.

Historically, the fundamental difference is how you program them in low-level "assembly language" - Intel was a traditional "complex instruction set" (CISC) processor, with quite powerful assembly language instructions that made a programmer's job easier (and a lot of backward-compatibility with earlier processors), while ARM was a "reduced instruction set" (RISC) processor that needed more, simpler instructions to do the same job as a single Intel instruction but, as a consequence, was electronically simpler and more efficient. Anyway, more and more programming was done in high-level languages like C, so making assembly language 'programmer friendly' became less important. The first ARM-based machines, from the late 1980s, ran rings around the Intel 286 chips of the day - there was even an ARM-based accelerator card to speed up number-crunching on IBM PCs:

http://chrisacorns.computinghistory.org.uk/docs/Mags/PCW/PCW_Aug87_Archimedes.pdf
http://chrisacorns.computinghistory.org.uk/docs/Mags/PCW/PCW_Jan88_Springboard.pdf

...unfortunately, back in the late 80s/90s, if it didn't run MS-DOS/Windows (which was tightly tied to x86 processors) nobody wanted it so it never really got used on the desktop much beyond a niche of education users and enthusiasts in the UK. The saving grace was its ridiculously low power consumption (see the huge cooling fans in the images above? No? Exactly!) so it got adopted in embedded systems, set-top boxes and then Apple got involved - and ARM's next heroic failure was the Newton (killed by a Doonesbury strip...) but that set it up to totally dominate the smartphone industry as it does today. Subsequent ARM chips were, therefore, designed for low power applications rather than desktop workstations.

Nostalgia aside - why ARM might be better:
  • Low power consumption and cooler running - partly because an ARM processor core is fundamentally simpler than an Intel processor core but can do the same job.
  • Potential for more cores and other specialist add-ons (GPUs, video compression, encryption/security...) on a single chip for the same power consumption c.f. Intel - again, because the cores are less complex.
  • Bespoke processors - only Intel and AMD make x86 processors so Apple are stuck with whatever models they produce. By contrast, ARM themselves don't actually make processors - they sell licenses for their designs to companies like Samsung, Qualcomm and Apple who can "roll their own" systems-on-a-chip with the exact permutations of clock speed, number of cores, GPU, cache etc. that they need for their product. This is what Apple have done with the A-series processors in the iPhone and iPad.
    For example: the new Mac Mini's weak spot is its relatively low-grade integrated graphics, because that's all Intel offer with their desktop processors (most desktop PCs have space for a graphics card). So, potentially, Apple could make an A-series chip with similar processing power and better integrated graphics, specifically tailored to the Mac Mini form factor.
  • Apple have already changed processor architecture twice (68k-to-PPC and PPC-to-x86) and operating system once (OS X was a completely different OS to 'Classic' MacOS). They know how to do this and don't have the drag-anchor of the corporate sector that makes it so hard for Windows to change.
Why ARM might be neutral:
  • There may not be a dramatic increase in raw power - laptops could be thinner, cooler and have better battery life while offering similar power to the current models - but the discrete GPUs in the higher-end machines will still be power-guzzlers.
  • One way that (modern) ARM could add more power at the pro end is to bundle more cores and specialist processing units on the chips (people are making ARM supercomputers, but in those the ARM is primarily a 'controller' for specialist number-crunching hardware). That will only benefit software written to use those features.
Why ARM might be worse:
  • No Windows under Bootcamp or Parallels etc. Yup, MS have a version of Windows for ARM, but in the short term, I'd guess that's not what people currently running Windows on Macs will be wanting. Same for Linux, except Linux for ARM is already a lot more complete and useful than Windows for ARM.
  • Existing Mac software will have to be re-built for ARM to get good performance (Apple will probably include some sort of translator/emulator as they have in the past, but that won't get the best out of the new hardware). For many modern Mac applications, written in ObjC/Swift to Apple guidelines, that should be a case of developers ticking the ARM box in Xcode and doing some testing. Other types of application will take rather more work, and some might not be deemed worth the effort. I'd be most worried about specialist third-party plug-ins for the big 'pro' Apps (Apple will no doubt be lobbying the big guns like Adobe). Still - ARM or not - a lot of 'abandonware' is going to be wiped out by the next release of Mac OS, which won't run older 32-bit Intel code.
  • Apple might use this as an opportunity to "lock down" MacOS to only run Apple-sanctioned software from the store - just like iOS. There's no technical reason for them to do that, there's nothing stopping them from doing that on Intel, but if that is their long-term intention, a switch to ARM that is going to kill off some applications anyway, would be the perfect timing.
Why it might/might not happen:

All we know is that a little bird at Intel has told some little birds at various tech blogs that a little bird told them that Apple are planning to switch to ARM processors starting in 2020. So, plausible rumors, but still rumors.

The benchmarks for the new iPad Pro (ARM-based) put it in contention with the MacBook Pro - OK, benchmarks aren't everything, but it certainly sounds like it could give the 12" MacBook a sound thrashing.

There's a lot of interest in ARM processors for servers - with server-grade (32 core!) ARM chips starting to appear.

Intel are having problems 'shrinking' their processors to the latest chip fabrication technologies.
 
Today many people use Google Maps or other web-based products which are inherently cross-platform. Web-based apps are increasingly performant and often used instead of compiled object code. So -- unlike the past -- that ever-increasing market segment is inherently insulated from a CPU transition.

Credit browser JITs, which do on-the-fly compilation to machine code.

You can still get better performance from traditional compile-and-link builds because of things like profile-guided optimization, whole-program optimization, and compiler directives and intrinsics that guide what the compiler does.
Can someone please explain to me what exactly ARM is and why it is supposedly so much better?

ARM is a CPU architecture that was traditionally designed for low-power environments. Intel and AMD processors have been designed for high-power environments. There are hardware tradeoffs you make for low power vs. high power. Intel and AMD have had a duopoly on the x86 architecture, so there has been limited competition on pricing.

ARM is more open to change.

ARM is a simpler, more RISC-like architecture than x86, which is a CISC architecture even though it's RISC under the covers these days at the microarchitecture level.

Intel owns some ridiculous percentage of the server market, and ARM Holdings would like some of that, so they are working to try to gain share. Customers want lower costs and lower power consumption - but there is no free lunch.

One other aspect of why ARM would be an advantage for Apple - they could build their own custom ARM chip that macOS would run on, which would mean that you couldn't run macOS on other ARM hardware.
 
Right. Whatever makes you happy.
I mean, you're still wrong. Some unnamed source at Intel saying something to a media outlet publishing a rumor isn't concrete evidence. I still think it's going to happen, and likely in the next year or two, but it's still just rumors and speculation. We rarely talk in absolutes about unannounced Apple products here at MacRumors. Heck, even announced products like AirPower aren't absolute any more. Hopefully the modular Mac Pro isn't next!
 
I mean, you're still wrong. Some unnamed source at Intel saying something to a media outlet publishing a rumor isn't concrete evidence. I still think it's going to happen, and likely in the next year or two, but it's still just rumors and speculation. We rarely talk in absolutes about unannounced Apple products here at MacRumors. Heck, even announced products like AirPower aren't absolute any more. Hopefully the modular Mac Pro isn't next!

These rumors have been around since 2008.

Nobody ever says that they were wrong about their certainty that we'll be running ARM next year.

And there are very few that understand the architectural issues or the porting issues.
 
These rumors have been around since 2008.

Nobody ever says that they were wrong about their certainty that we'll be running ARM next year.

And there are very few that understand the architectural issues or the porting issues.
I was just saying that it's wrong to state a rumor as fact. The original post was them saying Apple has already announced the transition from Intel to ARM. That is patently false.
 
I was just saying that it's wrong to state a rumor as fact. The original post was them saying Apple has already announced the transition from Intel to ARM. That is patently false.

I've seen so many arguments of the form of Appeal to the Future. Of course you can't disprove it. But the burden of proof is on those saying it will be so. Most of them don't have the technical background to understand what a change in architecture requires.
 
These rumors have been around since 2008.
Nobody ever says that they were wrong about their certainty that we'll be running ARM next year.

Then again, I remember the skepticism and derision back in 2005 when the first rumors started to surface that Apple was going to switch from PPC to Intel (and seas would turn to blood, hell would freeze over and dogs would play poker)...

I wasn't reading the right usenet groups in 1993, but I'm sure there was similar pulling of beards about the 68k to PPC switch (not that people evangelise about CISC vs. RISC architectures at all).

This is something that Apple has successfully pulled off twice - and back in those days, there was less hardware abstraction in the OS and a lot more application software would have included assembly code and architecture dependencies (although I suspect that Mac, and certainly the Unix-flavoured OS X have never been as bad as Windows 9x in that respect). Heck, one of those (PPC to Intel I presume) also meant switching from a "big-endian" architecture to a "little-endian" one - and there's a reason why those terms were named after the warring factions in Gulliver's Travels.
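For a flavour of what that endianness switch meant in code, here's a minimal Swift sketch (hypothetical example - the real ports dealt with this in C, in thousands of places):

```swift
import Foundation

// Four bytes of a (made-up) file header, written in big-endian order.
let wire: [UInt8] = [0x12, 0x34, 0x56, 0x78]

// Reinterpreting them as a raw UInt32 gives a different answer per CPU:
// on a little-endian machine (x86, modern ARM) raw == 0x78563412.
let raw = wire.withUnsafeBytes { $0.load(as: UInt32.self) }

// Portable code normalises explicitly instead of assuming a byte order:
let value = UInt32(bigEndian: raw)
print(String(format: "%08x", value))   // "12345678" on any host CPU
```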

Meanwhile, since 2008, people have started taking ARM more seriously as a server platform, with server-grade ARM chips appearing, MS are having another bite at Windows for ARM, and Apple have released an iPad with a processor that apparently has the grunt to power a laptop. Oh, and killing off 32-bit applications on both iOS and MacOS is going to clear the decks of some of those old, difficult-to-port legacy apps.

Anyhow, for the moment it's still a rumor. Still, I'd be really interested in seeing a 12.9" iPad Pro running MacOS... which might be a better way for Apple to sell the idea: don't take Intel away from the Mac, add MacOS to the iPad.
 
Then again, I remember the skepticism and derision back in 2005 when the first rumors started to surface that Apple was going to switch from PPC to Intel (and seas would turn to blood, hell would freeze over and dogs would play poker)...

I wasn't reading the right usenet groups in 1993, but I'm sure there was similar pulling of beards about the 68k to PPC switch (not that people evangelise about CISC vs. RISC architectures at all).

This is something that Apple has successfully pulled off twice - and back in those days, there was less hardware abstraction in the OS and a lot more application software would have included assembly code and architecture dependencies (although I suspect that Mac, and certainly the Unix-flavoured OS X have never been as bad as Windows 9x in that respect). Heck, one of those (PPC to Intel I presume) also meant switching from a "big-endian" architecture to a "little-endian" one - and there's a reason why those terms were named after the warring factions in Gulliver's Travels.

Meanwhile, since 2008, people have started taking ARM more seriously as a server platform, with server-grade ARM chips appearing, MS are having another bite at Windows for ARM, and Apple have released an iPad with a processor that apparently has the grunt to power a laptop. Oh, and killing off 32-bit applications on both iOS and MacOS is going to clear the decks of some of those old, difficult-to-port legacy apps.

Anyhow, for the moment it's still a rumor. Still, I'd be really interested in seeing a 12.9" iPad Pro running MacOS... which might be a better way for Apple to sell the idea: don't take Intel away from the Mac, add MacOS to the iPad.

We had plenty of warning on PPC to x86 and Intel's chips were more capable.

I've heard people say we'll have ARM Macs this year or next - as if you can do a port that quickly, from a slower architectural design to a faster one, where the incumbent has absolute monstrosities for chips at the high end.

I would just like people that don't know what they are talking about - in terms of architectures and the software engineering effort it takes to do ports - to stop talking in terms of absolute certainty about stuff that they are clueless about. And, when they are wrong, year after year after year, to just admit it.

I trade and would love to know what AAPL is going to be tomorrow, next week, next year. But I don't pretend to tell people with absolute certainty, or even moderate certainty, where it will be or even which direction it will go in.
 
We had plenty of warning on PPC to x86 and Intel's chips were more capable.
I've heard people say we'll have ARM Macs this year or next. As if you can do a port that quickly.

I actually looked up the PPC to Intel timeline and included it in an earlier post in this thread. Formally announced in June 2005 (after a few months of hard rumours), last PPC Mac models replaced by Intel in Aug 2006. Oh, and when the rumours started, the public hadn’t seen the new Intel “Core” chips that turned out to be the secret sauce for the transition, so we all “knew” that a Pentium/Netburst Mac laptop would be a joke...

I’d agree that, in an ARM transition, the high-end i9/Xeon Macs might need to hang around longer than that because (a) nobody has seen an ARM CPU in that class yet and (b) it’s likely to be the pro graphics/video/audio apps (and their third party plugins) that are the biggest headaches to transition. A 12” MB replacement as a toe in the water would be quite plausible, though.

That said, Apple will be making the processors this time around, so we wouldn’t see them until the machines launched. ARM isn’t fundamentally slower than x86, it’s just that most current chips have been designed for mobile, and it’s way more customisable than x86.

NB. I’ve never suggested that any of this is more than speculation, and I called out the poster claiming that Apple had announced the transition. However, the Intel-attributed rumours are a pretty good basis for speculation.
 
...ARM is a CPU architecture that was traditionally designed for low-power environments. Intel and AMD processors have been designed for high power environments. There are hardware tradeoffs that you make for low-power vs high-power...

This is the traditional view, long held by most people in the CPU architecture community: namely, that RISC-like CPUs have no inherent performance or power advantage, and that semiconductor fabrication, design priorities (power vs. performance) and economies of scale determine the outcome.

This was seemingly confirmed by the remaining high-end RISC CPUs (IBM POWER, Oracle SPARC). They aren't generally any faster than Intel's top-end Xeons, nor do they consume less power.

ARM does better on the low end, but the reasoning was that if ARM ever scaled up to the performance levels of a high-end Xeon, it would burn nearly as much power. There were also questions whether ARM could ever reach that level.

However -- recent results demonstrate the A12X's CPU performance is close to that of the x86 chip in the top-end 2018 MBP, despite consuming much less power. https://arstechnica.com/gadgets/2018/11/apple-walks-ars-through-the-ipad-pros-a12x-system-on-a-chip/

This seems to imply that something fundamental has changed in A-series CPU design. It would still be a big stretch to beat an i9-9900K, but less so than previously -- and the trend looks promising. The A-series can already beat the low-to-mid-range x86 chips in Apple's laptops right now. Presumably a near-future A-series CPU optimized for laptop use (vs. mobile use) would have even better performance while retaining a power-efficiency edge over Intel.

Another big advantage is that Apple would control the CPU architectural features. E.g., Intel won't put Quick Sync in Xeons. If Apple wanted to put the equivalent of Quick Sync in a future A-series desktop CPU, they could do that. Apple has already integrated a Neural Engine subsystem on recent A-series CPUs. In the future, other CPU optimizations and accelerators tailored to Apple's specific needs would be possible.
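To make the Quick Sync point concrete, here's a minimal sketch of how a Mac app asks VideoToolbox for hardware-accelerated H.264 encoding today. On a Xeon there is no Quick Sync block for this to map onto - exactly the kind of gap an Apple-designed SoC could close (a sketch, not production code):

```swift
import VideoToolbox

// Require the hardware encode block; if the CPU/SoC lacks one (e.g. a Xeon),
// session creation fails and the app must fall back to software encoding.
var session: VTCompressionSession?
let spec = [
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true
] as CFDictionary

let status = VTCompressionSessionCreate(
    allocator: nil,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: spec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)

print(status == noErr ? "Hardware video encoder available"
                      : "No hardware encoder on this CPU (status \(status))")
```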
 
I actually looked up the PPC to Intel timeline and included it in an earlier post in this thread. Formally announced in June 2005 (after a few months of hard rumours), last PPC Mac models replaced by Intel in Aug 2006. Oh, and when the rumours started, the public hadn’t seen the new Intel “Core” chips that turned out to be the secret sauce for the transition, so we all “knew” that a Pentium/Netburst Mac laptop would be a joke...

I’d agree that, in an ARM transition, the high-end i9/Xeon Macs might need to hang around longer than that because (a) nobody has seen an ARM CPU in that class yet and (b) it’s likely to be the pro graphics/video/audio apps (and their third party plugins) that are the biggest headaches to transition. A 12” MB replacement as a toe in the water would be quite plausible, though.

That said, Apple will be making the processors this time around, so we wouldn’t see them until the machines launched. ARM isn’t fundamentally slower than x86, it’s just that most current chips have been designed for mobile, and it’s way more customisable than x86.

NB. I’ve never suggested that any of this is more than speculation, and I called out the poster claiming that Apple had announced the transition. However, the Intel-attributed rumours are a pretty good basis for speculation.

You have a year-plus, which is pretty fast, but while some high-profile programs were ported quickly, there was a lot of stuff that took a while.

ARM chips as usually designed are low-power chips, and there are architectural design issues in making them high-power - as explained to me by a chip designer a few years ago. You don't use the same process and design for a low-power chip that you use for a high-power chip. This is why Intel developed the Atom line - their regular x86 architectures couldn't scale that low with the older process geometries. So there is no reason why an ARM chip couldn't be higher power - it would just have to use design considerations for higher power - and it would lose its low-power-consumption attributes. There is no free lunch.

I'm not necessarily talking about your speculations, and I certainly didn't see you back in 2008. It's just this relentless drumbeat from the press, random clueless people and others unfamiliar with chip architecture and design, and with the engineering process for porting large software projects, that annoys me.
This is the traditional view, long held by most people in the CPU architecture community: namely, that RISC-like CPUs have no inherent performance or power advantage, and that semiconductor fabrication, design priorities (power vs. performance) and economies of scale determine the outcome.

This was seemingly confirmed by the remaining high-end RISC CPUs (IBM POWER, Oracle SPARC). They aren't generally any faster than Intel's top-end Xeons, nor do they consume less power.

ARM does better on the low end, but the reasoning was that if ARM ever scaled up to the performance levels of a high-end Xeon, it would burn nearly as much power. There were also questions whether ARM could ever reach that level.

However -- recent results demonstrate the A12X's CPU performance is close to that of the x86 chip in the top-end 2018 MBP, despite consuming much less power. https://arstechnica.com/gadgets/2018/11/apple-walks-ars-through-the-ipad-pros-a12x-system-on-a-chip/

This seems to imply that something fundamental has changed in A-series CPU design. It would still be a big stretch to beat an i9-9900K, but less so than previously -- and the trend looks promising. The A-series can already beat the low-to-mid-range x86 chips in Apple's laptops right now. Presumably a near-future A-series CPU optimized for laptop use (vs. mobile use) would have even better performance while retaining a power-efficiency edge over Intel.

Another big advantage is that Apple would control the CPU architectural features. E.g., Intel won't put Quick Sync in Xeons. If Apple wanted to put the equivalent of Quick Sync in a future A-series desktop CPU, they could do that. Apple has already integrated a Neural Engine subsystem on recent A-series CPUs. In the future, other CPU optimizations and accelerators tailored to Apple's specific needs would be possible.

x86 processors are essentially RISC processors today.

A new study comparing the Intel X86, the ARM and MIPS CPUs finds that microarchitecture is more important than instruction set architecture, RISC or CISC. If you are one of the few hardware or software developers out there who still think that instruction set architectures, reduced (RISC) or complex (CISC), have any significant effect on the power, energy or performance of your processor-based designs, forget it.

Ain't true. What is more important is the processor microarchitecture — the way those instructions are hardwired into the processor and what has been added to help them achieve a specific goal.


This is the over-arching conclusion of a study recently published in the ACM Transactions on Computer Systems. In the paper, "ISA Wars: Understanding the Relevance of ISA being CISC or RISC," authors Emily Blem, Jaikrishnan Menon, Thiruvengadam Vijayaraghavan, and Karthikeyan Sankaralingam report the results of a study over the last four years or so by the University of Wisconsin (Madison) Vertical Research Group (VRG).


https://www.eetimes.com/author.asp?section_id=36&doc_id=1327016#

There are architectures that do well on some workloads and poorly on others. I recall having a SPARC box back in 2001. It cost $20K and the CPU performance seemed horrible. A similar Intel box cost about $4,000 but CPU and GUI performance were five times the SPARC box at one-fifth the cost. But the SPARC box ran Oracle RDBMS far better than the Intel box did. So it comes down to workload.

x86 runs well on a wide variety of workloads, and Intel is willing to work with you to add instructions if you're a big or important customer. They will provide consulting if you have a tough execution or hardware/software problem, and they provide the best x86 compilers and some awesome math, media and other science software libraries. The latter are pretty pricey, but we do use their compilers at a fairly high cost per seat and they are great.

I have not looked at Apple's A12X but it could very well be an apples-to-oranges comparison or Apple may have built a better mousetrap. Intel has fallen behind on process geometry and that could also be a factor.

If it were the better mousetrap, though, Apple could just reintroduce the Xserve with this processor and win the server market, which would be worth a massive amount of money. Intel has a new 28-core 5 GHz chip coming out - I think that will be the king of the hill for at least a few months.

Take Apple's A12X chip and run Oracle, SAP, genomics workloads, Apache, a few cloud workloads, etc. on it and let me know how it compares. I remain skeptical until I see real-world workloads.
 