
Mr.Blacky

Cancelled
Jul 31, 2016
1,880
2,583
Yeah, that's why we got the M1 in the first place: because the Intel chips were sooo great and Apple needed to "compete" with them.
 
  • Like
Reactions: tranceking26

ian87w

macrumors G3
Feb 22, 2020
8,704
12,638
Indonesia
Competition is good, clearly, but Intel is just shooting itself in the foot.
So they're behind on their own roadmap and have lost Apple as a customer. Instead of putting the money into their R&D to make sure they can improve their products and compete, they spent it on ads mocking Apple.

If Intel goes south, it's on them.
 

tranceking26

macrumors 65816
Apr 16, 2013
1,464
1,650
Competition is good, yes, but there's maybe a 10% chance at best that Intel will ever catch up to Apple.
 

Modernape

macrumors regular
Jun 21, 2010
232
42
Competition is good, clearly, but Intel is just shooting itself in the foot.
So they're behind on their own roadmap and have lost Apple as a customer. Instead of putting the money into their R&D to make sure they can improve their products and compete, they spent it on ads mocking Apple.

If Intel goes south, it's on them.
The budget for those ads will have been laughably small compared to their R&D budget.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Nobody is going to do that with CPUs; there will be GPUs where most of that kind of processing gets done.
I think the GPU solutions now are even worse than the CPU solutions in terms of power. All the VR demos I've seen thus far are tethered to powerful PCs with beefy CPUs and GPUs. That kind of limits its application. Imagine a battery-powered visor for AR/VR and the possibilities that could bring.

10 years ago I still relied on paper maps to get around. Now I just need my iPhone.

I think we are seeing the beginning of powerful mobile applications. Maybe in another 10 years all solutions will be mobile, and the CPU and GPU power required will be far more than what we can even imagine now. Think AI virtual assistant robots, for example.

Anyway, it’ll be shorted sighted of me if I say that what we have now is already good enough with no need for further improvement.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I think the GPU solutions now are even worse than the CPU solutions in terms of power. All the VR demos I've seen thus far are tethered to powerful PCs with beefy CPUs and GPUs. That kind of limits its application. Imagine a battery-powered visor for AR/VR and the possibilities that could bring.

Yep, I’m surprised that it is not talked about more. Everyone praises how fast Nvidia 3xxx series are, but nobody mentions the fact that most of that speed is simply achieved by cranking up the power.

By the way, that’s also why I think Apple will reign in VR. Their TBDR GPUs are capable of VR rendering at a significantly lower power consumption and their variable rate rasterization seems to be developed specifically with VR in mind (it’s very different from the usual variable rate shading that aims to conserve shader resources instead).
 
  • Like
Reactions: Zorori and Andropov

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Yep, I’m surprised that it is not talked about more. Everyone praises how fast Nvidia 3xxx series are, but nobody mentions the fact that most of that speed is simply achieved by cranking up the power.
That would be like discussing how a 10 kg weight weighs 10 kg. ~300 watts can be seen as a design target for high-end consumer GPUs, because that's what a desktop computer can easily handle without any special cooling solutions. There is no point in pretending that a 100 W GPU is a high-end one, if you can easily make a much better 300 W GPU.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
That would be like discussing how a 10 kg weight weighs 10 kg. ~300 watts can be seen as a design target for high-end consumer GPUs, because that's what a desktop computer can easily handle without any special cooling solutions. There is no point in pretending that a 100 W GPU is a high-end one, if you can easily make a much better 300 W GPU.

I see it differently. The TDP of GTX 1080 was 180 watts, that of 2080 was raised to 215 watts, and the 3080 is a whopping 320 watts. There is a ridiculous power inflation happening here. Sure, desktops can handle it, but producing hotter and hotter hardware really should not be the goal. This is the kind of lazy solution that kills innovation.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
That would be like discussing how a 10 kg weight weighs 10 kg. ~300 watts can be seen as a design target for high-end consumer GPUs, because that's what a desktop computer can easily handle without any special cooling solutions. There is no point in pretending that a 100 W GPU is a high-end one, if you can easily make a much better 300 W GPU.
This is like saying that 640KB is more than enough for anyone. Imagine if the human race had stopped when it was decided that cars were good enough travelling 100 miles on 20 gallons of fuel. Cars nowadays can achieve 3 gallons per 100 miles travelled. If we just continue on the trajectory of adding more power at all costs, we'd be getting 50 gallons per 100 miles travelled, we'd run out of crude a lot sooner, and we'd be back in the dark ages.
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
I see it differently. The TDP of GTX 1080 was 180 watts, that of 2080 was raised to 215 watts, and the 3080 is a whopping 320 watts. There is a ridiculous power inflation happening here. Sure, desktops can handle it, but producing hotter and hotter hardware really should not be the goal. This is the kind of lazy solution that kills innovation.
And it has zero scalability for laptops (which are, by far, the largest consumer segment now), where the maximum power dissipation is limited by the case design.
 

philstubbington

macrumors 6502a
I would like ARM to displace the x86 instruction set. Early in my career I found myself writing x86 assembler. It was a pretty ugly architecture compared to 68000 (in the original Mac & Amiga) or the 6502 (Apple II, Commodore 64 & BBC Micro).

Hopefully Apple inspires Microsoft and PC manufacturers to get serious about Windows on ARM. Linux on ARM is already in wide use on Chromebooks, Android phones and the Raspberry Pi, and is available from cloud providers like AWS. I think once ARM Macs are common and Docker for ARM Macs is production-ready, more companies will deploy their cloud workloads on ARM VMs and Docker clusters.
Same here, pretty much - BBC Micro and Atari ST. The Intel instruction sets were horrible.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
I see it differently. The TDP of GTX 1080 was 180 watts, that of 2080 was raised to 215 watts, and the 3080 is a whopping 320 watts. There is a ridiculous power inflation happening here. Sure, desktops can handle it, but producing hotter and hotter hardware really should not be the goal. This is the kind of lazy solution that kills innovation.
The naming of specific models is a marketing choice. The highest-end models of the 1xxx and 2xxx series had a TDP of 250 W, and many manufacturers sold overclocked models that used more power.

Price and power consumption are both costs, and they should be treated in similar ways. Some consumers have a performance target, and they buy the cheapest they can get away with. Others have a budget, and they buy the best they can afford. Most are somewhere in the middle.
 
  • Like
Reactions: Zorori

Jack Neill

macrumors 68020
Sep 13, 2015
2,272
2,308
San Antonio Texas
Sort of ... you can get Insider previews of WoA but, so far, for production it's OEM-only. I think he's referring to being able to buy your own copy and install it on your own built machine or VM (or, say, an M1 partition? Yes, I know what would need to happen first).
Exactly.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
The naming of specific models is a marketing choice. The highest-end models of the 1xxx and 2xxx series had a TDP of 250 W, and many manufacturers sold overclocked models that used more power.

The naming is, the base cluster configuration is not. The size of the latest Nvidia GPUs and the memory bus/interface choice make it very obvious that they are designed to achieve high performance by using more power.

Price and power consumption are both costs, and they should be treated in similar ways. Some consumers have a performance target, and they buy the cheapest they can get away with. Others have a budget, and they buy the best they can afford. Most are somewhere in the middle.

As @Andropov correctly points out, it's a questionable path with unclear sustainability. By designing for power instead of efficiency you are limiting yourself in markets that are becoming increasingly important. Nvidia Ampere is great on desktop (mostly because of the massively increased power budget), but it's much less impressive in laptops. Who knows, maybe Nvidia has some sort of secret architecture they are working on that will greatly improve power efficiency, and Ampere is just a stopgap to get some sales, but they can't continue doing this going forward. What next? 500W on the high end? The power consumption of enthusiast-level hardware is already barely sustainable today...
 

majormike

macrumors regular
May 15, 2012
113
42
Yes, competition is a very healthy thing. AMD's Ryzen might be the M1's stiffest competition at the moment.
A Ryzen 5950X for 750 outperforming the 28-core Mac Pro is very stiff competition.

The M1 and Apple Silicon will finally make Apple devices properly powerful and more affordable for all Apple users, and they are predestined for mobile usage, beating all the competition in battery life.

Apple will keep their overpriced Intel Mac Pros around for a while until they come up with something equally powerful, which might happen 2+ years from now if they keep up their current pace. Otherwise it'll be more like 4 years, after they've thoroughly updated their non-Pro line, which will already outperform most of the nMPs anyway.

If you are a Pro user, your stationary machine is more than likely not an Apple anymore. I think Apple realized themselves how thin their Mac Pro sales have become, and with the M1 they are focusing on the consumer market instead, which will also be powerful enough for most Pros anyway.

They might even ditch the whole Pro moniker for their stationary computers.
 

qoop

macrumors 6502
Feb 4, 2021
440
424
THE UNITED KINGDOM
I was a designer on some of AMD's CPUs, and at one point I owned the integer execution units and dispatch. Not much reason x86 can't go wider, other than the fact that code probably wouldn't benefit too much from it, due to too much instruction interdependency, I suppose. Microcode is a disadvantage - when you send a complex instruction to the instruction decoder and it replaces it with a sequence of N micro-ops, those micro-ops will tend to have interdependencies which require them to be at least partially sequenced. If, instead, you have Arm, you can let the compiler do some of the work of ordering the instruction stream to take advantage of multiple pipelines, and the instruction stream that reaches the instruction decoder will tend to have fewer clumps of interdependent instructions.
I had a few of the processors you helped to design — very good they were too.
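To make the microcode point in the quote a bit more concrete, here is a minimal C sketch; the assembly in the comments is approximate and compiler-dependent, shown only to illustrate the dependency chain:

Code:
/* A memory-destination add. On x86-64 this can be a single instruction that
 * the decoder expands into load -> add -> store micro-ops, which form an
 * inherently serial chain. On AArch64 the same work is three explicit
 * instructions the compiler already sees, so it can schedule unrelated
 * instructions in between them. */
void bump(int *a, long i, int x) {
    a[i] += x;
    /* x86-64 (roughly):  add dword ptr [rdi + rsi*4], edx
     *   decoder emits:   load uop -> add uop -> store uop
     *
     * AArch64 (roughly): ldr w8, [x0, x1, lsl #2]
     *                    add w8, w8, w2
     *                    str w8, [x0, x1, lsl #2]
     *   the compiler can interleave independent work between these. */
}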
 

majormike

macrumors regular
May 15, 2012
113
42
The naming is, the base cluster configuration is not. The size of the latest Nvidia GPUs and the memory bus/interface choice make it very obvious that they are designed to achieve high performance by using more power.



As @Andropov correctly points out, it's a questionable path with unclear sustainability. By designing for power instead of efficiency you are limiting yourself in markets that are becoming increasingly important. Nvidia Ampere is great on desktop (mostly because of the massively increased power budget), but it's much less impressive in laptops. Who knows, maybe Nvidia has some sort of secret architecture they are working on that will greatly improve power efficiency, and Ampere is just a stopgap to get some sales, but they can't continue doing this going forward. What next? 500W on the high end? The power consumption of enthusiast-level hardware is already barely sustainable today...
Ampere is already incredibly efficient. They just have to make them less powerful and smaller for mobile use, something like a 1650 with Ampere.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Ampere is already incredibly efficient. They just have to make them less powerful and smaller for mobile use, something like a 1650 with Ampere.

I'm curious about the upcoming 3050 as well. So far, leaked scores put it at the same level as the 1660 with the same power consumption, so less interesting (but then again, who knows how reliable the leak is). As to the existing GPUs... a 95W mobile 3060 seems to be around 15-25% faster than the Turing GPUs with the same TDP, which is not bad at all, but not something I'd refer to as "incredibly efficient".
 

majormike

macrumors regular
May 15, 2012
113
42
I'm curious about the upcoming 3050 as well. So far, leaked scores put it at the same level as the 1660 with the same power consumption, so less interesting (but then again, who knows how reliable the leak is). As to the existing GPUs... a 95W mobile 3060 seems to be around 15-25% faster than the Turing GPUs with the same TDP, which is not bad at all, but not something I'd refer to as "incredibly efficient".
If they dropped the RT cores it would be, that's what I mean.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Unlikely to ever be their business model.

However, Qualcomm just bought Nuvia, which is filled with ex-Apple chip designers, and is set to release a laptop chip designed by them late next year (current and near-future releases use standard ARM cores). While I would imagine they'll be selling to OEMs, that's probably your best bet for the future of building your own performance ARM system. I don't know if Qualcomm has any interest in doing so, but they might if they want to take on AMD and Intel in all sectors.
“Filled with?” Three guys. One is an architect. Only two are chip designers. Not to mention lawsuits. I wouldn’t count on much from Nuvia.
 

crevalic

Suspended
May 17, 2011
83
98
1) Early on, Intel was too focused on raw speed, at the expense of power management. This meant that they were never a viable contender to provide CPUs for the iPhone or subsequent Android smartphones. Because of this, Intel was effectively shut out of the growing smartphone market, at a time when PC sales were starting to stagnate. They basically locked themselves out of the next big thing.

2) Because of their early financial success in providing cheap x86 processors for servers and data centres, Intel never felt the need to innovate and move beyond the x86 instruction set. It's classic disruption theory - a company doubles down on what made it successful in the first place, at the expense of missing the next big thing.
Wow, multiple massive inaccuracies in only 2 points, congrats.

1) Intel invested a massive amount of resources into Atom development starting in 2004, years before smartphones ever appeared. They developed 1-3W cores while standard low-power laptop chips were in the 35W+ range, and Atom was initially very competitive on the market. However, internal struggles resulted in an effective abandonment of this segment. Firstly, Atom was competing in a lower-margin, low-unit-cost market with more competition than the x86 market (by Atom's release in 2008, Intel had already regained leadership ahead of AMD). At that point Intel sold everything they could make, and it made sense to use fab capacity on the expensive parts where they were free to set the price. Secondly, since Intel's approach was so profitable (and, to be honest, still is), many people inside the company were against "rocking the boat" and risking their high-margin, high-performance cash cows, especially when netbooks exploded in popularity.

2) This is just so wrong - you really need to know absolutely nothing about this field to come up with this. In real life, Intel wanted to move on from x86 and invested absolutely monstrous resources, including the largest design team in history, to create a completely new architecture called IA-64 (also known as Intel Itanium). IA-64 was not compatible with x86/32-bit instructions, breaking compatibility with all older software, and, if successful, it would have gotten rid of pesky x86 competition like AMD. Instead, AMD went the opposite way and extended the x86 instruction set with 64-bit support, enabling backwards compatibility. In the end, Intel was forced to effectively write off the investment in IA-64 and license x86-64 from AMD.
 
  • Like
Reactions: thekev