Yeah, that's why we got the M1 in the first place: Because the Intel chips were sooo great and Apple needed to "compete" with them.
The budget for those ads will have been laughably small compared to their R&D budget.

> Competition is good, clearly, but Intel is just shooting itself in the foot.
So they are behind on their own roadmap, and lost Apple as a customer. Instead of putting that money into R&D to make sure they can improve their products and compete, they spent it on ads mocking Apple.
If Intel goes south, it's on them.
I think the GPU solutions now are even worse than the CPU solutions in terms of power. All VR demos I’ve seen thus far are tethered to powerful PCs with beefy CPUs and GPUs. That kind of limits their application. Imagine a battery-powered visor for AR/VR and the possibilities that this could bring.

> Nobody is going to do that with a CPU; there will be GPUs where most of that kind of processing gets done.
That would be like discussing how a 10 kg weight weighs 10 kg. ~300 watts can be seen as a design target for high-end consumer GPUs, because that's what a desktop computer can easily handle without any special cooling solutions. There is no point in pretending that a 100 W GPU is a high-end one, if you can easily make a much better 300 W GPU.

> Yep, I’m surprised that it is not talked about more. Everyone praises how fast the Nvidia 3xxx series is, but nobody mentions the fact that most of that speed is simply achieved by cranking up the power.
This is like saying that 640KB is more than enough for anyone. Imagine if the human race had stopped when it decided that cars were good enough travelling 100 miles on 20 gallons of fuel. Cars nowadays can do 100 miles on about 3 gallons. If we just keep adding more power at all costs, we'd be getting 50 gallons per 100 miles, we'd run out of crude a lot sooner, and we'd be back in the dark ages.

> That would be like discussing how a 10 kg weight weighs 10 kg. ~300 watts can be seen as a design target for high-end consumer GPUs, because that's what a desktop computer can easily handle without any special cooling solutions. There is no point in pretending that a 100 W GPU is a high-end one, if you can easily make a much better 300 W GPU.
And it has zero scalability for laptops (which are, by far, the largest consumer segment now), where the maximum power dissipation is limited by the case design.

> I see it differently. The TDP of GTX 1080 was 180 watts, that of 2080 was raised to 215 watts, and the 3080 is a whopping 320 watts. There is a ridiculous power inflation happening here. Sure, desktops can handle it, but producing hotter and hotter hardware really should not be the goal. This is the kind of lazy solution that kills innovation.
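To put rough numbers on why that laptop ceiling bites: dynamic power scales roughly with voltage squared times clock, and holding a higher clock usually needs a higher voltage, so the last bit of clock speed costs disproportionately many watts. A toy C sketch with made-up figures (not vendor data):

```c
/* Toy model only: dynamic power ~ C * V^2 * f, and V tends to rise
 * with f, so power grows roughly with the cube of the clock.
 * The 20% downclock below is a made-up example, not a real SKU. */
#include <stdio.h>

/* Relative power if voltage scales roughly in proportion to clock. */
static double relative_power(double rel_clock) {
    return rel_clock * rel_clock * rel_clock; /* ~ f^3 */
}

int main(void) {
    double desktop = 1.00;          /* baseline clock */
    double laptop  = 0.80;          /* same silicon, clocked 20% lower */

    printf("laptop power vs desktop:       %.0f%%\n",
           100.0 * relative_power(laptop) / relative_power(desktop));
    printf("laptop performance vs desktop: %.0f%% (roughly tracks clock)\n",
           100.0 * laptop / desktop);
    return 0;
}
```

Which is why performance per watt, rather than peak watts, is the number that matters once a chassis caps dissipation.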
Same here pretty much - BBC Micro and Atari ST. Instruction sets on Intel were horrible.

> I would like ARM to displace the x86 instruction set. Early in my career I found myself writing x86 assembler. It was a pretty ugly architecture compared to the 68000 (in the original Mac & Amiga) or the 6502 (Apple II, Commodore 64 & BBC Micro).
Hopefully Apple inspires Microsoft and PC manufacturers to get serious about Windows on ARM. Linux on ARM is already in wide use on Chromebooks, Android phones and the Raspberry Pi, and available on cloud providers like AWS. I think once ARM Macs are common and Docker for ARM Macs is production-ready, more companies will deploy their cloud workloads on ARM VMs and Docker clusters.
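For most server-side code that move really is mostly a recompile; only genuinely architecture-specific paths need guarding, for example with the predefined macros GCC and Clang already provide. A minimal C sketch (illustrative only):

```c
/* Minimal sketch: the bulk of portable server code is plain C/C++,
 * and only architecture-specific paths are guarded by the compiler's
 * predefined macros (__x86_64__ / __aarch64__ on GCC and Clang). */
#include <stdio.h>

static const char *build_arch(void) {
#if defined(__aarch64__)
    return "arm64";
#elif defined(__x86_64__)
    return "x86-64";
#else
    return "other";
#endif
}

int main(void) {
    printf("built for: %s\n", build_arch());
    return 0;
}
```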
The naming of specific models is a marketing choice. The highest-end models of the 1xxx and 2xxx series had a TDP of 250 W, and many manufacturers sold overclocked models that used more power.

> I see it differently. The TDP of GTX 1080 was 180 watts, that of 2080 was raised to 215 watts, and the 3080 is a whopping 320 watts. There is a ridiculous power inflation happening here. Sure, desktops can handle it, but producing hotter and hotter hardware really should not be the goal. This is the kind of lazy solution that kills innovation.
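Putting the TDP figures from these two posts side by side shows why both framings can be true at once: raw performance can climb a lot while performance per watt climbs much less. The relative-performance column below is a made-up placeholder to show the arithmetic, not benchmark data:

```c
/* TDPs are the figures quoted in the thread (180/215/320 W, with 250 W
 * for the top 1xxx/2xxx parts). The rel_perf numbers are hypothetical
 * placeholders, only there to show how perf-per-watt would be compared. */
#include <stdio.h>

struct card { const char *name; double tdp_w; double rel_perf; };

int main(void) {
    struct card cards[] = {
        { "GTX 1080", 180.0, 1.0 },   /* baseline */
        { "RTX 2080", 215.0, 1.4 },   /* placeholder performance */
        { "RTX 3080", 320.0, 2.2 },   /* placeholder performance */
    };
    double base = cards[0].rel_perf / cards[0].tdp_w;

    for (int i = 0; i < 3; i++)
        printf("%-8s %5.0f W  perf x%.1f  perf/W x%.2f\n",
               cards[i].name, cards[i].tdp_w, cards[i].rel_perf,
               (cards[i].rel_perf / cards[i].tdp_w) / base);
    return 0;
}
```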
Exactly.

> Sort of ... you can get insider previews of WoA but, so far, for production it's OEM-only. I think he's referring to being able to buy your own copy and install it on your own built machine or VM (or, say, an M1 partition? Yes, I know what would need to happen first).
Price and power consumption are both costs, and they should be treated in similar ways. Some consumers have a performance target, and they buy the cheapest they can get away with. Others have a budget, and they buy the best they can afford. Most are somewhere in the middle.
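Those two buying strategies are really the same trade-off approached from opposite ends: minimise cost subject to a performance floor, or maximise performance subject to a cost ceiling. A toy C sketch (hypothetical cards and prices, just to make the two modes explicit):

```c
/* Hypothetical cards and prices, purely to illustrate the two modes. */
#include <stdio.h>

struct gpu { const char *name; double perf; double cost; };

static const struct gpu options[] = {
    { "card A", 1.0, 250.0 },
    { "card B", 1.6, 450.0 },
    { "card C", 2.1, 800.0 },
};
enum { N = sizeof options / sizeof options[0] };

/* Consumer with a performance target: cheapest card that reaches it. */
static const struct gpu *cheapest_hitting(double target) {
    const struct gpu *best = NULL;
    for (int i = 0; i < N; i++)
        if (options[i].perf >= target && (!best || options[i].cost < best->cost))
            best = &options[i];
    return best;
}

/* Consumer with a budget: fastest card that fits it. */
static const struct gpu *fastest_within(double budget) {
    const struct gpu *best = NULL;
    for (int i = 0; i < N; i++)
        if (options[i].cost <= budget && (!best || options[i].perf > best->perf))
            best = &options[i];
    return best;
}

int main(void) {
    const struct gpu *a = cheapest_hitting(1.5);
    const struct gpu *b = fastest_within(500.0);
    printf("target 1.5x -> %s\n", a ? a->name : "nothing qualifies");
    printf("budget 500  -> %s\n", b ? b->name : "nothing qualifies");
    return 0;
}
```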
A Ryzen 5950X for 750 outperforming the 28-core Mac Pro is very stiff competition.

> Yes, competition is a very healthy thing. AMD's Ryzen might be the M1's stiffest competition at the moment.
I had a few of the processors you helped to design — very good they were too.

> I was a designer on some of AMD's CPUs, and at one point I owned the integer execution units and dispatch. Not much reason x86 can't go wider, other than the fact that code probably wouldn't benefit too much from it, due to too much instruction interdependency, I suppose. Microcode is a disadvantage - when you send a complex instruction to the instruction decoder and it replaces it with a sequence of N micro-ops, those micro-ops will tend to have interdependencies which require them to be at least partially sequenced. If, instead, you have Arm, you can let the compiler do some of the work of ordering the instruction stream to take advantage of multiple pipelines, and the instruction stream that reaches the instruction decoder will tend to have fewer clumps of interdependent instructions.
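The interdependency point is easy to picture in plain C: a single accumulator forms one long dependency chain that no amount of issue width can help, while splitting the work gives a wide core independent chains to run in parallel. A minimal sketch (my own illustration, deliberately simplified; real compilers and out-of-order hardware blur the picture):

```c
#include <stddef.h>

/* One long dependency chain: every add waits on the previous one,
 * so extra pipelines sit idle regardless of how wide the core is. */
double sum_chain(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];                  /* loop-carried dependency on s */
    return s;
}

/* Four independent chains: the same wide core can keep several
 * execution units busy because the partial sums don't depend on
 * each other until the very end. */
double sum_split(const double *x, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i + 0];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++)              /* leftover elements */
        s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}
```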
Ampere is already incredibly efficient. They just have to make them less powerful and smaller for mobile use, something like a 1650 with Ampere.

> The naming is, the base cluster configuration is not. The size of the latest Nvidia GPUs and the memory bus/interface choice make it very obvious that they are designed to achieve high performance by using more power.
As @Andropov correctly points out, it's a questionable path with unclear sustainability. By designing for power instead of efficiency you are limiting yourself in markets that are becoming increasingly important. Nvidia Ampere is great on desktop (mostly because of the massively increased power budget), but it's much less impressive in laptops. Who knows, maybe Nvidia has some sort of secret architecture they are working on that will greatly improve power efficiency, and Ampere is just a stopgap to get some sales, but they can't continue doing this going forward. What next? 500W on the high end? The power consumption of enthusiast-level hardware is already barely sustainable today...
If they dropped the RT cores it would be, that's what I mean.

> I'm curious about the upcoming 3050 as well. So far, leaked scores put it at the same level as the 1660 with the same power consumption, so less interesting (but then again, who knows how reliable the leak is). As to the existing GPUs... a 95W mobile 3060 seems to be around 15-25% faster than the Turing GPUs with the same TDP, which is not bad at all, but not something I'd refer to as "incredibly efficient".
“Filled with?” Three guys. One is an architect. Only two are chip designers. Not to mention the lawsuits. I wouldn’t count on much from Nuvia.

> Unlikely to ever be their business model.
However, Qualcomm just bought Nuvia, which is filled with ex-Apple chip designers, and is set to release a laptop chip late next year designed by them (current and near-future releases are standard ARM cores). While I would imagine they’ll be selling to OEMs, that’s probably your best bet for the future of building your own performance ARM system. I don’t know if Qualcomm has any interest in doing so, but they might if they want to take on AMD and Intel in all sectors.
I also designed the first cut of the 64-bit integer math instruction set, so I guess pretty much everyone has had a CPU I helped design, even if it’s from that horrible Intel company.

> I had a few of the processors you helped to design — very good they were too.
Wow, multiple massive inaccuracies in only 2 points, congrats.

> 1) Early on, Intel was too focused on raw speed at the expense of power management. This meant they were never a viable contender to provide CPUs for the iPhone, or for subsequent Android smartphones either. Because of this, Intel was effectively shut out of the growing smartphone market at a time when PC sales were starting to stagnate. They basically locked themselves out of the next big thing.
> 2) Because of their early financial success in providing cheap x86 processors for servers and data centres, Intel never felt the need to innovate and move beyond the x86 instruction set. It's classic disruption theory: a company doubles down on what made it successful in the first place, at the expense of missing the next big thing.