Wouldn't a laptop with a dGPU consume more power and be thicker, losing the appeal that Arm-based laptops offer?

If a customer wants the workstation-level of performance offered by a dGPU, they will be willing to accept the increased bulk and draw of such a notebook. The SoC is very small (cf. Mac Pro motherboard), so a high-performance mobile graphics card will not add that much bulk. And the iGPU will handle a lot of the housekeeping graphics work, so the dGPU will not be drawing power unless it is actually chomping.

Why would Nvidia develop drivers for Arm-based notebooks? How could notebook manufacturers promote a notebook with a dGPU with few games and few productive applications?

It appears that x86->ARM64 is not that difficult of a port. Mostly just a recompile.
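To put something a bit more concrete behind "mostly just a recompile", here is a minimal sketch; the clang target triples and flags are illustrative assumptions, not any particular vendor's toolchain:

/* portable.c - plain ISO C with no inline assembly and no x86 intrinsics.
 * The same source builds for either ISA; only the compiler target changes:
 *
 *   x86-64:  clang -O2 -target x86_64-pc-windows-msvc  portable.c
 *   ARM64:   clang -O2 -target aarch64-pc-windows-msvc portable.c
 *
 * Porting effort only shows up where code strays from this: SSE/AVX
 * intrinsics, inline asm, JIT compilers, or assumptions about memory
 * ordering that x86 forgives and ARM does not.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint64_t checksum(const uint8_t *buf, size_t len)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + buf[i];   /* ISA-agnostic arithmetic */
    return sum;
}

int main(void)
{
    const uint8_t data[] = "same source, different instruction set";
    printf("checksum: %llu\n", (unsigned long long)checksum(data, sizeof data));
    return 0;
}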
 
If a customer wants the workstation-level of performance offered by a dGPU, they will be willing to accept the increased bulk and draw of such a notebook. The SoC is very small (cf. Mac Pro motherboard), so a high-performance mobile graphics card will not add that much bulk. And the iGPU will handle a lot of the housekeeping graphics work, so the dGPU will not be drawing power unless it is actually chomping.
Apple has succeeded in creating thin but powerful notebooks and many expect Qualcomm's SoC to enable PC notebooks to be like this, but cheaper and running Windows. For many, the most demanding task they do is playing games at 1080p at 60fps and a good iGPU is enough for most games.
 
Apple has succeeded in creating thin but powerful notebooks and many expect Qualcomm's SoC to enable PC notebooks to be like this, but cheaper and running Windows. For many, the most demanding task they do is playing games at 1080p at 60fps and a good iGPU is enough for most games.
Yeah, many don’t understand how useless having a “workstation” solution is for a company that wants to (has to) sell a lot of devices. By many accounts, Apple doesn’t sell ANY “workstation” laptops, and they still sell almost 20 million a year.

If Qualcomm’s first year in operation led to 20 million units sold, that would be a very successful year with not a single workstation laptop (or single dGPU compatible system) sold. :)
 
This thread is from 1.5 years ago. I guess Qualcomm is really late to the party.

I've had a dream of having a decent-performing ARM machine running Linux for years, ever since the Pi 2s came out (and that was 2015).

Although those cheap $100 Android TV boxes are technically computers running on the Linux kernel, they all have underpowered chips with little RAM and, most importantly, closed-source binary blob drivers that don't allow running off-the-shelf Linux distributions (unlike x86). Many ARM-based Chromebooks and a few ARM-based Windows machines came later and had similar problems (low RAM, closed drivers, either too low end or too expensive). The closest to my vision was the HiSilicon 960/970 development boards, which had 6GB RAM (still not enough to do compiles of large projects on a desktop environment).

I would be perfectly content with mobile flagship/sub-flagship performance paired with plenty of RAM, and those MTK 9000/8100 phones overseas were not selling for much more than $500. And 32GB of RAM is like $50 nowadays. I think the most difficult part is the drivers, which are always closed source. It's also a mindset problem. Every manufacturer that is not Apple wants to make $150 Intel Atom PCs / $100 ARM PCs with 4GB of RAM. No one ever thinks about pro users. Nuvia will probably be like the Galaxy Book (for reference, the Galaxy Book Go with SD 7c Gen 2, 4GB/128GB eMMC is $350): not competitive with Intel/AMD on the low end, not enough specs for the power users, dead on arrival.

Apple was actually the first one to come up with a desirable ARM based computer with decent specs because they aim for the top and not the bottom.
 
It appears that x86->ARM64 is not that difficult of a port. Mostly just a recompile.

Hmm, that's a point I hadn't considered before. I think many of us assume the compatibility issues for gaming on Apple silicon are an x86 vs. ARM thing, but it's mostly about compatibility with macOS and what macOS does and doesn't support.

I know that's not 100% related to what you were talking about.
 
It's also a mindset problem. Every manufacturer that is not Apple wants to make $150 Intel Atom PCs / $100 ARM PCs with 4GB of RAM. No one ever thinks about pro users.
Most companies can't design their own cores, so they will wait until they think ARM has designed good cores for notebooks. The same thing happened with ARM-based SoCs for servers: companies waited for ARM to design Neoverse cores.

By the way, Qualcomm seems to be having some problems with its Oryon-based SoCs.
 
By the way, Qualcomm seems to be having some problems with its Oryon-based SoCs.

Not really a "breaking news Flash".... one of the "see also" articles linked at the bottom of that article from earlier this month.

"... The discussion happening on @Ravi_711’s post states that Nuvia will be sampled in late 2023, with the actual chipsets said to arrive in late 2024 or early 2025, while also mentioning that Hamoa, the codename given to the Snapdragon 8cx Gen 4, is running into undisclosed problems. Given that the upcoming silicon was said to be mass produced on TSMC’s 4nm process, ..."


[ Also back in Nov. 2022

".. Qualcomm faced a number of setbacks with its Nuvia-powered Snapdragon SoCs. Initially, the company planned to start sampling the processors in August 2022 and then ship them commercially in 2023. But the company then delayed sampling to 2023 and now expects the arrival of Windows systems based on its SoCs to hit the market in 2024. ... "

If that was sampling in 1H 2023 and has now slid to 2H 2023, then it is suggestive of at least some defect problems.
But "systems in 2024" is mainly old news, since that timeframe was roughly reported last year.

Most of the problem was wonky expectations from the start, though. Nuvia had no working product, let alone one targeting laptops.

]


If it isn't sampling yet, is it 'problems' or just not done? Even Arm architecture license holders have to submit their implementation to Arm for "approval" that it is in compliance with the Arm standards. Vendors may consider that an "onerous" task, but it is likely useful for a fresh, independent set of 'eyes' to look over the implementation and try to find bugs. Pretty sure that since the lawsuit built up enough animosity, Arm has stopped doing any validation and/or useful bug-finding detective work. Qualcomm/Nuvia never actually shipped a finished, working Arm implementation before, so this is a 'version 0.x' product. A six-or-so-month slide on a 'version 0' product shouldn't be all that surprising.

The problem for Qualcomm is that system vendor testers likely have pretty 'cold feet' too, given the large slump in PC sales (rocky profit outlook for them). Never mind the 'doom and gloom' of the lawsuit around the whole chipset thrown on top. (Put in lots of effort and Arm files an injunction to stop your product... it could end up being an even more expensive product than planned.)

Snapdragon 8 Gen 3 is far more strategically critical for Qualcomm to get out the door this year. If there is any sort of resource contention between the mobile flagship and this bumpy Nuvia thing, resources are probably going to the product that pays more of the bills around there.


Additionally, the push to support 3rd-party GPUs probably isn't helping shorten the timelines either. It's just even more complexity that has to be wrangled (more drivers, more validations, more inter-company coordination, therefore more time). Similarly with trying to roll out initially with 8-, 10-, and 12-CPU-core versions. (If there is about as much variability in the GPU cores, and more than one die to roll out at exactly the same time... again, that is just that much more complexity layered on top, which won't make things 'shorter'.)

If this slides into very late 2024, the problem for Qualcomm is more likely going to be AMD Strix Point:

" ... The leaked 45W Strix Point APU reportedly has 12 Zen 5 cores with simultaneous multithreading (SMT). The distribution shows four P-cores and eight E-cores. Unlike Intel, AMD's E-cores support SMT. ...
... GPU ... It's equal to 16 compute units or 1,024 stream processor ....."


You can point at the 45W there as being a 'problem', but if the 8cx Gen 4 cannot keep up GPU-wise, then once you saddle those systems with a dGPU you have probably farted away any power/thermal savings advantage. (I seriously doubt the dGPU support is really going to buy more than some kind of 'checkbox' feature credit with the system vendors.)
 
If they can’t build a fast ARM SoC, why would you think they would be able to build a fast RISC-V SoC?
It will depend on how much help Qualcomm needs from Arm and whether Arm will provide that help. Arm's lead over RISC-V in tools and ecosystem is slowly narrowing, and RISC-V may become an alternative to Arm in the future.
 
It will depend on how much help Qualcomm needs from Arm and whether Arm will provide that help. Arm's lead over RISC-V in tools and ecosystem is slowly narrowing, and RISC-V may become an alternative to Arm in the future.

Building a fast CPU is a problem only loosely coupled to the ISA; I doubt Qualcomm will have more luck with RISC-V than with ARM here. The attraction of RISC-V lies in not needing to negotiate with ARM, so I would certainly understand if Qualcomm and others are interested. Still, it’s a long way to go until RISC-V is ready for general-purpose computing. In particular, I’m not sure how viable RISC-V is going to be with only rudimentary, HPC-oriented SIMD.
 
If Qualcomm continues to struggle with its laptop-class ARM SoC, I wonder if the PC world will jump straight to RISC-V. The RISC-V board of directors has elected one director from Qualcomm and one from Intel (IFS) as new officers.


Legal vs. technical issues are likely one of the biggest hold-ups. I don't think RISC-V is really viable here. RISC-V is going to eat Arm's other, thriving business markets (e.g., embedded controllers, IoT, etc.) first before moving to something that really does not generate substantive money at this point.

And unless Microsoft is jumping up and down screaming super undying love for RISC-V... then it is missing the absolutely essential piece of the puzzle: Windows.

[ MS Azure is far more looped into Arm than RISC-V right now also, so MS is less likely to walk away from that. And Arm isn't doing too bad in the server space.
[Image: Arm Neoverse V2 slide from Hot Chips 35, page 21]



Nvidia's own cherry-picked numbers, but very likely competitive on a broad enough front. Arm's Neoverse baseline designs are not 'bad'; they are more than decent.


]
 
Arm isn't doing too bad in the server space
We should wait for a comparison between the latest chips from all competitors.
The correct comparisons for the Grace Superchip would have been Intel Xeon Max and Genoa-X. STH has had both Xeon Max and Genoa-X for months, so these should have been included.

And unless Microsoft is jumping up and down screaming super undying love for RISC-V... then it is missing the absolutely essential piece of the puzzle: Windows.
Agreed. It seems more likely that Android phones will move to RISC-V before laptops, as Google has shown more interest in RISC-V.

I’m not sure how viable RISC-V is going to be with only rudimentary, HPC-oriented SIMD
What problems do you see with V-extension?
 
What problems do you see with V-extension?

RVV appears to be primarily focused on HPC-style vector computations. But general-purpose code often uses SIMD to do latency-sensitive processing. I am just not confident that RVV will scale well for both needs. Time will tell, I suppose.
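To make that distinction concrete, here is a minimal plain-C sketch of the two usage styles (no real RVV or NEON intrinsics, just comments on how each model maps onto the code; the function names are made up for illustration):

/* Two styles of "SIMD" usage an ISA has to serve well. */
#include <stddef.h>
#include <stdint.h>

/* HPC / throughput style: a long, regular loop over big arrays. This is
 * what RVV's vector-length-agnostic model is built for: the hardware
 * decides how many elements it chews through per iteration. */
void axpy(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* General-purpose / latency style: a short, fixed-size chunk treated as
 * one 128-bit register, e.g. the inner step of strlen/memchr or a UTF-8
 * validator. NEON/SSE code leans hard on the fixed 16-byte width here;
 * it is less obvious how cleanly a vector-length-agnostic model
 * expresses these tricks. */
int has_zero_byte_16(const uint8_t chunk[16])
{
    for (int i = 0; i < 16; i++)
        if (chunk[i] == 0)
            return 1;
    return 0;
}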
 
We should wait for a comparison between the latest chips from all competitors.
The correct comparisons for the Grace Superchip would have been Intel Xeon Max and Genoa-X. STH has had both Xeon Max and Genoa-X for months, so these should have been included.

Not really. I don't think that is really the right comparison. Yeah, those would help against the 'left side' graph that Nvidia posted (which I included with my post), but HBM memory and mega cache aren't going to do much for the right-hand-side graph (which is perf/watt). What those two would likely show are some clear losses by Grace. It only demonstrates that AMD/Intel need additional augments to pull clear of baseline V2 performance. The vast majority of deployments aren't going to use those augments.


Grace doesn't have to beat every option that Intel/AMD throw out there, just do well on a wide variety of general workloads. The vast bulk of the FLOPS workload that Grace-powered systems are likely to see is going to be hefted by the Nvidia GPUs. There, NVLink is likely going to level the playing field on a fairly broad number of workloads. So Nvidia has "other corner cases" to flush out for wins also, basically going to offset the other corner cases that Intel/AMD have.


Agreed. It seems more likely that Android phones will move to RISC-V before laptops, as Google has shown more interest in RISC-V.

Google was 'committed' to Android on x86 too (also MIPS when it was more viable previously), for some products. Google has the attention span of a gnat. There is a bit of a 'freebie' in that the work being thrown at better Linux ports to RISC-V takes a lot of the cost 'bite' out of Android being there also. Android has never been hardcore fixed to just one architecture by 'dogma'.

When it looked like Nvidia was going to 'blow up' the smartphone market with an Arm buy there was lots of "Plan B" effort thrown at this. Whether that keeps up after the Arm IPO or not is an open question. (If SoftBank keeps 90% of the shares for an extended period of time... you'd have much of the same 'back seat driver' problem at Arm.)


Google has embedded uses for RISC-V, like many other folks do (probably Apple also). For example.


That isn't necessarily going to get big wins at the major common end user OS level though.


RISC-V is for the 'race to the bottom' priced Android phones out there. Not sure that will really hold Google's interest over the extended long term. (Especially if firmly grounded in the Chinese 'rogue' app store (no Play Store) Android variants.)



What problems do you see with V-extension?

The bigger problematic issue is not so much the limited area the V extensions cover, but how RISC-V could easily balkanize (just a bit like Linux versus the fragmenting FreeBSD ecosystems).
 
When it looked like Nvidia was going to 'blow up' the smartphone market with an Arm buy there was lots of "Plan B" effort thrown at this.
RISC-V may become the only plan for Chinese companies.

Google was 'committed' to Android on x86 too (also MIPS when it was more viable previously), for some products. Google has the attention span of a gnat. There is a bit of a 'freebie' in that the work being thrown at better Linux ports to RISC-V takes a lot of the cost 'bite' out of Android being there also. Android has never been hardcore fixed to just one architecture by 'dogma'.
Indeed, the push for porting Android to RISC-V comes from Chinese companies, not so much from Google.
The chair of the Android-SIG is from Alibaba and the vice-chair, from Imagination Tech.

how RISC-V could easily balkanize
Profiles should avoid that problem.
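Roughly what that looks like from the software side; a sketch assuming the compilers' usual __riscv_* feature-test macros (treat the exact macro spellings as an assumption):

/* Why per-extension fragmentation is painful, and what a profile buys you. */
#include <stdint.h>

uint64_t popcount64(uint64_t x)
{
#if defined(__riscv_zbb)
    /* Zbb (basic bit-manipulation) present: the compiler can lower this
     * builtin to the single cpop instruction. */
    return (uint64_t)__builtin_popcountll(x);
#else
    /* Fallback for cores that skipped Zbb: portable bit-twiddling. */
    x = x - ((x >> 1) & 0x5555555555555555ull);
    x = (x & 0x3333333333333333ull) + ((x >> 2) & 0x3333333333333333ull);
    x = (x + (x >> 4)) & 0x0f0f0f0f0f0f0f0full;
    return (x * 0x0101010101010101ull) >> 56;
#endif
}

/* With a profile (e.g. RVA23 making V, Zba, Zbb, etc. mandatory), a distro
 * can build one binary per profile instead of sprinkling #ifdefs and
 * runtime dispatch for every optional extension a vendor may have skipped. */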
 
RISC-V may become the only plan for Chinese companies.
Not really.

For smartphone folks serving just the 'internal only' China market (or maybe a China-Russia market), who don't worry about exposure in worldwide markets, a rogue, forked Arm is likely easier; at least for the intermediate term.

" ... based on the company's own TaiShan microarchitecture (which still looks to be found on the Armv8a ISA ) as well as the Maleoon 910 graphics processing unit operating at up to 750 MHz, ..."

At some point sanctions can go overboard. In certain cases, it will just be cheaper to 'steal' (just not pay to license).

The Chinese can't push RISC-V 'up the hill' all by themselves. China is about to lose 'most populous country' (if that is a 'feature') status too.


But RISC-V isn't going to turn "over eyeball deep" DUV multiple patterning gyrations into N3 class silicon. It isn't just the instruction set that is at an impasse.

RISC-V has some better resistance to being sucked into 'trade war' fallout, but it isn't completely immune. It likely isn't completely patent-free if you get down to the nit-picky level.



how RISC-V could easily balkanize

Profiles should avoid that problem.

It likely won't. Profiles more so allow clearly defined, explicit borders; you can more clearly see where you are crossing from one balkan 'state' to another. But it is not more holistic. The more players get pulled into the standards process, the clearer that will likely get. It is partially a mechanism to pull a larger group of factions together, but long-term tensions will likely pull those groups apart again with forces a labeling system isn't going to 'paper over' ('sweep under the rug').
 
For smartphone folks serving just the 'internal only' China market (or maybe a China-Russia market), who don't worry about exposure in worldwide markets, a rogue, forked Arm is likely easier; at least for the intermediate term.
Why would Chinese companies limit themselves to selling only in China by using a fork of the Arm ISA instead of selling worldwide using RISC-V?

The Chinese can't push RISC-V 'up the hill' all by themselves.
It wouldn't be the first market that China controls.

I also agree with you and I think they can't do it alone. That's why I think RISC-V is more likely to succeed in laptops when Intel moves to RISC-V by buying a startup. It tried to buy SiFive a few years ago and I think it will try again.

It likely isn't completely patent-free if you get down to the nit-picky level.
Do you have a link to confirm this?

It likely won't. Profiles more so allow clearly defined, explicit borders; you can more clearly see where you are crossing from one balkan 'state' to another. But it is not more holistic. The more players get pulled into the standards process, the clearer that will likely get. It is partially a mechanism to pull a larger group of factions together, but long-term tensions will likely pull those groups apart again with forces a labeling system isn't going to 'paper over' ('sweep under the rug').
Until you can provide some proof of this, I will consider it FUD.
 
Charlie Demerjian reports:


Charlie has usually got his underwear in a twist over something to complain about.

Put this in context. Nuvia/Oryon has NEVER SHIPPED ANYTHING. This is a 'version 1.0' chip. So out of the gate... having never shipped anything, they are supposed to jump to the front of the market? Really?????
There is some unoptimized subsystem in their SoC... is this supposed to be a 'shocker'?


Covering the M2 is more than decent enough to take share in the Windows market if the perf/watt is still competitive against Intel (and AMD). They don't have to steal share out of the relatively small Mac pool of unit sales. They should be trying to pull sales from the vastly bigger 'lake' (as opposed to a walled 'pond').

" ... . It is a server core and was always meant to be a server core, not a consumer one. ..."

Intel's P core is not primarily a server core? AMD's big performance core is not primarily a server core? They are basically all sitting in extremely similar boats.

Oryon being thrown at Qualcomm's historic mid-range smartphone market is more than kind of loopy for the first generation. If Apple was trying to throw an M2/M3 into a phone, they'd have trouble too.

Qualcomm/Nuvia is starting at the opposite end of the product spectrum, but that doesn't mean they cannot work their way down over time. AMD is. Intel has. Qualcomm moving down in the Mn zone is doable given who they are really primarily competing with (there is zero opportunity to displace the M series in macOS placements). Yes, Qualcomm has publicly thrown out "we are competing with the M-series" as a 'stretch' goal. But actual displacements of the M-series in systems where the M-series is present... that was never going to happen. It is both a 'stretch' and a 'misdirection' goal. (Among other things, misdirection away from the large inertia Qualcomm would have to dislodge to make large progress.)


The part where Qualcomm really seems to be going off the rails in rhetoric is the rumors of these smartphone Oryon SoCs. Really? Power management problems there would be a disaster. By generation 2 or 3 they might be able to go further across their lineup with these cores (this shouldn't need to be a 'version 1.0' fix). Apple didn't do that (long gap between A8-A9 and A14). AMD... nope. Intel... nope.

Qualcomm mainly needs to ship version 1 and make version 2 better. That they stepped into a pile of legal 'poo-poo' makes that all the more relevant.
 