
Colstan

macrumors 6502
Jul 30, 2020
330
711
I don't think ARC's hardware is gimped. They for sure have driver problems though.
Igor's Lab believes there's something wrong with the scheduler that can't be fixed with drivers. It's been covered by Linus and Moore's Law is Dead.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Igor's Lab believes there's something wrong with the scheduler that can't be fixed with drivers. It's been covered by Linus and Moore's Law is Dead.
I guess we will know if they send out another die revision for the A380 that is different from what is in folks' hands now.
 
  • Like
Reactions: Colstan

Colstan

macrumors 6502
Jul 30, 2020
330
711
I guess we will know if they send out another die revision for the A380 that is different from what is in folks' hands now.
I'd very much like Intel to succeed with Arc. Having an extra competitor in the GPU space would be great. I'm just not sure that they have the institutional fortitude for it. Larrabee followed a similar pattern, with expectations of a gaming card, but it ended up being shoved into the data center. The spiritual forefather of Alchemist, the i740, never made it to version two. Hopefully, Arc breaks the curse.
 

ondioline

macrumors 6502
May 5, 2020
297
299
By knowledge I mean that there's no way for AMD or Intel to know exactly the systems their solutions are expected to go into. Apple's chip makers know the dimensions of the systems (and even have a say in them), they know ALL the compilers that will be used to write software (and have a say in that), and they are even aware of the specific APIs their solution is expected to support (again, with input into that as well). AMD and Intel will never have that level of knowledge. And they'll never have the flexibility to do something like announce "no more 32-bit instructions." They will always have to support the years and years of cruft that are out there.
Sure they can, somebody doesn't remember Itanium!
 

MrGunny94

macrumors 65816
Dec 3, 2016
1,148
675
Malaga, Spain
I'd very much like Intel to succeed with Arc. Having an extra competitor in the GPU space would be great. I'm just not sure that they have the institutional fortitude for it. Larrabee followed a similar pattern, with expectations of a gaming card, but it ended up being shoved into the data center. The spiritual forefather of Alchemist, the i740, never made it to version two. Hopefully, Arc breaks the curse.
I mean, they're committed to at least three generations, so let's see what happens. I think the Intel Arc GPUs will be quite helpful for Intel in the laptop space, especially integrated graphics and cheap gaming laptops.
 
  • Like
Reactions: Colstan

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Sure they can, somebody doesn't remember Itanium!
If AMD didn't exist at the time, and if Intel REALLY had the market control that would mean customers would continue to stick with them as they removed the backwards-compatible stuff and made the replacement more performant, Intel would be in a better place today, with a more streamlined, simplified, and efficient instruction set. As it is, they were forced to backpedal, and here we are, at a point where the chip is so complex it needs assistance from the OS scheduler to hit its full performance.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
If AMD didn't exist at the time, and if Intel REALLY had the market control that would mean customers would continue to stick with them as they removed the backwards-compatible stuff and made the replacement more performant, Intel would be in a better place today, with a more streamlined, simplified, and efficient instruction set. As it is, they were forced to backpedal, and here we are, at a point where the chip is so complex it needs assistance from the OS scheduler to hit its full performance.
Eh, I think some of that is due to Intel not making consumer IA64 chips like AMD made consumer AMD64 ones. I think Intel believed they could get the server space to move, then maybe the consumer side. It didn't work out that way (clearly).
 

blazerunner

Suspended
Nov 16, 2020
1,081
3,998
Games designed for Apple Silicon, then yes, perfectly fine. However, I suspect you're tongue-in-cheek referring to game availability, not the functions of the actual silicon. If the hardware were gimped, then they'd be in the same situation that Intel is apparently facing with Alchemist.
It's gimped. If you think a mobile GPU from Apple plays games perfectly fine, then you clearly have a very loose definition of what a good-performing GPU is.
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
It's gimped. If you think a mobile GPU from Apple plays games perfectly fine, then you clearly have a very loose definition of what a good-performing GPU is.
Until we have a native AAA title on macOS with Metal 3, we cannot judge gaming performance on any M1. Resident Evil Village should give us a GREAT example of how Apple's GPUs perform when they are not hobbled by old OpenGL or Rosetta translation.

Apple's GPUs are good. They're on par with RDNA 2 iGPUs.
[attached benchmark image: sotb.png]
 
  • Like
Reactions: Colstan and Xiao_Xi

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Eh, I think some of that is due to Intel not making consumer IA64 chips like AMD made consumer AMD64 ones. I think Intel believed they could get the server space to move, then maybe the consumer side. It didn't work out that way (clearly).
Right, what I said was, “If AMD didn’t exist at the time”. And, if AMD didn’t exist, they wouldn’t have been making consumer AMD64 anything. Without AMD, the industry would have been pissed with Intel, sure, but there literally wouldn’t have been any other way forward and they’d just make the best of Intel’s offerings. Just like the industry made the best of the Skylake situation. :)
 
  • Like
Reactions: diamond.g

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
It's gimped. If you think a mobile GPU from Apple plays games perfectly fine, then you clearly have a very loose definition of what a good-performing GPU is.
Apple's GPUs are capable of very good performance for their power envelope. Expecting them to perform on par with GPUs with several times their power draw is clearly unreasonable (particularly on code targeting the PC offerings).

Conversely, ignoring the power side of the equation is also unreasonable since Apple clearly designs their SoCs to allow ergonomics and form factors they favour.

We’ll see where Apple goes from here, but it would surprise me greatly if they were to drastically change their design ethos to line up with desktop PCs. In my book, this is a good thing.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Until we have a native AAA title on macOS with Metal 3, we cannot judge gaming performance on any M1. Resident Evil Village should give us a GREAT example of how Apple's GPUs perform when they are not hobbled by old OpenGL or Rosetta translation.

Apple's GPUs are good. They're on par with RDNA 2 iGPUs. View attachment 2037299
On one hand it is cool that the M2 can outperform the 680M; on the other, since it has more ALUs, it seems like it should outperform the AMD part, right?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
It's gimped. If you think a mobile GPU from Apple plays games perfectly fine, then you clearly have a very loose definition of what a good-performing GPU is.

M2 gets ~35 FPS at 1080p medium in BG3 and scores over 6k in 3DMark Wild Life. That is comparable to Nvidia's current lower-power 25-30W dedicated GPUs. Of course it's far from enthusiast gaming level, but it's absolutely sufficient for an everyday user. As to the rest, refer to the excellent post by @EntropyQ3 above.
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
On one hand it is cool that the M2 can outperform the 680M; on the other, since it has more ALUs, it seems like it should outperform the AMD part, right?
We can't do a proper comparison of these two chips until the M2 has a game that is fully optimised for the API.
That's why RE: Village is going to be a good benchmark: it will be made using Metal 3, so we can finally see just how good the GPU in the M2 is compared to the 680M.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
We can't do a proper comparison of these two chips until the M2 has a game that is fully optimised for the API.
That's why RE: Village is going to be a good benchmark: it will be made using Metal 3, so we can finally see just how good the GPU in the M2 is compared to the 680M.

Synthetic benchmarks are a decent proxy of the potential performance. In 3DMark, M2 is faster than the 680M by at least 20% and is positioned between the Max-Q and regular laptop variants of GTX 1650.

Real game results are obviously going to be the decisive test, but just because a game has been ported to Metal 3 doesn't mean that the port has been done well. One can hope, of course. At any rate, I intend to purchase these games, even if they're not really my cup of tea, just to send a signal to the market :)
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
Right, what I said was, “If AMD didn’t exist at the time”.
It's interesting just how close that was to happening. According to Cliff:

What else other than opteron would we have worked on? I mean, the *original* K8 was a whole other thing, but we had no choice but to do what became Opteron because almost the entire design team resigned and what you had left were around 15-20 folks between circuit design, logic design, and architecture. And even Opteron (which we called sledgehammer) almost didn’t happen - there was a dinner at La Papillon that ended in a vote. The alternative was we go work elsewhere.
Anyway, what would have happened if Opteron wasn’t a thing? Well, either Merced/Itanium would have succeeded, for lack of alternative, OR PowerPC/Power would have succeeded, or maybe AMD would have done something else that would have pulled the industry along (there is some possibility it could have been ARM - it would have HAD to have been some form of RISC since we didn’t have manpower to do something huge. But whatever it would have been would have required Microsoft’s buy-in. That is often overlooked - we would never have done Opteron if Microsoft didn’t agree to put Windows on it.)
So, most of the design team left AMD, only 15-20 people remained, and they held a vote on whether to stay. In some alternate timeline, we're all using Itanium, like it or not.
 
  • Like
Reactions: Unregistered 4U

Colstan

macrumors 6502
Jul 30, 2020
330
711
Maybe we would have been better off if x86 had died back then.
Perhaps, we'll never really know. Cliff says that Intel "let the HP PA-RISC guys make a science project" and the end result was Itanium. Given his time at Intel's competitors, I understand why he's not a fan of team blue, but we'll ultimately never know. The market simply wouldn't accept non-x86 mass market CPUs at that time because AMD provided an alternative, and it's only now that we're getting some measure of competition with a different ISA. It's reminiscent of the 80s, when there were multiple competing designs.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Igor's Lab believes there's something wrong with the scheduler that can't be fixed with drivers. It's been covered by Linus and Moore's Law is Dead.

Igor's Lab is reaching for a root cause. There is a retest with updated drivers.

Intel doesn't have a mega-framerate gaming card here. Reviews that hinge the card's worth almost entirely on it being an Nvidia 'killer' are going to drift toward pointing at 'broken' hardware as the cause.

Much of the larger Xe core's history was aimed more at workstation/server GPGPU compute than at maximum-frame-rate monitor driving. Just as much as a 'hardware problem', the results look like Intel trying to goose the max frame rates too high, with crappy edge cases popping up. You can frame that as 'if the hardware were faster you could flog it harder without glitches', but it can also be framed as: match the drivers to what you have, not what you wished you had.

Intel has probably made a mistake chasing gaming-fanboy benchmarks. DirectX 11 isn't a focus area, but it is soaking up resources, and so is chasing funky game engine number 27 with its quirks. If Intel had gone after mainstream apps and content creation, with overlap with their leading video de/encode abilities, mandatory resizable BAR, and 60-80 Hz frame rates, they would probably be in better shape. The OpenCL and oneAPI work the Xe-HPC cards are aimed at is probably a better fit with the shared Xe core lineage.

The crypto craze probably also plays a role here, both in jumbling Intel's focus (which isn't so great anyway) and in worrying about hash rate limiters. [If the GPU draw call rate is slow, is the hash rate going to be vast multiples faster?]

What Intel really needed/needs are good-enough, cost-effective cards for system vendors to include in systems for regular folks, not folks focused on games. That would not have 'killed' Nvidia, but it would give Intel a foothold adjacent to the iGPU market they already dominate. Make it work, then make it fast in gen 2 and 3. The volume would have been lower, but the first gen shouldn't need to be a high-volume seller to be successful.

If you look at the lineup (without the hype train from sales), that is mainly what the hardware looks like: a low- to mid-range affordable option. Unfortunately, some part of Intel sales and marketing put the gaming abilities on a hype train, and some middle and upper management drank the company Kool-Aid (and/or got talked into making gaming reviewers happy in order to sell cards). Maybe they also foolishly thought the GPU supply scarcity would buy them time to chase too many directions at once, and that folks would just buy the cards because they were there at close to MSRP. If so, that was a huge blunder.

P.S. If there is a hardware problem, I suspect at least part of it was Intel trying to put DisplayPort 2.0 on these cards. That was probably 'a bridge too far' given the scope of the work and their newness to the territory. If there are firmware updates for the display controllers, that contributing issue might be fixable with more time. (Apple is not being very aggressive there at all; the M2 still has the same video-output stream limits. Save complexity on subcomponent X so you can add more polish to subcomponent Y.)
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
Curious, how does Optane compare with RAM or SSDs power-wise? Both of those get bloody hot if using high throughput. Is it actually worse?

My understanding was that, byte for byte, Optane drew more power than SSD. It could handle direct access/modification of individual bytes, unlike NAND storage, which has to transfer blocks of bytes at a time. Hence, in theory, the durability of individual cells could be greater, since they do not necessarily have to be involved in block writes.

However, data storage is generally handled in blocks rather than tiny fragments, and there really are practical advantages to doing it that way (like managing stored versions and keeping them consistent). In the end, it looks a bit like Optane was providing a solution to a problem that may not currently call for that approach, and possibly never will (I can easily see high-density NVMe embedded inside an SoC to be used as program code source, but that is memory that would be infrequently written).
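
To make that access-granularity difference concrete, here is a minimal sketch (the /mnt/pmem path is hypothetical, and plain mmap/msync stands in for real persistent-memory primitives such as libpmem or CLWB plus a fence) of a one-byte update against byte-addressable storage versus the block round-trip an SSD-backed file needs:

Code:
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define BLOCK_SIZE 4096  /* typical filesystem/SSD I/O granularity */

int main(void)
{
    /* Byte-granular path: map a file on a (hypothetical) DAX-capable
       persistent-memory mount and flip a single byte in place. */
    int pmem_fd = open("/mnt/pmem/counter.bin", O_RDWR);
    if (pmem_fd >= 0) {
        uint8_t *pmem = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, pmem_fd, 0);
        if (pmem != MAP_FAILED) {
            pmem[42] += 1;                    /* touch exactly one byte  */
            msync(pmem, BLOCK_SIZE, MS_SYNC); /* stand-in for CLWB+fence */
            munmap(pmem, BLOCK_SIZE);
        }
        close(pmem_fd);
    }

    /* Block path: an SSD-backed file round-trips a whole block even
       though only one byte actually changes. */
    int ssd_fd = open("/tmp/counter.bin", O_RDWR);
    if (ssd_fd >= 0) {
        uint8_t block[BLOCK_SIZE];
        if (pread(ssd_fd, block, sizeof block, 0) == (ssize_t)sizeof block) {
            block[42] += 1;                         /* same one-byte change */
            pwrite(ssd_fd, block, sizeof block, 0); /* rewrite whole block  */
            fsync(ssd_fd);
        }
        close(ssd_fd);
    }
    return 0;
}

The logical result is the same either way; the difference is how much data has to move and be rewritten for a one-byte change, which is where the cell-durability argument above comes from.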

Perhaps the biggest failure was that it never migrated to mobile platforms, where it might make a lot of sense, in part because the power budget of NAND + DRAM is still enough better than Optane's that the switch wasn't worthwhile. But ultimately Intel and Micron were unable to make it pay for itself.
 
  • Like
Reactions: altaic

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Larrabee followed a similar pattern, with expectations of a gaming card, but it ended up being shoved into the data center.
Larrabee was actually a supercomputer chip first and foremost, something intended to compete with the then-new phenomenon of GPGPU in supercomputing. The ability to repurpose the chip as a GPU that would rasterize graphics was kind of a side project / goal.

For the full story from an insider, go here and click the "Why didn't Larrabee fail?" link in the sidebar on the right.


Perhaps, we'll never really know. Cliff says that Intel "let the HP PA-RISC guys make a science project" and the end result was Itanium. Given his time at Intel's competitors, I understand why he's not a fan of team blue, but we'll ultimately never know.
I've never worked for Intel or its competitors and I'm more than willing to just say it: the Itanium ISA was extremely stinky dog poop. It was so bad it just doesn't seem likely it could have taken over from x86 on technical merits, even though x86 itself is not an ideal ISA.

It's not clear when Intel would have tried to do that either, as they positioned Itanium as their answer to RISC workstations and servers, not something for mass market personal computers.

"Science project" is apt. It's a mashup of far-out ideas. Each is interesting in the academic sense, as in it might have made sense for a researcher to write a paper about the idea and figure out whether it had any merit. Instead of doing something like that, Intel just jammed a bunch of these ideas into their ISA of the future and pushed it into production without doing the homework to rigorously check whether any of it was likely to work out.
 