
When do you expect an iMac redesign?

  • 4th quarter 2019

    Votes: 34 4.1%
  • 1st quarter 2020

    Votes: 23 2.8%
  • 2nd quarter 2020

    Votes: 119 14.5%
  • 3rd quarter 2020

    Votes: 131 15.9%
  • 4th quarter 2020

    Votes: 172 20.9%
  • 2021 or later

    Votes: 343 41.7%

  • Total voters
    822
  • Poll closed.
It doesn't matter what is announced on Monday as 90% of you will bitch and moan over whatever Apple does. And then you'll buy a new iMac.

Apple will be 'judged' on whatever they offer to their Apple customers.

They set high expectations. And customers reflect upon this as they consider their wallet.

If the iMac meets those high expectations, it will be in demand.

Not too difficult.

The iMac is due a significant refresh. Its customers know its weak spots, and it's up to a $1.5 trillion company to address them.

If that happens, there may be outpourings of joy.

But people will have their quibbles and are right to voice them if the 'new' iMac falls short. In any regard.

We don't always get what we want. And the forums are the place to reflect that.

Azrael.
I guess for the 27" iMac, or whatever it ends up being, it will be a $400 option over the base model.

A $400 BTO option for a 5700 XT sounds about right to me. Give or take what the Vega 48 costs.

Azrael.
 
But paying through the nose for them. E.g. the 5600M is £800. That's 8 times the baseline. Is it 8 times faster? The 5500M seems better value to me at £100 (...save the £700 for an eGPU and RDNA2). Or £200 if (!) you need the 8 gigs of VRAM.
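To put that value question into numbers, here's a toy sketch in Python; the speed-up figure is a placeholder assumption, not a benchmark result:

[CODE]
# Toy value check: price multiple vs. an assumed performance multiple.
baseline_price = 100      # £ for the 5500M upgrade (from the post above)
upgrade_price = 800       # £ for the 5600M upgrade
assumed_speedup = 1.5     # placeholder; substitute a real benchmark ratio

price_multiple = upgrade_price / baseline_price      # 8.0x the price
value_ratio = assumed_speedup / price_multiple       # relative performance-per-pound
print(f"{price_multiple:.0f}x the price for an assumed {assumed_speedup:.1f}x the speed "
      f"-> {value_ratio:.2f}x the performance-per-pound of the baseline")
[/CODE]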

If you are looking at high-end "things", the price will always rise disproportionately.
Look at camera lenses, for example. You get a lens that is a ton better if you go for the 500€ lens instead of the 100€ one, but there's one that lets in a bit more light and might be a bit sharper... and it's 2,000+€, because it takes a lot more effort to get that performance bump, and the best return on investment for the glass company is in that specialised high price range.

I think it's pretty amazing the MacBook Pro 16 got that GPU option. It makes me feel really hopeful when it comes to the GPU upsell for the iMac. It's probably going to be a bit cheaper for the iMac, because it presumably takes less effort to put more power in a bigger device. Right now the upsell is 585€ (8 GB HBM2) on the iMac and 715€ or 910€ for the iMac Pro cards (both 16 GB HBM2!) vs. the MacBook's 875€ with 8 GB and a more current GPU.

I feel like we could probably stay in the 600-700€ range for the iMac upsell. Maybe with a 900-1,000€ option IF the iMac Pro gets folded into the 27-inch iMac line.
 
If you are looking at high-end "things", the price will always rise disproportionately.
Look at camera lenses, for example. You get a lens that is a ton better if you go for the 500€ lens instead of the 100€ one, but there's one that lets in a bit more light and might be a bit sharper... and it's 2,000+€, because it takes a lot more effort to get that performance bump, and the best return on investment for the glass company is in that specialised high price range.

I think it's pretty amazing the MacBook Pro 16 got that GPU option. It makes me feel really hopeful when it comes to the GPU upsell for the iMac. It's probably going to be a bit cheaper for the iMac, because it presumably takes less effort to put more power in a bigger device. Right now the upsell is 585€ (8 GB HBM2) on the iMac and 715€ or 910€ for the iMac Pro cards (both 16 GB HBM2!) vs. the MacBook's 875€ with 8 GB and a more current GPU.

I feel like we could probably stay in the 600-700€ range for the iMac upsell. Maybe with a 900-1,000€ option IF the iMac Pro gets folded into the 27-inch iMac line.

Ah? But is it high end 'things?'

Here's an actual 'high end' card from the competition...


I think the 5600M is 'late' to the party. I don't think there is anything 'amazing' about it. As the '5700' is looking like it will be 'late' to the iMac party. I wouldn't class that as 'amazing' either.

Yes. It's a decent 'mid range' card. ...that happens to be priced above 'high end' desktop cards.


If you said it was 'amazing' that they got a 7 TFLOP card in at 50 watts? Maybe that is 'note(book)worthy.'

HBM is excellent technology. But very pricey. As is this solution from Apple. GPU improvements are welcome. But rather than offering 'soft' GPU options as standard in their pricey computers, it would be nice to see better-value, better-performing cards 'as standard.'

I'm still optimistic the iMac will have a decent gpu as standard. Though I'm not sure what I base my optimism on...

The 5600M is what happens when you have no competition among your GPU vendors.

Azrael.
I’m British, and we complain a lot. You’ll get used to it.

A gold standard quip.

My praise, Sir gusping! :)

Azrael.
Nobody's bitching......yet.

...and it's raining today.

That's British weather for you...

Azrael.
Out of curiosity, whatever happened to AMD's porting of a few libs? I vaguely remember there being some murmur about them trying to port a few ML libs to make them compatible with AMD GPUs, but I haven't followed closely...

Doesn't AMD have their own render tech' which is open source..?

I suspect they will bolster their equivalent to 'CUDA' with the emergence of RDNA2 and Arcturus? Now that they've actually entered, or are about to enter, the 'HIGH END' GPU stack after their years of mid-range so-so-ness. They have a new GPU architecture with new initiatives. So I'd expect momentum to gather around software performance libraries/APIs to move forward with that and leave GCN behind.

Though they've been having problems with the drivers for their GPUs?

The Mac desktop hasn't even got last year's mid range yet... (please, don't say the Mac Pro...)

Still waiting on Radeon to deliver 'high end' products for Mac. That come from 'this' year.

Azrael.
 
Vega 56 is rated at about 10 TFLOPS, so this would have around 6-7 TFLOPS (extrapolated from its Geekbench compute score being roughly 2/3 of Vega 56's). If it only draws 50W, they could put two of these in an iMac to get 12-14 TFLOPS. Still, that solution would be too expensive, so it will not happen. 40 CUs is more than the vanilla 5700M. Sorry people, but these GPUs would fit nicely in an iMac enclosure.

As I said above, the "mobile" label has little meaning. Compute/power ratio and total compute are the interesting parameters, not what marketing bins them as.
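As a rough sanity check on that extrapolation, a minimal sketch; the ~10 TFLOPS rating and the 2/3 Geekbench ratio are taken from the post above and are approximations:

[CODE]
# Back-of-the-envelope TFLOPS extrapolation from a Geekbench compute ratio.
vega56_tflops = 10.0      # rated peak FP32 for the iMac Pro's Vega 56 (approx.)
geekbench_ratio = 2 / 3   # assumed 5600M / Vega 56 compute-score ratio

est_5600m = vega56_tflops * geekbench_ratio
print(f"Estimated 5600M: ~{est_5600m:.1f} TFLOPS")         # ~6.7
print(f"Two such 50W parts: ~{2 * est_5600m:.1f} TFLOPS")  # ~13.3
[/CODE]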

Power budget is way above 50W in an iMac. They have more than 150W on the GPU. They will not constrain it at 1/3 of the power it can have.

What is more likely is more HBM2 memory on-package in the GPU options.
Out of curiosity, whatever happened to AMD's porting of a few libs? I vaguely remember there being some murmur about them trying to port a few ML libs to make them compatible with AMD GPUs, but I haven't followed closely...

Nvidia invests massively in TensorFlow, PyTorch, MXNet. I think AMD just can't keep up with it. AMD is committed to using open-source software, which is fine, but they never, ever invest the budget NVIDIA has invested (we're talking billions of USD here) to have the workforce to help the developers of these ML libs. And the open-source side isn't there. AMD has its ROCm library, a spin-off of OpenCL 1.2. And OpenCL is such a pain in the a*s to code that it never really lifted off. They even abandoned everything above v1.2. Virtually no one on the planet supported v2.2+. So OpenCL 3.0 is a reset to the 1.2 standard with a newer C++ compiler (if I remember correctly).

So ROCm is not mature, and I think there is only a branch of TensorFlow supporting some ops on ROCm. It's likely the only surviving OpenCL framework.

AMD is really poor financially, so it cannot invest as much in its software as NVIDIA does.

OpenCL is suffering major problems, popularity loss, disinterest from developers.

CUDA is winning datacenter after datacenter after supercomputer after.... you got it.

Fortunately, Microsoft and Amazon haven't bought a single Intel server since the arrival of AMD's EPYC chips, so that might help AMD's wallet somewhere.


AMD needs great software. Going open is not always the solution. OpenCL was an attempt to compute on anything that can compute, and I personally think it failed. Having coded in both OpenCL and CUDA, CUDA is light years ahead of OpenCL 1.2. Far less complicated, far more user friendly. And the tools are there. Every major version of CUDA brings more debugging tools, a better profiler, etc...

When the creator of a library (Apple) deprecates its own library, the one where it all started (OpenCL), it's generally not a good sign.

So we have:

CUDA on Windows (lol), Linux.
ROCm on Linux (you must be realllllyy motivated, it's a young lib).
Metal on macOS, which is truly impressive in how easy it is to get started. You feel a bit of OpenCL legacy here and there, but simply being in the macOS environment with Xcode gives the developer a lot of reassurance. OpenGL is deprecated (and I've been watching a few apps that use it for rendering, and NO ONE is rushing to reimplement their rendering engine in Metal (VTK, for example), so I'm freaking out at the moment, because we ALL KNOW that when Apple switches to its own A-series chips, OpenGL is nothing less than t e r m i n a t e d). OpenCL will also be terminated with the switch to Apple's A-chips, because the chip won't be OpenCL compliant (you see why everything has been getting deprecated for a while now: we know the move is coming really soon).

And in fact, between you and me, the only platform on which AMD rocks is Apple's. Why? Because Apple *is* involved in every portion of it: hardware design and integration, the GPU's firmware, the driver, the OS, and the APIs it uses to render. No other OS vendor can do this. And that's why you see people everywhere complaining about crappy AMD drivers on Windows and Linux (they even abandoned their own proprietary driver on Linux to contribute directly to the open-source, free, community-made driver. Insane.)
 
Doesn't AMD have their own render tech' which is open source..?

I suspect they will bolster their equivalent to 'CUDA' with the emergence of RDNA2 and Arcturus? Now that they've actually entered, or are about to enter, the 'HIGH END' GPU stack after their years of mid-range so-so-ness. They have a new GPU architecture with new initiatives. So I'd expect momentum to gather around software performance libraries/APIs to move forward with that and leave GCN behind.

Though they've been having problems with the drivers for their GPUs?

The Mac desktop hasn't even got last year's mid range yet... (please, don't say the Mac Pro...)

Still waiting on Radeon to deliver 'high end' products for Mac. That come from 'this' year.

Azrael.

Slightly different from the usual rendering stuff, but a lot of ML and data-science-related libraries can only take advantage of the GPU if they're running on CUDA, which means it's pretty much been limited to Nvidia GPUs. That's why, apart from the standard 20xx and 10xx lines, you also have Titans and Quadros for more intense tasks.

AMD tried to bridge the gap a bit and make themselves more competitive against Nvidia in data science settings through ROCm, which was still in the process of being ported back when I briefly studied ML. But checking now, it seems they claim to have support for TensorFlow and PyTorch... so who knows...

But for the most part CUDA remains the easier entry point to ML if you're just starting off, as it's one less thing to worry about. Dunno if that would compel AMD to continue pushing professional GPUs, since it's still a pretty big mountain to climb.
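For what it's worth, a quick way to check whether a TensorFlow or PyTorch build actually sees your GPU is just to ask the framework. A minimal sketch, assuming the CUDA or ROCm variants of the packages are installed (the ROCm PyTorch wheels reuse the torch.cuda namespace):

[CODE]
# Minimal GPU-visibility check; the same calls work for CUDA builds and
# for the ROCm builds (tensorflow-rocm, ROCm PyTorch wheels).
import tensorflow as tf
import torch

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))

print("PyTorch GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # On ROCm builds this reports the AMD device name.
    print("Device:", torch.cuda.get_device_name(0))
[/CODE]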
 
Power budget is way above 50W in an iMac. They have more than 150W on the GPU. They will not constrain it at 1/3 of the power it can have.

What is more likely is more HBM2 memory on-package in the GPU options.


Nvidia invests massively in TensorFlow, PyTorch, MXNet. I think AMD just can't keep up with it. AMD is committed to using open-source software, which is fine, but they never, ever invest the budget NVIDIA has invested (we're talking billions of USD here) to have the workforce to help the developers of these ML libs. And the open-source side isn't there. AMD has its ROCm library, a spin-off of OpenCL 1.2. And OpenCL is such a pain in the a*s to code that it never really lifted off. They even abandoned everything above v1.2. Virtually no one on the planet supported v2.2+. So OpenCL 3.0 is a reset to the 1.2 standard with a newer C++ compiler (if I remember correctly).

So ROCm is not mature, and I think there is only a branch of TensorFlow supporting some ops on ROCm. It's likely the only surviving OpenCL framework.

AMD is really poor financially, so it cannot invest as much in its software as NVIDIA does.

OpenCL is suffering major problems, popularity loss, disinterest from developers.

CUDA is winning datacenter after datacenter after supercomputer after.... you got it.

Fortunately, Microsoft and Amazon haven't bought a single Intel server since the arrival of AMD's EPYC chips, so that might help AMD's wallet somewhere.


AMD needs great software. Going open is not always the solution. OpenCL was an attempt to compute on anything that can compute, and I personally think it failed. Having coded in both OpenCL and CUDA, CUDA is light years ahead of OpenCL 1.2. Far less complicated, far more user friendly. And the tools are there. Every major version of CUDA brings more debugging tools, a better profiler, etc...

When the creator of a library (Apple) deprecates its own library, the one where it all started (OpenCL), it's generally not a good sign.

So we have:

CUDA on Windows (lol), Linux.
ROCm on Linux (you must be realllllyy motivated, it's a young lib).
Metal on macOS, which is truly impressive in how easy it is to get started. You feel a bit of OpenCL legacy here and there, but simply being in the macOS environment with Xcode gives the developer a lot of reassurance.

And in fact, between you and me, the only platform on which AMD rocks is Apple's. Why? Because Apple *is* involved in every portion of it: hardware design and integration, the GPU's firmware, the driver, the OS, and the APIs it uses to render. No other OS vendor can do this. And that's why you see people everywhere complaining about crappy AMD drivers on Windows and Linux (they even abandoned their own proprietary driver on Linux to contribute directly to the open-source, free, community-made driver. Insane.)
It’s nice to have someone who knows what they are talking about. Imagine if this thread was full of muppets like myself who just demand 10900Ks and 5700 XTs...
 
As a value judgement, the £800 uplift to get 5600M graphics could actually buy a real 5600 XT plus an eGPU enclosure of your choice, with a chunk of change left over. Assuming you don't need all that power on the move.

Or wait a few months and put a much more powerful RDNA2 6xxx-series card in it next year.
 
My hardware hopes for WWDC:
iMac redesign, no chin, 27", 5K display.
iMac Pro, redesigned like the regular iMac, 32", 6K display.
Standalone 27" 5K display, $2k with stand; additional $500 nano-texture option.
You know, it actually would be nice if they at least announced the iMac Pro specs along with the new iMac at the event, so we could make a better decision about which model to purchase. Even if they say the iMac Pro is coming later this year.
 
CUDA on Windows (lol), Linux.
ROCm on Linux (you must be realllllyy motivated, it's a young lib).
Metal on macOS, which is truly impressive in how easy it is to get started. You feel a bit of OpenCL legacy here and there, but simply being in the macOS environment with Xcode gives the developer a lot of reassurance.

That kinda makes me sad.. I remember being really excited for ROCm and then realising there was hardly any support and it was a PITA to get working -_-
 
9to5Mac put out a roundup video on the iMac at WWDC yesterday.
Don't expect anything you haven't already read here (obviously), but if you feel like getting some of the info in order, it's a nice 15 minutes to spend.


Good summative video, Dr. Ty for posting.

Geoff really puts the boot into the 'Fusion' drive, eh?

'A stop gap solution...a bottleneck, garbage that belongs in the trash.'

Yeah, Apple's marketing doesn't mention that when they charge you £1,750 for an iMac, eh?

Azrael.
 
Ah? But is it high end 'things?'

Here's an actual 'high end' card from the competition...


I think the 5600M is 'late' to the party. I don't think there is anything 'amazing' about it. As the '5700' is looking like it will be 'late' to the iMac party. I wouldn't class that as 'amazing' either.

Yes. It's a decent 'mid range' card. ...that happens to be priced above 'high end' desktop cards.

It's Apple high end. You know these are the "over"-prices they will give us. I mean, right now you can buy a 5,400 rpm HDD in their 2,600€ computer and pay about double the market price to replace and upgrade that thing with 1 TB of SSD.
There are reasons why they make so much money; pricing and marketing is one of them.
It's funny, I still prefer macOS over Windows (which I keep around 99% for gaming reasons), but both Catalina and iOS 13 are pretty bad... I probably would not want to go back to Leopard, but in my memory that was just such a beast of an OS compared to Catalina's Finder-freezing party and the horrible resource management in iOS 13. ...Like, right now the OS is more of a sales point than an actually streamlined, flawless system behind its accessibility.
 
Power budget is way above 50W in an iMac. They have more than 150W on the GPU. They will not constrain it at 1/3 of the power it can have.

What is more likely is more HBM2 memory on-package in the GPU options.


Nvidia invests massively in TensorFlow, PyTorch, MXNet. I think AMD just can't keep up with it. AMD is committed to using open-source software, which is fine, but they never, ever invest the budget NVIDIA has invested (we're talking billions of USD here) to have the workforce to help the developers of these ML libs. And the open-source side isn't there. AMD has its ROCm library, a spin-off of OpenCL 1.2. And OpenCL is such a pain in the a*s to code that it never really lifted off. They even abandoned everything above v1.2. Virtually no one on the planet supported v2.2+. So OpenCL 3.0 is a reset to the 1.2 standard with a newer C++ compiler (if I remember correctly).

So ROCm is not mature, and I think there is only a branch of TensorFlow supporting some ops on ROCm. It's likely the only surviving OpenCL framework.

AMD is really poor financially, so it cannot invest as much in its software as NVIDIA does.

OpenCL is suffering major problems, popularity loss, disinterest from developers.

CUDA is winning datacenter after datacenter after supercomputer after.... you got it.

Fortunately, Microsoft and Amazon haven't bought a single Intel server since the arrival of AMD's EPYC chips, so that might help AMD's wallet somewhere.


AMD needs great software. Going open is not always the solution. OpenCL was an attempt to compute on anything that can compute, and I personally think it failed. Having coded in both OpenCL and CUDA, CUDA is light years ahead of OpenCL 1.2. Far less complicated, far more user friendly. And the tools are there. Every major version of CUDA brings more debugging tools, a better profiler, etc...

When the creator of a library (Apple) deprecates its own library, the one where it all started (OpenCL), it's generally not a good sign.

So we have:

CUDA on Windows (lol), Linux.
ROCm on Linux (you must be realllllyy motivated, it's a young lib).
Metal on macOS, which is truly impressive in how easy it is to get started. You feel a bit of OpenCL legacy here and there, but simply being in the macOS environment with Xcode gives the developer a lot of reassurance. OpenGL is deprecated (and I've been watching a few apps that use it for rendering, and NO ONE is rushing to reimplement their rendering engine in Metal (VTK, for example), so I'm freaking out at the moment, because we ALL KNOW that when Apple switches to its own A-series chips, OpenGL is nothing less than t e r m i n a t e d). OpenCL will also be terminated with the switch to Apple's A-chips, because the chip won't be OpenCL compliant (you see why everything has been getting deprecated for a while now: we know the move is coming really soon).

And in fact, between you and me, the only platform on which AMD rocks is Apple's. Why? Because Apple *is* involved in every portion of it: hardware design and integration, the GPU's firmware, the driver, the OS, and the APIs it uses to render. No other OS vendor can do this. And that's why you see people everywhere complaining about crappy AMD drivers on Windows and Linux (they even abandoned their own proprietary driver on Linux to contribute directly to the open-source, free, community-made driver. Insane.)

An educating post.

Thank you. :)

Azrael.
 
That kinda makes me sad.. I remember being really excited for ROCm and then realising there was hardly any support and it was a PITA to get working -_-

AMD would need literally years of work and thousands of software/electrical engineers with parallel computing skills to build something good enough to really compete with Nvidia. Because that's the workforce Nvidia has now just for CUDA. And on top of that, their business process is mature, a well-geared and well-oiled machine.
 
As a value judgement, the £800 uplift to get 5600M graphics could actually buy a real 5600 XT plus an eGPU enclosure of your choice, with a chunk of change left over. Assuming you don't need all that power on the move.

Or wait a few months and put a much more powerful RDNA2 6xxx-series card in it next year.

Yeah.

The 5500M is perfectly respectable (in context) for £100.

Take the £700 saved, get an eGPU caddy (Razer perhaps?) and plough the rest into an RDNA2 variant.

MacBook in one hand... eGPU in the other. Two carrier bags on your way to 'serious' Pro work or a LAN gamer party.

If you already have a MacBook? Or if you need to buy one? Get the 5500M. Far better bang for buck. For the price of the MacBooks, I don't know why they don't include the 4-gig 5500M as standard. More worthy of the MacBook's 'marketing.'

I'd say hold on for RDNA2's 50% efficiency gain. And it's got ray tracing and other forward-facing tech'.

Azrael.
Just like Apple’s software teams...

Not sure they draped themselves in glory with Catalina.

Azrael.
 
Yeah.

The 5500M is perfectly respectable (in context) for £100.

Take the £700 saved, get an eGPU caddy (Razer perhaps?) and plough the rest into an RDNA2 variant.

MacBook in one hand... eGPU in the other. Two carrier bags on your way to 'serious' Pro work or a LAN gamer party.

If you already have a MacBook? Or if you need to buy one? Get the 5500M. Far better bang for buck. For the price of the MacBooks, I don't know why they don't include the 4-gig 5500M as standard. More worthy of the MacBook's 'marketing.'

I'd say hold on for RDNA2's 50% efficiency gain. And it's got ray tracing and other forward-facing tech'.

Azrael.


Not sure they draped themselves in glory with Catalina.

Azrael.
Razer Core X (£250) plus an RDNA2 GPU. Sorted.

I still believe most laptop owners don’t actually need a laptop, especially those doing heavy work. So an eGPU isn’t too much of an inconvenience in theory. I’ve heard they aren’t exactly super straightforward to use, or without bugs, in Catalina.
 
To finish (and for your personal info), I think AMD's efforts are concentrated on "translating" CUDA code for their GPUs (when your self-esteem is so low that you acknowledge having lost.......).
Or it's a tool to create an interface between both technologies within the libraries. Something like that...

Razer Core X (£250) plus an RDNA2 GPU. Sorted.

I still believe most laptop owners don’t actually need a laptop, especially those doing heavy work. So an eGPU isn’t too much of an inconvenience in theory. I’ve heard they aren’t exactly super straightforward to use, or without bugs, in Catalina.
I want a desktop for my next computer.

My iPad Pro is a more capable and portable machine than an Apple laptop.

And more power budget in a desktop. Less chance to have a slowly exploding battery after 6.5 years of usage.
 
AMD would need literally years of work and thousands of software/electrical engineers with parallel computing skills to build something good enough to really compete with Nvidia. Because that's the workforce Nvidia has now just for CUDA. And on top of that, their business process is mature, a well-geared and well-oiled machine.

Hmm. Seems like AMD Radeon has a mountain to climb. :eek:

But they did it with Ryzen ...and they're a few years into it now.

And RDNA2 and Arcturus(?) are probably the first significant steps in the fightback. With RDNA3 set to up the ante versus Nvidia. Sounds like Radeon and AMD are getting organised re: GPUs.

They can earn income from Ryzen and EPYC to fund what is going to be an important market going forwards: GPUs. But they'll need to get their software sorted.

They've always seemed to have that traditional 'OpenGL' weakness vs Nvidia.

I'm looking forward to seeing the benches for RDNA2 and Ampere. Plugging an RDNA2 card into an eGPU for the iMac is a comparison I'm looking forward to Barefeats.com hooking up.

They tend to have good comparative Mac benches from Apple's line up.

Azrael.
 
Power budget is way above 50W in an iMac. They have more than 150W on the GPU. They will not constrain it at 1/3 of the power it can have.

What is more likely is more HBM2 memory on-package in the GPU options.
Probably not, but it gives an indication of how much performance you can get into an iMac if you really want to. Put three in and you have 20 TFLOPS, and ray tracing at least scales excellently with the number of GPUs. Using vanilla desktop GPUs is not necessarily the way forward. They are sloppily designed and not power efficient. When they reach 200W, GPU vendors seem to stop caring about power consumption. In my world, it is better to use 50W for a job rather than 200W.

I like efficiency. I find using desktop parts and designing the case accordingly not particularly inventive. The 5600M is impressive because it is a clever solution.

The 27-inch will have a high TDP; the 23-inch may not.
 
Hmm. Seems like AMD Radeon has a mountain to climb. :eek:

But they did it with Ryzen ...and they're a few years into it now.

And RDNA2 and Arcturus(?) are probably the first significant steps in the fightback. With RDNA3 set to up the ante versus Nvidia. Sounds like Radeon and AMD are getting organised re: GPUs.

They can earn income from Ryzen and EPYC to fund what is going to be an important market going forwards: GPUs. But they'll need to get their software sorted.

They've always seemed to have that traditional 'OpenGL' weakness vs Nvidia.

I'm looking forward to seeing the benches for RDNA2 and Ampere. Plugging an RDNA2 card into an eGPU for the iMac is a comparison I'm looking forward to Barefeats.com hooking up.

They tend to have good comparative Mac benches from Apple's line up.

Azrael.

Ryzen is hardware.

Software is another game ;)

OpenGL is dead anyway. It lives on in Linux, but Vulkan is more and more popular. But it's not supported on macOS, since... the Apple A-chips won't be compliant with OpenCL, so Apple never wanted compatibility with Vulkan, which integrated OpenCL 2.0. You see it...? Ahahha

The era of cross-platform GPU rendering is over. Three times the maintenance cost now. And Windows has what? DirectX? But no CAD package uses DirectX? So..... I seriously don't know what's going to happen.
Probably not, but it gives an indication of how much performance you can get into an iMac if you really want to. Put three in and you have 20 TFLOPS, and ray tracing at least scales excellently with the number of GPUs. Using vanilla desktop GPUs is not necessarily the way forward. They are sloppily designed and not power efficient. When they reach 200W, GPU vendors seem to stop caring about power consumption. In my world, it is better to use 50W for a job rather than 200W.

I like efficiency. I find using desktop parts and designing the case accordingly not particularly inventive. The 5600M is impressive because it is a clever solution.

The 27-inch will have a high TDP; the 23-inch may not.

Yeah, 150W for the 27-inch; the 23-inch will likely stay with Navi 12 and 20-24 CUs.

Doing what I'm doing with a 50W GPU is impossible. It can certainly be impressive to pack that amount of power into 50W, but it's nowhere near what a power user really needs in a desktop form factor.

If, for the same number of (sustained) FLOPS, you use half the power, fine. But if you don't, you are not better off just because you only use 50W.

A 5700 XT that isn't underclocked, with 40 CUs, can make 8.6 TFLOPS sustained, 10 TFLOPS turbo. That is far from what we have in the MacBook Pro.

Desktop hardware with a desktop TDP is far better than severely downclocked mobile hardware.
We clearly won't get a 225W GPU in a 27-inch iMac, but if Apple can pull off the same sort of sorcery here as in the MBP, we might have something nice.
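For reference, those TFLOPS numbers follow from the usual peak-FP32 formula (stream processors × 2 FLOPs per clock × clock speed). A quick sketch with approximate 5700 XT clocks (illustrative figures, not measurements):

[CODE]
# Peak FP32 throughput for an RDNA GPU:
#   TFLOPS = CUs * 64 stream processors per CU * 2 FLOPs per clock * clock (GHz) / 1000
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

# 5700 XT, 40 CUs: game clock vs. boost clock (approximate)
print(f"sustained ~{peak_tflops(40, 1.755):.1f} TFLOPS")  # ~9.0
print(f"turbo     ~{peak_tflops(40, 1.905):.1f} TFLOPS")  # ~9.8
[/CODE]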
 
Razer Core X (£250) plus an RDNA2 GPU. Sorted.

I still believe most laptop owners don’t actually need a laptop, especially those doing heavy work. So an eGPU isn’t too much of an inconvenience in theory. I’ve heard they aren’t exactly super straightforward to use, or without bugs, in Catalina.

I think so.

As for eGPUs: easy enough to use. But they're not 100% bulletproof in all test cases. Not enough to trouble you for gaming, I shouldn't imagine. And support is getting better all the time.

I find laptops overrated. They're just 'portable' desktops, and I define them as such. Never did understand the great fuss about them. If the keyboard breaks, you're in for an expensive repair. The monitor is attached. (Like in the iMac.) So that's another potential expense. The tech' is thermally limited.

But I guess if you want to take your laptop to a desert or the Arctic and do some work... lugging a tower case is more prohibitive. It's decent computing on the go, I guess. I tried the whole laptop-on-my-lap thing. Gets too hot. Can't snuggle up with it on the sofa for 'lazy' computing. The iPad, for me, is light years ahead in that regard. I prefer the iPad's idiom to that of a 'lap'top.

I did creative work on the old iBook G3 back in the day. It was OK. Poky screen. Fans blew when you tried to push it. Keyboard was so-so. It all felt a bit 'compromised.'

I'll assume Apple are reasonably serious about eGPUs as they introduced the tech' for their OS. *fingers crossed.

Azrael.
Ryzen is hardware.

Software is another game ;)

OpenGL is dead anyway.
It lives on in Linux, but Vulkan is more and more popular. But it's not supported on macOS, since... the Apple A-chips won't be compliant with OpenCL, so Apple never wanted compatibility with Vulkan, which integrated OpenCL 2.0. You see it...? Ahahha

The era of cross-platform GPU rendering is over. Three times the maintenance cost now. And Windows has what? DirectX? But no CAD package uses DirectX? So..... I seriously don't know what's going to happen.


Yeah, 150W for the 27-inch; the 23-inch will likely stay with Navi 12 and 20-24 CUs.

Doing what I'm doing with a 50W GPU is impossible. It can certainly be impressive to pack that amount of power into 50W, but it's nowhere near what a power user really needs in a desktop form factor.

Sound post.

It's all about the software. That's why I use Mac OS.

'People who are serious about software make their own hardware.' Is that how the saying goes?

OpenGL had its chance. Had its day. It's dead on the Mac. A cross-platform API that hurt performance in the main. Too much latency, software-wise. If developers bring 2nd-rate ports with 2nd-rate performance to the Mac... it was only a matter of time before Apple put a bullet in it and brought us Metal. Those OpenGL devs can spill their Mac tears elsewhere. Hungrier Metal devs will replace them.

iOS. Metal. Swift. Xcode. Apple's software train is going in one direction. And it's selling hundreds of millions of devices. GL devs will have to hop on or get left behind.

I got tired of the Mac's 2nd-rate port experience, with framerates half those of the Windows equivalent whilst being charged 100% at the Mac cashier.

I think for your line of work, a desktop is the way to go. :)

Azrael.
 
I think so.

As for eGPUs: easy enough to use. But they're not 100% bulletproof in all test cases. Not enough to trouble you for gaming, I shouldn't imagine. And support is getting better all the time.

I find laptops overrated. They're just 'portable' desktops, and I define them as such. Never did understand the great fuss about them. If the keyboard breaks, you're in for an expensive repair. The monitor is attached. (Like in the iMac.) So that's another potential expense. The tech' is thermally limited.

But I guess if you want to take your laptop to a desert or the Arctic and do some work... lugging a tower case is more prohibitive. It's decent computing on the go, I guess. I tried the whole laptop-on-my-lap thing. Gets too hot. Can't snuggle up with it on the sofa for 'lazy' computing. The iPad, for me, is light years ahead in that regard. I prefer the iPad's idiom to that of a 'lap'top.

I did creative work on the old iBook G3 back in the day. It was OK. Poky screen. Fans blew when you tried to push it. Keyboard was so-so. It all felt a bit 'compromised.'

I'll assume Apple are reasonably serious about eGPUs as they introduced the tech' for their OS. *fingers crossed.

Azrael.


Sound post.

It's all about the software. That's why I use Mac OS.

'People who are serious about software make their own hardware.' Is that how the saying goes?

OpenGL had its chance. Had its day. It's dead on the Mac. A cross-platform API that hurt performance in the main. Too much latency, software-wise. If developers bring 2nd-rate ports with 2nd-rate performance to the Mac... it was only a matter of time before Apple put a bullet in it and brought us Metal. Those OpenGL devs can spill their Mac tears elsewhere. Hungrier Metal devs will replace them.

iOS. Metal. Swift. Xcode. Apple's software train is going in one direction. And it's selling hundreds of millions of devices. GL devs will have to hop on or get left behind.

I got tired of the Mac's 2nd-rate port experience, with framerates half those of the Windows equivalent whilst being charged 100% at the Mac cashier.

I think for your line of work, a desktop is the way to go. :)

Azrael.
When you are a student, you don't have a choice: you need a powerful machine you can transport ;)
But once you have done your time (lol), desktops are more appropriate. Especially with the power an iPad has today and will have in one week with iPadOS 14.
 