Since when is discussing what we would like to see bitching? Maybe go to another thread or forum?
Nobody's bitching......yet.
I think he's referring to it being a mid/high-end dGPU, because you still can't compare a 45-50W dGPU with a desktop GPU... those two segments should be treated separately.
And what it's compared to is 'mid-range'.
It doesn't matter what is announced on Monday as 90% of you will bitch and moan over whatever Apple does. And then you'll buy a new iMac.
I guess for the 27" iMac or whatever, it will be a $400 option over the base model.
But you're paying through the nose for them. E.g. the 5600M is £800. That's 8 times the baseline. Is it 8 times faster? The 5500M seems better value to me: £100. (...save the £700 for an eGPU and RDNA2.) Or £200 if (!) you need the 8 gigs of VRAM.
As for me, I'll be cursing along on my 2013 i7 until it dies, and then I'll buy whatever is on the market at that point. Carry on...
If you are looking at high-end 'things', the price will always rise disproportionately.
Look at camera lenses, for example. You get a lens that is a ton better if you go for the 500€ lens instead of the 100€ one, but then there's one that lets in a bit more light and might be a bit sharper... and it's 2000+€, because it takes a lot more effort to get that performance bump, and the best return on investment for the glass company is in that specialised high price range.
I think it's pretty amazing the MacBook Pro 16 got that GPU option. It makes me feel really hopeful when it comes to the GPU upsell for the iMac. It will probably be a bit cheaper on the iMac, because it probably takes less effort to put more power in a bigger device. Right now the upsell is 585€ (8 GB HBM2) on the iMac and 715€ or 910€ on the iMac Pro cards (both 16 GB HBM2!), versus the MacBook's 875€ with 8 GB and a more current GPU.
I feel like we could probably stay in the 600-700€ range for the iMac upsell. Maybe with a 900-1000€ option IF the iMac Pro gets merged with the 27-inch iMac line.
I’m British, and we complain a lot. You’ll get used to it.
Out of curiosity, whatever happened to AMD's porting of a few libs? I vaguely remember there being some murmur about them trying to port a few ML libs to make them compatible with AMD GPUs, but I haven't followed closely...
Vega 56 is rated at about 10 TFLOPS, so this would have around 6-7 TFLOPS (extrapolated from it scoring roughly 2/3 of Vega 56 in Geekbench). If it only draws 50W, they could put two of these in an iMac to get 12-14 TFLOPS. Still, that solution would be too expensive, so it will not happen. 40 CUs is more than the vanilla 5700M. Sorry people, but these GPUs would fit nicely in an iMac enclosure.
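For what it's worth, here is that back-of-the-envelope arithmetic as a quick Swift sketch. The ~10.5 TFLOPS Vega 56 rating and the 2/3 Geekbench ratio are rough, assumed figures, not official specs:

```swift
import Foundation

// Rough extrapolation, not a benchmark: an assumed Vega 56 rating and an
// assumed ~2/3 Geekbench ratio for the 50W part.
let vega56TFLOPS = 10.5
let geekbenchRatio = 2.0 / 3.0                // this GPU scores roughly 2/3 of Vega 56
let oneGPU = vega56TFLOPS * geekbenchRatio    // ~7 TFLOPS
let twoGPUs = oneGPU * 2                      // two ~50W parts in one enclosure: ~14 TFLOPS
print(String(format: "one GPU: ~%.1f TFLOPS, two GPUs: ~%.1f TFLOPS", oneGPU, twoGPUs))
```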
As I said above, the "mobile" label has little meaning. Compute/power ratio and total compute are the interesting parameters, not what marketing bins them as.
Doesn't AMD have their own render tech' which is open source..?
I suspect they will bolster their equivalent to 'CUDA' with the emergence of RDNA2 and Arcturus. Now they've actually entered, or are about to enter... the 'HIGH END' GPU stack after their year of mid-range so-so-ness. They have their new GPU architecture with new initiatives. So I'd expect momentum to gather around software performance libraries/APIs to move forward with that and leave GCN behind.
Though they've been having problems with their drivers for their GPUs?
The Mac desktop hasn't even got last year's mid range yet... (please, don't say the Mac Pro...)
Still waiting on Radeon to deliver 'high end' products for Mac. Products that come from 'this' year.
Azrael.
It’s nice to have someone who knows what they are talking about. Imagine if this thread was full of muppets like myself who just demand 10900Ks and 5700 XTs...
You know, it actually would be nice if they at least announced the iMac Pro specs along with the new iMac at the event, so we could make a better decision about which model to purchase. Even if they just say the iMac Pro is coming later this year.
My hardware hopes for WWDC:
iMac redesign, no chin, 27”, 5K display.
iMac Pro, redesigned like the regular iMac, 32”, 6K display.
Standalone 27” 5K display: $2k with stand, with an additional $500 nano-texture option.
9to5Mac put out a roundup video on the iMac at WWDC yesterday.
Do not expect anything you have not already read here (obviously), but if you feel like having some of the info in order, it's a nice 15 minutes to spend.
Ah? But is it high end 'things?'
Here's an actual 'high end' card from the competition...
NVIDIA GeForce RTX 2080 Mobile Specs
NVIDIA TU104, 1590 MHz, 2944 cores, 184 TMUs, 64 ROPs, 8192 MB GDDR6, 1750 MHz, 256-bit (www.techpowerup.com)
I think the 5600M is 'late' to the party. I don't think there is anything 'amazing' about it. Just as the '5700' looks like it will be 'late' to the iMac party. I wouldn't class that as 'amazing' either.
Yes. It's a decent 'mid range' card. ...that happens to be priced above 'high end' desktop cards.
The power budget is way above 50W in an iMac. They have more than 150W for the GPU. They will not constrain it to 1/3 of the power it could have.
What is more likely is more HBM2 memory on-package in the GPU options.
Nvidia invests massively in TensorFlow, PyTorch and MXNet. I think AMD just can't keep up with it. AMD is committed to using open-source software, which is fine, but they never invest anything like the budget NVIDIA has invested (we're talking on the order of a billion USD here) to have the workforce to help the developers of these ML libs. And the open-source ecosystem just isn't there. AMD has its ROCm library, a spin-off of OpenCL 1.2. And OpenCL is such a pain in the a*s to code with that it never really lifted off. They even abandoned everything above v1.2; virtually no one on the planet supported v2.2+. So OpenCL 3.0 is a reset to the 1.2 standard with a newer C++ compiler (if I remember correctly).
So ROCm is not mature, and I think there is only a branch of TensorFlow supporting some ops on ROCm. It's likely the only surviving OpenCL framework.
AMD is really poor financially, so it cannot invest as much in its software as NVIDIA does.
OpenCL is suffering major problems: loss of popularity, disinterest from developers.
CUDA is winning datacenter after datacenter after supercomputer after... you get it.
Fortunately, Microsoft and Amazon haven't bought a single Intel server since the arrival of the AMD EPYC chips. So that might help AMD's wallet somewhere.
AMD needs great software. Going open is not always the solution. OpenCL was an attempt to compute on anything that can compute, and I personally think it failed. Having coded in both OpenCL and CUDA, I can tell you CUDA is light years ahead of OpenCL 1.2. Far less complicated, far more user friendly. And the tools are there. Every major version of CUDA brings more debugging tools, a better profiler, etc.
When the creator of a library (Apple) deprecates the very library where it all started (OpenCL), that's generally not a good sign.
So we have:
CUDA on Windows (lol) and Linux.
ROCm on Linux (you must be really motivated; it's a young lib).
Metal on macOS, which is truly impressive in how easy it is to get started (see the minimal sketch after this post). You feel a bit of OpenCL legacy here and there, but simply being in the macOS environment with Xcode reassures the developer a lot. OpenGL is deprecated (and I've been watching some apps that use it for rendering, and NO ONE is rushing to reimplement their rendering engine in Metal (VTK, for example), so I'm freaking out at this point, because we ALL KNOW that when Apple switches to its own Apple A-chips, OpenGL is nothing other than t e r m i n a t e d). OpenCL will also be terminated with the switch to Apple A-chips, because the chip won't be OpenCL compliant (you see why everything has been getting deprecated for a while: we now know the move is coming really soon).
And in fact, between you and me, the only platform on which AMD rocks is Apple. Why? Because Apple *is* involved in every portion of it: hardware design and integration, the GPU firmware, the driver, the OS, and the APIs used to render. No other OS vendor can do this. And that's why you see people all around complaining about crappy AMD drivers on Windows and Linux (they even abandoned their own proprietary driver on Linux to contribute directly to the open-source, free, community-made driver. Insane.)
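To illustrate the "easy to get started" point about Metal: here is a minimal sketch of dispatching a single compute kernel from Swift. The kernel name ("fill_with_index") and the buffer size are made up for illustration; the kernel itself would live in a .metal file compiled into the app's default library.

```swift
import Metal

// Minimal compute dispatch with Metal. Assumes a kernel named "fill_with_index"
// (hypothetical) exists in the app's default Metal library.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let function = library.makeFunction(name: "fill_with_index"),
      let pipeline = try? device.makeComputePipelineState(function: function) else {
    fatalError("Metal setup failed")
}

// One shared buffer the kernel will write into.
let count = 4096
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// Encode the dispatch over `count` threads and run it.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: pipeline.threadExecutionWidth,
                                                       height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
// Results are now readable on the CPU via buffer.contents().
```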
That kinda makes me sad... I remember being really excited for ROCm and then realising there was hardly any support and it was a PITA to get working. -_-
Just like Apple’s software teams...
As a value judgement, the £800 uplift to get 5600M graphics could actually buy a real 5600 XT + an eGPU of your choice, with a chunk of change left over. Assuming you don't need all that power on the move.
Or wait a few months and put a much more powerful RDNA2 6xxx series in next year.
Yeah.
The 5500M is perfectly respectable (in context) for £100.
Take the £700 saved, get any eGPU caddy (Razer, perhaps?) and plough the rest into an RDNA2 variant.
MacBook in one hand... eGPU in the other. Two carrier bags on your way to serious 'Pro' work or a LAN gAMeR party.
If you already have a MacBook? Or if you need to buy one? Get the 5500M. Far better bang for buck. For the price of the MacBooks, I don't know why they don't include the 5500M 4 gig as standard. More worthy of the MacBook's 'marketing.'
I'd say wait for RDNA2's 50% efficiency gain. And it's got ray tracing and other forward-facing tech'.
Azrael.
Not sure they draped themselves in glory with Catalina.
Azrael.
I want a desktop for my next computer.
AMD would need literally years of work and thousands of software/electrical engineers with parallel-computing skills to build something good enough to really compete with Nvidia. Because that's the workforce Nvidia has now just for CUDA. And on top of that, their business process is mature, a well-geared and oiled machine.
Hmm. Seems like AMD Radeon has a mountain to climb.
But they did with Ryzen. ...and they're a few years into it now.
And RDNA2 and Arcturus(?) are probably the first significant steps in the fightback. With RDNA3 set to up the ante versus Nv'. Sounds like Radeon and AMD are getting organised re: GPUs.
They can use the income from Ryzen and EPYC to boost their revenue for what is going to be an important market going forwards: GPUs. But they'll need to get their software sorted.
They always seemed to have that 'OpenGL' weakness traditionally vs Nv'.
I'm looking forward to seeing the benches from RDNA2 and Ampere. Plugging an RDNA2 card into an eGPU for the iMac is an option I'm looking forward to seeing 'Barefeats.com' hook up.
They tend to have good comparative Mac benches from Apple's line up.
Azrael.
Probably not, but it gives an indication of how much performance you can get into an iMac if you really want. Put three in and you have 20 TFLOPS, and ray tracing at least scales excellently with the number of GPUs. Using vanilla desktop GPUs is not necessarily the way forward. They are sloppily designed and not power efficient. Once they reach 200W, GPU vendors seem to stop caring about power consumption. In my world, it is better to use 50W for a job rather than 200W.
I like efficiency. I don't find using desktop parts and designing the case around them particularly inventive. The 5600M is impressive because it is a clever solution.
The 27-inch will have a high TDP; the 23-inch may not.
Razer Core X (£250) plus an RDNA2 GPU. Sorted.
I still believe most laptop owners don’t actually need a laptop, especially those doing heavy work. So an eGPU isn’t too much of an inconvenience in theory. I’ve heard they aren’t exactly super straightforward to use, or without bugs, in Catalina.
Ryzen is hardware.
Software is another game
OpenGL is dead anyway. It survives on Linux, but Vulkan is more and more popular there. But Vulkan is not supported on macOS, since... the Apple A-chips won't be OpenCL compliant, so Apple never wanted compatibility with Vulkan, which absorbed OpenCL 2.0. You see it...? Ahahha.
The era of cross-platform GPU rendering is over. Three times the maintenance cost now. And Windows has what? DirectX, but what CAD uses DirectX? So... I seriously don't know what's going to happen.
Yeah, 150W for the 27-inch; the 23-inch will likely stay with Navi 14 and 20-24 CUs.
Doing what I'm doing with a 50W GPU is impossible. Sure, it's impressive to pack that much power into 50W, but it's nowhere near what a power user really needs in a desktop form factor.
When you are a student, you don't have a choice: you need a powerful machine you can transport.
I think so.
As for eGPUs. Easy enough to use. But they're not 100% bullet proof in all test cases. Not enough to trouble you for gaming, I shouldn't imagine. And support is getting better all the time.
I find laptops overrated. They're just 'portable' desktops, and I define them as such. Never did understand the great fuss about them. If the keyboard breaks, you're in for an expensive repair. The monitor is attached. (Like in the iMac.) So that's another potential expense. The tech' is thermally limited.
But I guess if you want to take your laptop to a desert or the Arctic and do some work... lugging a tower case is more prohibitive. It's decent computing on the go, I guess. I tried the whole laptop-on-my-lap thing. Gets too hot. Can't snuggle up with it on the sofa for 'lazy' computing. The iPad, for me, is light years ahead in that regard. I prefer the iPad's idiom to that of a 'lap'top.
I have done creative work on the old iBook G3 back in the day. It was ok. Pokey screen. Fans blew when you tried to push it. Keyboard was so-so. All felt a bit 'compromised.'
I'll assume Apple are reasonably serious about eGPUs as they introduced the tech' for their OS. *fingers crossed.
Azrael.
Sound post.
It's all about the software. That's why I use Mac OS.
'People who are serious about software make their own hardware.' Is that how the saying goes?
OpenGL had its chance. Had its day. It's dead on the Mac. A cross-platform API that hurt performance, in the main. Too much latency. Software-wise. If developers bring 2nd-rate ports with 2nd-rate performance to the Mac... it was only a matter of time before Apple put a bullet in it and brought us Metal. Those OpenGL devs can spill their Mac tears elsewhere. Hungrier Metal devs will replace them.
iOS. Metal. Swift. Xcode. Apple's software train is going in one direction. And it's selling hundreds of millions of devices. GL devs will have to hop on or get left behind.
I got tired of the Mac's 2nd rate port experience with framerates half that of the Windows equivalent whilst being charged 100% at the Mac cashier.
I think for your line of work, a desktop is the way to go.
Azrael.