
ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Heavy lifting on Linux can be done without the command line. Also, I am completely baffled: what is so difficult about the command line? Once you get used to it, you want to do everything on your system through it, because it's the fastest, easiest, and most reliable way to do it.

I thought the sarcasm would become apparent right there... but it should have become really apparent by the time you hit the lava flow.
 
  • Like
Reactions: AidenShaw

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053

I suggest watching this video about Intel Graphics, and adding the comments about the culture which is inside Intel, and thinking about the 10 nm fiasco, and the reason behind it.

Sometimes AdoredTV has good insight. And sometimes they take info and drive off into the swamp, like Charlie does at SemiAccurate. Charlie's "agenda" is typically that Nvidia or Intel is screwing up, and he warps it into a doom-and-gloom story. AdoredTV's heavy bias is toward the enthusiast GPU consumers and community. If a story has that aspect, he tends to drive off into the swamp whenever a company's emphasis isn't firmly directed in that direction.

The driving off into the swamp is fairly evident in this one, in the three minutes spent on the "failure of Intel graphics" that is really about Intel's CPU marketing campaign hooked to core counts. The i3 - i7 etc. benchmark charts really have nothing to do with Intel graphics at all. It was Intel marketing money poured into enthusiast marketing campaigns, but actual graphics? Nope.

Other parts are a bit warped also, but it is clearer when viewed through the "only enthusiast GPUs matter" lens. He comments that the Tiger Lake iGPU looks OK but the DG1 is a disaster. Well, they are probably both primarily the same Xe-LP implementation. There are three different implementations that Intel is pursuing.


Intel has covered this (and so did AdoredTV in a previous video, but he seems not to have woven it into this one):
[Intel DEVCON 2019 slide: the three Xe microarchitecture implementations]



Those overlap in performance, but have different 'efficiency' in the overlapping range:

[Intel DEVCON 2019 slide: efficiency of the Xe implementations across the overlapping performance range]




Xe-LP pushed up into the lower bounds of Xe-HP's targeted zone would not be an impressive GPU ... just like DG1 isn't. The DG1 is a lower-end ("desktop") entry with the baseline of the new common GPU architecture direction. The Xe implementations share a common architecture, but no single chip implementation covers the whole span. The Nvidia MX250 mobile GPU isn't a "failure" because it doesn't have all of the Tesla V100's features in it. Which leads to the next "doom" in the video, the "Ponte Vecchio" multiple-chip solutions. Those are meant to compete in the space the V100 is in (high-end HPC, AI/ML workloads, etc.). In other words, not in the enthusiast space either.


DG2, as described by the rumors so far, is again probably the first real competitive entry-to-mid-level desktop price zone solution. So Nvidia 2070-like performance at entry-level prices in 2021-22, on what will by then be a more bulk, affordable ("trailing edge") TSMC process, makes sense. By that time most of TSMC's current 7nm customers would have moved off to 7nm+, 6nm, 5nm, or something else. There have been several indications that Intel is going to push chip dies with lower profit margins off onto external fabs. That is one reason they were resigned to chucking the mobile cellular modem business. Right now a substantive percentage of Intel's fab capacity is dedicated to churning out low-margin modems for Apple rather than higher-margin products. Sub-$150-200 GPU cards in 2021-23? No way Intel wants to throw their high-end fab capacity at that.

Throw their 7nm at very high-end HPC cards for expensive data center systems and supercomputers? Probably so. [I wouldn't be looking for Xe-HPC to show up on TSMC 7nm at all. Xe-HP, sitting in the "in-between zone", could be a good case for something on a non-Intel process. Perhaps starting with TSMC 7nm and then moving on from there, as the more "enthusiast" range doesn't match most of what Intel is doing at the higher-end fab processes.]


Similarly, DG1 was also spun as useful in the laptop space. AdoredTV recently ran with a rumor that Rocket Lake comes with a Gen12 10nm GPU coupled to the 14nm CPU core in the same package. Hmmm, if DG1 is Xe-LP and it is coupled to a GPU-less CPU die ... Intel would be pitching DG1 variants in laptops at the same time as selling an iGPU solution.


Intel insiders who wanted Intel to jump into the middle "enthusiast" GPU space probably would be grumbling to AdoredTV. And the marketing folks dragged over from AMD, who chased Nvidia in that space, probably would be doing "great white whale" chasing with Intel's bigger pocketbook.

Raja Koduri's ego may be pushing in all four directions at once: Xe-LP, both variants Xe-HP is supposed to cover, and Xe-HPC, all at once. That would probably not end well for Intel. It doesn't look like they are. Intel is doing Xe-LP first (since that is closest to the dominant GPU area they have the most control over) and probably Xe-HPC next (since that would be a new area with much better profitability). The "enthusiast" part is probably the one they'll do last. However, they probably will stake out that zone with marketing hype, which will be easy to dismiss as a "disaster" from a "bunch of clowns".

If Intel's 7nm is tuned for better coverage of laptop power, then Xe-LP would probably do quite well. Xe-HPC probably will need lots of large dies to push through extra-large core counts at non-extra-high clock speeds. For embarrassingly parallel HPC workloads that is probably a pretty good match, if the software stack matches up. It looks like Intel is trying to harmonize the various more specific AI/ML solutions they have also bought with this.



To loop this all back to a somewhat relevant Mac Pro context: AdoredTV labels Vega as one of the top 3 tech disasters of the last decade. And yet thousands of folks get their professional work done on Mac Pros, iMac Pros, iMacs, and MBPs that use it. As an ultimate high-end gaming card, he has some points. As a computational get-work-done card? Not so much. He has been throwing some "hate" at Navi too, and yet the W5700X will probably work out just fine for the Mac Pro user base.
 
Last edited:

Zdigital2015

macrumors 601
Jul 14, 2015
4,144
5,624
East Coast, United States

I am interested to see how this thing translates into day-to-day usage for the average variety of design tasks, audio and video editing, CAD, et al. versus lesser CPUs. To me, I don't think it will be as impressive as people think. The single-core result is certainly good, if not awe-inspiring. The multi-core result is nuts.

HOWEVER, when it comes to 3-D rendering and other associated tasks of this nature, I expect the 3990X to crush all comers.

Now that it is a real product, let’s watch the fun develop.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
HOWEVER, when it comes to 3-D rendering and other associated tasks of this nature, I expect the 3990X to crush all comers.
This really depends on the workload and the render engine. I've read many reports of V-Ray underperforming (I know of V-Ray because that's what I use, but maybe other renderers are affected as well), and that despite synthetic benchmarks (even the V-Ray benchmark) reporting amazing scores for the latest TR family.
Here is just an example I was reading today: https://forums.chaosgroup.com/forum/chaos-common/chaos-common-hardware/1058933-which-computer-advice

Since the V-Ray forum requires registration, I've copied and pasted a few lines from a post:

“I have two custom made computers Intel i7-6950X 10-core and AMD Ryzen Threadripper 3960X 24-Core. I made a few benchmarks. V-Ray benchmark shows 12796 vs 36562. PassMark benchmark shows 19918 vs 46934. As you can see Ryzen is more than 2 times faster than i7 in two benchmarks. But when I render in 3ds Max using V-Ray 3.60 or V-Ray Next my i7 renders faster than Ryzen. The scene is pretty simple. It is a waterscape with the ship. There is a displacement for the water. Render time in 3dsMax2018 Vray 3.60 (bucket render) on i7 is 4:17 min, on Ryzen 7:20 min for resolution 1280x720. 3840x2160 resolution is even worse - 23 min vs 1h10mins. I am disappointed.
UPDATE. I made some research and have found that one parameter (Min shading rate) in render settings significantly increases render time on AMD CPU what doesn't make such impact on Intel CPU. If I switch to Progressive render on AMD CPU and keep 'Min shading rate' as default 6 then render time is better.

UPDATE #2. I made render test on my old i7-3930K 6-core machine, which is almost twice slower than i7-6950X. This is pity but it is faster than Ryzen 24-core CPU in this particular render task. Now I have next chart for 3840x2160 frame render time:
i7-6950X 10-core - 23 min
i7-3930K 6-core - 44 min
Ryzen TR 3960X 24-core - 1h 10 min”


Probably this is a specific worst-case scenario, but I've seen other reports as well (not as bad though).
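To put numbers on how far apart the synthetic scores and the real render times in that quote are, here is a quick sanity check. All figures come straight from the quoted post; the script just computes the ratios, it is not a benchmark of anything:

```python
# Figures quoted in the post above: V-Ray benchmark scores (higher
# is better) and actual 3840x2160 render times for the same scene.
vray_bench = {"i7-6950X": 12796, "TR 3960X": 36562}
render_min = {"i7-6950X": 23, "TR 3960X": 70}

# Expected speedup if real renders tracked the synthetic score:
expected = vray_bench["TR 3960X"] / vray_bench["i7-6950X"]
print(f"benchmark says the Threadripper should be ~{expected:.2f}x faster")

# What the quoted render times actually show:
observed = render_min["TR 3960X"] / render_min["i7-6950X"]
print(f"in this scene it was ~{observed:.2f}x SLOWER")
```

So the benchmark predicts roughly a 2.9x win for the Threadripper, while the quoted scene renders about 3x slower on it, which is why the poster (and sirio76) suspect a renderer/settings interaction rather than raw CPU throughput.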
And while we are on the subject, here's what Vlado (leading V-Ray developer) has to say about dual-Epyc systems:

“I do not currently recommend using dual-CPU EPYC systems for rendering; due to their NUMA configuration, the memory becomes a major bottleneck when the many cores of one CPU attempt to access the memory attached to the other CPU which results in slow renders (sometimes slower than rendering on one CPU). Dual-CPU NUMA configurations are an area that AMD has traditionally struggled with.

On the other hand, single-CPU systems where the entire memory is attached directly to the one CPU seem to perform much better.

In any case, it is best if you get a chance to test the system before paying for it.

Best regards,
Vlado”
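Vlado's NUMA point can be illustrated with a toy model: when memory accesses land in the other socket's RAM, they cost more, and that tax applies across all cores. The penalty and fraction numbers below are invented purely for illustration; real values depend on the CPU, BIOS settings, and how the renderer allocates memory:

```python
def effective_throughput(cores, per_core, remote_fraction, remote_penalty):
    """Toy model: a `remote_fraction` of memory accesses hit the other
    socket's RAM and cost `remote_penalty` times a local access,
    dragging down the effective throughput of every core."""
    slowdown = 1 + remote_fraction * (remote_penalty - 1)
    return cores * per_core / slowdown

# Single-socket system: all memory is local, no NUMA tax.
single = effective_throughput(cores=32, per_core=1.0,
                              remote_fraction=0.0, remote_penalty=3.5)

# Dual-socket with naive allocation: roughly half the accesses are remote.
dual = effective_throughput(cores=64, per_core=1.0,
                            remote_fraction=0.5, remote_penalty=3.5)

print(single, dual)  # with these assumed numbers, 64 cores end up SLOWER
```

With these made-up but plausible-shaped numbers, doubling the cores actually lowers throughput, which matches Vlado's "sometimes slower than rendering on one CPU". NUMA-aware allocation (keeping each thread's working set on its own socket) pushes `remote_fraction` toward zero and recovers the scaling.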

Long story short, the new TRs are very nice CPUs, even great depending on your workload, but your mileage may vary.
 

seek3r

macrumors 68030
Aug 16, 2010
2,564
3,779
Apple is for people who cannot do things themselves with computers. Linux is for people who can do that, and actually like it.


First of all, why are you even on a mac forum if you think that

Second of all, go to any major tech conference: the number of MacBook Pros around is astounding. If you think Apple machines aren't used by techies, you aren't in tech. I work in an all-Mac shop; my last 2 jobs, including a Fortune 50 company, were all-Mac shops on my team. My servers are all Linux, my builds happen on Linux machines, but my development machine, and that of my coworkers, is a Mac.
 

G4DPII

macrumors 6502
Jun 8, 2015
401
544
Sadly, Koyoot is now correct regarding Mac OS. It is no longer a tool to aid in productivity as it once was.

Apple have tried adding so much rubbish since Snow Leopard that it is a bloated pile of poop. It isn't fast or responsive anymore.

OSX is slowly going the same way as OS 9, turning into an abomination and a nightmare to use.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Apple have tried adding so much rubbish since Snow Leopard that it is a bloated pile of poop. It isn't fast or responsive anymore.
I cannot agree more, especially with the last sentence.

I was running Windows and macOS side by side, and added a Linux-based system to the equation for comparison. Only after that did I experience how an OS should behave and how fast it should be.

People may say whatever they want about the Linux kernel, that it is bloated because of the compatibility it has to maintain. But it is still by far the fastest OS on this planet, and quite possibly the most stable (especially the Debian-based ones).

Apple should get back to basics, forget about features that are not really important to the majority of professional users, and focus 100% on performance.

Only after that will it once again be the second-best OS in the world. Because no OS will beat Linux, for the time being.
 
  • Like
Reactions: ssgbryan

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
....

Development of macOS Catalina running on AMD APUs, and potentially CPUs, progresses. Enjoy.

Same stuff (December 2019 ) ...

"...
Van Gogh, that was found in Apple kexts in Catalina Beta is based on Zen 2 and RDNA(Navi, GFX10).

None of AMD's upcoming, standard Zen 2 APUs(Renoir and Dali) are based on Navi GPU. All of them have Vega(GFX9). ... "

[Dali appears to have turned out to be a "refreshed" 12nm GF Zen implementation, not Zen 2 (to hit lower discount pricing). Similarly, Renoir shrank the number of Vega CUs (which are better optimized, so this doesn't lead to a net loss in performance). So a Van Gogh with perhaps the same CU count as before on a bigger die wouldn't be particularly "super semi-custom". A variant that was walked away from because it didn't hit the defect density + economics + power targets wouldn't need much customization at all. Mac Mini or 21.5" iMac.]

Different day (November 2019):

".. Renoir , "Vangogh", Navi21 & Navi21 Lite in MacOS Catalina 10.15.2 beta 3 "
https://www.reddit.com/r/Amd/comments/dzbr6t

How is that progress if it has been there for over two months?

Apple didn't dislike Qualcomm. They largely disliked Qualcomm's high prices. If Intel throws larger profit margin at Apple with lower component pricing (and integrates Thunderbolt to lower build costs), Apple may not leave. AMD is a credible possible option, but betting the farm on some lingering beta markers is probably a bit too far. Apple deliberately exposing their "Plan B" port exercise is also a pricing negotiation tool with Intel.

Once again: AMD Picasso ... what 2020 product is Apple going to use that for? And yet it is present in the list. That's supposed to be progress toward a new Apple Mac product?
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
It's not the same stuff. Look closer at the new stuff in the newest beta, from December.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It's not the same stuff. Look closer at the new stuff in the newest beta, from December.

Like what? Other than a new feature being rolled out across all the actively supported architectures in the list.

If you're hand-waving at the LPDDR4 ... how did Renoir not have LPDDR4 3-4 months ago? You can label that as progress, but it curiously raises the question of why it wasn't there in the first place if this has been a "top priority" CPU chip.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Like what? Other than a new feature being rolled out across all the actively supported architectures in the list.

If you're hand-waving at the LPDDR4 ... how did Renoir not have LPDDR4 3-4 months ago? You can label that as progress, but it curiously raises the question of why it wasn't there in the first place if this has been a "top priority" CPU chip.
I think adding Renoir microcode to the kexts is meaningful.

Why wasn't there LPDDR4 support before?

Because it's not support for Renoir specifically, but macOS memory-type compatibility for the whole OS. In that kext you have ALL of the memory types: DDR3, DDR4, GDDR5, GDDR6, HBM2. There was no LPDDR4 support.

Guess which AMD APUs have LPDDR4 support?

With each Catalina beta they are laying the groundwork for pushing Catalina to run on AMD APUs/CPUs. What for?

To get a better deal with Intel? Or because March is coming with huge steps, and they need an AMD-based Apple computer ready for the keynote that will happen that month?
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794

It will only get worse as the PCIe 5.0 stuff starts coming out. Apple really needs to consider doing an AMD version of the Mac Pro. The desktop version destroys the Mac Pro. The Epyc, of course, does as well.

I'm glad I got my 28-core Mac Pro. It's way overpriced, underpowered, and power-inefficient compared to the AMD stuff, but macOS has me locked in, so I suffer the indignity. But this chip embarrasses the Intel offerings on every level.
 
  • Like
Reactions: throAU and ssgbryan

darthaddie

macrumors regular
Sep 20, 2018
182
222
Planet Earth
I think this video by Max answers most of the questions, and he totally nails the Mac advantages. I saw exactly the same things in comparison to my Windows build.


Summary: on some of the most important tasks that matter to me, the 32-core Threadripper is behind the 12-core Mac Pro. Not to mention the motorcycle rev-up phenomenon of the fans on any Windows PC. It's literally annoying.
 
  • Like
Reactions: Romanesco

blackadde

macrumors regular
Dec 11, 2019
165
242
Not to mention the motorcycle rev-up phenomenon of the fans on any Windows PC. It's literally annoying.

I'm reading 36 dB on my monitor at head height while blasting my PC by rendering in Fusion 360 (100% CPU) and FurMark (100% GPU) simultaneously. Even if I press up against the fan ducts in the back it's only 42 dB. I'm not even running anything special: a cheapie 120mm single-radiator AIO cooler for my CPU and a bog-standard 3-fan layout on an Nvidia GPU. I just set the fan curve to keep things quiet, no different than using smcFanControl in macOS.

Of all the criticisms to lob at modern PCs, this is the most nonsensical.
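The "fan curve" being set there is just a mapping from temperature to fan duty cycle that the firmware (or a tool like smcFanControl or a motherboard utility) interpolates between breakpoints. A minimal sketch of the idea; the breakpoint values below are invented for illustration, not taken from any real cooler:

```python
# A fan curve is a list of (temperature in C, fan duty in %) breakpoints;
# the controller linearly interpolates between adjacent points.
# These breakpoints are made up for illustration.
CURVE = [(30, 20), (50, 30), (70, 60), (85, 100)]

def fan_duty(temp_c, curve=CURVE):
    """Return the fan duty (%) for a temperature by interpolating the curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]          # below the curve: minimum duty
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            # Linear interpolation between the two surrounding breakpoints.
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]             # above the curve: maximum duty

print(fan_duty(40))  # midway between the 30C and 50C points -> 25.0
print(fan_duty(90))  # past the last breakpoint -> pinned at 100
```

Flattening the low-temperature end of the curve is what keeps a machine quiet at idle and under light load, which is exactly the tuning blackadde describes; the "motorcycle rev-up" effect comes from steep curves (or tiny stock coolers) that swing duty cycle sharply with small temperature changes.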
 

th0masp

macrumors 6502a
Mar 16, 2015
851
517
Not to mention the motorcycle rev up phenomenon of the fans with any windows pc. It’s literally annoying

I don't have that on mine at all. The GPU fan barely comes on during (3D) work, and the CPU fan is steady and barely audible (one of those big Noctua blocks that don't fit in every case).
That revving-up syndrome is something I've only experienced on PCs with small stock coolers.
 
  • Like
Reactions: throAU

Schismz

macrumors 6502
Sep 4, 2010
343
395
First of all, why are you even on a mac forum if you think that

Second of all, go to any major tech conference, the number of macbook pros around is astounding. If you think Apple machines aren't used by techies you aren't in tech. I work in an all mac shop, my last 2 jobs, including a fortune 50 company, were all mac shops on my team. My servers are all linux, my builds happen on linux machines, but my development machine and that of my coworkers, is a mac.
Intel is a dinosaur sinking into a tarpit, so what else is new. Different tools for different reasons.

No, I don't love and adore the iOSification of the artist formerly known as NeXTSTEP (pick capitalization based upon change of seasons), then Rhapsody, then OS X, then I didn't keep track... MacOS?, currently macOS. Anyway. If your OS and response time are super slow, there's something wrong, and either 1. you've used Migration Assistant 29 times in a row and have 12 years of cruft from 347 programs -- pardon me, Apps -- from other epochs wedged into your system, or 2. it's time to upgrade your hardware because it's not 2011 anymore.

Work = macOS. Servers are all Linux, firewalls and load balancers are often BSD-assisted behind Cisco. The right tools for the right tasks.

Having said that, 2002 will SURELY be the year of Linux as Desktop King! No, wait, I meant to say 2020 (holding breath). It's... what it is, but I wouldn't sit in front of it all day any more than I'd use macOS on a server.

Anyhoo... it's Sunday, having coffee break, typing from Mac Pro on Mac Pro Forum, not trolling Linux or Ultimate Gaming sites, so... context.

In conclusion: I <3 dark mode. Mmmmm. Soothing. So there's that.
 