Quote: "In Logic Pro, the Mac Pro 2023 M2 Ultra performs worse than the Mac Pro 2019 Intel Xeon 28 core." (View attachment 2226928)
Which test/benchmark did you run to get this result?
Got it, thanks.
So I just ran it in Logic Pro (10.7.8) on a Mac Pro M2 Ultra (24-core CPU, 76-core GPU, 32-core Neural Engine, 192GB RAM).
I ran the test per the directions - https://music-prod.com/logic-pro-benchmarks/
I assume I've done it correctly, but I'm not 100% sure; I just kept duplicating tracks (COMMAND + CLICK) and playing it back.
I attached the updated project for you to check.
386 tracks was fine (Activity Monitor: ~200% CPU). I doubled it to 772 tracks and that was good too (~400% CPU). I got up to 1001 tracks before Logic came up with a message about exceeding the 1000-track limit (see attached). It was playing the 1001 tracks back smoothly, with Logic using between 475% and 550% CPU in Activity Monitor.
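If anyone wants to log those numbers instead of eyeballing Activity Monitor, here's a rough sketch of how you could sample Logic's CPU% from the shell while the project plays back (the process-name match is an assumption; check what `ps -A -o comm` actually reports on your machine):

```python
# Rough sketch: sample the total CPU% of the Logic Pro process(es) once a
# second via `ps`, roughly matching what Activity Monitor shows.
import subprocess
import time

PROCESS_NAME = "Logic Pro"  # assumption: adjust to the name ps reports

def logic_cpu_percent() -> float:
    """Sum the CPU% of every process whose command contains PROCESS_NAME."""
    out = subprocess.run(
        ["ps", "-A", "-o", "%cpu=,comm="],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0.0
    for line in out.splitlines():
        cpu, _, comm = line.strip().partition(" ")
        if PROCESS_NAME in comm:
            total += float(cpu)
    return total

if __name__ == "__main__":
    for _ in range(30):              # sample for ~30 seconds during playback
        print(f"{logic_cpu_percent():.0f}% CPU")
        time.sleep(1)
```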
Quote: "Test it. 550 tracks."
Okay, I must have been doing something wrong...
Quote: "Okay, I must have been doing something wrong..."
We did some tests on a Mac Studio with M2 Ultra. The results were 373 and 374 tracks. In the test I gave you there are 550 tracks. Start with 350.
That sets off the System Overload.
Quote: "We did some tests on a Mac Studio with M2 Ultra. The results were 373 and 374 tracks. In the test I gave you there are 550 tracks. Start with 350."
Thanks for that.
Okay, I got it to 388 consistently; 389 gets the warning a couple of seconds into playback.
So an overall improvement of 2 tracks (0.52%).
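Quick sanity check on that percentage, assuming the baseline being compared against is 386 tracks (my assumption; it's the figure that makes 0.52% work out):

```python
# Back-of-the-envelope check of the "2 tracks (0.52%)" figure.
baseline_tracks = 386   # assumed comparison baseline
m2_ultra_tracks = 388   # result reported above

gain = m2_ultra_tracks - baseline_tracks
print(f"{gain} tracks, {gain / baseline_tracks:.2%}")   # -> "2 tracks, 0.52%"
```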
Okay, hol up... I've been reading the past few posts about the Logic benchmark, and I understand how that can be helpful to measure performance. But isn't that Logic benchmark just like one instrument repeated hundreds of times? At least the one I know of is like that. Here's why that's not realistic—when are you EVER going to have 200 pianos, or 200 synths, or 200 whatever, playing at the same time? Different samples consume different amounts of RAM and CPU. I think the Logic benchmarks are good to get an overall idea, but if someone actually tried to write something useful with a lot of tracks (rather than 200 sine waves or whatever), that would be a MUCH better metric of how well it performs on different systems.
Side note, I think Neil Parfitt is absolutely right about how unusable the 2023 Mac Pro is for music production. I get it, everyone's saying he's like 1% of everybody out there, but there are a LOT of other people in his situation. I'm not quite at his stage yet in terms of what he's doing professionally, but I know how this new machine presents a problem for people like him.
Quote: "I'd settle for an Apple foldable so I can bridge the iPhone and iPad experience with a single device. I have ZERO belief Apple will take such a step, as they see 2 devices sold over 1 as the better 'outcome'. As a consumer, though, and similar to the Mac Pro group who are disappointed but accept the Mac Pro is no longer for them, I moved to Samsung simply because they did cater for my needs. Apple continues to make money, and impressively so, but unless you're drinking the iPhone, iPad and MacBook Kool-Aid you're not their primary focus!"
I know this is a little off-topic, but to address your "Apple foldable" comment: before the Google Pixel Fold was released, I was very enthusiastic about Apple making their own foldable. See my topic here for details.
A person in our research group requested more storage and a second GPU to increase our DL experiment workload. I installed an 8TB Sabrent NVMe drive (on a PCIe adapter card) as well as an RTX A6000 GPU in our Dell workstation (the other workstation we have is a custom build with a Threadripper). I then set up the Linux drivers and had the A6000 running workloads in PyTorch within an hour, because Dell didn't throw a tantrum/hissy fit at NVIDIA and refuse to sign drivers, and thus did NOT screw over all their customers, unlike our favorite company.
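For reference, the smoke test to confirm PyTorch can actually see and use the card is about this much (a minimal sketch, not the real workload; the matrix size is arbitrary):

```python
# Minimal check that the GPU is visible to PyTorch and can run a kernel.
import torch

assert torch.cuda.is_available(), "CUDA driver/runtime not visible to PyTorch"
print(torch.cuda.get_device_name(0))   # should report the RTX A6000

x = torch.randn(4096, 4096, device="cuda")
y = x @ x                              # push a matmul through the GPU
torch.cuda.synchronize()               # wait for the kernel to finish
print("OK:", tuple(y.shape))
```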
This scenario is apparently beyond what Apple can (read: wants to) provide. Which is more sad and pathetic: Apple's behavior, or the Apple apologists stating that this situation isn't "professional" because Apple said so? I'm not quite sure.
The Apple apologists, by far, impulse462. Apple can claim business strategy; the apologists have no excuse.
Again, as a lot of folks have pointed out, including me, Apple's been aiming the Mac Pro at large-scale corporate buyers since 2019. IT at a big company doesn't do custom builds like that unless there's a very, very specific reason for it that can't be satisfied by a prebuilt solution. A uni lab, or even a small research group within a company, isn't the target market here.
Quote: "You have 2 workstations; the team of 14 people I'm on has at least 5 desktop workstations for people who need them, MacBook Pros or Dell XPS laptops for everyone, a million or so dollars of racked hardware, and millions of spend on cloud infra. If a designer on an adjacent team asks for a Mac Pro, it's a decent bet it'll be purchased and provisioned, no questions asked beyond standard approvals, but if they ask for a custom machine that IT has to build from parts, it would be a nightmare to get."
Usually, IT departments wouldn't build those machines. They'd just source them from somewhere else. It's not unheard of, but you are right in that it's never going to be a standard deployment and will only go to those who really need a machine that isn't merely a Dell Precision or an HP Z6/Z8.
Quote: "24-core M2 Ultra (no hyperthreading) versus 28-core Xeon (56 threads with hyperthreading) does not seem a fair match-up..."
If it's fair for Apple to make that comparison, then I don't see why it's not fair for anyone else to make it.
Maybe 24-core M2 Ultra (no hyperthreading) versus a 12-core (24 threads with hyperthreading) Xeon would be a more reasonable comparison...?
Quote: "But when was the last time you were working and an Apple Silicon Mac had a memory error?"
Dude, without ECC you have NO WAY of knowing when you had a memory error! That's one big point of having it: you get an actual warning of bad or failing memory. That's why EVERY device should have ECC. Computers have error detection in multiple places; what makes RAM, of all things, so unimportant that it shouldn't have it?
Quote: "Of course, the more RAM you use, the more important ECC becomes."
Why? If I have a large memory pool and one bad or failing chip, then the error rate as seen from the processor could be lower than when a smaller device with fewer chips has one bad or failing chip. And in either case, without ECC you'll have no idea that you have a problem and should stop trusting that device.
Quote: "Why? If I have a large memory pool and one bad or failing chip, then the error rate as seen from the processor could be lower than when a smaller device with fewer chips has one bad or failing chip. And in either case, without ECC you'll have no idea that you have a problem and should stop trusting that device."
RAM tests exist for non-ECC machines. That all being said, the last time I experienced failing RAM on a Mac, said Mac had user-removable RAM and wasn't a 27-inch iMac. And mind you, I come into contact with A LOT of Macs.
Quote: "Why? If I have a large memory pool and one bad or failing chip, then the error rate as seen from the processor could be lower than when a smaller device with fewer chips has one bad or failing chip."
Er... if you have a large memory pool, you have more memory chips to go wrong. And unless you stick 1TB of RAM in a machine and then use it to play Minecraft, you've got lots of memory because you need large data sets loaded into RAM (spanning lots of chips), so your chances of hitting a bad one are higher. Anyway, the main purpose of ECC is to catch and correct single-bit errors so your program doesn't crash and/or give garbage results. Such errors happen fairly randomly due to overheating, electrical noise, radiation and cosmic rays, and don't necessarily mean a failing chip.
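To make the catch-and-correct point concrete, here's a toy Hamming(7,4) sketch; it illustrates the principle rather than what a memory controller actually runs (DIMM-style ECC uses a 64-data + 8-check-bit SECDED code, which is where the roughly 12% extra memory comes from):

```python
# Toy single-error-correcting Hamming(7,4) code: 4 data bits, 3 check bits.
# A flipped bit is not just detected; the syndrome points at which bit to fix.

def encode(data4):
    """4 data bits -> 7-bit codeword, parity bits at positions 1, 2 and 4."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(code7):
    """Recompute the parities; a nonzero syndrome is the index of the bad bit."""
    c = [None] + list(code7)   # 1-indexed copy for readability
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s3 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome] ^= 1       # flip the offending bit back
    return [c[3], c[5], c[6], c[7]]   # recovered data bits

word = [1, 0, 1, 1]
stored = encode(word)
stored[5] ^= 1                 # simulate a random bit flip in "RAM"
assert correct(stored) == word
print("single-bit error corrected:", correct(stored))
```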
Quote: "There's no reason to restrict ECC to high-availability systems."
...apart from extra cost and complexity, needing ~12% more physical memory for the same available RAM, and wider data buses for "sideband" ECC or extra read/write operations for "inline" ECC (as would be used on Apple's LPDDR RAM).
People saying "I haven't had memory problems" are making a claim they can't back up: without ECC, there is no way they would have known about bit flips during normal operation.