> It isn't the support issue but more on how short the product cycle was before it was replaced.
That literally doesn't matter - at all.
> Is it really even necessary to build out that much unified memory? If you have 1.6TB/s access to 256GB of RAM in package, why not page it in from a marginally slower external bus to as many DDR5 DIMMs as you care to load?
Good point, I'd love to hear what the business case would be for all that RAM in the first place. 64 GB is a lot, 128 GB is twice a lot. But a TB? I suppose you could load your entire big freakin' database and avoid SSD access. I suppose you could have hundreds of simultaneous queries on a web server (limited to physical cores), I guess to avoid SSD access.
The number of workloads that truly require random access from anywhere to anywhere at minimal latency has to be tiny. As it stands, the purpose of system RAM is to keep the caches full without needing the high bandwidth/low latency you have on chip. Why wouldn't you follow a similar architecture to keep the on-package RAM fresh without needing the moderate bandwidth/moderate latency you have to on-package RAM?
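For a rough sense of the tiers being proposed, here is a back-of-the-envelope Swift sketch. The 1.6TB/s and 256GB figures come from the post above; the ~200 GB/s DIMM number is purely my assumption for illustration.

    // Back-of-the-envelope: how long it takes to stream the whole on-package pool at each tier.
    let onPackageBandwidth = 1_600.0   // GB/s, figure from the post above
    let dimmBandwidth      =   200.0   // GB/s, assumed external DDR5 DIMM tier
    let poolSize           =   256.0   // GB of on-package RAM, figure from the post

    let refillFromDIMMs = poolSize / dimmBandwidth       // ~1.3 s to refill from the DIMM tier
    let streamOnPackage = poolSize / onPackageBandwidth  // ~0.16 s at full on-package speed
    print(refillFromDIMMs, streamOnPackage)

So the slower external pool would only have to keep the on-package RAM "fresh", much as system RAM keeps the caches fed today.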
> Out of curiosity, what kind of workflows need 1TB or even 2TB of RAM?
Opening the Apple Music app with a large library...
> Opening the Apple Music app with a large library...
I don't get it, I just opened Apple Music and it's using 99 MB.
> I don't get it, I just opened Apple Music and it's using 99 MB.
It's a joke. The Apple Music app has been noted by a plethora of users to be incredibly slow with large libraries. It has gotten better with the 11/12 versions, but it's still not as smooth as iTunes. YMMV.
> It's a joke. The Apple Music app has been noted by a plethora of users to be incredibly slow with large libraries. It has gotten better with the 11/12 versions, but it's still not as smooth as iTunes. YMMV.
Now, the fact that you felt like you had to respond to that is funny! Got ya.
> Good point, I'd love to hear what the business case would be for all that RAM in the first place. 64 GB is a lot, 128 GB is twice a lot. But a TB? I suppose you could load your entire big freakin' database and avoid SSD access. I suppose you could have hundreds of simultaneous queries on a web server (limited to physical cores), I guess to avoid SSD access.
128 GB is not twice as much as 64 GB, but only a little bit more.
> Seriously, anyone advocating for this: 1) can you spell out the business case, and 2) define how big the market is?
I would expect that the market for high-memory computers is much larger than the market for workstations. However, I'm not sure how many people need high-memory workstations. Workstations are a small niche these days, because most people who need computational power can have it remotely.
> This is such an awful YouTube channel that everyone seems to be linking to lately.
Heheee. Cinema benchmark (though I get déjà vu from that word).
I assume you mean cores, not threads? (I.e. currently-executing streams, not executable streams)
128 GB is not twice as much as 64 GB, but only a little bit more.
> 128 / 64 = 2 is a fact. It isn't rationally subject to "alternative facts" or "in my opinion".
At first I was skeptical, but after exhausting research I can confirm...
It might be time to go out and rake some leaves, do some chores, or take some other break, because if the basic principles of math have to change to make your argument... you are off in the weeds. Way off.
> At first I was skeptical, but after exhausting research I can confirm... [attachment]
Impressive. I don't know if I would have gone to the lengths you did to find that answer.
> Impressive. I don't know if I would have gone to the lengths you did to find that answer.
You can't believe everything you read on the internet. Sometimes you have to do your own research to fight the misinformation out there...
This is why Apple has gone to a unified memory architecture (and why it is making such a big noise about it), and it is going to do anything it can to preserve that paradigm in order to maintain its advantages.
Bolting on pools of memory for the CPU that the GPU can't access defeats that.
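As a concrete illustration of what "unified" buys you, here is a minimal Metal sketch of my own (not anything Apple has published): with a shared buffer the CPU writes straight into memory a GPU kernel can read, with no copy across an external bus. A CPU-only DIMM pool the GPU can't reach would break exactly this pattern.

    import Metal

    // Minimal sketch, assuming an Apple silicon Mac: one .storageModeShared allocation
    // is visible to both the CPU and the GPU.
    guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }
    let count = 1024
    let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!
    let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count { values[i] = Float(i) }   // CPU write; a GPU kernel can read it as-is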
Out of curiosity, what kind of workflows need 1TB or even 2TB of RAM?
Is it really even necessary to build out that much unified memory? If you have 1.6TB/s access to 256GB of RAM in package, why not page it in from a marginally slower external bus to as many DDR5 DIMMs as you care to load?
Simultaneous Multithreading (SMT) [a.k.a. Hyper-Threading in Intel marketing speak] is a real thing. The threads/processes that the OS has instantiated and is scheduling for possible execution time are different from the threads assigned to the CPU for active/concurrent execution. Those are two different parts of the overall thread management task. "What is all the possible running stuff?" is substantively different from "Where did the currently active stuff get assigned?"
"Thread" is an overloaded term, where you need context to resolve what is actually being talked about.
Cores can have multiple active instruction streams going on concurrently (threads); there are 2-, 4-, and 8-way SMT implementations out there. Apple cores don't, because they don't have SMT. iOS and iPadOS don't have to cover cores with SMT; macOS does. macOS chokes/hiccups now on AMD x86 parts with 64 cores (and 128 execution threads). macOS would also have had issues with a two-socket Xeon SP 6200 series system (2 * 28 = 56 cores, 56 * 2 = 112 threads, and 112 > 64).
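If you want to see the core/thread split on a given machine, here is a quick Swift sketch using the standard sysctl names (hw.physicalcpu vs. hw.logicalcpu). On Apple silicon the two numbers match because there is no SMT; on a Hyper-Threaded Intel Mac the logical count is doubled.

    import Darwin

    // Read an integer sysctl by name: hw.physicalcpu = cores, hw.logicalcpu = hardware threads.
    func sysctlInt(_ name: String) -> Int32 {
        var value: Int32 = 0
        var size = MemoryLayout<Int32>.size
        sysctlbyname(name, &value, &size, nil, 0)
        return value
    }

    let cores   = sysctlInt("hw.physicalcpu")
    let threads = sysctlInt("hw.logicalcpu")
    print("cores: \(cores), hardware threads: \(threads)")   // equal on Apple silicon (no SMT)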
A "core" is more firmly grounded in something physical; execution streams aren't solely grounded there. In modern operating systems each process gets its own virtual address space. "Load data from address 12345678 into register 22" actually depends on which execution thread it is to determine what physical memory address that resolves to (and on many systems "22" is relative to a virtual context as well). Threads are, pragmatically, not solely a hardware implementation.
The macOS limitation has to do with tracking where in the CPU each thread is assigned for active execution.
Apple is going to a lot of effort to offload work, transparently to the macOS process/thread scheduler, onto execution units that aren't scheduled the classic way. AMX, NPU, IMG, and GPU "cores" are scheduled more by proxy by the operating system (and Apple presumably secures/translates the memory accesses so that they don't lose the virtual memory security features).
Apple doesn't need a huge number of generic, general-purpose cores if its primary market objective is more specific workloads with accelerators to match. Unified memory means that, for whatever concurrent thread is running, the accelerator can grab and do a significant part of the workload. That way apps don't have to chop work into pieces for more generic cores/threads to extract more parallelism, and the OS ends up with less scheduling work (the proxy effect gives it a "force multiplier").
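To make the "scheduled by proxy" point concrete, here is a tiny sketch using Apple's Accelerate framework: the app just calls a high-level primitive and never decides which execution unit (vector pipes, AMX blocks, etc.) actually runs it; that choice happens below anything the app or the thread scheduler sees.

    import Accelerate

    // The app expresses the work; where it executes underneath is not its concern.
    let a: [Float] = [1, 2, 3, 4]
    let b: [Float] = [10, 20, 30, 40]
    let sum     = vDSP.add(a, b)        // [11.0, 22.0, 33.0, 44.0]
    let product = vDSP.multiply(a, b)   // [10.0, 40.0, 90.0, 160.0]
    print(sum, product)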
> It wouldn't surprise me tremendously if Apple provided a Mac Pro where, say, the first 256GB are shared, and anything over that is not.
/me starts singing "Only Amiiiiiiiga makes it possible....."
Good point, I'd love to hear what the business case would be for all that RAM in the first place. 64 GB is a lot, 128 GB is twice a lot. But a TB? I suppose you could load your entire big freakin' database and avoid SSD access. I suppose you could have hundreds of simultaneous queries on a web server (limited to physical cores), I guess to avoid SSD access.
Seriously, anyone advocating for this: 1) can you spell out the business case, and 2) define how big the market is?
Many thanks
Yes, people who earn a living are disingenuous.
They presented the concept of a multi-die M1 Max for pro desktops accessible to more people.
Especially those who do not want to read.
If a channel's sole purpose is profit, their opinions are most certainly biased and skewed towards that.
So you're saying that if you do your job for the money you make, you can't be trusted to do your job?
"Monetization" just means they make money from views which might lead them to do more clickbaited titles of videos just rehashing past rumours but thats a long way from being biased (beyond what their personal believes might be).
Mac Pro owners can still enjoy 5 years or more of use. Apple doesn't start winding down support for a Mac until 5 years after it was removed from sale. It is another two years before it is declared obsolete.
It isn't the support issue but more on how short the product cycle was before it was replaced.
I think you all may be misunderstanding how Apple's vintage & obsolete designation process works. The length of time a model was on sale only matters insofar as it affects the support lifetime.
Products are considered vintage when Apple stopped distributing them for sale more than 5 and less than 7 years ago.
Products are considered obsolete when Apple stopped distributing them for sale more than 7 years ago.
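For what those two thresholds mean in practice, here is a toy Swift sketch; the enum and function are hypothetical, just encoding the 5- and 7-year cutoffs described above.

    import Foundation

    // Hypothetical classifier for the policy above: under 5 years since last sold = supported,
    // 5 to 7 years = vintage, 7+ years = obsolete.
    enum SupportStatus { case supported, vintage, obsolete }

    func supportStatus(lastSold: Date, now: Date = Date()) -> SupportStatus {
        let years = Calendar.current.dateComponents([.year], from: lastSold, to: now).year ?? 0
        switch years {
        case ..<5:  return .supported
        case 5..<7: return .vintage
        default:    return .obsolete
        }
    }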