
Gr1f

macrumors regular
Oct 1, 2009
160
29
I could totally see Apple making a LOT of cash by putting two M1 chips on a PCIe card and using it as a coprocessor in the Intel Mac Pro...
sell it for 3k and dominate everything on the market...
A 16-core Mac Pro with two M1 chips as coprocessors would run circles around a Threadripper... after all, the Afterburner card is Apple silicon used as an SoC/ASIC...
Now imagine every 16x slot populated with this kind of card and you are looking at something that will literally annihilate a 64-core EPYC with a 3090...
The M1 is a very, very powerful chip if you use it solely as a coprocessor in apps such as FCP, AME or Blender...
After seeing an M1 crunch Canon 1DX Mk III raw rushes in DaVinci Resolve like it was nothing: if Apple isn't thinking about building such cards, well, they absolutely should...
A Mac mini M1 scores over 7,000 in GB5,
so basically half the power of a 16-core Xeon that sits at 14,000...
Each of those dual-M1 PCIe cards would double the compute power of your MP 7,1...
One 16-core with one card would equal a 28-core... with GPU power on top of that...
Was wondering when someone might pipe up about the potential for this as an MPX or PCIe card. I wouldn't bet on it whatsoever, BUT there is a case for it. MP users tend to keep their machines for at least 5 years. The 7,1 being expandable gives Apple an opportunity to offer a significant upgrade of their own making. Better to get ~$3k off an MP owner 2 years into ownership than not? Not sure if it was even possible with previous shifts like PPC to Intel, but in this case it would be.

Or maybe I'm dreaming :)
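
For fun, here's that quote's back-of-envelope arithmetic as a quick Python sketch. The scores are the quoted poster's figures, and the big caveat is mine: Geekbench multi-core scores don't actually add linearly across a PCIe coprocessor, so treat this as a theoretical best case.

# Naive check of the quoted claim, using the poster's own GB5 figures.
# Assumes (very optimistically) that coprocessor scores simply add.
M1_GB5 = 7000          # quoted multi-core score for one M1 Mac mini
XEON_16C_GB5 = 14000   # quoted score for a 16-core Xeon Mac Pro

dual_m1_card = 2 * M1_GB5             # hypothetical dual-M1 PCIe card
total = XEON_16C_GB5 + dual_m1_card   # 16-core MP plus one card

print(f"Dual-M1 card: {dual_m1_card}")     # 14000: matches the 16-core Xeon
print(f"16-core MP + one card: {total}")   # 28000: the 'equals a 28-core' claim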
 
  • Like
Reactions: Flint Ironstag

edgerider

macrumors 6502
Apr 30, 2018
281
149
If that's a PCIe card, but not an MPX card, then it would be a big gift for 5,1 users (unless the card can only work via the T2).
As much as I like, and have loved, the 4,1/5,1, they are now completely obsolete for anything other than being repurposed as a file server.
The Mac mini M1 is simply faster than a maxed-out 5,1 with an RX 580 across every part of the spectrum, period.
If you work 8 hours a day, the drop in your electricity bill alone will pay for an M1 Mac mini over 2 years.
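
As a rough sanity check on that electricity claim, here is a quick Python sketch; the wattages and tariff below are assumptions, not measured figures:

# Assumed average draw under load: dual-CPU 5,1 ~450 W vs M1 mini ~30 W,
# 8 h/day, ~250 working days/year, European tariff ~0.30 EUR/kWh.
W_5_1, W_MINI = 450, 30
HOURS_PER_YEAR = 8 * 250
EUR_PER_KWH = 0.30

delta_kwh = (W_5_1 - W_MINI) * HOURS_PER_YEAR / 1000   # energy saved per year
savings_2y = 2 * delta_kwh * EUR_PER_KWH
print(f"Saved over 2 years: ~{savings_2y:.0f} EUR")    # ~504 EUR

At around €500 over two years that covers most of a base M1 mini at European prices, so the claim is at least in the right ballpark for heavy daily use.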

A dual-M1 PCIe card would have absolutely no use in a 5,1 because one M1 chip is already twice as fast as dual X5690s.

I have sold all my dual 12-core 5,1s, bought a couple of single-CPU ones for cheap, slapped in an ATTO 10GbE Ethernet card, and now I have a silent file server that reads/writes at 1,000 MB/s.

But back to the Radeon: I am quite tired of buying high-end GPUs at high prices, top-end cards that are unavailable and not well supported until they finally are, and two weeks later they are already obsolete, and the cycle keeps going.

I now have a €25k Mac Pro that is no faster at editing and grading 5.5K raw footage than a €2.5k Mac mini...

With the democratization of 8K H.265 everywhere, I feel that any upcoming Apple desktop silicon will be faster than the average multicore Intel chip + high-end GPU...

So, as soon as tests prove that the new 6000 series is faster than, let's say, an MPX Vega II Duo, and stays cheaper, I will do it.
Otherwise I will keep my X5700 + 2x W5700XT, or wait until I can get my hands on a second-hand Vega II Duo.
 

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
A dual-M1 PCIe card would have absolutely no use in a 5,1 because one M1 chip is already twice as fast as dual X5690s.
I don't get it.

If an M1 PCIe card can increase the 7,1's performance, why not the 5,1's?

The X5690's performance is irrelevant unless the X5690 is so slow that it isn't even fast enough to run the driver for the M1 (which is very unlikely).

It's just like when we enable HWAccel: the X5690's performance is irrelevant. The video engine on the GPU handles the H.264/HEVC codec.

What I expect is that if an M1 PCIe card existed, macOS would allow the M1 to perform some specific workflows, alleviating the X5690's load.

An M1 PCIe card could be very useful in a 5,1, not absolutely useless.
 
  • Like
Reactions: Flint Ironstag

edgerider

macrumors 6502
Apr 30, 2018
281
149
I don't get it.

If an M1 PCIe card can increase the 7,1's performance, why not the 5,1's?

The X5690's performance is irrelevant unless the X5690 is so slow that it isn't even fast enough to run the driver for the M1 (which is very unlikely).

It's just like when we enable HWAccel: the X5690's performance is irrelevant. The video engine on the GPU handles the H.264/HEVC codec.

What I expect is that if an M1 PCIe card existed, macOS would allow the M1 to perform some specific workflows, alleviating the X5690's load.

An M1 PCIe card could be very useful in a 5,1, not absolutely useless.
Because the host machine would slow down the add-on card...
The memory, data path, and PCH would bottleneck the card...
An M1 Mac by itself is already two to three times faster than a maxed-out 5,1...
It would be like dropping a Hayabusa engine into a vintage Mack truck to go faster: you would not get any benefit from it...
I don't even know if an M1 could actually "talk" to an Intel CPU via PCIe well enough not to be a huge bottleneck for the M1...
I don't know what the PCIe architecture on the M1 chip is...
 

Flint Ironstag

macrumors 65816
Dec 1, 2013
1,334
744
Houston, TX USA
Because the host machine would slow down the add-on card...
The memory, data path, and PCH would bottleneck the card...
An M1 Mac by itself is already two to three times faster than a maxed-out 5,1...
It would be like dropping a Hayabusa engine into a vintage Mack truck to go faster: you would not get any benefit from it...
I don't even know if an M1 could actually "talk" to an Intel CPU via PCIe well enough not to be a huge bottleneck for the M1...
I don't know what the PCIe architecture on the M1 chip is...
Just because the proposed card might not perform to its full potential in a 5,1 doesn't mean there's no market of people who would find the performance loss acceptable.

You see it every single day with GPUs.
 

edgerider

macrumors 6502
Apr 30, 2018
281
149
Because the host machine would slow down the add-on card...
The memory, data path, and PCH would bottleneck the card...
An M1 Mac by itself is already two to three times faster than a maxed-out 5,1...
It would be like dropping a Hayabusa engine into a vintage Mack truck to go faster: you would not get any benefit from it...
I don't even know if an M1 could actually "talk" to an Intel CPU via PCIe well enough not to be a huge bottleneck for the M1...
I don't know what the PCIe architecture on the M1 chip is...
Because it would be faster on its own... the X5690 would only add latency because of the narrower data path...
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
There isn't really any way you could run the M1 on a PCIe or MPX card that makes sense for performance or normal use. The M1 wouldn't have performant access to built-in memory or storage. And the M1 wasn't designed for multiprocessor situations, so it's missing the bits you'd need to sync with the Intel CPU, assuming the Intel CPU and M1 even had compatible multiprocessing implementations. The M1 and the Intel host would have to communicate about which data each has checked out in its memory controller and caches, and they are missing the on-chip silicon to do that.
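
For anyone wondering what "checked out in its memory controller and caches" means, here's a toy illustration of the coherency problem (a loose Python analogy; real protocols like MESI are far more involved):

# Two 'processors' each keep a private cached copy of shared data.
# Without a protocol to invalidate or sync caches, one side reads stale data.
shared_memory = {"frame_count": 0}

class Processor:
    def __init__(self, name):
        self.name = name
        self.cache = {}                         # private cache, no coherency

    def read(self, key):
        if key not in self.cache:               # cache miss: fetch from memory
            self.cache[key] = shared_memory[key]
        return self.cache[key]                  # cache hit: possibly stale!

    def write(self, key, value):
        self.cache[key] = value                 # update own cache...
        shared_memory[key] = value              # ...but nobody tells the peer

intel, m1 = Processor("Xeon"), Processor("M1")
print(intel.read("frame_count"))   # 0, now cached on the Intel side
m1.write("frame_count", 42)        # M1 updates memory and its own cache
print(intel.read("frame_count"))   # still 0: stale, no invalidation signal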
 

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
Because the host machine would slow down the add-on card...
The memory, data path, and PCH would bottleneck the card...
An M1 Mac by itself is already two to three times faster than a maxed-out 5,1...
It would be like dropping a Hayabusa engine into a vintage Mack truck to go faster: you would not get any benefit from it...
I don't even know if an M1 could actually "talk" to an Intel CPU via PCIe well enough not to be a huge bottleneck for the M1...
I don't know what the PCIe architecture on the M1 chip is...
The whole idea of an expansion card is to increase the whole system's performance.

e.g. the CPU is too slow to encode HEVC, so we install a GPU in the PCIe slot and let the GPU do it at a much faster speed.

If whatever we install in the slot can only run at or below the original hardware's speed, then what's the point of having expansion slots?

And from Intel, we know that compute can be expanded via a PCIe slot (Xeon Phi). So I can't see why the X5690 would be the limiting factor. All it needs to be is fast enough to run the driver, especially since the M1 is an SoC with its own memory, etc.
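
That Xeon Phi point is the crux: in an offload model the host only dispatches jobs and collects results, so host speed barely matters. A toy sketch of that division of labour (pure illustration, nothing Phi- or M1-specific):

import queue
import threading
import time

# The 'host' (think X5690) only queues work and collects results; the
# 'accelerator' (think M1-on-a-card, with its own memory) does the heavy
# lifting. The host just has to be fast enough to keep the queue fed.
jobs = queue.Queue()
results = queue.Queue()

def accelerator():
    while True:
        frame = jobs.get()
        if frame is None:          # sentinel: no more work
            break
        time.sleep(0.01)           # stand-in for the heavy encode/decode
        results.put(frame * 2)     # stand-in for the processed frame

worker = threading.Thread(target=accelerator)
worker.start()

for frame in range(10):            # cheap for the host: just dispatch
    jobs.put(frame)
jobs.put(None)

worker.join()
print([results.get() for _ in range(10)])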
 
  • Like
Reactions: Flint Ironstag

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
The whole idea of an expansion card is to increase the whole system's performance.

e.g. the CPU is too slow to encode HEVC, so we install a GPU in the PCIe slot and let the GPU do it at a much faster speed.

If whatever we install in the slot can only run at or below the original hardware's speed, then what's the point of having expansion slots?

And from Intel, we know that compute can be expanded via a PCIe slot (Xeon Phi). So I can't see why the X5690 would be the limiting factor. All it needs to be is fast enough to run the driver, especially since the M1 is an SoC with its own memory, etc.
Mac Pros already come with an M1-style chip to accelerate some operations. That's the T2. :p

There isn't really anything that would justify shoving an entire M1 into a Mac Pro. It's going to contain a lot of general-purpose silicon you'd never be able to use, and you'd need to run it as its own second computer running its own macOS instance. At that point, just buy an M1 Mac mini and network it to your Mac Pro.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
It's total nonsense to say the M1 is 2-3x faster than an X5690 MP. Barefeats did a comparison of the M1 MBP vs an upgraded 2010 MP, and although the former was significantly faster for some tasks, in many others the machines were broadly comparable. In fact, for some Metal / 3D game benchmarks, the 2010 MP was much faster. m1-macbook-pro-versus-2010-mac-pro.html

I can't see Apple releasing an Mx PCIe card; the 7,1 is too new to need it and the 5,1 MP hasn't even been supported by the last two versions of macOS. As far as the 4/5,1 is concerned, I'd rather put the money into an AS tower with PCIe 4.0 slots, M.2, USB-C etc. Half the appeal of AS is the power efficiency - a pair of idling Xeons kind of defeats the point. I won't be paying £6000 though - £2000 seems more reasonable.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
It's total nonsense to say the M1 is 2-3x faster than an X5690 MP. Barefeats did a comparison of the M1 MBP vs an upgraded 2010 MP, and although the former was significantly faster for some tasks, in many others the machines were broadly comparable. In fact, for some Metal / 3D game benchmarks, the 2010 MP was much faster. m1-macbook-pro-versus-2010-mac-pro.html

I can't see Apple releasing an Mx PCIe card; the 7,1 is too new to need it and the 5,1 MP hasn't even been supported by the last two versions of macOS. As far as the 4/5,1 is concerned, I'd rather put the money into an AS tower with PCIe 4.0 slots, M.2, USB-C etc. Half the appeal of AS is the power efficiency - a pair of idling Xeons kind of defeats the point. I won't be paying £6000 though - £2000 seems more reasonable.

That's the other thing that makes no sense. A Radeon 6800 or 6900 would be 8-16x faster at an acceleration task than an M1. Why would Apple go out of their way to add a slower co-processor meant for ultralight portables/netbooks when there are much better options that are already compatible?
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
Why would Apple go out of their way to add a slower co-processor meant for ultralight portables/netbooks when there are much better options that are already compatible?
A: They wouldn't.

People seem to think that Apple has discovered some new way of using silicon that no one else knows about, where they will suddenly be making industry-leading GPUs that use 10W of power. Apple's priority is the iPhone; anything they make will be an SoC, essentially an A-series processor with more cores and a higher TDP. Given how fast these are, this will be perfectly adequate for their volume MacBooks and iMacs. Apple will not make a standalone GPU - this has no application in an iPhone or iPad, and it will be much cheaper / easier to buy them in from AMD. AMD cater to the whole PC / console market, so can spread the costs of developing advanced features like hardware raytracing, that won't be relevant to a phone for a very long time. Why would Apple bother competing in this space?

I imagine it will be something like:

12" / low-end mini > M1

14" / low-end 16" MB / high-end mini / 21" iMac > M2

High-end 16" MBP / 27" iMac > M2 + mid-level AMD GPU

MP > 2/4x M2X + (2x) high-end AMD GPU

The M2X would just be an M2 that can work in a multiprocessor configuration. I don't think they'll want to develop a significantly different SoC for the low-volume MP.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
Apple will not make a standalone GPU - this has no application in an iPhone or iPad, and it will be much cheaper / easier to buy them in from AMD. AMD cater to the whole PC / console market, so can spread the costs of developing advanced features like hardware raytracing, that won't be relevant to a phone for a very long time. Why would Apple bother competing in this space?

I'd normally agree, but the rumor mill all seems to agree that Apple is making discrete GPUs.

AMD still hasn't delivered something that is definitively better than Nvidia. It might be that Apple is hoping they can finally put the Nvidia rumbling to bed.

For volume - it could be an issue. I don't know how cost-effective it is. But if discrete GPUs make their way back into iMacs and MacBook Pros, it could be worth it.

The rumor is that the AMD GPUs Apple is using are actually already limited runs that Apple is doing themselves. That's why you've had things like 7 nm Vegas. Apple is (supposedly) the one producing those GPUs, using their 7 nm capacity at TSMC to do so. So if they're already doing limited production of AMD designs, why not do limited production of their own designs?
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
AMD still hasn't delivered something that is definitively better than Nvidia. It might be that Apple is hoping they can finally put the Nvidia rumbling to bed.
So are you saying that Apple will be better at designing GPUs than Nvidia, whose designs not even the second-largest GPU maker (and leading x86 CPU maker) has been able to get close to in years? Whose R&D is supported by supplying GPUs to everything from laptops to high-end ML servers? And that they will keep up with them on an ongoing basis? Despite Apple's discrete GPUs only being used in 16" MBPs, higher-end iMacs and the MP, and never being sold to the wider market?
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
So are you saying that Apple will be better at designing GPUs than Nvidia, whose designs not even the second-largest GPU maker (and leading x86 CPU maker) has been able to get close to in years? Whose R&D is supported by supplying GPUs to everything from laptops to high-end ML servers? And that they will keep up with them on an ongoing basis? Despite Apple's discrete GPUs only being used in 16" MBPs, higher-end iMacs and the MP, and never being sold to the wider market?

Not sure if they will be better or not.

_But_ they do have access to TSMC's 5 nm production, and Nvidia does not. That gives Apple an advantage right out of the gate. Nvidia is still stuck at 8 nm. Apple could design a less optimized GPU and _still_ come out faster and cooler.

Apple having semi exclusive access to TSMC's 5 nm process, and soon their 3 nm process, will be something they can use to bludgeon all the CPU and GPU manufacturers.

It's totally a business monopoly that Apple has set up, and not an R&D advantage. But it still makes a huge difference.

Apple has to design a GPU for its SoCs whether or not they do discrete. It's not like they're going to avoid the GPU R&D if they avoid discrete GPUs.
 
  • Like
Reactions: Flint Ironstag

h9826790

macrumors P6
Apr 3, 2014
16,656
8,587
Hong Kong
Mac Pros already come with an M1-style chip to accelerate some operations. That's the T2. :p

There isn't really anything that would justify shoving an entire M1 into a Mac Pro. It's going to contain a lot of general-purpose silicon you'd never be able to use, and you'd need to run it as its own second computer running its own macOS instance. At that point, just buy an M1 Mac mini and network it to your Mac Pro.
I know, but that's not what we were discussing.

If you check the history, my post was referring to "if Apple releases an M1 PCIe card for the 7,1", which means the assumption is "the T2 isn't a factor anymore, because Apple has already decided to release such a card for the 7,1".

Anyway, why would Apple want to make such a card? I don't know. That discussion is based on "Apple will do that", not on trying to find out "why Apple would want to do that". But how about if Apple released an M1 PCIe card that's more expensive than the mini?
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
@goMac Fair enough, if whoever fabs for Nvidia is stuck on 8nm, and Nvidia is unwilling to pay to jump TSMC’s queue (their GPUs already lead the pack). If / when they do get their GPUs out on 5nm though, they’ll be pretty fearsome.

Apple will need to design GPUs regardless, but are their designs especially awesome, or is it just that they’re part of an industry leading SoC? And do any advantages transfer to a design on a PCIe interface, with its own VRAM etc?
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
I know, but that's not what we were discussing.

If you check the history, my post was referring to "if Apple releases an M1 PCIe card for the 7,1", which means the assumption is "the T2 isn't a factor anymore, because Apple has already decided to release such a card for the 7,1".

Anyway, why would Apple want to make such a card? I don't know. That discussion is based on "Apple will do that", not on trying to find out "why Apple would want to do that". But how about if Apple released an M1 PCIe card that's more expensive than the mini?

I don't really understand where this is going. The T2 already provides the sort of acceleration being talked about. The M1 wouldn't add much for a Mac Pro. The T2 already has the same sorts of acceleration for tasks like encoding and decoding that the M1 has.

There isn't a clear reason for Apple to make such a card, especially since such a card would never be able to just run ARM macOS apps.

@goMac Fair enough, if whoever fabs for Nvidia is stuck on 8nm, and Nvidia is unwilling to pay to jump TSMC’s queue (their GPUs already lead the pack). If / when they do get their GPUs out on 5nm though, they’ll be pretty fearsome.

Apple will need to design GPUs regardless, but are their designs especially awesome, or is it just that they’re part of an industry leading SoC? And do any advantages transfer to a design on a PCIe interface, with its own VRAM etc?

When Nvidia is on 5 nm, Apple will be on 3 nm. Because of their business investments, Apple can continuously leapfrog Nvidia and AMD. And... whatever Intel is doing.

I don't think adding VRAM for a GPU or moving to PCIe is a big deal for Apple. The M1 already has a PCIe bus; I'm not sure if the GPU is already on PCIe inside the M1 chip. It might be!

The big risk to Apple is if someone like Samsung comes along and gets 3 nm production going at the same time as TSMC. Then Nvidia can go to Samsung and take on Apple from the same fabrication node.

I think the biggest danger to Apple is actually if Intel gets access to 5 nm or 3 nm production. Intel's designs are pretty well optimized, and could come roaring back in CPU performance if they found a good manufacturer.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
I don't think adding VRAM for a GPU or moving to PCIe is a big deal for Apple
I'm not suggesting this would be a big technical challenge for Apple, just that one strength of an SoC is its tight integration with system memory and so on. Once you redesign it to sit on PCIe, use its own VRAM, etc., how similar is it really? And is there anything particularly exotic about Apple's GPU cores vs typical PC GPUs anyway? The smartphone competition isn't a high bar in general, as their simple GPUs typically only support cut-down APIs like OpenGL ES.
 

SecuritySteve

macrumors 6502a
Jul 6, 2017
949
1,082
California
I'm not suggesting this would be a big technical challenge for Apple, just that one strength of an SoC is its tight integration with system memory and so on. Once you redesign it to sit on PCIe, use its own VRAM, etc., how similar is it really? And is there anything particularly exotic about Apple's GPU cores vs typical PC GPUs anyway? The smartphone competition isn't a high bar in general, as their simple GPUs typically only support cut-down APIs like OpenGL ES.
A GPU optimized for Metal might actually offer tangible performance benefits over an NVIDIA or AMD card, as opposed to their generic API-agnostic platform. Dollars to dollars, I would expect an Apple GPU to outperform a similarly priced NVIDIA or AMD GPU of the same power-class in similar applications between platforms.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
Intel's designs are pretty well optimized, and could come roaring back in CPU performance if they found a good manufacturer.
Yeah, they're surprisingly competitive really, considering they're on 14nm and AMD is on 7nm. A 5nm Intel CPU would likely be pretty impressive. They seem to have a lot of security flaws though (even if most attacks seem pretty theoretical).
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
A GPU optimized for Metal might actually offer tangible performance benefits over an NVIDIA or AMD card, as opposed to their generic API-agnostic platform. Dollars to dollars, I would expect an Apple GPU to outperform a similarly priced NVIDIA or AMD GPU of the same power-class in similar applications between platforms.
Fair point. Not sure how easy it would be to calculate a ‘dollars to dollars’ comparison though.

GPU APIs seem to be quite different to CPU instruction sets. GPUs can support multiple APIs (DirectX, OpenGL etc.) and new APIs like Vulkan or Metal can be written to work with existing GPUs. AMD and Nvidia's internal architectures are likely wildly different, yet still support the same APIs. This suggests to me that it's more about the driver than the silicon, though I expect a GPU designed from the ground up to support Metal (and perhaps only Metal) would have some advantages. I'm sure others can speak more authoritatively on this though.
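
A loose software analogy for "more about the driver than the silicon": one API, two very different backends, with the driver layer doing the translation. Entirely illustrative; the class names are made up:

from abc import ABC, abstractmethod

# One graphics API, two wildly different 'architectures' behind it. The
# driver translates API calls into whatever the hardware understands;
# application code never changes.
class GPUDriver(ABC):
    @abstractmethod
    def draw_triangles(self, count): ...

class ImmediateModeDriver(GPUDriver):      # desktop AMD/Nvidia style, roughly
    def draw_triangles(self, count):
        return f"rasterize {count} triangles straight to VRAM"

class TiledDeferredDriver(GPUDriver):      # Apple/mobile style, roughly
    def draw_triangles(self, count):
        return f"bin {count} triangles into tiles, then shade per tile"

def render(driver):                        # app code is backend-agnostic
    print(driver.draw_triangles(1000))

render(ImmediateModeDriver())
render(TiledDeferredDriver())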
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
I'm not suggesting this would be a big technical challenge for Apple, just that one strength of an SoC is its tight integration with system memory and so on. Once you redesign it to sit on PCIe, use its own VRAM, etc., how similar is it really? And is there anything particularly exotic about Apple's GPU cores vs typical PC GPUs anyway? The smartphone competition isn't a high bar in general, as their simple GPUs typically only support cut-down APIs like OpenGL ES.

SoC has a lot of tradeoffs, and Apple's GPU design might perform better as a discrete GPU.

The tight integration with system memory could actually end up being a bad thing for higher-performance designs. System memory is relatively slow DDR4. An Apple GPU that had its own dedicated bank of GDDR5 or HBM2 might actually perform better.

Apple's done a lot of optimization to try to work around the slow speed of DDR4. The shared memory addressing prevents copying, and the tiled renderer tries to minimize the amount of bandwidth actually needed to drive graphics. But both those tricks may not carry Apple's design all the way to where it needs to be for the Mac Pro, especially when you get to stuff like compute, where tiled rendering doesn't necessarily apply. For use cases where that memory bandwidth is really necessary, Apple's SoC designs won't hold up.

Apple might have some more tricks up their sleeve, but as someone who doesn't know what secrets they have, it would seem like they're going to need to switch to a discrete design with fast memory at some point to fulfill some use cases.

None of these changes would badly break their design. Adding more GPU cores and changing the memory controller comes with its own challenges. But it's not unreasonable to expect Apple could take their existing GPU cores, add a bunch more on a larger die, and then change the memory controller to use discrete memory. That all builds on their existing GPU design and doesn't really break anything they've done. Metal would already support this configuration.

If they really wanted to preserve shared address spaces, they could always do something like Infinity Fabric.
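
Some rough numbers behind that bandwidth argument (approximate public specs; nothing here is from Apple about future parts):

# Approximate peak memory bandwidth in GB/s: the M1's LPDDR4X figure is
# the widely reported ~68 GB/s; the two cards are typical public specs
# for GDDR6 and HBM2 boards of the same era.
bandwidth_gbs = {
    "M1 unified LPDDR4X": 68,
    "Radeon RX 6800 (GDDR6)": 512,
    "Radeon Pro Vega II (HBM2)": 1024,
}

base = bandwidth_gbs["M1 unified LPDDR4X"]
for name, bw in bandwidth_gbs.items():
    print(f"{name}: {bw} GB/s (~{bw / base:.0f}x the M1)")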

A GPU optimized for Metal might actually offer tangible performance benefits over an NVIDIA or AMD card, as opposed to their generic API-agnostic platform. Dollars to dollars, I would expect an Apple GPU to outperform a similarly priced NVIDIA or AMD GPU of the same power-class in similar applications between platforms.

Looking at the performance of M1 and extrapolating, it seems very likely Apple could beat AMD. Maybe a tie against NVIDIA, but a tie against NVIDIA is still really good. And that might be enough to start converting more Windows users. Especially with the CPU performance Apple has already shown.
 
  • Like
Reactions: Flint Ironstag

edgerider

macrumors 6502
Apr 30, 2018
281
149
There isn't really any way you could run the M1 on a PCIe or MPX card that makes sense for performance or normal use. The M1 wouldn't have performant access to built-in memory or storage. And the M1 wasn't designed for multiprocessor situations, so it's missing the bits you'd need to sync with the Intel CPU, assuming the Intel CPU and M1 even had compatible multiprocessing implementations. The M1 and the Intel host would have to communicate about which data each has checked out in its memory controller and caches, and they are missing the on-chip silicon to do that.
They could totally do that very efficiently through an InfiniBand-style protocol over x16 PCIe Gen 3; that is basically what clusters do.
Of course it would not be just a standard M1 chip. What I mean is that the Afterburner card showed Apple knows how to deal with FPGA- and ASIC-style coprocessing.
So far my understanding is that the M1 chip doesn't have true PCIe lanes, or at least I could not find anything regarding PCIe lanes on the M1.
Anyhow, what I was referring to is the next version of the M1, which will need to support PCIe because it is a standard and most professional Thunderbolt gear uses the PCIe layer of Thunderbolt.
So basically a x16 PCIe interconnect is more than enough to pass the data.
Even 12K from the BMD URSA Mini Pro 12K, currently the highest-end video format, is "only" about 1.5 Gb/s, so moving the data would not be an issue on an 11 Gb/s bidirectional InfiniBand-style link.
Data flow is never the issue in video workflows; crunching the numbers is.
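
Rough numbers on that (the stream bitrate is the figure above; the PCIe number is the standard Gen 3 x16 spec):

# Video bitrate vs interconnect bandwidth. PCIe 3.0 x16 is ~15.75 GB/s
# usable, i.e. ~126 Gb/s, far beyond any single camera stream.
VIDEO_GBPS = 1.5                 # ~12K raw stream, per the post
PCIE3_X16_GBPS = 15.75 * 8       # PCIe 3.0 x16, converted to gigabits/s

print(f"PCIe 3.0 x16: ~{PCIE3_X16_GBPS:.0f} Gb/s")
print(f"Headroom: ~{PCIE3_X16_GBPS / VIDEO_GBPS:.0f}x the 12K stream")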
ASICs/GPUs can't really be modified after being etched; on the other hand, FPGAs don't show a very good performance/cost ratio for codec-specific H.265 encoding.
It looks like a lot of people here confuse very good, artistic video content creation with a true motion picture/broadcast workflow.

In the first case you shoot mostly AVCHD or H.264/265, then cut, edit, grade, and export in that format. You may have several source formats with a mix of compressed codecs, but they usually sit on your drive or an office NAS. The M1 is perfect for that.

On the other hand, a motion picture/broadcast editing/grading station is all about being a Swiss Army knife that can work over a unified network with shared projects and several people working on different things.
You will have to work with very different formats, ranging from animated GIFs to 12K final mastering on a 120-minute film.
On those, your final product is never H.265... compressed codecs are only used for customer review.
And... unfortunately those are the exports you do the most...
This is why in my company I now have a Mac mini that does just that: H.265 and QuickTime encoding in AME or Resolve. I don't make any money doing encoding, and tying up a €50k editing station just to watch a progress bar encode H.265 for customer review is a total waste of money.

The M1 doesn't have hardware acceleration for H.265 encoding, but it is as fast as the Mac Pro at exporting H.265... and the Mac mini draws 60 W while the Mac Pro and the rack draw 1,000 W... so having an internal "cluster" of Apple silicon on PCIe cards would actually be super smart.
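
That "offload the review encodes to a cheap box" setup doesn't even need a PCIe card today. A sketch like the one below (hostname and paths are made up; it assumes ffmpeg with the VideoToolbox encoder is installed on the mini) already does it over the network:

import subprocess

# Push a review encode to an M1 Mac mini over SSH instead of tying up
# the grading workstation. Host, paths and settings are examples only.
SRC = "/Volumes/SAN/exports/cut_v3.mov"       # hypothetical shared-storage path
DST = "/Volumes/SAN/review/cut_v3_h265.mp4"

subprocess.run(
    [
        "ssh", "editor@m1-mini.local",        # hypothetical mini on the LAN
        "ffmpeg", "-i", SRC,
        "-c:v", "hevc_videotoolbox",          # VideoToolbox HEVC encoder
        "-b:v", "10M", "-tag:v", "hvc1",      # bitrate + QuickTime-friendly tag
        DST,
    ],
    check=True,
)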
 