Admitting I'm ignorant of the various broadcast tech and system bottlenecks that crunch the dataflow, I'm going to assume something like an external Blackmagic DeckLink on Thunderbolt with internal storage to buffer, connected to an iMac Pro or a 'blackbox' iMac Pro 2,1?
Ingesting 'day old' movie-camera footage from a remote set isn't exactly the wheelhouse of the "National Association of Broadcasters". On-the-fly, live ingest of uncompressed 8K-10K footage from tethered cameras is a different ball game.
Look at the specs for the single-camera 8K DeckLink:
" ... PCI Express 8 lane generation 3, compatible with 8, 16 lane PCI Express slots ... "
https://www.blackmagicdesign.com/products/decklink/techspecs/W-DLK-34
And if you go to the support page for the 8K model to see which specific hardware systems are supported:
" ...
Supported Hardware... "
https://www.blackmagicdesign.com/support/note/9568
That supported hardware section is completely blank of anything Mac right now. Zip. Nothing.
Windows and Linux... no macOS at all. That is currently a hole in the line-up, and at NAB '19 it will still be a hole. When the "data hog footprint" cameras step up to 10K, it will only get worse if Apple rigidly sticks to their current track.
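To put rough numbers on why that card wants an x8 Gen 3 slot, and why a step up to 10K makes the hole worse, here is a back-of-the-envelope sketch. The resolutions, the 60 fps rate, and the 10-bit 4:2:2 sampling (20 bits/pixel) are my illustrative assumptions, not figures from the DeckLink spec sheet:

```python
# Back-of-the-envelope: uncompressed video data rates vs. a PCIe 3.0 x8 slot.
# Resolutions, frame rate, and 10-bit 4:2:2 chroma (20 bits/pixel) are
# illustrative assumptions, not DeckLink spec figures.

GB = 1e9  # work in decimal gigabytes per second

def video_rate_gbs(width, height, fps, bits_per_pixel=20):
    """Uncompressed data rate in GB/s (10-bit 4:2:2 ~ 20 bits/pixel)."""
    return width * height * fps * bits_per_pixel / 8 / GB

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane.
pcie3_x8 = 8 * 0.985  # ~7.9 GB/s usable across 8 lanes

for name, w, h in [("4K UHD", 3840, 2160),
                   ("8K UHD", 7680, 4320),
                   ("10K (assumed)", 10240, 5760)]:
    rate = video_rate_gbs(w, h, fps=60)
    verdict = "fits in" if rate < pcie3_x8 else "exceeds"
    print(f"{name}: {rate:.2f} GB/s -> {verdict} PCIe 3.0 x8 (~{pcie3_x8:.1f} GB/s)")
```

Under those assumptions, a single uncompressed 8K/60 feed is already ~5 GB/s: most of a Gen 3 x8 slot's ~7.9 GB/s and more than a Thunderbolt 3 link (40 Gb/s, so ~5 GB/s at the absolute best) can carry. So the "external box on TB" assumption above doesn't really hold at 8K, and the assumed 10K feed blows past the slot entirely.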
Broadcasting 8K is pretty close to a solution in search of a problem, but folks are going down that rabbit hole.
Is the Mac product line-up going to collapse if they don't fill that hole? No. Apple focusing on "dailies" ingested from sneaker-net camera drives can, does, and probably will continue to work over time. Internet-only "national" broadcasters are more than happy with 4K as an upper boundary for the immediate future. When Apple is a "national" broadcaster with their own streaming service..... that could change their viewpoint of what the core of national broadcasting is.
Can see GPU/storage solutions being showcased; mentioned it from the start, even.
You are talking about Apple demoing cherry-picked solutions that fit what they have. I'm talking more about competitor solutions that have better economics and align with the rest of the infrastructure in future deployments (2+ GPUs in a single box, not spread over 2+ eGPU boxes; bigger internal "working set" data capacity; the overall system racked, with discrete monitors).
I think Apple wanted the iMac Pro to be the 'next Mac Pro'; I'd argue that Apple wants out of the space you think the next Mac Pro should exist in.
Which Mac Pro? The iMac Pro as the next Mac Pro 2013? Yes, there is substantial overlap in targeting there. Apple said explicitly that they saw a substantive number of folks going from the Power Mac and early Mac Pros to iMacs. (E.g., some folks just wanted a desktop that "just worked". Not a tinker box. Not "has to look like my PC from 5-10 years ago". Just plug in... good contemporary desktop processor performance, and simply use it.) The iMac Pro is more a refinement of a major portion of what the Mac Pro 2013 covered.
I don't think Apple thought they were completely covering the old Power Mac / Mac Pro market with the iMac Pro (or the MP 2013). In fact, it covers even less space than the Mac Pro 2013 did (folks with a strong preference for a non-Apple monitor, including headless; an inherent desire for a 2-GPU set-up; multi-tenant virtualization; "better than" SSD performance relative to contemporary Macs (the iMac Pro is in roughly the same range as the MBP 2018 models... as long as there is only one internal SSD it is going to be hard to open a significant gap); etc.).
It isn't so much that Apple wants to get out. It is whether the market is shrinking as people move to new generations of technology. Folks moved from mainframes, to "mini" computers (really mid-size), to "personal" desktop computers, to personal laptops, to handheld computers. What matters is which trends are up and which are down (or grossly stagnant), coupled with opportunity and profitability. Being in something largely as a "placeholder" in a 'wide' line-up isn't really an objective for Apple (and that isn't really a change since the "return of Jobs"... it's been a couple of decades at this point).
There are lots of workloads where Apple has no horse, i.e. enterprise outside of extreme niches (that one Mac mini datacenter).
There are lots of lower-end Macs where Apple has no horse either. The Mini is out of the $300-700 desktop market. There is no $500-700 laptop. No xMac. Etc. The data center really doesn't make much sense for a GUI-focused operating system; in the standard context there is no operator sitting in front of the GUI console of the server. Mac colocation is more a small-to-medium-business thing than an enterprise ("mega war bucks budget") one: either a mix of developers logging into yet another macOS instance (scaled developer seats), which is distinctly GUI-focused, or Q/A test farms (not GUI per se, but they have to be made rentable/sharable if not centrally located). That is a shared cost of the system (which keeps the Mac average sale price higher, but still affordable once the cost is spread).
The strategic issue for Apple, though, is that they have designed themselves "into a corner" a couple of times now. That is coupled with the fact that the system upgrade rate is slowing down, so correcting out of the corner is taking longer. If they give themselves a standard PCIe slot, there are more corners they can work themselves out of in a timely fashion. They don't have to rigidly make everything a socket, but balancing the system integration better would be something. [Even if they need their own card, the R&D to do that when there is a reasonable socket to plug into will be less onerous than doing another whole new system, with all of the functional-unit touch points inside Apple that it might loop in.]
Apple used to have a 'fairly robust' set of server/workstation offerings, hardware and software. Now all they have is a punchline; whether purposeful or reactionary, legitimate or not, it is what it is. It's not so much a criticism as an observation.
That is more criticism than observation. Apple's server offerings consisted of a single 1U model. 2-4U models? No. Hot-swappable dual power supplies for 24/7/365 uptime? No. Servers with 3+ CPU packages? No. 'Hot' RAS/failover? Not really.
AppleShare (AFP) and a couple of other Apple-specific services really weren't a robust nucleus. The highly standards-based stuff basically fell into the "on the Internet, nobody knows you're a dog" category: as long as the correct bits come back, it doesn't matter (more of a 'race to the bottom', and Linux is free; not much of a lower bottom than free beer).
And workstation offerings? When did they have more than one? Sequential upgrades to a single product are not a robust line-up. Actually going to two workstation-zone offerings with limited overlap would be robust, in the context of mixed customer demand in the workstation sector.
Being amenable to tinkering doesn't necessarily make something a robust set.
Without knowing what is going on internally, and just watching the time slip through the hourglass between the bits of information that are shared, I truly hope it is 'not consuming too many resources'; otherwise they would have had something by now. If they need the time AND a modest amount of resources, then I suspect they are aiming for 'revolutionary', and in this particular instance that will inevitably lead to disappointment (as modern-Apple 'revolutionary' means proprietary or incompatible).
Very often, folks who are dogmatically devoted to "form over function" label "function over form" as being revolutionary. If you're inside that dogma, then pragmatically the label is somewhat applicable. If you're not, then it is more like evolution than revolution.
And if there is a functional driver for why something ends up being proprietary, that is far more an example of "function over form".