In each and every one of your replies in this topic. Things aren't as simple as you think (see below for a further explanation).
Thanks for putting words into my mouth. Not what I said, but I guess when all you have is a hammer, everything looks like a nail.
It's not about wrong or right but about knowing the details and acting on facts instead of emotion. You are acting purely on emotion and overreacting. What you are forgetting here is that 90% of the code they write for Fusion and Workstation didn't come from the team that got replaced but from their core developers, who didn't go anywhere. You are also forgetting that the virtualisation landscape has changed considerably; we now have fierce competition.
Basically that means two things: VMware can't afford to mess things up, and when they do mess things up, there are many alternatives. Therefore there is no reason whatsoever to be concerned. We've seen it in reality too, with lots of people jumping ship between the various products. Not to mention that people/companies often just buy something and stick with it.
I've worked with VMware in a professional capacity (at a partner firm, not a client), so as I said before, I'm well aware of both the effort they put into their enterprise products and the degree to which the virtualization landscape has become competitive (it certainly made my job more interesting). My concern for VMware Fusion isn't about emotion, it's about experience. Working at a large firm that develops enterprise hardware and software, I've seen the challenges that outsourcing even something as simple as the GUI can bring. Am I saying VMware Fusion is done for? Of course not, but last year's update was certainly fairly minor compared to their usual annual updates, and that doesn't inspire a lot of confidence. Still, this is just my personal opinion.
Not to be rude, but you suck at discussions. You fail to understand basic principles such as what examples are, why people use them, why you should never take things literally, and how to summarise your points instead of writing down every possible item in a list. In this case you can't even do that, because the list of workloads pros have is endless. That's why you pick one or two examples and use those.
This is not about being 100% correct by listing every possibility known to man. A discussion is about getting your point across, and for that you really do not need 100% correctness, 100% accuracy and 10,000 words per reply.
Learn the rules of discussion and abide by them; that way people won't get annoyed or offended by you.
Someone who isn't rude and understands how to have a civilized debate wouldn't begin a sentence with "not to be rude, but you suck at discussions..." Perhaps you should reflect on your own behavior before criticizing others. I've tried to be clear, if not concise, in explaining my position without attacking you; if you can't do that, then I'm not sure we have anything more to discuss.
I wouldn't call that "evenly". It's mostly listing benchmarks and other theoretical stuff. They don't go explore various workloads.
As I've said, I can appreciate that we have a difference of opinion here.
I'm sorry to say that you really did no such thing, and you still aren't doing it, because you are so incredibly pinned down on virtualisation and development. I don't know how many times I have to say it, but these are just two examples of workloads people use iMacs for.
That's what I've been saying, so thanks for admitting I'm right. If you'd like to point to any other workloads that have issues on Ryzen, be my guest.
That's not true and is a complete disservice to pros in general. Apple has been aiming at those creative professionals, and from sales figures alone one can tell that they did a great job. They didn't do a great job for all the other kinds of professionals, as Phil and Craig described in the interview. The issue isn't faster GPUs but a whole lot more. For most of the users here, the only thing that matters is upgradability. Funnily enough, the older Mac Pros were never really very upgradable to begin with. Yes, you could insert a PCIe card, but it was always a big question whether it would actually work in OS X and, if it did, whether it would work properly. Even back then people were complaining about upgradability, so that discussion is nothing new.
I don't know if you just misunderstood what was said, are deliberately exaggerating what was said, or didn't actually bother to read the transcript. Whichever it is, you're just flat-out wrong on this one. Here's a link to the full transcript (https://techcrunch.com/2017/04/06/t...-john-ternus-on-the-state-of-apples-pro-macs/), but here are some choice (abridged) quotes.
"John Ternus: I think one of the foundations of that system was the dual GPU architecture. And for certain workflows, certain classes of pro customers, that’s a great solution. But... The way the system is architected, it just doesn’t lend itself to significant reconfiguration for somebody who might want a different combination of GPUs.
That’s when we realized we had to take a step back and completely re-architect what we’re doing and build something that enables us to do these quick, regular updates and keep it current and keep it state of the art, and also allow a little more in terms of adaptability to the different needs of the different pro customers. For certain classes a single bigger GPU would actually make more sense."
"Craig Federighi: I think we designed ourselves into a bit of a thermal corner, if you will. We designed a system that we thought with the kind of GPUs that at the time we thought we needed, and that we thought we could well serve with a two GPU architecture… that that was the thermal limit we needed, or the thermal capacity we needed. But workloads didn’t materialize to fit that as broadly as we hoped.
Being able to put larger single GPUs required a different system architecture and more thermal capacity than that system was designed to accommodate. So it became fairly difficult to adjust."
Those are just two quotes, but the word GPU comes up 22 times in the transcript, and it's clear, if you read it in its entirety, that the ability to accommodate faster (hotter) single GPUs was the primary factor behind the redesign.
(Note: none of this is to say they didn't call out software developers as one of their fastest growing and possibly largest Pro groups, or that this group isn't important to them. I'm simply pointing out that this group was not the key group prompting them to rethink the nMP)
Actually, you did say that; you didn't even consider that there could be other possibilities, let alone multiple ones. That's why I mentioned IE6.
I like how you failed to think through the logic of an example you yourself threw out there, and then, when I poked holes in it, you just reverted to attacking me. Classy.
There is no misinformation; you simply didn't understand it. Just because you fail to understand something and/or someone doesn't make it misinformation. In fact, what I said there is an undeniable fact. For a switch to a different manufacturer or even a completely different architecture, that manufacturer or architecture needs to have benefits over what they are currently using. Or in other words: why would Apple switch to AMD or ARM (for their Macs) when their current partner can do the same or something similar? We are not just talking performance here; it's about other things like contracts, relationships, production, supply, pricing and probably a plethora of other things I'm not mentioning here. It is perfectly possible for Apple to switch to AMD as well as ARM (for their Macs), but it isn't very likely that they do.
No, you're obviously the one who doesn't understand. If you can't understand how switching between Intel and AMD (both x86-64) differs from switching between x86-64 and ARM (completely different architectures), then I'm not sure we can have a productive discussion until you educate yourself. Switching to ARM would be a monumentally more difficult task (not to mention much harder to reverse) for Apple than deciding to utilize AMD CPUs in their machines. The benefits of what AMD is offering have already been discussed; I'm not going to rehash them for you again here. That said, Apple regularly switched between Motorola and IBM back in the PowerPC days (with no adverse effects, I might add), so IMO, your opinion on the great difficulty of using different manufacturers' implementations of the same fundamental architecture just doesn't hold water.
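To make that concrete, here's a minimal C sketch (my own illustration, not anything from Apple or the transcript; the add4 function is hypothetical): code written against x86-64 vector intrinsics runs unchanged on any Intel or AMD chip, because both vendors implement the same instruction set, but it won't even compile for ARM without a rewrite.

```c
/* Illustration only: why a vendor swap (Intel <-> AMD) is trivial while an
 * ISA swap (x86-64 -> ARM) is not. SSE intrinsics are part of the x86-64
 * instruction set, so any Intel or AMD CPU runs this code identically. */
#include <xmmintrin.h>  /* SSE intrinsics: x86-64 only */

/* Hypothetical example routine: out[i] = a[i] + b[i] for i = 0..3. */
void add4(const float *a, const float *b, float *out) {
    __m128 va = _mm_loadu_ps(a);             /* load 4 unaligned floats  */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* vector add, store result */
}

/* On ARM this file doesn't compile: <xmmintrin.h> and __m128 don't exist
 * there. The same routine would need <arm_neon.h>, float32x4_t, vld1q_f32
 * and vaddq_f32 instead -- a rewrite, not a drop-in vendor change. */
```

And that's just one user-space function; the same cost multiplies across the OS, drivers, compilers and every third-party binary, which is why an ISA switch is the bigger bet.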
It isn't about getting macOS working on it
They probably already have it working in their labs. It isn't uncommon for manufacturers to do things like that; Microsoft does so too. It actually is vital to do research into these things. That research doesn't start and end with getting the OS running on the hardware; it's also about researching what the hardware can and cannot be used for. By doing research like that you gain knowledge which helps you decide whether you should use it and, if you are going to use it, for what.
One of the few things I think we can (mostly) agree on, although based on the rest of what you wrote, your logical process in reaching this conclusion is very different from my own.
I appreciate the contributions you've made to this thread, and I've enjoyed having this discussion with you. That said, I honestly don't think we're going to be able to see eye to eye on this. I think we've both said what we wanted to say, although if you want to stop making personal attacks, I'd be happy to discuss this issue further with you.
But that's probably not what you want, and unless you demonstrate otherwise, I think it's time we stop filling this thread with essay length replies.