If he already has MP51.0089.B00, a Titan XP with the web drivers running can probably work from 10.13.6; you are correct. If not, he needs to install an Apple OEM or a Mac EFI card to get to MP51.0089.B00 first.

Some NVIDIA cards don't work for this; I had problems with a TITAN X and a non-reference eVGA 970, but it's worth trying before anything else.

Yeah, that's my understanding as well. It really depends on his current firmware version. If he needs to do a two-step firmware update, then a Mac EFI card is required.
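For reference, a minimal sketch (assuming macOS with system_profiler available and Python installed) of reading the current BootROM version before deciding whether the two-step path is needed; the MP51.0089.B00 comparison is only illustrative:

```python
# Read the Boot ROM version reported by system_profiler and compare it to the
# MP51.0089.B00 version mentioned above. Older macOS releases label the field
# "Boot ROM Version"; this is a rough helper, not an official tool.
import subprocess

def boot_rom_version() -> str:
    out = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if "Boot ROM Version" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    current = boot_rom_version()
    print(f"Current BootROM: {current}")
    if current.startswith("MP51.0089"):
        print("Already on MP51.0089.B00 - a single firmware step may be enough.")
    else:
        print("Earlier BootROM - an Apple OEM / Mac EFI card is likely needed first.")
```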
 
The RX 5500 XT is PCIe 4.0 x8 electrically.

So do not buy the 4GiB model unless it is for a PCIe 4.0 system, or unless you only have 8 lanes available anyway.

It seems that this card was intended as an APU upgrade or for eGPU use.
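For context, a rough sketch of the per-direction bandwidth an x8 link gives you on each PCIe generation, using the usual nominal per-lane figures (approximations, not measurements):

```python
# Approximate per-direction bandwidth of an x8 link across PCIe generations,
# to show what an x8-only card gives up on older slots. Textbook nominal
# figures only.
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}  # line-code efficiency
GT_PER_S = {"2.0": 5.0, "3.0": 8.0, "4.0": 16.0}                # transfer rate per lane

def link_gbytes_per_s(gen: str, lanes: int) -> float:
    # GT/s * efficiency gives Gbit/s per lane; divide by 8 for GB/s.
    return GT_PER_S[gen] * ENCODING[gen] / 8 * lanes

for gen in ("2.0", "3.0", "4.0"):
    print(f"PCIe {gen} x8 ~ {link_gbytes_per_s(gen, 8):.1f} GB/s")
# PCIe 2.0 x8 ~ 4.0 GB/s
# PCIe 3.0 x8 ~ 7.9 GB/s
# PCIe 4.0 x8 ~ 15.8 GB/s
```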
 
The 5500XT is just considered a mid-level GPU (at the RX580 level) today; PCIe 2.0 x8 should be good enough for most usage without showing any noticeable bottleneck.
 
The 5500XT is just considered a mid-level GPU (at the RX580 level) today; PCIe 2.0 x8 should be good enough for most usage without showing any noticeable bottleneck.
A German site showed there can be big slowdowns with 3.0 if the VRAM is not enough.
 
The 5500XT is just considered a mid-level GPU (at the RX580 level) today; PCIe 2.0 x8 should be good enough for most usage without showing any noticeable bottleneck.
I've read several reviews of 5500 cards and liked the reference card a lot, but got the impression that it is a much-improved RX 560 successor and that AMD is positioning the card more in that space than in the RX 580 one.
 
That's running out of VRAM, which is not a normal way to use a graphics card.

When running out of VRAM, the whole workflow is slowed down a lot anyway.
Some newer games exceed 4GiB of VRAM by a bit; that is normal.
I've read several reviews of 5500 cards and liked the reference card a lot, but got the impression that it is a much-improved RX 560 successor and that AMD is positioning the card more in that space than in the RX 580 one.
It is not an RX 560 successor. You can find 75W RX 460, RX 560D, and RX 560 cards.
 
Some newer games exceed 4GiB of VRAM by a bit; that is normal.

If the game uses more than 4GB of VRAM, then the FPS will drop a lot anyway, regardless of PCIe 2.0 or 4.0.

The point is, if you need more than 4GB of VRAM, buy a card that has more than 4GB of VRAM.

If gaming with a card that has only 4GB of VRAM, use settings that demand less than 4GB of VRAM.

But don't run a game that needs more than 4GB of VRAM with a 4GB card on a PCIe 4.0 system, while avoiding doing the same thing on a PCIe 2.0 system.

The way to handle this situation should be independent of the PCIe standard.

IMO, PCIe 2.0 x8 is good enough as long as the card can deliver its full speed (or very close to it) under normal conditions.

It's like intentionally making your computer run out of RAM, then saying that using NVMe (for swap) is so much better than using an HDD. Yes, for that particular situation it is correct. However, we should not let the computer run out of RAM in the first place. NVMe is better than HDD, just like PCIe 4.0 is better than PCIe 2.0, but not for that reason.
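To illustrate the point, here is a back-of-envelope sketch (the per-frame spill size is a made-up assumption, not taken from any benchmark) of why an out-of-VRAM run ends up measuring the PCIe link rather than the GPU:

```python
# If every frame has to pull spilled textures from system RAM, the PCIe link
# alone caps the frame rate. The 0.25 GB-per-frame spill is a hypothetical
# figure for illustration only.
LINK_GB_S = {"PCIe 2.0 x8": 4.0, "PCIe 3.0 x8": 7.9, "PCIe 4.0 x8": 15.8}
spill_per_frame_gb = 0.25  # hypothetical overflow beyond the card's 4GB of VRAM

for name, bw in LINK_GB_S.items():
    # Ignoring all GPU work, the link can move bw / spill frames' worth of
    # overflow data per second, so this is an upper bound on FPS.
    print(f"{name}: link-imposed FPS ceiling ~ {bw / spill_per_frame_gb:.0f}")

# The ceiling scales directly with link bandwidth, which is exactly the
# 3.0-vs-4.0 gap such a benchmark reports. Keep the working set inside VRAM
# and the link stops being the limiter.
```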
I've read several reviews of 5500 cards and liked the reference card a lot, but got the impression that it is a much-improved RX 560 successor and that AMD is positioning the card more in that space than in the RX 580 one.

Yeah, I also think AMD positions that card as the "Navi version of the RX560" (the low-end card in that family). But what I was saying is that the 5500XT is at the RX580 performance level (not at the RX580 position in AMD's marketing strategy).
 
If the game uses more than 4GB of VRAM, then the FPS will drop a lot anyway, regardless of PCIe 2.0 or 4.0.

The point is, if you need more than 4GB of VRAM, buy a card that has more than 4GB of VRAM.

If gaming with a card that has only 4GB of VRAM, use settings that demand less than 4GB of VRAM.

But don't run a game that needs more than 4GB of VRAM with a 4GB card on a PCIe 4.0 system, while avoiding doing the same thing on a PCIe 2.0 system.

The way to handle this situation should be independent of the PCIe standard.

IMO, PCIe 2.0 x8 is good enough as long as the card can deliver its full speed (or very close to it) under normal conditions.

It's like intentionally making your computer run out of RAM, then saying that using NVMe (for swap) is so much better than using an HDD. Yes, for that particular situation it is correct. However, we should not let the computer run out of RAM in the first place. NVMe is better than HDD, just like PCIe 4.0 is better than PCIe 2.0, but not for that reason.
If most games that you play fall under 4GiB, it can be acceptable not to go for 8GiB.

The game shown runs at 25fps with 3.0 but 40fps with 4.0. 30fps would be the bare minimum.

The lowest refresh rate supported by current FreeSync monitors is 30Hz. There are 14 such monitors, while 151 go down only to 40Hz.
 
If most games that you play fall under 4GiB, it can be acceptable not to go for 8GiB.

The game shown runs at 25fps with 3.0 but 40fps with 4.0. 30fps would be the bare minimum.

Do you understand? Intentionally running software that demands more VRAM than the graphics card has is not a proper way to determine whether PCIe 2.0 is good enough for this particular card.

When running out of VRAM, the GPU has to work with system RAM via the PCIe connection. Therefore, that particular gaming benchmark is pretty much measuring the bandwidth difference between PCIe 3.0 and PCIe 4.0. Of course it will achieve that result.

If you want more than 30FPS, lowering the settings to keep the VRAM demand below 4GB is the correct way to handle that situation, not letting the card run in a PCIe 4.0 system while constantly running out of VRAM.

Also, 30FPS is your own bare minimum. Using your own standard, and a "run out of VRAM" situation, to determine that PCIe 2.0 is not good enough for the 5500XT does not sound very appropriate or objective.

If the VRAM demand is under 4GB, and the 5500XT can do 100FPS on a PCIe 4.0 x8 system (in a GPU-limited situation) but only 50FPS on a PCIe 2.0 x8 system, then yes, PCIe 2.0 x8 isn't good enough for the card.

However, you make the situation PCIe-bandwidth-limited, not GPU-limited, and then say PCIe 2.0 is not good enough for the card. That's meaningless.

If this logic works, then you could lower the resolution and graphics settings to minimum and say the X5690 is not good enough for the 5500XT, because the game can only run at 100FPS with an X5690 but 200FPS with a 9900X. And my monitor is a 144Hz monitor, so 144FPS is my bare minimum...

The problem is, you intentionally use an improper setting to make the situation "not GPU-limited", and then use your own standard to decide if the system is good enough for that GPU. In my example, the situation becomes CPU-limited, and I conclude "the X5690 is not good enough for the 5500XT". As you can see, it's not a proper conclusion, because we are not really using the graphics card properly in the first place.

Anyway, you can have your own opinion; you don't have to agree with me. But please do not present "the 5500XT needs PCIe 4.0 to perform" as a fact. According to your description, that is only true in the "running out of VRAM" case (and in fact even that case is not true, because the card still can't perform when it runs out of VRAM; it's just that, when running out of VRAM, this card runs better with a PCIe 4.0 x8 connection than a PCIe 3.0 x8 connection).

IMO, if you want to help others, it is better to post the link so they have a full picture of what's happening, and let them decide if the 5500XT is a good card in their own situation.
 
Do you understand? Intentionally running software that demands more VRAM than the graphics card has is not a proper way to determine whether PCIe 2.0 is good enough for this particular card.

When running out of VRAM, the GPU has to work with system RAM via the PCIe connection. Therefore, that particular gaming benchmark is pretty much measuring the bandwidth difference between PCIe 3.0 and PCIe 4.0. Of course it will achieve that result.

If you want more than 30FPS, lowering the settings to keep the VRAM demand below 4GB is the correct way to handle that situation, not letting the card run in a PCIe 4.0 system while constantly running out of VRAM.

Also, 30FPS is your own bare minimum. Using your own standard, and a "run out of VRAM" situation, to determine that PCIe 2.0 is not good enough for the 5500XT does not sound very appropriate or objective.

If the VRAM demand is under 4GB, and the 5500XT can do 100FPS on a PCIe 4.0 x8 system (in a GPU-limited situation) but only 50FPS on a PCIe 2.0 x8 system, then yes, PCIe 2.0 x8 isn't good enough for the card.

However, you make the situation PCIe-bandwidth-limited, not GPU-limited, and then say PCIe 2.0 is not good enough for the card. That's meaningless.

If this logic works, then you could lower the resolution and graphics settings to minimum and say the X5690 is not good enough for the 5500XT, because the game can only run at 100FPS with an X5690 but 200FPS with a 9900X. And my monitor is a 144Hz monitor, so 144FPS is my bare minimum...

The problem is, you intentionally use an improper setting to make the situation "not GPU-limited", and then use your own standard to decide if the system is good enough for that GPU. In my example, the situation becomes CPU-limited, and I conclude "the X5690 is not good enough for the 5500XT". As you can see, it's not a proper conclusion, because we are not really using the graphics card properly in the first place.

Anyway, you can have your own opinion; you don't have to agree with me. But please do not present "the 5500XT needs PCIe 4.0 to perform" as a fact. According to your description, that is only true in the "running out of VRAM" case. IMO, if you want to help others, it is better to post the link so they have a full picture of what's happening, and let them decide if the 5500XT is a good card in their own situation.
30fps has been the console standard for a long time.

Why do you think AMD has implemented caching in their GPUs? So that people can buy cheaper ones with less VRAM.

If someone prefers Ultra at 40fps to High at 60fps it is their choice.

You want a link? Here:

 
If 4GB of VRAM is too limiting, then the conclusion should be "4GB of VRAM is not enough", not "PCIe 2.0 is not good enough".
30fps has been the console standard for a long time.

Why do you think AMD has implemented caching in their GPUs? So that people can buy cheaper ones with less VRAM.

If someone prefers Ultra at 40fps to High at 60fps it is their choice.

You want a link? Here:


Please see this

When using a graphics card properly, PCIe 2.0 vs 3.0 vs 4.0 doesn't really matter.

This test was run with a 5700XT, which is stronger (more demanding) than a 5500XT. And PCIe 2.0 x16 provides pretty much the same bandwidth as PCIe 3.0 x8.

If a 5700XT can perform with PCIe 2.0 x16, there is no way a PCIe 3.0 x8 connection can bottleneck a 5500XT that much.

The only reason that happens is because that user is not benchmarking the graphics card, but creating a situation that is basically benchmarking the PCIe connection's bandwidth.
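For anyone who wants to check the x16-vs-x8 equivalence, a quick sketch using nominal per-lane figures (approximations only, not measurements):

```python
# Rough sanity check of the "PCIe 2.0 x16 ~ PCIe 3.0 x8" claim, using the
# usual approximate per-lane bandwidth (GB/s per direction after line coding).
PER_LANE_GB_S = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.97}

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16), ("4.0", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{PER_LANE_GB_S[gen] * lanes:.1f} GB/s")

# PCIe 2.0 x16: ~8.0 GB/s and PCIe 3.0 x8: ~7.9 GB/s are effectively the same
# link budget, so a test that is fine at 2.0 x16 says the same about 3.0 x8.
```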
 
If 4GB of VRAM is too limiting, then the conclusion should be "4GB of VRAM is not enough", not "PCIe 2.0 is not good enough".
4GiB of VRAM would not be good enough. But PCIe 2.0 x16 or 3.0 x8 can also not be good enough.

I have run a TB1 eGPU. Better than buying a Retina MBP.
 
Please see this

When using a graphics card properly, PCIe 2.0 vs 3.0 vs 4.0 doesn't really matter.

This test was run with a 5700XT, which is stronger (more demanding) than a 5500XT. And PCIe 2.0 x16 provides pretty much the same bandwidth as PCIe 3.0 x8.

If a 5700XT can perform with PCIe 2.0 x16, there is no way a PCIe 3.0 x8 connection can bottleneck a 5500XT that much.

The only reason that happens is because that user is not benchmarking the graphics card, but creating a situation that is basically benchmarking the PCIe connection's bandwidth.
This misses the point. The 5700 XT is an 8GiB card with PCIe 4.0 x16.
 
This misses the point. The 5700 XT is an 8GiB card with PCIe 4.0 x16.

No, your original post is about PCIe 3.0 x8 not being good enough for the 5500XT.

If it's a VRAM capacity problem, then get a card that has more VRAM; don't blame the PCIe version.

You should not mix PCIe bandwidth with VRAM capacity.

They are two different problems.

And as I said in my last post, PCIe 2.0 x16 provides very similar bandwidth to PCIe 3.0 x8, and that test included PCIe 2.0 x16. Therefore, if PCIe 2.0 x16 is good enough for the 5700XT, that implies PCIe 3.0 x8 is good enough for the 5700XT, which implies PCIe 3.0 x8 is good enough for the 5500XT.

This proves PCIe 3.0 x8 is good enough for the 5500XT in general.

You should not blame the PCIe version for what happens when running out of VRAM.

If your logic works, then in a CPU-limited case, that CPU is not good enough for the 5500XT. In an I/O-limited case, that HDD / SSD is not good enough for the 5500XT. When running out of system RAM, those DIMMs are not good enough for the 5500XT...

You should not mix different problems into one. Too little VRAM is too little VRAM; don't mix that up with the PCIe version.
 
No, your original post is about PCIe 3.0 x8 not being good enough for the 5500XT.

If it's a VRAM capacity problem, then get a card that has more VRAM; don't blame the PCIe version.

You should not mix PCIe bandwidth with VRAM capacity.

They are two different problems.

And as I said in my last post, PCIe 2.0 x16 provides very similar bandwidth to PCIe 3.0 x8, and that test included PCIe 2.0 x16. Therefore, if PCIe 2.0 x16 is good enough for the 5700XT, that implies PCIe 3.0 x8 is good enough for the 5700XT, which implies PCIe 3.0 x8 is good enough for the 5500XT.

This proves PCIe 3.0 x8 is good enough for the 5500XT in general.

You should not blame the PCIe version for what happens when running out of VRAM.

If your logic works, then in a CPU-limited case, that CPU is not good enough for the 5500XT. In an I/O-limited case, that HDD / SSD is not good enough for the 5500XT. When running out of system RAM, those DIMMs are not good enough for the 5500XT...

You should not mix different problems into one. Too little VRAM is too little VRAM; don't mix that up with the PCIe version.
4GiB costs 30 euro less. Some people would want to save that.

If they don't have PCIe 4.0, they should get a PCIe 3.0 x16 card with at least 4GiB if they have 16 lanes.
 
4GiB costs 30 euro less. Some people would want to save that.

If they don't have PCIe 4.0, they should get a PCIe 3.0 x16 card with at least 4GiB if they have 16 lanes.

If 30 euro (or ~$33 USD) is your limiting factor, then whether you have PCIe 2.0, 3.0, or 4.0 is a moot point. As long as you have an x16 connection, most individual users aren't likely to saturate the PCIe bus unless they choose to exceed the GPU's VRAM, which requires using system RAM.

I use an 8GB RX580, and at 1440p when gaming I rarely exceed 4GB even on Ultra settings (~40-55 FPS). 4K is where you'd need that additional VRAM in gaming, but even then I doubt $33 is a concern for someone trying to achieve 60+ FPS at 4K. Just my two cents.
 
4GiB costs 30 euro less. Some people would want to save that.

If they don't have PCIe 4.0, they should get a PCIe 3.0 x16 card with at least 4GiB if they have 16 lanes.

If you want to save that 30 euro, just lower the settings a bit. Then maybe you can enjoy 60FPS gaming, not just 30, even on a PCIe 2.0 x8 system. Is this a more reasonable solution for better gaming?
 
If you want to save that 30 euro, just lower the settings a bit. Then maybe you can enjoy 60FPS gaming, not just 30, even on a PCIe 2.0 x8 system. Is this a more reasonable solution for better gaming?
Some people might prefer to play Ultra at 40fps.

Anyway, you can buy an RX580 8GiB 3.0 x16 for 35 euro less than an RX 5500 XT 4GiB 4.0 x8.

It would also be better for compute.
I use an 8GB RX580, and at 1440p when gaming I rarely exceed 4GB even on Ultra settings (~40-55 FPS). 4K is where you'd need that additional VRAM in gaming, but even then I doubt $33 is a concern for someone trying to achieve 60+ FPS at 4K. Just my two cents.
As the video shows, it is possible for some games to exceed 4GiB in 1080p Ultra.
 
Significant differences are also seen at lower settings and with the 8GiB model.

I think sometimes it might not be about overflowing the VRAM, but cycling it very fast. Unfortunately, no VRAM numbers are shown.

Not a good video title; it would not be 3.0 crippling the card but x8:

 
VII support came late in the game, but maybe it can work. You need to start from MP51.0089.B00 and 10.13.6. If you have an earlier BootROM version, you need to install your Apple OEM GPU and upgrade to it.

OK, I upgraded to the MP51.0089.B00 ROM using an old GT 120 card, rebooted into High Sierra with my Titan Xp and Nvidia web drivers, upgraded to the 144.x.x.x ROM with no problem, then continued with the Mojave installation using the Titan Xp card.

Once the install was complete, I replaced the Titan Xp with the Radeon VII, booted into Mojave, removed all traces of the Nvidia web driver and CUDA, and now I'm on 10.14.6 without having to do a clean install. PERFECT!

Thanks guys!
 

I bought a used Sapphire AMD Radeon Vega 56 and installed it in the Mac Pro in my signature, connected by HDMI cable to my 20" display. I get a black screen, but it seems to me that the OS has loaded it correctly (Mojave 10.14.6, latest).
What can I do?
 
I bought a used Sapphire AMD Radeon Vega 56 and installed it in the Mac Pro in my signature, connected by HDMI cable to my 20" display. I get a black screen, but it seems to me that the OS has loaded it correctly (Mojave 10.14.6, latest).
What can I do?
Test via screen sharing from another Mac, use a DVI to HDMI adapter, etc.
 