I came here just to post this: be careful with the new firmware. I received my new drive today and it came with the new firmware, and it's been nothing but trouble. All my other drives work fine, and that's because they have the previous firmware.
Do not update your Samsung 960:
http://www.bit-tech.net/news/tech/s...o-nvme-m2-drives-get-a-bad-firmware-update/1/

I'll be back later, but right now I'm trying to solve this problem, because once the drive has the new firmware it cannot be downgraded, so I have a new, useless, unreliable drive.

I will have to send it back and get one with the previous firmware.
Who knows how long it will take for a new firmware, and how reliable it will be.
Right now the only firmware I trust is the one I have.
 
Yes, the bad firmware is version 3B7QCXE7.
The drive is throttling to death within less than a minute of use,
all the way down to 250 MB/s, even with thermal pads; none of my other drives do that.
Then I saw that the drive has a new firmware, which is how I found out about everything.

Yes, I'm very upset and disappointed; I can't use that drive.
I don't know how long Samsung is going to take to release a new firmware with the fix, or how reliable that new firmware is going to be. I don't understand why they don't release a downgrade firmware; it seems they are having trouble figuring out how to fix the drive with a new firmware.
Well, simply give us the option to downgrade the drive, then take all the time in the world to make a new firmware. Leaving customers hanging with a bad, unusable drive is not cool at all.
Very irresponsible.

Most likely I'm going to send the drive back and ask for a replacement, or just ask for a full refund.
The drive came with manufacture date 2017-11
(Nov 2017).
I guess 2017-10 and below are not affected because those still have the previous firmware, but just to be safe
I will ask for anything 2017-09 or earlier.
PROs and EVOs are both affected, all sizes:
https://us.community.samsung.com/t5/Memory-Storage/960-pro-firmware-3B6QCXP7-its-a-crap/td-p/217627
Why would they release drives with bad firmware? Gawd what a cluster****
I hope my drive doesn't explode :D
 
Did you notice that Apple just released an update that will let you log in to Apple OSX as root if you just click the login box a few times without entering a password?

Give Samsung the same break that you're probably giving Apple - at least they didn't ship new firmware that will do a security erase and destroy all of your data.

And, by the way, list all of the Apple devices that support firmware downgrades. ;)
 
Yes, I know about the root password bug that required 2 patches: the first one to fix the root problem, but then that patch also created a problem with file sharing. I don't remember exactly the description of the 2nd problem right now, but everything was posted here in the Mac forums, so I'm aware of the situation.

I'm not attacking Samsung, I'm just saying. I'm sure many people would feel the same way if they paid money for something, waited for the device to arrive, and then found out it has a problem from the factory.

The downgrade comment was a suggestion, because they are taking their time like everything is fine. If they can't offer an upgrade to fix the problem, then offer a downgrade, but try to fix the problem some way, somehow; don't wait until 2018, people need to use their computers.
What about the people who have just a single, now unusable, drive?
I understand what you are trying to say, but I'm not a Samsung hater. I have 6 Samsung 960 EVOs, so clearly I like Samsung and I buy their products, but I'm honest, and we have to be realistic and fair.

I can't call a ball if it's a strike.

I love Apple, but when they screw up there is no point in defending them.
I just hope that Samsung releases the new firmware soon and that it fixes the problem for good.

On top of everything, if I want to send the drive back I have to pay the return shipping.
And it doesn't end there: the company also charges a 15 or 20% restocking fee.

Really? I received a defective drive and now it's going to cost me more money, so I will end up with something like 75 or 80% of what I paid for the drive. It's not my fault, I didn't flash the drive.

I honestly think that none of that is fair.
I'll just throw that disk to the side and order my last 3 at once, but first I will check the manufacturing date with the seller, just to make sure they don't have the defective firmware.
 
It sounds like you're saying that gen 3 to gen 3 switching doesn't require 128b/130b conversion? Or, more likely, the receiver needs to decode 128b/130b to do the switching, but it doesn't need to re-encode it to pass it on to the 3.0 device? Encoding 8b/10b is much less taxing (a simple lookup for the 10b)? In the case of a 3.0 host and a 2.0 device, doesn't the PLX need to encode 128b/130b when transmitting from the 2.0 device to the 3.0 host? How is that less of a problem than going from a 2.0 host to a 3.0 device?

It always needs to convert, which overall is not really an issue beyond added latency (more than just switching lanes within the same gen; the 2.0 chips add latency on a 1.1 source/destination too, but it's much lower, even with older tech and die-shrink improvements later on). On the cheaper SKUs (or when not enough heatsink/cooling is provided; the Squid 2.0 can suffer this in some cases, though not likely in a cMP) and at the extreme end (96+ lanes, which SHOULD have a fan, or specifically note the need for chassis airflow, as passive server GPUs do) you see TDP issues that can cause real performance bottlenecks. That can obviously also happen within the same gen, but it's less likely because the TDP is kept low overall. This is well documented in the PLX docs that Avago/Broadcom moved around like 5 times and then removed; Mouser and some other stores do have copies for some parts though - google for the chip (subtract or add 5 to the part number in the case of certain 'custom' SKUs, like on cheap eBay China boards) and PDF.
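For reference, the encoding difference is also where the per-lane numbers come from: 8b/10b (gens 1.x/2.0) is 80% efficient, 128b/130b (3.0) is ~98.5%, so a gen-converting switch has to strip one encoding and re-apply the other. A quick back-of-the-envelope sketch, ignoring link-layer and protocol overhead:

```python
# Usable bandwidth per lane = raw line rate x encoding efficiency
# (before link-layer/protocol overhead).
GENS = {
    "1.1": (2.5e9, 8 / 10),     # 2.5 GT/s, 8b/10b
    "2.0": (5.0e9, 8 / 10),     # 5 GT/s, 8b/10b
    "3.0": (8.0e9, 128 / 130),  # 8 GT/s, 128b/130b
}

for gen, (rate, eff) in GENS.items():
    gbs = rate * eff / 8 / 1e9  # bits/s -> GB/s
    print(f"PCIe {gen}: {gbs:.3f} GB/s per lane, x4 link = {4 * gbs:.2f} GB/s")
```

That prints 0.250, 0.500, and ~0.985 GB/s per lane respectively, which is why a 2.0 x4 link tops out around 2 GB/s while 3.0 x4 gets close to 4 GB/s.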

Customers do get access behind the login wall, but I don't get the deeper spec sheets and the material deemed sensitive anymore either, unlike on nearly all 2.0 SKUs, sadly. I might need to actually buy a minimum order quantity of the gen I want data for, but I'll wait for the first 4.0 chips, which do the same and will be more useful to me later on.


Enterprise servers massively overcommit PCIe and SAS/SATA lanes because the TBs are usually more important than the GB/sec.

The standard HPE ProLiant NVMe card is a 32 lane PLX switch wired as a PCIe 3.0 x8 slot to six PCIe 3.0 x4 drives. So, yes, you're limited to *only* 8 GB/sec - but that's really a boatload of bandwidth.
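To put numbers on that oversubscription (a quick sketch, assuming the six-x4-drives-behind-an-x8-uplink layout just described):

```python
# Six x4 PCIe 3.0 drives behind an x8 PCIe 3.0 uplink.
lane = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s per 3.0 lane

downstream = 6 * 4 * lane            # 24 drive-side lanes: ~23.6 GB/s
uplink = 8 * lane                    # 8 host-side lanes:   ~7.9 GB/s

print(f"downstream {downstream:.1f} GB/s vs uplink {uplink:.1f} GB/s "
      f"-> {downstream / uplink:.0f}:1 oversubscription")
```

A 3:1 ratio, which only matters if all six drives actually stream at full speed at once.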


In the real world, few (if any) workloads really require that every single component in the system must run at its theoretical max bandwidth. Some systems are OK with modest overcommitment, some with huge overcommitment.


The people here who diss "lane sharing" or "PCIe switches" are really victims of "can't see the forest for the trees".

PCIe - partially; only the new NVMe-based servers do this, often out of the simple need to switch NVMe drives and PCIe slots around. EPYC can go a very long way with 64-128 (-4) lanes available; Intel is just behind in lane count, but has fewer "weird" issues, since AMD's Infinity Fabric is somewhat... custom... compared to what we know from Opteron and Intel's QPI/UPI. The HP card is not really industry standard here; usually you'd see an x16 uplink (like on some Supermicro cards, notably excluding anything based on bifurcation, obviously).

SAS - depends; boards with an LSI RAID chip, which are still fairly common, have it at 'dedicated' 12 Gbit SAS PCIe-wise, with 6 Gbit in reality, which at PCIe 3.0 is barely x8 levels for 16-24 ports.

SATA - never seen it; every board I have has its chipset SATA wired 1:1, exactly as the chipset offers. Splitters are uncommon and more of a chassis-level thing, used where chipset SATA doesn't reach, mostly on enterprise boxes that lack SAS entirely.


I love PCIe switches for the flexibility they offer me to split or aggregate lanes; I do a lot of work with PLX RDKs, and even Intel has touched this base more now, e.g. with Z270 having 24 lanes off the PCH (others had 'splitting' before, but not at 1:6) with only the equivalent of an x4 3.0 uplink (with Intel optimizations on the customised PCIe bus, but nowhere near x8 or more).

They also have very real use cases, as we see on the nMP's TB ports (naturally at 2.0) uplinked over 3.0 (well cooled, good firmware design), or at the high end (E7 SKUs, RAS, 8-socket CPU layouts) for interesting HA setups you rarely see in public - like 2x Xeon Phi PCIe x16 cards in 4-way, PLX-controllable HA across 8 sockets (2 sockets per cluster, with the PLX uplinked to 4 clusters at x16; an off-the-shelf 96-lane PLX SKU). Most of what you see these chips used for is REALLY worlds below what they can do, which also justified the pricing under Avago (helped by certain PLX patents limiting other manufacturers' availability until licensing deals went live later).

Apple generally likes to use them as gen converters and for 1:1 switching. Yes, the cMP has slots 3/4 shared, but that is, especially in a single-CPU config, also a limitation of the lanes Westmere provided; overall, the lane-sharing use is rare.
 
The 2nd x16 Amfeltec card arrives today, but I can't find the last 3 Samsung drives with the good firmware anywhere. I got an email from Samsung telling me that most likely they will release the firmware fix for the affected drives by January 2018. I returned the last drive I bought because it had the new firmware and was very unreliable. I will keep trying; never give up, never surrender. If I don't find the drives I'm looking for, I will wait until I know for sure that the new firmware fixes the problem, and if it does, then I'll buy the last 3 drives and flash them with the new firmware.

I had a very good result in my last test before the new drive throttled and ruined everything, but I'm going to keep it a secret until I have everything ready; I don't want to spoil the surprise. I have 2 configurations, and it seems configuration #2 will be faster at writing. Oops, too much information, lol.

anyway as soon as I have everything ready I will upload a video
 
The hunt is finally over. I bought all 3 drives brand new with production date 7-2017, and I should be getting them next week if there are no delays due to the Christmas season. So now I have everything; all I have to do is wait for the last 3 disks to arrive. My next post will be my last, well, at least for 2017.
 
OK, I'll make a long story short: I get almost 11,000 read on each card individually, 10,800 to be more precise, but when I add all 8 disks together in a single RAID I get the extra penalty we talked about earlier, and instead of the ~21,600 you'd expect it comes in at 17,000. Still not bad. I have a few more things to say, but I'll leave that for another day.
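For the scaling math on those numbers: two cards at ~10,800 MB/s each would ideally stripe to 21,600 MB/s, so the measured 17,000 MB/s works out to roughly 79% scaling efficiency. A tiny sketch using the figures above:

```python
per_card = 10_800   # MB/s, each 4-drive card benchmarked on its own
combined = 17_000   # MB/s, all 8 drives striped across both cards

ideal = 2 * per_card
print(f"ideal {ideal} MB/s, measured {combined} MB/s "
      f"-> {combined / ideal:.0%} scaling efficiency")   # ~79%
```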


[Attached image: 960_X8.png]


[Attached image: Speed_Test.png]
 
OK, I'll make a long story short: I get almost 11,000 read on each card individually, 10,800 to be more precise, but when I add all 8 disks together in a single RAID I get the extra penalty we talked about earlier, and instead of the ~21,600 you'd expect it comes in at 17,000. Still not bad. I have a few more things to say, but I'll leave that for another day.
I would like to see how SoftRaid handles that compared to Disk Utility because on my Hackintosh with 4 drives, SoftRaid was 49% better than Disk Utility.
 
Can you tell me what configuration you use (optimize-for setting and stripe size)?
Maybe I'm hitting the ceiling, or it's just how it normally works:
the more disks I add, the bigger the price I pay.

Individually they both hit the same speed, 10,800, but adding both cards together it only reaches 17,000 instead of double.
It did multiply the write speed by 2, though; it just lowered the read speed.

Anyway, I can easily copy and paste 60 gigs of data from card 1 to card 2 in 15 seconds, but if I copy and paste 120 gigs of data it takes a minute. It should be just 30 seconds, but after 60 gigs it starts to get a little slower. They are still fast, but they don't sustain the highest speed for long; even the lower speed is still fast, though.
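Those copy times translate into effective throughput like this (a quick arithmetic sketch using the numbers above):

```python
# Effective throughput of the two copies described above.
copies = [("60 GB in 15 s", 60, 15), ("120 GB in 60 s", 120, 60)]

for label, gb, secs in copies:
    print(f"{label}: {gb / secs:.1f} GB/s effective")
# 4.0 GB/s for the first copy, 2.0 GB/s overall for the second;
# the second 60 GB must have averaged ~1.3 GB/s (60 GB in the remaining 45 s),
# consistent with the drives dropping to a slower sustained rate.
```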
I really hate these drives; I'm actually thinking about returning the cards and selling the drives on eBay.

I had a HighPoint RocketRAID 4520 card with 8 SSDs in RAID 0, and that was 3,200 read and 3,200 write, sustained; it never dropped. But these Samsung 960 drives throttle down even with thermal pads and fans.

I think these drives are just good for reading, not for writing.
I know I can't write that much if I want the drive to last, but when I do need to write something I want it to be fast.
I spent over 2,000, and I'm a little upset and disappointed in these drives because the speed drops off.

If I create a single RAID, the write speed is even lower for copy/paste.

Yes, the benchmark shows a higher write speed than a single card,
but copying from card to card is like copying and pasting from SATA hard drive 1 to SATA hard drive 2,
and copying and pasting within a single RAID using all 8 drives is like copying and pasting to the same drive, which is slower.

I don't even think the 1 TB versions of these drives are worth it.
What is the point of having all that space if the drive is going to throttle and get slower?
It's technically useless for data transfer.

I need to run a few more tests before I decide whether I'm going to keep the cards and the drives.
Maybe I'm being a little too hard on them because of how much I paid for everything.

I have some interesting benchmark results that show the difference in speed:

copy/paste from one 960 to the same 960
copy/paste from one 960 to another 960
copy/paste to a 7200 rpm hard drive with 64 MB cache

Seeing the difference between those tests makes me appreciate the card and the drives more.
I will post them later.
 
OK guys, I want to say goodbye for now. Merry Christmas and happy new year. I contacted Amfeltec to request a refund on both cards, so I'm sending them back; they charge 15% restocking, no problem, and I get most of my money back. There is nothing wrong with the cards; the problem is the Samsung drives. I can't deal with that throttle problem; I didn't pay 2,000 dollars to have that problem. Anyway, I will sell the drives on eBay. I kept 3 of the 9 that I had: 1 for Mac, 1 for Windows, and another one for an emergency Mac backup. Anyway, later. joevt, it has been good talking to you. :)
 
Can you tell me what configuration you use (optimize-for setting and stripe size)?
Maybe I'm hitting the ceiling, or it's just how it normally works:
the more disks I add, the bigger the price I pay.
As stated in #214, I'm using SoftRaid 5.6.3 (64K stripe size – optimized for workstation) and AJA System Test Lite 12.4.3 3840x2160 4K RED HD, 16 GB file size, 16bit RGB. My previous tests with Disk Utility used the default values.

The UI of AJA System Test Lite 12.4.3 is different than what you show in your screenshot since you are not using the Lite version.

these Samsung 960 drives throttle down even with thermal pads and fans
There's an "Expansion Slot Side Fan Mounting Kit" where you can mount two fans above the PCI cards. I don't know if that would be enough to improve this problem.

I need to run a few more tests before I decide whether I'm going to keep the cards and the drives.
Maybe I'm being a little too hard on them because of how much I paid for everything.

I have some interesting benchmark results that show the difference in speed:

copy/paste from one 960 to the same 960
copy/paste from one 960 to another 960
copy/paste to a 7200 rpm hard drive with 64 MB cache
Seeing the difference between those tests makes me appreciate the card and the drives more.
I will post them later.
I hope they'll include SoftRaid benchmarks.
 
There's an "Expansion Slot Side Fan Mounting Kit" where you can mount two fans above the PCI cards. I don't know if that would be enough to improve this problem.

I have 3 fans. I put both cards, with thermal pads, in the PCIe slots, then I put the 3 fans as close as possible to the cards; that's why I think it's not a temperature problem. It's more like a limit on the drive itself: I read somewhere that those drives are designed to throttle after you write a certain number of gigs, even if they are not hot.

I kept 3. They are good for read speed, loading the OS, etc., and a single drive will do fine just for that. I never noticed any big difference in performance between all the drives in RAID and a single drive; yes, the RAIDs are faster when it comes to numbers, but there is actually no noticeable difference loading the OS, using apps, etc.

I wanted the drives to boot the OS but also to copy data fast, but these drives are not meant for heavy writing; they will suffer a painful death due to the endurance limit they have (I don't remember exactly, it's something like 200 TB). I'd rather use a RAM disk for all the heavy writing and keep the single non-RAID drives. And anyway, with the problem High Sierra has where I can't use RAID on APFS, only RAID on HFS+, I think that's it for me.
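If anyone wants to tell thermal throttling apart from that write-so-many-gigs-then-slow-down behavior, a minimal probe is to time fixed-size writes and watch where the speed falls off. This is only a sketch: the volume path is hypothetical, and fsync per chunk is a blunt way to keep the page cache from masking the real device speed:

```python
import os
import time

# Hypothetical scratch path -- point it at the drive under test.
PATH = "/Volumes/TestDrive/throttle_probe.bin"
CHUNK = 1 << 30                   # 1 GiB per timed write
TOTAL_CHUNKS = 64                 # 64 GiB total
buf = os.urandom(1 << 20) * 1024  # 1 GiB buffer built from a random 1 MiB block

with open(PATH, "wb") as f:
    for i in range(TOTAL_CHUNKS):
        t0 = time.monotonic()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())      # force the data to the device, not the page cache
        mbs = CHUNK / (time.monotonic() - t0) / 1e6
        print(f"GiB {i + 1:>3}: {mbs:,.0f} MB/s")

os.remove(PATH)
```

If the drop always lands at roughly the same written total no matter how cool the drive is, it's the drive's cache/firmware behavior; if it tracks temperature, it's thermal.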

But I do have both of the fan mounting kits you talked about:
the one you put on top or below, and the other one that you put next to it from the side.
As stated in #214, I'm using SoftRaid 5.6.3 (64K stripe size – optimized for workstation) and AJA System Test Lite 12.4.3 3840x2160 4K RED HD, 16 GB file size, 16bit RGB. My previous tests with Disk Utility used the default values.

The UI of AJA System Test Lite 12.4.3 is different than what you show in your screenshot since you are not using the Lite version.


There's an "Expansion Slot Side Fan Mounting Kit" where you can mount two fans above the PCI cards. I don't know if that would be enough to improve this problem.


I hope they'll include SoftRaid benchmarks.
I already took everything apart and installed 2 PCIe x4 cards; now I'm just waiting for the RMA to send the Amfeltec cards back.

I'll be happy if I can get 1,500 back out of the 2,000 that I spent.
 
I have been looking through this thread but could not find any information on whether you can use Boot Camp on the Amfeltec card. I have a spare XP941 SSD which I want to use as a Windows 7 boot drive. I'm holding off on the purchase until I get more evidence that the Amfeltec supports this. Does anyone know, or have any experience?
 
I have been looking through this thread but could not find any information on whether you can use Boot Camp on the Amfeltec card. I have a spare XP941 SSD which I want to use as a Windows 7 boot drive. I'm holding off on the purchase until I get more evidence that the Amfeltec supports this. Does anyone know, or have any experience?
I think all the classic Mac Pros use the BIOS compatibility layer (part of the Boot Camp feature in a Mac's EFI) instead of EFI to boot Windows. The BIOS compatibility layer doesn't have support for NVMe. Maybe someone has made a BIOS driver that can do NVMe? Then you would use that with one of the Linux boot loaders (GRUB) or another BIOS boot loader on a hard drive in one of the Mac Pro's 4 drive bays to boot an NVMe drive. Check out EasyBCD's NeoGrub. I haven't looked into this lately, so I don't know if such support exists anywhere.

If the Mac boots Windows using EFI, then you need to add an EFI NVMe driver. People with Hackintoshes do this on PCs that can't boot NVMe. You need to get an NVMe driver from a PC or Mac that supports booting NVMe. Then an EFI boot loader (like rEFInd or Clover) on a normal drive can load the NVMe driver and use it to boot the Windows EFI boot manager on an NVMe drive. I think you can boot the Windows EFI boot manager from a drive in an M.2 slot if the drive uses AHCI instead of NVMe (only if your Mac can boot Windows using EFI).
 
AFAIK, the cMP can boot an OS from an NVMe SSD with boot redirection.

The cMP can also boot EFI Windows.

The cMP can of course boot from an AHCI SSD. The reason it can't boot Windows from a PCIe SSD is that the PCIe SSD is considered external in a cMP, and Windows doesn't like to boot from an external drive. But this can be fixed, and from memory, at least one member claimed he has done it already.

If the Amfeltec card can boot macOS, most likely it can boot Windows as well (though it may require a complicated workaround procedure).

The XP941 (AHCI) definitely can boot in a cMP. It may be the very first PCIe SSD to be widely used on the cMP (before the SM951).

However, the question is why you want to do that. There is not much benefit, but most likely lots of trouble. Booting from a SATA SSD connected to one of the native SATA ports, then installing whatever you want on the PCIe SSD, is much, much easier in general.
 
AFAIK, the cMP can boot an OS from an NVMe SSD with boot redirection.

The cMP can also boot EFI Windows.

The cMP can of course boot from an AHCI SSD. The reason it can't boot Windows from a PCIe SSD is that the PCIe SSD is considered external in a cMP, and Windows doesn't like to boot from an external drive. But this can be fixed, and from memory, at least one member claimed he has done it already.

If the Amfeltec card can boot macOS, most likely it can boot Windows as well (though it may require a complicated workaround procedure).

The XP941 (AHCI) definitely can boot in a cMP. It may be the very first PCIe SSD to be widely used on the cMP (before the SM951).

However, the question is why you want to do that. There is not much benefit, but most likely lots of trouble. Booting from a SATA SSD connected to one of the native SATA ports, then installing whatever you want on the PCIe SSD, is much, much easier in general.
Here's some info on Windows and EFI on MacPro3,1 (Mac Pro 2008) and earlier:
https://www.christopherprice.net/installing-windows-10-on-macpro31-mac-pro-early-2008-3215.html

Basically, the CSM works better for using a graphics card in Windows but the BIOS is limited to the 4 internal drive bays (and maybe some special handling of USB for installation purposes).

Later Mac Pros may have better EFI support for Windows.

I went with Windows on a SATA SSD in one of the 4 drive bays to avoid complications with Boot Camp. Those bays are only 3 Gb/s, but it's good enough (271 MB/s). I have a Hackintosh for games and can boot macOS from NVMe there. The Mac Pro 2008 is for work, and I usually just use Parallels Desktop for Windows; the virtual machine's hard drive is on an external USB 3.1 Gen 2 (6 Gb/s * 2 -> 10 Gb/s -> 761 MB/s) hardware RAID. macOS runs from an SSD on a Sonnet Tempo SSD Pro Plus (6 Gb/s -> 503 MB/s). I'm going to use the Amfeltec for something else.
 
I have been looking through this thread but could not find any information on whether you can use Boot Camp on the Amfeltec card. I have a spare XP941 SSD which I want to use as a Windows 7 boot drive. I'm holding off on the purchase until I get more evidence that the Amfeltec supports this. Does anyone know, or have any experience?


I'm running the Amfeltec Squid PCI Express Gen 2 with (4) SM951s (AHCI). It boots Windows with no issues. No workarounds required. It just works. I'm not RAIDing any of the drives; they're just 4 separate volumes: High Sierra, Sierra, Windows 10 Pro, and one spare, all running fast and furious.


[Attached image: Screen Shot 2017-12-30 at 12.06.16 PM.png]


[Attached image: Screen Shot 2017-12-30 at 11.47.53 AM.png]
 
I'm running the Amfeltec Squid PCI Express Gen 2 with (4) SM951s. It boots Windows with no issues. No workarounds required. It just works. I'm not RAIDing any of the drives; they're just 4 separate volumes: High Sierra, Sierra, Windows, and one spare, all running fast and furious.
Windows 10 on a Mac Pro 2010, booting in EFI mode?
 
Windows 10 on a Mac Pro 2010, booting in EFI mode?


Yes, exactly. Win 10 Pro 1709 with all updates, booting in EFI mode. No issues at all. It just works, and it works very well. The setup was expensive (Amfeltec card and (4) 512GB SM951 AHCI blades), but it has been very fast and stable with both the Windows and macOS installs. It has given my 2010 cMP new life for at least a few years. I do freelance CADD design and CUDA rendering for a living and stress my machine to 100% every day. It's rock solid. http://DG-Digital.com
 
I'm seriously on the verge of getting the Amfeltec card but cannot find AHCI blades anywhere. I know it's been asked in the past, but are there any new leads on M.2 AHCI cards? I'd go for the popular 4 x 512 GB blades on the Amfeltec. Any pointers most welcome; I'm just not finding them.
 