Author: parsec
Subject: SM951 as OS boot device information
Posted: 07 Jan 2017 at 11:27am
I can tell you (probably) why you have such poor performance. What RAID 0 stripe size did you use?
I believe you said you were using a 64K stripe? Or did you use the default 16K stripe size? As I recall, the TweakTown suggestion for the 64K stripe was for SATA RAID 0 arrays, although I may have missed the article you are referring to. Your performance with two 960 EVOs is less than a single drive, as I imagine you know. BTW, what capacity is each of your 960 EVOs? They look like the 250GB model.
When using the IRST RAID driver for a RAID array of PCIe NVMe SSDs, you don't need to, and cannot, install the Samsung NVMe driver. There are two reasons for that, though only one really matters. First, the IRST RAID driver acts as the NVMe driver for a RAID array of PCIe NVMe SSDs, so no separate NVMe driver is needed. Second, Samsung's software cannot recognize its own drives when they are in a RAID array; the same is true of the Magician software. I cannot install the Samsung NVMe driver either; that's normal and not the problem. An NVMe driver is simply not needed in this case.
I've been holding out on you, and being a hypocrite, as you shall see. Given all the (apparent) problems installing Samsung 960 drives (not only you) I became worried Samsung changed something that was the cause. So I was inspired, and had to see for myself. Yesterday, two 500GB 960 EVOs arrived at my door. I installed Windows 10 on a RAID 0 array of the two 960 EVOs, no problem at all. Of course I have experience doing this, a huge help. I used the 128K stripe size.
I ran ATTO, which I usually do not use, but did so to compare with others using 960 EVOs with ATTO. This is my result:
[Image: ATTO benchmark results for the 960 EVO RAID 0 array]
On the far right side of the bar graph, the Write and Read columns have a numeric value for each row of the bar graph. The maximum read speed is 3.638454GB/s at the 128KB file size, and the maximum write is 3.122887GB/s at the 4MB file size.
I ran a CrystalDiskMark test for you, to compare.
[Image: CrystalDiskMark results for the 960 EVO RAID 0 array]
You can see the Intel RAID software, or possibly a limitation of the chipset, hits a wall at ~3.5GB/s for the large file sequential read speed. So we get virtually no RAID 0 scaling of read performance with an SSD that is already at 3GB/s+ read speed.
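For what it's worth, that ~3.5GB/s ceiling lines up with the chipset's DMI 3.0 uplink to the CPU, which is electrically a PCIe 3.0 x4 link. This is my assumption about where the bottleneck sits (both M.2 slots hang off the chipset on these boards), not anything Intel states for this case, but the back-of-envelope math fits:

```python
# Rough DMI 3.0 bandwidth estimate (assumption: the RAID array's data
# flows through the chipset's DMI uplink, which behaves like PCIe 3.0 x4).
lanes = 4
gts_per_lane = 8.0           # PCIe 3.0 signaling: 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line encoding overhead
raw_gbps = lanes * gts_per_lane * encoding   # usable gigabits per second
raw_gbs = raw_gbps / 8       # gigabytes per second

print(f"DMI 3.0 raw bandwidth: {raw_gbs:.2f} GB/s")  # ~3.94 GB/s
```

Subtract protocol and packet overhead from that ~3.94GB/s raw figure and the ~3.5GB/s wall we both see is about what you would expect, no matter how fast the SSDs on the other side are.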
We do see nice performance scaling for the write speed, doubled from the specs for this SSD, even at the 32KB file size. I was hoping the IRST version 15 software had improved the read performance, over the IRST version 14 software, the initial version that supports PCIe NVMe SSDs in RAID.
I'm a hypocrite, because I'll say that after the Windows installation, installing drivers and software was so fast, I was surprised. The fastest I've ever experienced. Yes, I'm using a Z270 board, which makes NO difference compared to a Z170 board. BTW, do you know what board Samsung used for their performance specifications of the 960 series SSDs? The ASRock Z170 Extreme7+.
Sorry to say, you cannot change the stripe size of a RAID 0 array without creating it again. That means destroying your OS installation when you delete the RAID array, which is the only way to do it.
What is wrong on your side? Did you install the IRST 15.2 F6 driver during the installation? I did. Do you have any SATA SSDs on the SATA ports shared with the M.2 slots? They should be ignored AFAIK, but I've never tested that myself. I hate to say I told you so, but if you don't use the 128K stripe size, you get what you got. It was like that with IRST version 14, and no change with IRST 15.
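To illustrate why stripe size matters, here is a toy model of RAID 0 striping in general (round-robin distribution of stripe-sized chunks across members), not Intel's actual implementation:

```python
# Toy RAID 0 model: logical blocks are distributed round-robin across
# member drives in stripe-sized chunks.
def member_for_offset(offset_bytes, stripe_kib, n_members=2):
    """Which member drive services I/O at this logical offset."""
    stripe = stripe_kib * 1024
    return (offset_bytes // stripe) % n_members

# With a 128K stripe, consecutive 128K sequential requests each land
# whole on one drive, alternating between the two SSDs:
reads = [member_for_offset(i * 128 * 1024, 128) for i in range(4)]
print(reads)   # [0, 1, 0, 1]

# With a 16K stripe, a single 128K request is chopped into 8 chunks,
# 4 per drive, so each drive sees more, smaller transfers:
chunks = [member_for_offset(i * 16 * 1024, 16) for i in range(8)]
print(chunks)  # [0, 1, 0, 1, 0, 1, 0, 1]
```

The larger stripe keeps each NVMe SSD working on big transfers, which is where these drives hit their rated speeds; the small default stripe fragments every large request, which matches the poor numbers you are seeing.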
Also, you need to configure the Cache mode of the RAID array, which is Off/disabled by default. Sorry, but this will NOT 100% fix your situation; it will improve it IF you are using the 64K stripe size. I'm using the Write back option. You can only configure the Cache mode if you have the IRST Windows utility installed.
WARNING!!! You MUST, MUST have at least one SATA drive connected to one of the available SATA ports BEFORE you attempt to install the IRST Windows software. If you don't, the installation will freeze about 2/3 of the way through, and if it does complete, it will say an unknown error occurred. The IRST Windows program will NOT run in that situation. You can't fix it by adding a SATA drive after the fact; at least that never worked for me.
Can you understand now why I am reluctant to have those not familiar with the Intel RAID software and configurations use it for the first time with PCIe NVMe SSDs? I spent days playing with RAID 0 arrays of Samsung 950 Pros before I even dared to install an OS on one for the first time. And I've used the IRST software with SATA drives for years. How many times did I install Windows on a RAID 0 array of 950 Pros before I was happy with it? At least three times. But I'm trying to help people who have never even installed Windows on a single PCIe NVMe SSD, much less a RAID 0 array of NVMe SSDs. Sorry for my rant, but it's not easy. Everything I know was learned by trial and error, and believe me, there were plenty of errors.