> Read the context, the last part is just an extra.
I read it, but you should post pics like the last one.
I did video record the process, but it's not clear enough to post; I can upload it if needed.
> Based on the monitoring app, the RAM is not fully filled even on 4x32GB.
Which app?
iStat Menus and Activity Monitor, but I will try the RAM disk to limit RAM and see the result.
Note that many memory monitoring apps do not count file system caches as "memory in use", since it will be released if processes request more memory. You might have a much larger file cache with the extra memory.
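(To see that cache directly: Activity Monitor's Memory tab lists it as "Cached Files", and from the command line something like the snippet below should work on recent macOS versions. The awk math is only a sketch; it assumes the 4096-byte page size that vm_stat reports on Intel Macs.)
vm_stat | grep "File-backed pages"
# rough file-cache size in GB: page count times the page size from vm_stat's first line
vm_stat | awk '/File-backed pages/ {printf "File cache: %.1f GB\n", $3*4096/1e9}'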
It would be helpful to know exactly what it is you're doing when you take the pictures. If you are performing a conversion of 309 RAW files to 8-bit TIFF files then your SSD I/O seems extremely low. A read speed of 6KB/sec and a write speed of 16.7MB/sec doesn't seem to match the type of work being done. I would expect much higher numbers than this. The files being cached in memory could explain the low numbers.
(screenshot attached)
You can see from the menu bar icon that RAM usage is not filled up at all; the screenshot at the end was taken during the same task, doing the same thing.
When I took this screenshot it was in the middle of exporting TIFFs, but one RAW file is just 30-40MB, so even reading from and writing to the SSD in real time would not produce very high read and write speeds.
Maybe use this command to adjust the amount of RAM the system can use....
Example... set RAM size to 8 GB
sudo nvram boot-args="maxmem=8192"
and then reboot
To reset RAM to what is physically available then....
sudo nvram -d boot-args
and then reboot
However, I'm unsure how setting "maxmem" to less than what is physically installed influences how much of each DIMM is used to accomplish this.
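(If anyone tries this, a possible end-to-end sequence, just echoing the example above, would be something like the following. Note that writing boot-args this way replaces any existing boot-args, so it's worth checking them first.)
# check for existing boot-args before overwriting them
nvram boot-args
# cap usable RAM at 8 GB (maxmem is given in MB), then reboot
sudo nvram boot-args="maxmem=8192"
sudo reboot
# after the reboot, confirm what the OS actually sees (hw.memsize is in bytes)
sysctl -n hw.memsize
# put things back when done
sudo nvram -d boot-args
sudo reboot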
I would think that batch processing 309 files of approximately 30-40MB each, 28 files at a time (the number of cores), would consume more than 6KB/sec from the SSD. At 6KB/sec a single 30MB file would take roughly 83 minutes to read (30MB ÷ 6KB/sec ≈ 5,000 seconds). Is the system really taking that long to read one file? Obviously not, so the measurement is in question.
The Read and Write numbers keep changing during the export:
sometimes R 14, W 20
sometimes R 20, W 150
sometimes R 200, W 120
sometimes R 0, W 0
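(Side note: those point-in-time readings bounce around a lot. macOS ships with iostat, so a rolling per-second average during the export can be watched with something like the line below; disk0 is just a placeholder for whichever device the SSD actually is.)
iostat -d -w 1 disk0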
I have done this kind of test in the past, but with a 5,1.
X58 was triple channel but it has four/eight DIMM slots.
When only three sticks were used, the Geekbench score was about 10-20% higher compared with all slots fully populated.
The OP did perform GB benchmarks which showed an 8.36% increase when using six channels instead of four. This is what one would expect from a synthetic benchmark. However in most real world applications one typically doesn't see any increase in performance let alone an 8.36%. This is one reason I dislike using GB for anything other than how fast a system runs GB.
However, in this case the OP observed a 39.2% increase in a real world application. This is highly suspect because of its magnitude in a real world application use case and that it is significantly higher than a synthetic benchmark (which tends to best illustrate a change).
Well, actually I'm not fond of short-burst benchmark measures such as Geekbench either…
I was just adding a comparison with an older system to back up the OP's statements; with the correct memory channel setup, most synthetic benchmarks will show an improvement. If it also helps the OP's real-world usage workflow and increases performance, that's a win-win situation.
The question is: Is it noticeably helping out in real world usage workflow? Since all other factors weren't held constant we cannot conclusively say. Especially when the conclusion is well beyond what a synthetic benchmark, which is known to benefit from such configurations, is able to achieve.
If the OP had configured the system with six modules which resulted in the same memory capacity as the four module configuration then I think it reasonable to conclude the memory channel configuration is what resulted in the increased performance.
However that was not the case. The amount of memory increased by 64GB. This increase in memory capacity could be the reason for the increase in performance. Perhaps, with 56 threads chewing through 309 images of 30 - 40MB each, the task benefitted from caching (made possible with an additional 64GB of memory to work with).
What extended workflow process would then be a good test for determining the real-world influence of memory bandwidth, with various memory configurations and everything else held the same, assuming short-term synthetic benchmarks are not useful in the real world?
Your workflow.
But my workflow is to run synthetic benchmarks? I'm a System Builder.
Then it's the workflow of your "customer".
As Aiden stated, the workflow that you use. In the OP's case their workflow improved by almost 40% with the additional memory. However, I think it's premature to conclude the increase was due solely to the increased memory bandwidth.
UPDATE on Feb 2, 2020:
To further test whether the gain is just from the extra two channels or also from the extra 64GB of RAM,
I have done the following test.
I used this command line to create a 64GB RAM disk:
diskutil erasevolume HFS+ "RamDisk64GB" $(hdiutil attach -nomount ram://131072000)
and filled it up with about 50GB of files.
To be more "TOUGH" to the new 6 Chanel of the test.
I create another 64GB of RAMDISK II
And fill it up with a similar 50GB of file. which "takes 100GB of RAM away"
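(For anyone wanting to reproduce the setup described above, a rough sketch is below. The sizes and file count are only illustrative, and /dev/urandom is used so the filler data can't simply be compressed away by macOS memory compression.)
# ram:// sizes are in 512-byte blocks, so 131072000 blocks is roughly 67GB
DEV=$(hdiutil attach -nomount ram://131072000)
diskutil erasevolume HFS+ "RamDisk64GB" $DEV
# pin the memory by writing ~50GB of incompressible data (five 10GB files)
for i in 1 2 3 4 5; do
  dd if=/dev/urandom of=/Volumes/RamDisk64GB/filler_$i bs=1m count=10240
done
# repeat with a second ram:// device for the second RAM disk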
Then I ran the same RAW-to-TIFF export test again.
The mouse felt a little laggier than before during the export,
BUT the result is almost the same!
4x32GB RAM (with 128GB RAM available):
It takes 6:34 = 394s
6x32GB RAM (with 192GB RAM available):
It takes 4:43 = 283s
6x32GB RAM (with only about 92GB RAM available):
It takes 4:49 = 290s
That is just 7 seconds more than before, and even with
less RAM available than the 4x32GB setup, the 6-channel configuration keeps almost all of its ~39% performance increase (290s vs. 394s, about 36% faster).
And BTW, I have also given the RAM disk a speed test:
4700MB/s Read
4550MB/s Write
I know the test is not very professional, but it really reflects the real-world performance increase.
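(Whatever tool produced those numbers, a rough command-line cross-check is possible with dd, which prints a bytes/sec figure when it finishes; the path assumes the RAM disk from the step above is still mounted.)
# write ~8GB, then read it back, discarding the output
dd if=/dev/zero of=/Volumes/RamDisk64GB/speedtest bs=1m count=8192
dd if=/Volumes/RamDisk64GB/speedtest of=/dev/null bs=1m
rm /Volumes/RamDisk64GB/speedtest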
I'm not really sure how this conclusively makes the case the speed gain is solely from using a six channel memory configuration. Merely creating a RAM disk and populating it with information doesn't really say much.
IMO the only way to conclusively show a six channel memory configuration is 39% faster than a four channel memory configuration is to hold all other variables constant. IOW, configure equal amounts of memory: one in a four channel memory configuration and one in a six channel memory configuration.
Configure equal amounts of memory: one in a four channel memory configuration and one in a six channel memory configuration?
You are correct in that it is impossible to configure a four and six channel configuration with the same memory capacity. The best you could do is attempt to configure the system as close in capacity as possible.
With no mixed-size DIMMs, well, this is a mathematical question.
The formula is
4x = 6y
8GB*6? 16GB*4? They will not be equal.
I don't think DIMM sizes exist that would make them equal.
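(Spelled out, equal totals would need module sizes in a 3:2 ratio; the example sizes below are purely illustrative, not a claim about what DIMMs this machine supports.)
% equal total capacity with 4 modules of size x vs. 6 modules of size y
4x = 6y \;\Longrightarrow\; \frac{x}{y} = \frac{3}{2}
% e.g. x = 24 GB with y = 16 GB (96 GB total), or x = 48 GB with y = 32 GB (192 GB total)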
If they don't exist, the closest way to prove it is to limit the usable RAM some other way, which is why I used the RAM disk to limit the available RAM.
Do you have any suggestions?