Base M1 MBA (R running under Rosetta 2):
Rosetta 2 noticeably slows down the I/O, and Escoufier's method is extremely slow under translation. There are some benchmarks that are faster on the base MBA, though.
Code:
> res = benchmark_std(runs = 3)
# Programming benchmarks (5 tests):
3,500,000 Fibonacci numbers calculation (vector calc): 0.176 (sec).
Grand common divisors of 1,000,000 pairs (recursion): 4.67 (sec).
Creation of a 3,500 x 3,500 Hilbert matrix (matrix calc): 0.214 (sec).
Creation of a 3,000 x 3,000 Toeplitz matrix (loops): 1.35 (sec).
Escoufier's method on a 60 x 60 matrix (mixed): 66.8 (sec).
# Matrix calculation benchmarks (5 tests):
Creation, transp., deformation of a 5,000 x 5,000 matrix: 0.454 (sec).
2,500 x 2,500 normal distributed random matrix^1,000: 0.148 (sec).
Sorting of 7,000,000 random values: 0.635 (sec).
2,500 x 2,500 cross-product matrix (b = a' * a): 7.32 (sec).
Linear regr. over a 5,000 x 500 matrix (c = a \ b'): 0.614 (sec).
# Matrix function benchmarks (5 tests):
Cholesky decomposition of a 3,000 x 3,000 matrix: 4.04 (sec).
Determinant of a 2,500 x 2,500 random matrix: 1.98 (sec).
Eigenvalues of a 640 x 640 random matrix: 0.443 (sec).
FFT over 2,500,000 random values: 0.135 (sec).
Inverse of a 1,600 x 1,600 random matrix: 1.62 (sec).
> res_io = benchmark_io(runs = 3)
Preparing read/write io
# IO benchmarks (2 tests) for size 50 MB:
Writing a csv with 6250000 values: 9.31 (sec).
Writing a csv with 6250000 values: 9.3 (sec).
Writing a csv with 6250000 values: 9.31 (sec).
Reading a csv with 6250000 values: 11.6 (sec).
Reading a csv with 6250000 values: 11.5 (sec).
Reading a csv with 6250000 values: 11.5 (sec).
# IO benchmarks (2 tests) for size 5 MB:
Writing a csv with 625000 values: 0.942 (sec).
Writing a csv with 625000 values: 0.943 (sec).
Writing a csv with 625000 values: 0.943 (sec).
Reading a csv with 625000 values: 1.14 (sec).
Reading a csv with 625000 values: 1.14 (sec).
Reading a csv with 625000 values: 1.14 (sec).
16-inch MBP with an i9-9980HK and 32 GB RAM:
Code:
> res = benchmark_std(runs = 3)
# Programming benchmarks (5 tests):
3,500,000 Fibonacci numbers calculation (vector calc): 0.227 (sec).
Grand common divisors of 1,000,000 pairs (recursion): 0.762 (sec).
Creation of a 3,500 x 3,500 Hilbert matrix (matrix calc): 0.248 (sec).
Creation of a 3,000 x 3,000 Toeplitz matrix (loops): 0.936 (sec).
Escoufier's method on a 60 x 60 matrix (mixed): 0.897 (sec).
# Matrix calculation benchmarks (5 tests):
Creation, transp., deformation of a 5,000 x 5,000 matrix: 0.559 (sec).
2,500 x 2,500 normal distributed random matrix^1,000: 0.133 (sec).
Sorting of 7,000,000 random values: 0.576 (sec).
2,500 x 2,500 cross-product matrix (b = a' * a): 8.28 (sec).
Linear regr. over a 5,000 x 500 matrix (c = a \ b'): 0.624 (sec).
# Matrix function benchmarks (5 tests):
Cholesky decomposition of a 3,000 x 3,000 matrix: 3.99 (sec).
Determinant of a 2,500 x 2,500 random matrix: 2.63 (sec).
Eigenvalues of a 640 x 640 random matrix: 0.565 (sec).
FFT over 2,500,000 random values: 0.227 (sec).
Inverse of a 1,600 x 1,600 random matrix: 2.16 (sec).
> res_io = benchmark_io(runs = 3)
Preparing read/write io
# IO benchmarks (2 tests) for size 50 MB:
Writing a csv with 6250000 values: 4.92 (sec).
Writing a csv with 6250000 values: 4.88 (sec).
Writing a csv with 6250000 values: 4.87 (sec).
Reading a csv with 6250000 values: 1.97 (sec).
Reading a csv with 6250000 values: 1.9 (sec).
Reading a csv with 6250000 values: 1.89 (sec).
# IO benchmarks (2 tests) for size 5 MB:
Writing a csv with 625000 values: 0.508 (sec).
Writing a csv with 625000 values: 0.506 (sec).
Writing a csv with 625000 values: 0.505 (sec).
Reading a csv with 625000 values: 0.2 (sec).
Reading a csv with 625000 values: 0.198 (sec).
Reading a csv with 625000 values: 0.197 (sec).
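For anyone who wants to reproduce these numbers: both runs use the benchmarkme R package, and the calls below match the prompts shown in the output above (`runs = 3`). The install step and the optional `plot()` call are additions for convenience, not part of the original runs:

```r
# Install and load the benchmarkme package from CRAN
install.packages("benchmarkme")
library(benchmarkme)

# Standard CPU benchmarks: programming, matrix calculation,
# and matrix function tests (3 runs each, as in the output above)
res <- benchmark_std(runs = 3)

# I/O benchmarks: writing and reading 5 MB and 50 MB CSV files
res_io <- benchmark_io(runs = 3)

# Optionally compare your results against crowd-sourced timings
# from other machines
plot(res)
```

Note that under Rosetta 2 the benchmark measures the x86_64 build of R being translated on the fly, so the CPU numbers reflect translation overhead as well as the hardware itself.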