I actually just test-ran the Collatz conjecture with Swift and C++ over the weekend. The results show C++ being faster in this particular case, both with and without optimizations.
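For context, the kernel under test is roughly the following (a simplified sketch with my own names, not the exact code I ran):

Code:
#include <cstdint>
#include <cstdio>

// Count the steps for one Collatz orbit to reach 1.
static int collatzSteps(uint64_t n) {
    int steps = 0;
    while (n != 1) {
        if (n % 2 == 0)
            n /= 2;        // even: halve
        else
            n = 3 * n + 1; // odd: 3n + 1
        ++steps;
    }
    return steps;
}

int main() {
    std::printf("27 takes %d steps\n", collatzSteps(27)); // prints 111
    return 0;
}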

There seems to be an error in your benchmark comparison. You seem to be using 32-bit (int) integers in C++ and 64-bit (Int) integers in Swift. The larger cache miss rate from the 2X bigger array might account for most of the time difference with all optimizations on.

There was this same error in a tiny prime sieve micro-benchmark that someone posted in Apple's Swift developer forum.
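For an apples-to-apples comparison, the C++ side should use a 64-bit element type, since Swift's Int is 64 bits on a 64-bit platform. Something along these lines (a sketch; the buffer names and size are made up):

Code:
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t count = 1000000; // hypothetical array size

    // Swift's Int is 64 bits on a 64-bit platform, so a fair C++
    // comparison stores 64-bit elements too:
    std::vector<int64_t> results64(count);

    // A 32-bit int buffer is half the size, which means roughly
    // half the memory traffic and fewer cache misses for C++:
    std::vector<int32_t> results32(count);

    std::printf("64-bit buffer: %zu bytes\n", results64.size() * sizeof(int64_t));
    std::printf("32-bit buffer: %zu bytes\n", results32.size() * sizeof(int32_t));
    return 0;
}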
 
The real question is: does it matter? Whether a program takes 0.1 ms or 10 ms to complete a task the user asked of it, it's instantaneous as far as the user is concerned, even though it took 100 times as long in the latter case.

An iOS app that took 10 ms instead of 0.1 ms could burn up to 100X more of the user's battery life. If done repeatedly inside a game animation loop, the user would definitely notice the effect on their battery gauge after playing the game for a while.

The faster a compute routine completes, the sooner the OS can put an iOS device's CPU into a much lower-power sleep mode.
 
Thanks firewood, I'll check whether the integer size affects the results!
 
Swift is also still in beta, so you may have to come back yet another time. ;)
For sure. :)

firewood, after changing the C++ implementation to use the same 64-bit integer size, I re-ran the benchmark and found that the execution times increased to 31-46 ms with -O3 and 29-33 ms with -Ofast. The margin in this test is certainly small, and it's possible that the remaining small differences in the code account for the rest of the gap. Fundamentally, for this test the two solutions seem more or less equally fast.
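For anyone who wants to reproduce this, the timing side looks roughly like the following. This is a sketch, not the exact harness I ran; the range and the kernel are placeholders:

Code:
#include <chrono>
#include <cstdint>
#include <cstdio>

static int collatzSteps(uint64_t n) {
    int steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}

int main() {
    using Clock = std::chrono::steady_clock;

    long total = 0; // accumulate so the loop isn't optimized away
    auto start = Clock::now();
    for (uint64_t i = 1; i <= 1000000; ++i)
        total += collatzSteps(i);
    auto end = Clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("total steps: %ld, elapsed: %lld ms\n", total, (long long)ms);
    return 0;
}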
 
Code:
       /* This check is not needed - the bounds check in the loop
          already tests against (ULONG_MAX - 1) / 3:
        if(n + 1 > ULONG_MAX){
            printf("out of bounds error\nnumber being checked > ULONG_MAX = %lu\n", ULONG_MAX);
            exit(0);
        }*/
        n += 1;

Just a side note: I know you commented this out, but that test doesn't work. n is an unsigned long, so n + 1 can never be bigger than ULONG_MAX; it wraps around to zero instead. To test this you would need:


Code:
if(n + 1 == 0)

// or

if(n + 1 < n)
 

You're right that this is typed wrong; I was careless about that. I put it there to note that the other bounds check would detect an overflow before this one, making it unnecessary. But this would test whether the NEXT n would equal ULONG_MAX. The test occurs before n is incremented, so it should be:

Code:
if(n + 1 == ULONG_MAX)

Yes, ULONG_MAX + 1 == 0.

printf("ULONG_MAX = %lu\n", ULONG_MAX);
prints "ULONG_MAX = 18446744073709551615".

printf("ULONG_MAX + 1 = %lu\n", ULONG_MAX + 1);
prints "ULONG_MAX + 1 = 0".
 