
Senor Cuete (original poster):
Here's an interesting article about the performance of software written in Swift:

http://www.splasmata.com/?p=2798

I thought that the claims that it would run faster than C were dubious.

I agree with some of the things the author says. The only way to write the code that handles an application's GUI is with OOP and Cocoa, and it's fast enough, but for numeric calculations I use C. I don't even write these in Objective-C, because making the functions objects adds overhead, both in writing the code and in performance. He also does manual memory management, and so do I; I find it as easy as doing it automatically with garbage collection. The code analyzer finds any potential problems, so you should run it on a final version.
 
I think the claim was that it was faster than Obj-C. I wouldn't take this test too seriously at this point. First, read the disclaimer at the beginning. Second, how do you compare two languages? Whatever you do, some expert in either language will come along and say that you can optimize x, y, and z. Right now Swift is four days old, so I think we'll have to wait a while and see with some real code.
 
Has anyone compared it to Java or Python, the languages that I think Apple was really aiming at having Swift be better than?
 
I don't see any code posted at the link. Am I missing something, or maybe a victim of a decrepit browser?


The linked article mentions using x = x + 1, but neglects to say that this operator (+) has overflow-detection attached to it. This means that arithmetic with this operator will not overflow the operand size without triggering an error. Overflow-detection has a runtime cost. To correctly compare with C's operators, one must use the operators that allow overflow:
https://developer.apple.com/library...t_Programming_Language/AdvancedOperators.html
Unlike arithmetic operators in C, arithmetic operators in Swift do not overflow by default. Overflow behavior is trapped and reported as an error. To opt in to overflow behavior, use Swift’s second set of arithmetic operators that overflow by default, such as the overflow addition operator (&+). All of these overflow operators begin with an ampersand (&).
An optimizer will not (well, should not) remove overflow-checks, because that would be a change to runtime behavior that gives different results in limiting conditions.
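To make the cost concrete, here is a minimal sketch (written in the beta-era Swift syntax used elsewhere in this thread, e.g. println). The wrapping operator &+ behaves like C's unsigned arithmetic, while the plain + is checked:

Code:
// Sketch: '+' is overflow-checked, '&+' wraps silently like C's unsigned arithmetic.
let big: UInt8 = 255
let wrapped = big &+ 1        // 0, wraps around with no runtime check
// let checked = big + 1      // would be reported as an overflow error
println("wrapped = \(wrapped)")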
 
Has anyone compared it to Java or Python, the languages that I think Apple was really aiming at having Swift be better than?
I don't have anything that will run it, so I am just going by the documentation, but so far it doesn't look any better than Ruby either. It definitely doesn't have anywhere near the cross-platform capability.
 
It might be too early to draw any conclusions.

I agree with this, but it doesn't mean that current performance shouldn't be examined.

Conclusions will change over time regardless. I'm not even sure I'd call them "conclusions"; it's more a snapshot of an evolving tool at a moment in time. There was a time when C was criticized because it lacked the performance of assembly language. C compilers got better over time.

Even comparing current Swift with later Swift can show useful results. For example, that feature X is faster or Y is slower.
 
We should also assume that the released version is not the version they were running on stage. So if the version is different, there would be no point in benchmarking our current, older version.

I'm confident it will be improved when they ship Yosemite.
 
The linked article mentions using x = x + 1, but neglects to say that this operator (+) has overflow-detection attached to it. This means that arithmetic with this operator will not overflow the operand size without triggering an error. Overflow-detection has a runtime cost. To correctly compare with C's operators, one must use the operators that allow overflow:

That's a good point. It should also be possible for the Swift compiler to decide whether overflow checks are necessary. Take this example, for instance:

for i in 0..100

At compile time it's possible to assert that i will not overflow based on the range alone, since the default integer type is 64 bits wide. Another issue is that if one were to add manual overflow checks to C, it would very likely be slower, since x86 has an overflow flag that is set automatically, which the Swift runtime can make use of.
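As a minimal sketch of that idea (beta-era syntax; whether the optimizer really elides the check in a given build is something one would have to confirm by inspecting the generated code), the loop bound alone constrains i, and the addition can be written with the unchecked operator to match C's +:

Code:
// Sketch: the range bounds i, and &+ performs the add without an overflow check.
var sum = 0
for i in 0..100 {        // half-open range; later Swift spells this 0..<100
    sum = sum &+ i       // unchecked addition, comparable to C's +
}
println(sum)             // 4950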

Another point is that, in the grand scheme of things, the cost of incrementing a variable is most likely negligible compared to whatever you do in the loop body.
 
Here's an interesting article about the performance of software written in Swift:

http://www.splasmata.com/?p=2798

I thought that the claims that it would run faster than C were dubious.

I agree with some of the things the author says. The only way to write the code that handles an application's GUI is with OOP and Cocoa, and it's fast enough, but for numeric calculations I use C. I don't even write these in Objective-C, because making the functions objects adds overhead, both in writing the code and in performance. He also does manual memory management, and so do I; I find it as easy as doing it automatically with garbage collection. The code analyzer finds any potential problems, so you should run it on a final version.

With absolutely no code attached, these benchmarks are pretty useless. The first three tests aren't even comparing against Objective-C code where objc_msgSend is involved, but rather against standard C operators and pointer assignment, which, yes, will be blazing fast.

Secondly, I don't really trust anyone who says they're not using ARC just because it's "up to 40% slower", yet whose products don't really look like they'd ever show the difference while running.

Prematurely optimising code that really doesn't need to be optimised is one of the very worst things a developer can do.

The only thing in that post I agree with is that ARC is not a performance optimisation; it's a productivity optimisation, and it's the correct choice 99% of the time in apps (games may be a different story).
 
There are already a few benchmark results posted in the Apple developer forums.

Swift seems to be as fast as compiled C at small non-OO routines when the Swift LLVM optimizer is on and all type/bounds checking is turned off (i.e. down to the same level as bare C). But the Swift beta is currently dog slow with optimization turned off, and when the app needs to (re)allocate lots of small bits of object memory.
 
We should also assume that the released version is not the version they were running on stage. So if the version is different, there would be no point in benchmarking our current, older version.

I'm confident it will be improved when they ship Yosemite.

The problem is that Apple's marketing presents this as being faster. When you make that claim, even at the beta stage, you have to back it up.

I'd be looking at Julia for this. Its devs and its early adopters are backing it as a fast language and a potential taker of the number-crunching throne. The thing is, it's backing up some of those claims, in beta, with a fair amount of code samples. And it's not one-sided: some of the Julia converts post side-by-side code from the language they left to show why they are backing this new pony.


That way we can make our own judgements. Sometimes this works out for Julia, and yes, it is fast from what I have seen. Other times, Python or R veterans say it's looking good, but then offer a potentially better code sample from the older language.

I'm not saying the Julia backers are trying to make it look too good; sometimes it's a case of having to use the older language in a better way. Swift will have this issue for quite a while, as I predict some Obj-C veterans will show that the old dog still has a trick or two left.
 
Here's a little benchmark. A program to verify the famous Collatz conjecture for some small numbers (numbers from 1 to 1 trillion):

Code:
import Foundation

for i in 2 .. 1_000_000_000_000
{
    var x = i
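    // Iterate the Collatz map; once x drops below i, every smaller starting value has already been verified.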
    do {
        x = (x % 2 == 0 ? x / 2 : (3*x + 1) / 2)
    } while (x >= i)
    if (i % 1_000_000 == 0) {
        println ("Collatz conjecture is correct up to i = \(i)")
    }
}
Well, there's a small bug in this code. So after about four minutes on a Core 2 Duo machine, the program crashes when i is a bit above 8 billion because it detects an integer overflow. In C, C++, Objective-C, Java etc. it would have continued to run giving incorrect results. Do we prefer fast results or correct results?
 
Is it possible to be slower than Java? ;)

I recall hearing at one point that Java took six times as long as C to run equivalent code, while Python took 100 times as long, which would suggest that, yes, it is possible to be slower than Java.

The real question is: does it matter? Whether a program takes 0.1 ms or 10 ms to complete a task the user asked of it, it's instantaneous as far as the user is concerned, even though it took 100 times as long in the latter case.

If the program works in either case, then the user doesn't care. So instead what matters is how quick the code was to write, and how maintainable it is. The user may not directly care about those things, but when they let you know about a bug, they'd like to have it be fixed ASAP.
 
gnasher729: C is not idiot-proof so you have to consider such things when you write code to test things like the https://en.wikipedia.org/wiki/Collatz_conjecture. Your choice might not be that simple. You might want both.

Art: your supposition that the difference is a choice between 0.1 ms and 10 ms might not hold. It could be a choice between one minute and 100 minutes, or one hour and 100 hours.

I got my first real programming job by rewriting an interpreted BASIC program in C so it would run 200 times as fast.
 
gnasher729: C is not idiot-proof so you have to consider such things when you write code to test things like the https://en.wikipedia.org/wiki/Collatz_conjecture. Your choice might not be that simple. You might want both.

But that's the point: A programming error (which I left in intentionally for demonstration purposes) that would give incorrect results in C leads to a crash in Swift (so the programmer can figure out what went wrong and fix it).
 
The very nature of C allows for side effects (passing pointers to writeable regions of memory), and due to its sequence point model it does not allow much optimization.

C was designed to generate assembly that closely matches the sequence in C to allow for low-level programming like kernels and device drivers.

If you design a language like Swift, you keep all the low level stuff abstract to allow for more optimization, not less.

Of course you will always be able to write faster C code, but you have to make more assumptions about possible inputs and side effects.

Swift is designed to provide a safe and small language that allows more interactive code analysis.

Think about all the great Java IDEs, for example.

You can never do that in C, because you cannot guess the intention of the author, who probably ordered memory accesses in certain ways to satisfy hardware requirements.

Swift is designed to allow more automatic code refactoring and suggestions; it is not designed to beat C in terms of raw speed.

By taking away the ability to modify low level stuff, the runtime can be optimized for new CPU features more easily. In C you almost always need to recompile for an improved architecture.

The Swift runtime could compensate for that without you even knowing, because you cannot rely on low-level features while writing your code.

The advantage of Swift is not speed, but providing a stable environment that lets you take advantage of a modern runtime, including hardware features you would otherwise need to code for explicitly.

Think about the Accelerate framework that nearly no one uses. Swift could just utilize it without you knowing.
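For instance, here is a hand-written sketch (beta-era Swift array syntax; the call into vDSP_vadd is something I am spelling out manually, not something the compiler does for you today) of the kind of Accelerate use a smarter runtime could make on your behalf:

Code:
// Element-wise add of two arrays via Accelerate's vDSP_vadd.
import Accelerate

let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var result = [Float](count: a.count, repeatedValue: 0)   // beta-era array initializer

vDSP_vadd(a, 1, b, 1, &result, 1, vDSP_Length(a.count))
println(result)   // [11.0, 22.0, 33.0, 44.0]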

But I'm getting repetitive. In short, you can always write faster code, but you have to adapt manually to changes in hardware or the runtime if you do. Otherwise your code will be stale in the future.
 
If the program works in either case, then the user doesn't care.

But when I run a program written in C++/Objective-C/Swift I am not bombarded with windows asking me to update the runtime for the language nor do I have to install software to run the software I wanted to run in the first place. Furthermore, every piece of software I've seen that is written in Java looks out-of-place and utilizes a bifurcated UI when running on OS X. So yes, the user does care.
 
No, that's the compiler's job.

Yes. But with C it cannot ... That was my whole point.

With Swift it can, because you cannot control memory barriers, low-level thread locking, etc.

If a CPU features new multithreading primitives, you would have to rewrite most of your great C code to support that.

Note that GCD, Grand Central Dispatch, uses pthreads under the hood, with all their gory primitive lock types and memory barriers.

If you changed that, GCD would need to change completely, probably offering you a more advanced API.

At least you would need to recompile and re-release.

In Swift, the runtime could simply use different low-level primitives because you cannot create a primitive lock or memory barrier in Swift.

That's the whole point.
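As a hedged sketch of that point (beta-era GCD calls, with a made-up queue label), shared state can be guarded by a serial queue with no lock primitive anywhere in the Swift code; whatever locking the queue uses internally is the runtime's business:

Code:
// Shared state guarded by a serial dispatch queue instead of a pthread mutex.
import Foundation

let queue = dispatch_queue_create("com.example.counter", nil)   // nil attribute: serial queue
var counter = 0

for _ in 0..1_000 {
    dispatch_async(queue) {
        counter += 1        // only ever touched on the serial queue
    }
}

dispatch_sync(queue) {
    println("counter = \(counter)")   // 1000
}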

(Sometimes the technical level in forums makes me write all that stuff... nothing to do with you.)
 
But when I run a program written in C++/Objective-C/Swift I am not bombarded with windows asking me to update the runtime for the language nor do I have to install software to run the software I wanted to run in the first place. Furthermore, every piece of software I've seen that is written in Java looks out-of-place and utilizes a bifurcated UI when running on OS X. So yes, the user does care.

Your complaints aren't really the fault of the language, I don't think. Having to install the runtime is a PITA that has been placed upon you by whoever manages your OS. It's Apple's fault that a JVM is no longer packaged with OS X, not the fault of Java itself or of Oracle.

You could also blame the developer.

Whenever I distribute a Python application, for example, I have it so that whatever the user clicks on is actually a small native program that quickly checks that all the frameworks are in place, or installs them if necessary (prompting the user if it can't do so without permission). Once everything is in place, it launches the application.

I haven't made anything in Java in a while, but I would take the same approach if I ever wanted to distribute a Java application.
 
Your complaints aren't really the fault of the language, I don't think. Having to install the runtime is a PITA that has been placed upon you by whoever manages your OS. ... It's Apple's fault ... You could also blame the developer.

Who cares? The point is, Java apps suck. It's the responsibility of an app developer to craft a non-sucky user experience.
 
...Do we prefer fast results or correct results?
I don't know who you think "we" is. You shouldn't presume to refer to yourself as "we". Here it is with speed AND bounds checking:

Code:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <limits.h>


int main(int argc, const char * argv[])
{
    unsigned long n, x, overRun;
    clock_t time;
    
    overRun = (ULONG_MAX - 1) / 3;
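    /* overRun is the largest x for which 3 * x + 1 still fits in an unsigned long */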
    
    time = clock();

    n = 1;
    while(n <= 100000000000){  // 1000000: < 1 second, 10000000: ~5 seconds, 100000000: ~62 seconds
        x = n;
        while(x != 1){
            if(x % 2 == 0)
                x = x / 2;
            else{
                if(x > overRun){
                    printf("x * 3 + 1 over runs the size of an unsigned long for n = %lu\n", n);
                    printf("%ld seconds\n", (clock() - time) / CLOCKS_PER_SEC);
                    exit(0);
                }
                x = x * 3 + 1;
            }
        }
       /* This check is not needed - the bounds check in the loop already guards (ULONG_MAX - 1) / 3:
        if(n + 1 > ULONG_MAX){
            printf("out of bounds error\nnumber being checked > ULONG_MAX = %lu\n", ULONG_MAX);
            exit(0);
        }*/
        n += 1;
    }
    printf("conjecture valid up to %lu\n", n -1);
    printf("%ld seconds\n", (clock() - time) / CLOCKS_PER_SEC);
    return 0;
}
 
gnasher729: C is not idiot-proof so you have to consider such things when you write code to test things like the https://en.wikipedia.org/wiki/Collatz_conjecture. Your choice might not be that simple. You might want both.
I actually test-ran the Collatz conjecture with Swift and C++ over the weekend. The results seem to show C++ being faster in this particular case, both with optimizations and without.

http://swift.svbtle.com/how-swift-is-swift-compared-to-c

Of course, one shouldn't generalize the results of this one test into a claim about the languages in general.
 