
ArtOfWarfare

macrumors G3
Original poster
The worst thing about exposing yourself to Python (or other, similar languages) is that afterward you can no longer love Obj-C after seeing BS like this (actual messages printed out during unit tests):

Code:
("1") is not equal to ("1")
("2") is not equal to ("2")
("2") is not equal to ("2")
("1") is not equal to ("1")

What was the actual problem? The method on the left was returning an NSUInteger, and the value it was being compared to was just a literal 1 or 2. To make my tests pass, I had to explicitly cast the values on the right to NSUInteger.

I could understand* if floating points were involved, but how the heck is it messing this up? Integral value on the left and integral value on the right. The fact that one is unsigned should make no difference. The only difference I can see is that one is 32 bits wide while the other is 64 bits wide, but I thought that was an implementation detail that was handled automatically?

*Sort of. Again, it seems like an implementation detail that ought to be handled automatically.

I can think of numerous examples of similar problems in C, C++, and Java, but I've rarely seen Python trip over something this basic.

About the only thing I run into in Python that I find frustrating is when a method expects, for example, a list of strings, but instead I pass in just a single string. At which point I think: jeez, I wish I were working with jQuery, which tends to automatically treat a single item as a list of length 1.
 

Please post the actual code that didn't work.
 
The actual test code that yielded these problems:

Code:
YRFileSystemItem *testItem = [[YRFileSystemItem alloc] init];

[testItem testOnly_setRelativePath:@"a.a"];
XCTAssertEqual([testItem extensionDotLocation], 1, @"Failed to find dot in a.a");

[testItem testOnly_setRelativePath:@"aa.a"];
XCTAssertEqual([testItem extensionDotLocation], 2, @"Failed to find dot in aa.a");

[testItem testOnly_setRelativePath:@"aa.aa"];
XCTAssertEqual([testItem extensionDotLocation], 2, @"Failed to find dot in aa.aa");

[testItem testOnly_setRelativePath:@"a.aa"];
XCTAssertEqual([testItem extensionDotLocation], 1, @"Failed to find dot in a.aa");

(I had to add casts to the numbers I was asserting against, making them NSUIntegers, to get these tests to pass.)
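For reference, the cast version of the first assertion looks roughly like this (the others follow the same pattern):

Code:
// Casting the expected value to NSUInteger makes both sides the same type,
// so the assertion passes.
XCTAssertEqual([testItem extensionDotLocation], (NSUInteger)1, @"Failed to find dot in a.a");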

Here's the code:

Code:
- (NSUInteger)extensionDotLocation {
    NSCharacterSet *set = [NSCharacterSet decimalDigitCharacterSet];
    NSUInteger length = relativePath.length;
    if (length < 3) {
        return NSNotFound;
    }
    NSUInteger location = 1; // Ignore periods indicating hidden files.
    do {
        NSLog(@"Running with location of %lu", (unsigned long)location);
        location = [relativePath rangeOfString:@"." options:0 range:NSMakeRange(location, length - location - 1)].location;
        if (location == NSNotFound) {
            return NSNotFound;
        } // ignore decimals in numbers, or dots immediately followed by more dots
    } while ((([set characterIsMember:[relativePath characterAtIndex:location - 1]] &&
              [set characterIsMember:[relativePath characterAtIndex:location + 1]]) ||
              [relativePath characterAtIndex:location + 1] == '.') &&
              ++location < length - 1);
    
    if (location >= length - 1) {
        return NSNotFound;
    }
    
    return location;
}

I'll grant that the code is quite ugly, but the fact that it passes all 15 unit tests I wrote for it (with a variety of funky file names) indicates to me that it works: file names with multiple extensions, file names that start or end with a period, file names with version numbers mixed in. It fails on some particularly obscure cases (e.g., "1.7zip"), but most file names don't end in numbers and most file extensions don't start with numbers, so I'm content. If anyone has suggestions for improving either readability or functionality, though, I'd be happy to use them.

Oh, there's also this method:

Code:
- (void)testOnly_setRelativePath:(NSString *)newRelativePath {
    relativePath = newRelativePath;
}

I don't really want this method to exist, but I can't unit test my code without it.

I suppose that maybe the best thing to do would be to pull the method out of my class, stick it in a category on NSString, then write unit tests for that... it would get rid of the requirement to actually set up a YRFileSystemItem.
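Something like this, maybe (the category and method names below are just placeholders, not anything I've actually written):

Code:
// Rough sketch only -- category and method names are placeholders.
@interface NSString (YRFileExtension)
- (NSUInteger)yr_extensionDotLocation;
@end

@implementation NSString (YRFileExtension)
- (NSUInteger)yr_extensionDotLocation {
    // Same algorithm as above, just operating on self instead of an ivar.
    NSCharacterSet *set = [NSCharacterSet decimalDigitCharacterSet];
    NSUInteger length = self.length;
    if (length < 3) {
        return NSNotFound;
    }
    NSUInteger location = 1; // Ignore periods indicating hidden files.
    do {
        location = [self rangeOfString:@"." options:0 range:NSMakeRange(location, length - location - 1)].location;
        if (location == NSNotFound) {
            return NSNotFound;
        } // Ignore decimals in numbers, or dots immediately followed by more dots.
    } while ((([set characterIsMember:[self characterAtIndex:location - 1]] &&
               [set characterIsMember:[self characterAtIndex:location + 1]]) ||
              [self characterAtIndex:location + 1] == '.') &&
             ++location < length - 1);
    if (location >= length - 1) {
        return NSNotFound;
    }
    return location;
}
@end

The tests then wouldn't need a YRFileSystemItem or the testOnly_ setter at all; each one becomes a single assertion on a string literal.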
 
I think he was making an observation as opposed to asking a question. ;)

There are two questions in the OP (bold added):

I could understand* if floating points were involved, but how the heck is it messing this up? Integral value on the left and integral value on the right. The fact that one is unsigned should make no difference. The only difference I can see is that one is 32 bits wide while the other is 64 bits wide, but I thought that was an implementation detail that was handled automatically?
C promotes smaller types to larger types in an expression, so a 32-bit value being compared to a 64-bit value should be promoted to 64 bits before the comparison. It does this by sign-extension or zero-extension, depending on whether the value is signed or unsigned.
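A standalone illustration in plain C (nothing to do with the code in question, just the promotion rule):

Code:
#include <stdio.h>

int main(void) {
    unsigned int narrow = 2;        // 32-bit
    unsigned long long wide = 2;    // 64-bit
    // narrow is zero-extended to 64 bits before the comparison,
    // so the two compare equal, as you'd expect.
    printf("%s\n", narrow == wide ? "equal" : "not equal");  // prints "equal"
    return 0;
}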

So without seeing the actual code that was causing the problem, there's no way to answer the questions, or to investigate an explanation for the reported behavior.
 

That test code does not produce that output. Please post the actual code used to generate the messages from the original post.
 

Since I don't see any casts in the code that was posted, I would guess the problem has to do with 32/64-bit and NSNotFound. NSNotFound is defined as NSIntegerMax, which on a 64-bit system is 0x7FFFFFFFFFFFFFFF. Assigning it to a 32-bit unsigned int throws away the high bits, changing its value to 0xFFFFFFFF, and then it's not equal to NSNotFound anymore.

Here is what C and Objective-C do:

Converting an integer to bool gives 0 if the integer was 0, 1 if the integer was not 0.

Otherwise, if the original value can be represented in the new type, the value is unchanged.

Otherwise, when converting a positive number to an unsigned type that is too small, excess bits are thrown away.

Otherwise, when converting a negative number to an unsigned type, repeatedly add 2^N (where N is the number of bits in the new type) until the value is in the new type's range.

Otherwise, we are converting to a signed type with fewer bits, or from an unsigned to a signed type with the same number of bits. The C standard says the result is implementation-defined (that is, check the compiler documentation) or that a signal is raised. OS X and iOS convert as if to an unsigned type with the same number of bits, then interpret the result as a signed integer.
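A small standalone C example of those rules (the results shown assume 64-bit OS X):

Code:
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool flag = 42;                          // nonzero integer -> 1
    unsigned int chopped = 0x1FFFFFFFFULL;   // too wide for 32 bits: excess bits dropped -> 0xFFFFFFFF
    unsigned int wrapped = -1;               // negative -> unsigned: add 2^32 -> 0xFFFFFFFF
    int reinterpreted = 0xFFFFFFFFu;         // unsigned -> signed, out of range: implementation defined (-1 here)
    printf("%d %x %x %d\n", flag, chopped, wrapped, reinterpreted);
    return 0;
}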
 
That test code does not produce that output. Please post the actual code used to generate the messages from the original post.

It does for me. Screenshot attached as proof (I had to change the code back to the version without the cast). relativePath is just an NSString. The code is being run on Mavericks 10.9.0 with Xcode 5.0.1. Neither is a beta version.

My computer is a 2007 iMac (the first aluminum model).
 

Attachments: Screen Shot 2013-11-08 at 12.57.29 PM.png
See the discussion here:
http://stackoverflow.com/questions/19178109/xctassertequal-error-3-is-not-equal-to-3

To summarize it, the assertion macro expands to code that encodes the scalars into NSValues, which are then compared using isEqualToValue:. Since NSValue encodes the type of the value in the object, signed and unsigned integers produce different NSValues. The resulting comparison then differs from C's rules about signed and unsigned type promotions.

A solution is also summarized at the link above: XCTAssertTrue has a simpler macro expansion.

Finally, I suggest filing a bug with Apple about this. In my opinion, the assertions should match C's rules for scalars, not Cocoa's rules for NSValues, although I admit there's room for debate on this.
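For example, something along these lines should sidestep the NSValue machinery entirely (an untested sketch, reusing the first assertion from above):

Code:
// XCTAssertTrue takes a plain C expression, so the comparison follows
// C's usual promotion rules instead of NSValue's type-sensitive equality.
XCTAssertTrue([testItem extensionDotLocation] == 1, @"Failed to find dot in a.a");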
 

O.O

I think you're laying the blame in the wrong spot! NSValue can't compare values worth crap if 1 is different from +1!

If it's going to insist on saying they aren't the same, it ought to at least have NSValue print out the type as part of its description rather than just the value.
 
O.O

I think you're laying the blame in the wrong spot! NSValue can't compare values worth crap if 1 is different from +1!

If you haven't yet, you should look at the code provided in the link. In particular, look at how the NSValues are constructed.

Whether 1 equals +1 or not depends on the signed/unsigned type of the value. Again, this is a side-effect of the mechanism used for performing the comparison. A fair case can be made that NSValue's behavior is reasonable and predictable, when the class is used properly. It's questionable whether this particular situation is a proper use.
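As far as I understand it, the comparison boils down to something like this (illustrative only, not the macro's exact expansion):

Code:
NSUInteger expected = 1;
int literal = 1;
NSValue *lhs = [NSValue valueWithBytes:&expected objCType:@encode(NSUInteger)];
NSValue *rhs = [NSValue valueWithBytes:&literal objCType:@encode(int)];
// The encoded types (and widths) differ, so isEqualToValue: reports NO
// even though both represent the number 1.
BOOL same = [lhs isEqualToValue:rhs];  // NO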

I'm not sure where you think I'm laying the blame, since I agree that the current behavior goes against C's rules for comparing scalars. The values provided as args to the macro undergo a non-obvious conversion to NSValues. Why would an ordinary user of the macro be expected to know about this conversion?

I can't think of a good reason for an ordinary user to be expected to know the macro's internals; indeed, it violates encapsulation principles. Nor can I think of a good reason to deviate from C's rules for evaluating scalars, when scalars are given as args. I think the macro is poorly implemented, because it has non-obvious results that go against the simplest interpretation of its args (i.e. as C scalars).

If blame is to be assigned, I think it lies squarely in the laps of whoever at Apple decided to use NSValues instead of scalars. I don't blame NSValue for being misused in this situation ("misused" being my opinion). I think its current behavior is justifiable. However, I think its use here is unjustified.

In short, I think the implementers made a mistake by using NSValue.


If it's going to insist on saying they aren't the same, it ought to at least have NSValue print out the type as part of its description rather than just the value.
That seems like a good idea, especially since the type of the value is contributing to the assertion's failure.

Be sure to include that when you file a bug report against XCTAssertEqual and its unexpected failure mode.
 