Hi everyone. Total noob question... especially directed at those who have read and worked through "Programming in Objective-C" by Stephen Kochan. I'm stuck on an example in the book (chapter 6, the implementation of a Calculator class). I'll spare you the entire program unless you ask for it, in which case I can send it directly. Here's the skinny of my problem:
The results I'm getting are nowhere near what the book shows. Through troubleshooting I've narrowed it down to whether the variables in the program are declared as float or double. For instance:
Declaring "value1" and "value2" as double (as the book does) gives me the wrong result:
...
double value1, value2;
I get the user input of value1 and value2 (along with the arithmetic operator) like so:
scanf ("%1f %c %1f", &value1, &operator, &value2);
Adding 1+1 (the values of value1 and value2) gives me a result of .02. Last I checked, that's wrong. Adding 2+2 gives me 4.00... OK, that's great. But adding 2+3 gives me 34.00. And get this: adding 9+9 gives me 524288.25. WHAT ON EARTH???
By the way, in case you haven't figured it out, the printf call looks something like this:
printf ("The result is %.2f\n", [deskCalc accumulator]);
OK, for kicks I decided to find out what would happen by declaring "value1" and "value2" as float. That seems to give me the right result, every time:
float value1, value2;
Adding 1+1 gives me 2.00 (perfect). Adding 2+2 gives me 4.00 (perfect). Adding 2+3 gives me 5.00 (again perfect). And adding 9+9 gives me 18.00 (that's four in a row). So why on earth should changing the type from double to float make any difference? The book uses double, so I should be able to as well, yes?
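And just so it's clear what I mean by "changing it to float," here's the same chunk from the float version; the declaration is the only line that differs from the program above:

float value1, value2;    // the one and only change
char operator;

scanf ("%1f %c %1f", &value1, &operator, &value2);

[deskCalc setAccumulator: value1];
if (operator == '+')
    [deskCalc add: value2];

printf ("The result is %.2f\n", [deskCalc accumulator]);

(All the tests above are single-digit inputs, in case that matters.)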
Again, go easy on me. I've never had any programming experience beyond simple Access and Excel macros... and some Adobe After Effects expressions, so I'm sure it's a simple answer. If you need me to post the full code somewhere, let me know how. Finally, if any of you know of a discussion board specifically for readers of the "Programming in Objective-C" book I mentioned, that would be awesome too. Thanks everyone in advance!