
mmmdreg

macrumors 65816
Original poster
Apr 14, 2002
1,393
0
Sydney, Australia
For a random reason, I have to split up a 2-byte number, input as hex, into its lower 8 bits and upper 8 bits, and give those values to 2 different char variables.

If I have the original unsigned short int "key", I can get the lower and upper bytes respectively with simple bit-shift operations. Thus, the following yields the correct output:
Code:
cout << (key & 0xFF) << endl;
cout << ((key >> 8) & 0xFF) << endl;

Given input of 64569 (1111110000111001), it outputs 57 (00111001) and 252 (11111100) - which are clearly the lower and upper bytes respectively.

Now, given foo of type char,

foo = (key & 0xFF) yields 9, as does foo = (char)(key & 0xFF), instead of the 57 I'm expecting.

I'm not sure if this is relevant, but 9 has a binary value of 1001, which is the lowest 4 bits of the input. That said, a char is 8 bits, right?
What exactly am I missing here?
 

Gelfin

macrumors 68020
Sep 18, 2001
2,165
5
Denver, CO
I can't reproduce the problem you describe. (key & 0xFF) is 57 as expected. Are you sure you haven't overlooked a typo, and actually put 0xF instead of 0xFF?
 

laprej

macrumors regular
Oct 14, 2005
108
2
Troy, NY
Your code is behaving correctly, exactly as you programmed it. You ask for the character represented by (key & 0xFF), which in this case is 57, and 57 translated into its ASCII equivalent is the character '9'.

-Justin

 

lee1210

macrumors 68040
Jan 10, 2005
3,182
3
Dallas, TX
As a note, storing text as shorts is called Hollerith notation. In FORTRAN 66 there was no character type, so this was the means of storing text. There is even a notation, nH, where n is a number: after the H you type that number of characters, and the result is the series of shorts.

This is the path to broken homes and shattered dreams. I would recommend avoiding it if possible.

Also, the short doesn't have to be unsigned. You just want the tasty bits inside.

-Lee
 

Ti_Poussin

macrumors regular
May 6, 2005
210
0
Indeed, you are printing ASCII code 57, which is '9'. If you want to print the value of the short itself, you have to use something like printf or sprintf, just like this:

printf( "Value of the short = %u", myShort );

To see the difference, use %c and you will see the 9 appear. Hope it helps.
 

mmmdreg

macrumors 65816
Original poster
Apr 14, 2002
1,393
0
Sydney, Australia
Your code is behaving correctly as you programmed it. You ask for the character represented by (key & 0xFF), which in this case gives 57. Translated into its ASCII-coded equivalent yields the character '9'.

-Justin

I must've been tired. What a silly oversight. Thanks so much! =/
 

gnasher729

Suspended
Nov 25, 2005
17,980
5,566
The C++ "<<" operator for stream output formats the value according to its type.

For numeric types like int, short, and long, it takes the value and displays it as a decimal number, so 57 is displayed as the two characters '5' and '7', and 65 as the two characters '6' and '5'.

For character types, it takes the value, interprets it as an ASCII code, and displays it as a single character. Since 57 is the ASCII code for '9', (char) 57 is displayed as the single character '9', and since 65 is the ASCII code for 'A', (char) 65 is displayed as the single character 'A'.

(Which is a pain in the *** if you have an array of a gazillion very small integers, say one billion integers each in the range from 0 to 50. To save space, you want to store them as char mydata[1000000000], but then you have to watch out every time you try to display them using <<).
 