For a random reason, I have to split a 2-byte number, input as hex, into its lower 8 bits and upper 8 bits and assign those values to two different char variables.
If I have the original unsigned short int "key", I can get the lower and upper bytes respectively with a simple mask and bit-shift. Thus, the following yields the correct output:
Code:
cout << (key & 0xFF) << endl;         // low byte
cout << ((key >> 8) & 0xFF) << endl;  // high byte
Given input of 64569 (1111110000111001), it outputs 57 (00111001) and 252 (11111100) - which are clearly the lower and upper bytes respectively.
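For reference, a minimal complete program along these lines (with key hard-coded to the example value rather than read in as hex) reproduces that output:

Code:
#include <iostream>
using namespace std;

int main() {
    unsigned short key = 64569;           // 0xFC39 = 1111110000111001

    cout << (key & 0xFF) << endl;         // low byte:  57 (00111001)
    cout << ((key >> 8) & 0xFF) << endl;  // high byte: 252 (11111100)
    return 0;
}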
Now, given foo of type char,
foo = (key & 0xFF) prints as 9, as does foo = (char)(key & 0xFF), instead of the 57 I'm expecting.
I'm not sure if this is relevant, but 9 has a binary value of 1001, which is the lowest 4 bits of the input. That said, a char is 8 bits, right?
What exactly am I missing here?
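In case it helps, here's a minimal snippet, again with key hard-coded, that shows the char behaviour I'm asking about:

Code:
#include <iostream>
using namespace std;

int main() {
    unsigned short key = 64569;       // 0xFC39
    char foo = (char)(key & 0xFF);    // low byte, numerically 57

    cout << foo << endl;              // this prints 9, not the 57 I expect
    cout << (int)foo << endl;         // this prints 57
    return 0;
}

The (int)foo line was just me poking at it, but it makes me think the value is being stored fine and the difference is only in how it gets printed.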