What if your datatype isn't an int?
While a compiler may optimize a power-of-two multiply or divide into a shift for you, a bitshift on the CPU is much cheaper than an actual multiply or divide instruction.
If the right-hand operand is a variable, the compiler has to emit a real multiply or divide, or do a lookup, calculation, etc. to figure out what to multiply or divide by. If you already know how many bits you want to shift, why use a different operation? Multiplying or dividing by powers of 2 is not more fun than a bit shift.
Sometimes your intentions in code will be clearer and easier to read if you use a bitshift. For example, if you want to test that bit 14 is high, (x & (1 << 14)) != 0 is easier for me to decipher than (x & 16384) != 0. (Note the outer parentheses: in C, != binds tighter than &, so writing x & (1 << 14) != 0 does not do what it looks like.)
Nowadays it's not that common to store statuses in int or short datatypes; you just use a BOOL or some other typedef, accept the wasted bits, and be done with it. That has not always been the case, though, so if you are storing 16 discrete values in one short, it is pretty critical that you know how to shift bits around, test them, and so on.
Again, not common, but if you are storing four 8-bit values in a 32-bit int, shifting and masking to pull out the individual values is pretty commonplace.
Rarely is something there that is wholly useless. Uses might not readily present themselves to you, and they may be outdated, but they certainly exist.
-Lee