It's been probably 5 years since I heard the reasoning, and in the abstract I agree with you. Micro-ops are micro-ops, in theory. I wish I could speak to it more specifically, but my understanding was that the chip would essentially have to switch decode "modes" in order to support instructions that just aren't useful or efficient today.
To me it seems obvious that Apple is using the same methodology today in converting instructions to smaller micro-ops, but if it were as simple as letting the decoder support additional instruction sets, then why did Apple push so hard for the death of 32-bit apps? I'd personally guess it isn't just the decode blocks that turn RISC instructions into micro-ops: reducing what has to be supported may make the execution pipeline more efficient and the various execution engines simpler, and therefore faster. This is a space well outside of my wheelhouse.
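For what it's worth, here's a toy sketch in plain C (nothing to do with how Apple's or any real silicon is built, and the mode names and bit layouts are made up purely for illustration) of why a second instruction set isn't free even if everything ends up as micro-ops: the front end has to carry a mode flag and a second decode path, and that extra state follows every instruction down the pipe.

```c
/* Toy illustration only. The point: a core that supports two instruction
 * sets has to track an execution mode and keep a second decode path, even
 * if both paths produce the same micro-ops. Drop one mode and the whole
 * branch (and the mode state itself) goes away. Field layouts are invented. */

#include <stdint.h>
#include <stdio.h>

typedef enum { MODE_A32, MODE_A64 } exec_mode_t;          /* hypothetical mode flag */

typedef struct { int opcode; int dst, src1, src2; } micro_op_t;

/* Two hypothetical decoders: different encodings, same internal micro-op format. */
static micro_op_t decode_a32(uint32_t insn) {
    micro_op_t u = { insn >> 24, (insn >> 16) & 0xF, (insn >> 12) & 0xF, insn & 0xF };
    return u;
}

static micro_op_t decode_a64(uint32_t insn) {
    micro_op_t u = { insn >> 24, (insn >> 19) & 0x1F, (insn >> 14) & 0x1F, insn & 0x1F };
    return u;
}

/* The front end has to select a decode path per instruction (or keep two
 * decode pipelines side by side). Remove 32-bit support and this collapses
 * to a single call with no mode state at all. */
static micro_op_t decode(exec_mode_t mode, uint32_t insn) {
    return (mode == MODE_A32) ? decode_a32(insn) : decode_a64(insn);
}

int main(void) {
    micro_op_t u = decode(MODE_A64, 0x8B010002u);          /* arbitrary example word */
    printf("opcode=%d dst=%d src1=%d src2=%d\n", u.opcode, u.dst, u.src1, u.src2);
    return 0;
}
```

In real hardware that "if" isn't one branch, it's duplicated decode logic, extra cases in the register file and exception handling, and validation work for every combination, which is roughly the kind of cost I assume Apple wanted off the table.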