They are kind of opposites. On a CISC processor, when a multi-cycle complex instruction was encountered, the CPU would resolve the arguments and hand them to an internal micro-program to handle the operation. Perhaps the foremost examples are the string-move and string-compare ops, which involve an arbitrary-length loop of memory operations and usually execute a tiny bit faster than equivalent program code would.
Micro-ops (µops) involve breaking a complex operation into its constituent parts and executing those parts individually, which is often faster than dispatching to a micro-program.
Microcode and µops aren't opposites at all. One µop is simply one microcode instruction. Every modern x86 implements those string instructions by decoding them to a loop composed of µops.
You know how Intel's high-performance cores had 4-wide decode for a very long time? One technicality is that only one of the four decoders can handle the entire x86 ISA. The other three can only decode 'simple' x86 instructions - that is, those which translate to a single µop. Or maybe it's at most two µops? I don't remember for sure, but regardless of the exact limit, the point is that only the one 'complex' decoder can generate long sequences of µops from a single x86 instruction. (IIRC, this is still true in Intel's new 6-wide decoder for high-performance cores; it's just 5+1 instead of 3+1.)
You might think that only having one complex decoder is a problem, but in practice the vast majority of x86 instructions executed by a CPU fall into the 'simple' category. Furthermore, complex things like the string instructions generate a µop loop which might execute hundreds of times. Nobody cares if decode temporarily bottlenecks to 1-wide to handle them.
People get very wrong ideas about how complex the average x86 instruction is. Most are quite simple. The encoding is awful and the register count is low, but if you kinda squint, the ISA almost looks like an extremely awkward RISC.
The final thing I'll say is that, counterintuitively, µops aren't actually all that micro. They're a decoded version of the instruction, fully prepped for easy handling in the execution backend. So, if the instruction encodes a constant in some funky format, that constant is decoded and expanded to full width. When the instruction references a register, the architectural register name is translated to the correct dynamic register name, which takes more bits, because register renaming means the physical register file or the ROB is much larger than the architectural register file. So on and so forth.
In fact, µops can easily be hundreds of bits wide! The in-memory version of an instruction has to be compact because it's desirable to use less RAM, but the in-core version can be as wide as it needs to be. The job of a decoder is to translate that compact form to the full set of information necessary for the execution backend to do its job.
And yes, that means RISC CPUs also have µops. They too have all sorts of tricks to encode things as compactly as possible. Not as many as x86, which is why their decoders are much simpler than x86 decoders, but still, at the end of the decoding process, you end up with more bits, and that's OK and expected. The main difference is that you don't tend to need large microcode ROMs with complex microprograms to handle the deep weirdness lurking in x86; a clean RISC design decodes nearly everything to single µops.