CUDA cores are slower than RT cores, so people don't use them.
I think you misunderstand what a "CUDA core" and an "RT core" are. A "CUDA core" is a GPU compute shader core: hardware that executes shader programs, whether for graphics or for general-purpose computation. An "RT core" is an auxiliary hardware unit that accelerates one specific task: finding which triangle is hit by a ray. RT cores cannot run programs. All they can do is take a ray and a set of triangles and report "this triangle is hit", but they do it really fast, much faster than if you wrote a program for it using the general-purpose shader cores. You still need a program that generates the rays and decides what to do with the hit information (e.g. shading the pixel based on the light bounces etc.). So an RT-accelerated GPU program runs on "CUDA cores" and uses "RT cores" to make the raytracing part fast.
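To make the division of labor concrete, here is a minimal sketch of the work an RT core performs in fixed-function hardware, written instead as a plain CUDA kernel running on the shader cores. The kernel name, the flat triangle arrays, and the one-thread-per-triangle layout are all made up for illustration:

```cuda
#include <cuda_runtime.h>
#include <math.h>

struct Vec3 { float x, y, z; };

__device__ Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__device__ Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
__device__ float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One thread per triangle: test the ray against triangle i with the
// Möller–Trumbore algorithm and write the hit distance (infinity on a
// miss). An RT core performs this test in hardware.
__global__ void intersectAll(Vec3 orig, Vec3 dir,
                             const Vec3* v0, const Vec3* v1, const Vec3* v2,
                             int numTris, float* hitT)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numTris) return;

    Vec3 e1 = sub(v1[i], v0[i]);
    Vec3 e2 = sub(v2[i], v0[i]);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    hitT[i] = INFINITY;                  // default: miss
    if (fabsf(det) < 1e-8f) return;      // ray is parallel to the triangle

    float invDet = 1.0f / det;
    Vec3 t = sub(orig, v0[i]);
    float u = dot(t, p) * invDet;
    if (u < 0.0f || u > 1.0f) return;    // outside barycentric range

    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return;

    float tHit = dot(e2, q) * invDet;
    if (tHit > 1e-6f) hitT[i] = tHit;    // hit in front of the ray origin
}
```

A real renderer would also traverse a BVH rather than testing every triangle; RT cores do both the traversal and the intersection test in hardware, which is where the speedup comes from.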
What is the point of the OptiX renderer if the CUDA renderer also uses the RT hardware?
It's been a while since I last worked with CUDA, and I never touched OptiX, but from what I understand CUDA itself does not have any API to access the RT hardware. OptiX is basically a framework that exposes the RT hardware and uses CUDA to specify what to do with the RT results. It's a little confusing, but one (slightly naive and not entirely correct) way is to think of OptiX as CUDA with hardware raytracing.
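As a rough sketch of what that looks like on the device side in OptiX 7 (the Params layout and program names here are my own; treat this as illustrative, not a working renderer): the ray generation and hit/miss programs are ordinary CUDA-compiled code executing on the compute cores, and the optixTrace() call is the point where BVH traversal and triangle intersection are handed off to the RT hardware.

```cuda
#include <optix.h>

// Hypothetical launch parameters; the actual struct layout is defined
// by the application and filled in by host-side setup code.
struct Params {
    OptixTraversableHandle handle;   // acceleration structure (BVH)
    float3*                output;   // one value per pixel
    unsigned int           width;
};
extern "C" __constant__ Params params;

// Ray generation program: runs on the CUDA cores, one thread per pixel.
extern "C" __global__ void __raygen__rg()
{
    const uint3 idx = optixGetLaunchIndex();

    // Placeholder camera: every ray shoots straight down +Z.
    float3 origin    = make_float3(0.0f, 0.0f, -1.0f);
    float3 direction = make_float3(0.0f, 0.0f, 1.0f);

    unsigned int hit = 0;  // payload register, set by the hit/miss programs
    // Handoff to the RT cores: traverse the BVH, find the closest
    // intersected triangle, then invoke the matching program below.
    optixTrace(params.handle, origin, direction,
               0.0f, 1e16f,               // tmin, tmax
               0.0f,                      // ray time (for motion blur)
               OptixVisibilityMask(255),
               OPTIX_RAY_FLAG_NONE,
               0, 1, 0,                   // SBT offset/stride, miss index
               hit);

    params.output[idx.y * params.width + idx.x] =
        hit ? make_float3(1.0f, 1.0f, 1.0f)   // shade hits white...
            : make_float3(0.0f, 0.0f, 0.0f);  // ...and misses black
}

// Closest-hit program: ordinary code on the CUDA cores, invoked once
// the RT cores have reported the nearest hit.
extern "C" __global__ void __closesthit__ch()
{
    optixSetPayload_0(1u);
}

// Miss program: invoked when the ray hits nothing.
extern "C" __global__ void __miss__ms()
{
    optixSetPayload_0(0u);
}
```

Note that everything outside optixTrace() is still just CUDA code; what OptiX adds is the pipeline, the shader binding table, and the access to the RT units that plain CUDA kernels don't have.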