Is 0.2GHz more prone to be wrong? Not at all. Since you are not able to provide any proof that ARM can go beyond 3.3GHz, like 5.0GHz, I will not respond.
I'll play devil's advocate here, and say that could be partially correct.
ARM may very well create their designs with a maximum design frequency in mind, much like TDP (thermal design power), and I would be surprised if they did not provide guidelines to licensees. This could set limits on the maximum frequency of an implementation based on the physical limits of the fabrication - i.e. you can't take a small chip designed to run at 10W, run it at 100W, put a large heat-sink on it and hope you will be OK. Much in the same way, you can't put a Formula 1 engine in a compact car and expect the rest of the car to handle the power.
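To see why those physical limits bite so hard, here's a back-of-the-envelope sketch. CMOS dynamic power is roughly P = a*C*V^2*f, and pushing frequency up generally requires raising voltage too; if you assume (purely for illustration - real voltage/frequency curves are process-specific) that voltage rises linearly with frequency, power grows with roughly the cube of frequency. The baseline numbers (5W at 2GHz) are made up, not taken from any real chip:

```python
# Back-of-the-envelope CMOS dynamic power scaling.
# Assumption (illustrative only): voltage rises linearly with
# frequency, so dynamic power P = a*C*V^2*f scales as ~f^3.

def dynamic_power(freq_ghz: float, base_freq_ghz: float = 2.0,
                  base_power_w: float = 5.0) -> float:
    """Scale a hypothetical chip's dynamic power (5 W at 2 GHz)
    to a new frequency under the P ~ f^3 assumption."""
    scale = freq_ghz / base_freq_ghz
    return base_power_w * scale ** 3

for f in (2.0, 3.3, 5.0):
    print(f"{f:.1f} GHz -> ~{dynamic_power(f):.1f} W")
    # 2.0 GHz -> ~5.0 W, 3.3 GHz -> ~22.5 W, 5.0 GHz -> ~78.1 W
```

Under these toy assumptions, going from 2GHz to 5GHz costs roughly fifteen times the power, which is exactly the "10W chip at 100W" problem above.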
The argument is that ARM *could* theoretically decide to design a chip that runs at 5GHz, running the same Instruction Set Architecture. There are important differences between Instruction Set Architecture, physical architecture, and implementation/fabrication. They haven't (yet) chosen to do this because their value proposition is running well at very low power compared to the competition. The vast majority of their designs end up in low power devices like phones, and this is where they make their money.
ARM is increasingly penetrating into the server space, where, again, high frequency does not equate to optimal throughput. Lots of mid-speed cores handle most workloads better than a smaller number of high-speed ones, and pose fewer cooling challenges.
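The server trade-off can be sketched with the same cube-law power assumption as above. For a perfectly parallel workload under a fixed power budget, count how many cores fit in the budget at each frequency and multiply cores by clock. All the numbers here (80W budget, a hypothetical 5W-at-2GHz core, ideal parallel scaling) are illustrative assumptions, not measurements:

```python
# Toy comparison: many mid-speed cores vs. a few high-speed ones
# under a fixed power budget, assuming dynamic power ~ f^3 and a
# perfectly parallel workload. Numbers are illustrative only.

POWER_BUDGET_W = 80.0

def core_power(freq_ghz: float) -> float:
    # Hypothetical core: 5 W at 2 GHz, power scaling as f^3.
    return 5.0 * (freq_ghz / 2.0) ** 3

def aggregate_throughput(freq_ghz: float) -> float:
    # Aggregate throughput proxy: (cores that fit in budget) * clock.
    cores = int(POWER_BUDGET_W // core_power(freq_ghz))
    return cores * freq_ghz

for f in (2.0, 3.3, 5.0):
    print(f"{f:.1f} GHz: {aggregate_throughput(f):.1f} core-GHz "
          f"within {POWER_BUDGET_W:.0f} W")
    # 2.0 GHz: 32.0 core-GHz, 3.3 GHz: 9.9 core-GHz, 5.0 GHz: 5.0 core-GHz
```

Even in this crude model, sixteen 2GHz cores deliver several times the aggregate work of a single 5GHz core in the same power envelope, which is why throughput-oriented servers favour core count over clock.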
What we are saying is that the fundamental architecture (the Instruction Set Architecture) is not limited per se by frequency, any more than natural language is - you can speed it up or slow it down and still be intelligible.
Very high-frequency single-core execution might be useful in some cases, but I suspect only a few, which is why overall CPU design is moving away from this as a goal.