Funny how people selectively preach dorkbench when M1 does well but downplay a more relevant real-world workload when M1 doesn't. Chess has been a relevant workload and benchmark from IBM's Deep Blue to DeepMind's AlphaZero, which have toppled the top human grandmasters.
The OP's results are only relevant if you intend to use some version of Stockfish on the M1 (we don't even know which version, as the OP has not bothered to give details).
Do you not know that Geekbench and other benchmark suites consist of "real-world workloads", which include chess (in SPEC)? There are simply many of them, which makes these tools more relevant than a single program like Stockfish or Cinebench for evaluating CPU performance.
What's more, benchmark suites use code that is designed to be as platform-agnostic as possible. It is no coincidence that SPEC, Geekbench, and Novabench results are in broad agreement with respect to relative CPU scores.
Stockfish has never been shown to be a relevant tool for comparing CPU performance, especially across different architectures; it isn't designed for that. The same can be said of Cinebench, BTW. It is unclear whether the level of optimisation is the same for x86 and ARM (the first ARM build is only months old).
You can find programs that run far slower on the M1 than on x86 CPUs. These are usually open-source programs with a long history of x86 optimisation that have only recently been ported to ARM with minimal or no optimisation, just for compatibility.
In fact, some programs run much slower as native ARM binaries than under Rosetta 2, which shows how much the ported code lacks ARM optimisations (no SIMD code), while the x86 version has optimisations that Rosetta 2 can translate and take advantage of.