This is provably false. Even when there is a serious hardware problem, it often takes a particular sequence of events to trigger it, and software changes can make that sequence impossible.
You can leave now with that TL;DR, but I'll describe a particular case.
For example, in the mid-1980s one of Digital's new MicroVAX systems ran fine when running VMS, but would frequently crash with impossible errors while running UNIX. (Impossible in the sense that "you can't get here from there" things would happen.)
It was a hardware design error, and was easily fixed in software.
The issue was the memory controller. Solid-state DRAM loses its state over time, and a periodic memory refresh (https://en.wikipedia.org/wiki/Memory_refresh) is necessary to avoid losing the contents of memory.
The memory controller on those systems had a bug: the periodic refresh was skipped for chunks of RAM that had seen no activity for too long. The VMS systems assigned virtual pages to physical pages quite randomly, so it was very unlikely that a "chunk" (which consisted of quite a few pages) would go untouched for that long. The UNIX systems had less randomization, so idle chunks were more likely.
Therefore, the UNIX systems would have situations where a big chunk of RAM would revert to zeroes. Clearly, randomly zeroing big chunks of RAM was destabilizing.
The software fix for this hardware bug was simple. In one of the system's bookkeeping timer interrupts, a simple loop was inserted that read one byte (or probably one longword) from each chunk at an appropriate frequency. (This was done for both operating systems; even though UNIX was the one showing the problem, it could have hit VMS as well.)
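For the curious, here is a minimal C sketch of the idea. The chunk size, the names, and the way physical RAM is addressed are my assumptions for illustration; this is not the actual code:

    /* A minimal sketch of the workaround, not the actual patch.
       Assumptions: RAM is divided into fixed-size "chunks" tracked by the
       memory controller, the kernel can read all of physical memory through
       ram_base/ram_size, and this routine is hooked into an existing
       periodic timer interrupt. */

    #include <stddef.h>

    #define CHUNK_SIZE (64u * 1024u)          /* assumed chunk size */

    extern volatile unsigned char *ram_base;  /* start of physical RAM (assumed) */
    extern size_t ram_size;                    /* total RAM in bytes (assumed) */

    /* Called from the periodic bookkeeping timer interrupt. */
    void touch_all_chunks(void)
    {
        size_t off;
        unsigned char sink = 0;

        /* Read one byte from every chunk so no chunk can sit idle long
           enough for the buggy controller to skip its refresh.  Only the
           access matters; the value read is thrown away. */
        for (off = 0; off < ram_size; off += CHUNK_SIZE)
            sink ^= ram_base[off];

        (void)sink;
    }

Each pass touches one location per chunk, which is enough activity for the controller to keep refreshing it.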
Result - it was now impossible for any chunk of memory to go without an access long enough to miss its refresh.
Hardware problem - software fix.