"The metre or meter (American spelling), (SI unit symbol: m), is the fundamental unit of length (SI dimension symbol: L) in the International System of Units (SI), which is maintained by the BIPM.[1] Originally intended to be one ten-millionth of the distance from the Earth's equator to the North Pole (at sea level), its definition has been periodically refined to reflect growing knowledge of metrology. Since 1983, it has been defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second."
If you cannot measure the accuracy of your measurements or your calculations, you do not have a tool you can use for science.
I know that roughly ten loops of taking a square root exceed the accuracy of my cheap calculator. I know how to certify that my meter stick is one meter.
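For concreteness, one common version of that calculator test takes repeated square roots and then squares the result back the same number of times; that round-trip reading is my assumption, and the loop count and starting value below are arbitrary. A quick Python sketch:

```python
import math

# Round-trip square-root test: take repeated square roots, then square the
# result back the same number of times, and see how far it drifts from the
# starting value. (Loop count and starting value are arbitrary choices.)
x0 = 2.0
x = x0
for _ in range(10):
    x = math.sqrt(x)
for _ in range(10):
    x = x * x

print(f"start: {x0}  round trip: {x!r}  error: {abs(x - x0):.3e}")
```

In double precision the drift is tiny but nonzero; on a 10-digit calculator the same loop makes the error visible on the display.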
If I have no way to measure the precision and accuracy of my computer, I do not know what they are. I do know that changing hardware changes the accuracy, but there are no standard benchmarks to measure the changes.
You have no way to confirm that a machine you are using is accurate or defective. Intel proved that with the Pentium FDIV bug in 1994.
It is not a problem for our needs at present, but if you start to base science and engineering on tools that have no known accuracy, only guesses, you will run into problems.
I understand the precision issues of simple CPU arithmetic, but not of sin, cos, tan, and other functions. I understand the difference ECC RAM would make to the accuracy, and I understand how IEEE standardized the handling of rounding errors, but we still do not have the ability to measure the accuracy and precision of computers.
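One way to put a number on the error of a library sin() is to compare it against a higher-precision reference. A minimal Python sketch, assuming the third-party mpmath package is available (any extended-precision library would do the same job; the sample range and count are arbitrary):

```python
import math
import random

import mpmath  # third-party; used only as a high-precision reference

mpmath.mp.dps = 50  # 50 decimal digits for the reference values
random.seed(0)

worst_ulp = 0.0
for _ in range(10_000):
    x = random.uniform(-10.0, 10.0)
    approx = math.sin(x)                        # double-precision library result
    ref = mpmath.sin(mpmath.mpf(x))             # high-precision reference
    err = float(abs(mpmath.mpf(approx) - ref))  # absolute error
    # express the error in units of the last place (math.ulp needs Python 3.9+)
    worst_ulp = max(worst_ulp, err / math.ulp(approx))

print(f"worst observed sin() error in this sample: {worst_ulp:.2f} ulp")
```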
This is like the modern youth at the cash register who does not know how to count out change without the computer figuring it out for him.
The only argument is that the errors are more complex than most people can understand.
Okay, if you are talking about measurement uncertainties, then you'll have to agree that averaging is the best way to reduce errors. Precision works the same way, as I have pointed out.
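To put a rough number on the averaging argument, here is a small sketch; the "true value", the noise level, and the repeat counts are all made up for illustration:

```python
import random
import statistics

# The spread of the mean of N noisy "measurements" shrinks roughly
# like 1/sqrt(N): averaging buys precision at the cost of time.
random.seed(1)
true_value = 1.000    # the quantity being "measured" (arbitrary)
noise_sigma = 0.010   # per-measurement error (arbitrary)

for n in (1, 100, 10_000):
    means = [
        statistics.fmean(random.gauss(true_value, noise_sigma) for _ in range(n))
        for _ in range(200)  # repeat the whole experiment 200 times
    ]
    print(f"N={n:6d}  spread of the averaged result: {statistics.stdev(means):.5f}")
```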
You can always code (for physical systems) a meaningful highest resolution that can be traded for more computational time. It is the same process, in spirit, as taking a meter stick and measuring, say, a distance, but doing it 100 times with 100 meter sticks to get a good, reliable result on that measurement through averaging. It takes more time, but your result has a precision beyond what any single meter stick can give.

Another example is dithering. In that case, you have, say, 16 bits of precision but want to retain 17 bits of data. You add noise smaller than one bit to the signal, so that a signal which would otherwise sit below the smallest bit sometimes becomes big enough to be coded. It doesn't happen on every sample, because the added noise is random, but on average, if you look at the signal over time, there appear to be 17 bits of data, because the signals smaller than the 16-bit step are still present. This situation is akin to holding your hand in front of your face so that it blocks part of your view, then shaking it rapidly left and right so that, over time, you see the whole picture of what is in front of you.
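A sketch of that dithering effect: a constant signal smaller than one quantization step averages to zero with plain rounding, but averages back to its true value when a little random noise is added before quantizing. The step size, signal level, and sample count below are arbitrary:

```python
import random

# Dithering: a constant signal below one quantizer step ("1 LSB") is lost
# by plain rounding, but survives on average if sub-LSB random noise is
# added before quantizing.
random.seed(2)
step = 1.0           # quantizer step, i.e. 1 LSB
signal = 0.3 * step  # a signal smaller than the smallest step
n = 100_000

plain = [round(signal / step) * step for _ in range(n)]
dithered = [
    round((signal + random.uniform(-0.5 * step, 0.5 * step)) / step) * step
    for _ in range(n)
]

print("average without dither:", sum(plain) / n)     # stuck at 0.0
print("average with dither:   ", sum(dithered) / n)  # close to 0.3
```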
Most physical simulations are run this way. One single simulation isn't usually helpful for understanding a phenomenon, unless the simulation is averaged a priori, as when comparing lattice-gas fluid dynamics to lattice Boltzmann methods. You run many, many simulations to get an idea of what the final results look like, in a probabilistic way, whether this is for astro-body trajectories or fracture failures or whatever topic. The point is, your precision is not a limit to the reliability of these kinds of results.
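As a toy illustration of that ensemble approach, here is a sketch that runs the same trivial "simulation" (a projectile range formula) many times with slightly perturbed inputs and then looks at the results statistically; the numbers and the assumed uncertainty are all made up:

```python
import math
import random
import statistics

# Ensemble of runs: perturb the inputs, run many times, and report the
# distribution of outcomes rather than trusting any single run.
random.seed(3)
g = 9.81           # m/s^2
v0 = 30.0          # nominal launch speed, m/s (arbitrary)
angle_deg = 45.0   # nominal launch angle, degrees (arbitrary)
angle_sigma = 0.5  # assumed uncertainty in the angle, degrees

ranges = []
for _ in range(5_000):
    a = math.radians(random.gauss(angle_deg, angle_sigma))
    ranges.append(v0 ** 2 * math.sin(2 * a) / g)  # range for this one run

print(f"mean range: {statistics.fmean(ranges):.2f} m")
print(f"spread:     {statistics.stdev(ranges):.3f} m")
```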
Beyond this, most processors and systems are certified against floating-point and integer arithmetic standards that they must comply with (IEEE 754, for example). This is kind of like the SI system of measurement standards that everyone uses. If you have a processor certified to one of these standards, you can be sure that it is accurate to its precision limit, just as a meter stick is accurate to the smallest notch on it.

A good method by which most scientific and engineering simulations are validated, for both the code and the hardware, is running a standardized suite of comparison tests. In every application there are canonical scenarios for which there is an accepted answer from an analytical approach (solving the equation exactly, for example). If running such a scenario does not reproduce the accepted results, then the code or the hardware may be an issue to be rectified. You can't be sure that your own code, or the underlying hardware, is okay until you have validated against these types of scenarios. This covers all problems down to the hardware precision. Therefore, as long as your machine plus code reproduces known behavior to within the hardware precision limit, or your application's precision limit, you can be confident in its results when you use the code to simulate other scenarios.
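As a toy example of that kind of validation, here is a sketch that integrates a simple harmonic oscillator numerically and checks it against the exact analytic solution; the integrator, step size, and tolerance are arbitrary choices, not any particular code's test suite:

```python
import math

# Canonical test case: x'' = -x with x(0)=1, v(0)=0 has the exact solution
# x(t) = cos(t).  If the numerical result drifts outside the expected
# tolerance, suspect the code (or, much more rarely, the hardware).
def simulate(dt: float, t_end: float) -> float:
    x, v = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        # semi-implicit (symplectic) Euler step
        v -= x * dt
        x += v * dt
    return x

t_end = 2 * math.pi                 # one full period
numerical = simulate(dt=1e-4, t_end=t_end)
analytic = math.cos(t_end)          # exact answer: back to x = 1
error = abs(numerical - analytic)

print(f"numerical: {numerical:.8f}  analytic: {analytic:.8f}  error: {error:.2e}")
assert error < 1e-3, "failed the canonical test case"
```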
Measurement uncertainties are central to the smallest physical phenomena of this universe - quantum effects. In that regime, we can't possibly know with absolute certainty both the position and the velocity of a quantum particle, since our attempts to measure the particle change its properties: our "meter stick" is too massive compared to the size of the particles. This is a fundamental limitation, but it is not otherwise a problem.