"The metre or meter (American spelling), (SI unit symbol: m), is the fundamental unit of length (SI dimension symbol: L) in the International System of Units (SI), which is maintained by the BIPM.[1] Originally intended to be one ten-millionth of the distance from the Earth's equator to the North Pole (at sea level), its definition has been periodically refined to reflect growing knowledge of metrology. Since 1983, it has been defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second."

If you cannot measure the accuracy of your measurements or your calculations, you do not have a tool you can use for science.

I know that roughly 10 loops of taking a square root exceeds the accuracy of my cheap calculator. I know how to certify that my meter stick is one meter.
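For anyone who wants to try the equivalent test on a computer rather than a pocket calculator, here is a minimal sketch (my own illustration in Python, assuming ordinary double-precision floats): take a number, apply the square root repeatedly, then square the result the same number of times. In exact arithmetic you get the original number back; in floating point you don't quite, and the residue grows with the loop count.

```python
import math

# Repeated square-root test: sqrt() N times, then square N times.
# Exact arithmetic would return the original value; the rounding at
# each step leaves a small residue that grows with the loop count.
def sqrt_loop_error(x, loops):
    y = x
    for _ in range(loops):
        y = math.sqrt(y)
    for _ in range(loops):
        y = y * y
    return abs(y - x)

for loops in (10, 20, 40):
    print(loops, sqrt_loop_error(2.0, loops))
```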

If I have no way to measure the precision and accuracy of my computer, I do not know what they are. I do know that changing hardware changes the accuracy, but there are no standard benchmarks to measure the changes.

You have no way to confirm whether a machine you are using is accurate or defective. Intel proved that with the Pentium FDIV bug in 1994.

It is not a problem for our needs at present, but if you start to base science and engineering on tools that have no known accuracy, only guesses, you will run into problems.

I understand the precision issues of simple CPU math, but not of sin, cos, tan, and other functions. I understand the difference ECC RAM would make to the accuracy, and I understand how IEEE fixed the rounding errors in a standard way, but we still do not have the ability to measure the accuracy and precision of computers.
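As a concrete illustration of what the standard does and does not pin down, here is a small sketch (my own example in Python, assuming IEEE 754 binary64, which is what essentially all current hardware implements): basic arithmetic and sqrt must be correctly rounded, while sin/cos/tan are usually accurate to within a few ulps but are not required to be correctly rounded, which is one reason different math libraries can disagree in the last digits.

```python
import math
import sys

# The precision limits of IEEE 754 binary64 can be read off directly.
print("machine epsilon:", sys.float_info.epsilon)  # ~2.22e-16
print("ulp at 1.0:     ", math.ulp(1.0))           # spacing between floats near 1.0

# +, -, *, / and sqrt must be correctly rounded (error <= half an ulp per
# operation), which is why 0.1 + 0.2 differs from 0.3 by a bounded amount.
s = 0.1 + 0.2
print(s == 0.3, s - 0.3)                           # False, ~5.55e-17

# sin/cos/tan come from the math library; their accuracy (typically a few
# ulps) is not mandated the same way, so it varies between libraries.
print(math.sin(math.pi))                           # ~1.22e-16, not exactly 0
```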

This is like the modern youth at the cash register who does not know how to count out change without the computer figuring it out for him.
The only argument is that the errors are more complex than most people can understand.

Okay, if you are talking about measurement uncertainties, then you'll have to agree that averaging is the best way to reduce errors. Precision works the same way, as I point out below.

You can always code (for physical systems) a meaningful highest resolution that can be traded for more computational time. It is the same process in spirit as taking a meter stick and measuring, say, a distance, but doing it 100 times with 100 meter sticks to get a good, reliable result through averaging. It takes more time, but your result has a precision beyond what any single meter stick can give.

Another example is dithering. In that case, you have, say, 16 bits of precision but want to retain 17 bits of data. You add noise of less than 1 bit to the signal, so that a signal which would originally have fallen below the smallest bit sometimes becomes big enough to be coded. It doesn't happen on every sample, because the added noise is random, so on average, if you look at the signal over time, there appear to be 17 bits of data, because the smaller-than-16-bit signals are present. This is akin to holding your hand in front of your face, which blocks some of your view, then shaking your hand rapidly left and right so that, over time, you see a picture of what's in front of you.
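Here is a toy numerical version of that dithering idea (my own illustration with made-up numbers): a constant signal of 0.3 LSB is lost by a plain quantizer, but with roughly 1 LSB of random noise added before quantizing, the average over many samples recovers it.

```python
import random

def quantize(x):
    # a 1-LSB quantizer: anything below the rounding threshold is lost
    return round(x)

signal = 0.3      # a sub-LSB signal we would like to retain
n = 100_000

# Without dither, every sample rounds to 0 and the information is gone.
plain = sum(quantize(signal) for _ in range(n)) / n

# With ~1 LSB of random dither, some samples cross the threshold, and the
# long-run average converges back toward the true sub-LSB value.
dithered = sum(quantize(signal + random.uniform(-0.5, 0.5)) for _ in range(n)) / n

print(plain)      # 0.0
print(dithered)   # ~0.3
```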

Most physical simulations are run this way. One single simulation isn't usually helpful for understanding a phenomenon, unless the simulation is averaged a priori, as when comparing lattice-gas fluid dynamics to lattice Boltzmann. You run many, many simulations to get an idea of what the final results look like, in a probabilistic way, whether this is for astro-body trajectories or fracture failures or whatever topic. The point is, your precision is not a limit to the reliability of these kinds of results.
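A minimal sketch of that ensemble idea (my own toy example, not any particular solver): each run accumulates a small random error per step, so no single run is authoritative, but the mean and spread over many runs tell you what to trust.

```python
import random
import statistics

def one_run(steps=1000):
    # a made-up "simulation": the ideal result is exactly `steps`,
    # but each step picks up a small random error
    x = 0.0
    for _ in range(steps):
        x += 1.0 + random.gauss(0.0, 0.01)
    return x

runs = [one_run() for _ in range(500)]
print(statistics.mean(runs))    # close to the ideal answer of 1000
print(statistics.stdev(runs))   # the spread quantifies how far any one run can be off
```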

Beyond this, most processors and systems are certified against some floating-point and integer arithmetic standard that they must comply with. This is kind of like the SI system of measurement standards that everyone uses. If you have a processor certified to one of these standards, you can be sure that it is accurate to its precision limit, just as a meter stick is accurate to the smallest notch on it.

A good method by which most scientific and engineering simulations are validated, for both the coding and the hardware, is running comparisons against a standardized suite of tests. In every application there are canonical example scenarios for which there is an accepted answer from an analytical approach (solving the equations exactly, for example). If running such a scenario does not reproduce the accepted results, then the code or the hardware may be an issue to be rectified. You can't be sure your own code, or the underlying hardware, is okay until you have validation against these types of scenarios. This covers all problems down to the hardware precision. Therefore, as long as your machine plus code reproduces known behavior to within the hardware precision limit or your application precision limit, you can be confident in its results when you use the code to simulate other scenarios.
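As a sketch of that validate-against-a-canonical-scenario idea (my own toy example, assuming a test problem with a known analytical answer): integrate dy/dt = -y with y(0) = 1 and compare against the exact solution exp(-t). If the error doesn't shrink as expected when the step size shrinks, the code (or, in principle, the hardware) is suspect.

```python
import math

def simulate(dt, t_end=1.0):
    # forward Euler integration of dy/dt = -y, standing in for "your code"
    steps = int(round(t_end / dt))
    y = 1.0
    for _ in range(steps):
        y += dt * (-y)
    return y

exact = math.exp(-1.0)            # the accepted analytical answer
for dt in (0.1, 0.01, 0.001):
    # error should shrink roughly in proportion to dt for this method
    print(dt, abs(simulate(dt) - exact))
```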

Measurement uncertainties are central to the smallest physical phenomena of this universe: quantum effects. In that scenario, we can't possibly know the position or velocity of quantum particles with absolute certainty, since our attempts to measure a particle change its properties; our "meter stick" is too massive compared to the size of the particles. This is a fundamental limitation but is not otherwise a problem.
 
if you want to simulate a meteor's path over the next hundred years, you could comfortably do so to an accurate, or at least accurate enough, degree.

if you want to simulate the path for the next 10 billion years, minuscule errors may begin to compound over that amount of time, and at the end you may still be left wondering if the rock is going to collide with the earth in 10 billion years.
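a back-of-the-envelope sketch of that compounding (made-up numbers of my own, and assuming errors simply add up step by step, which actually understates chaotic systems where they grow much faster):

```python
# assume a relative error of ~1e-15 per step and one step per simulated day
per_step_error = 1e-15

steps_100_years = 365 * 100
steps_10_billion_years = 365 * 10**10

print(per_step_error * steps_100_years)          # ~3.7e-11 -> negligible
print(per_step_error * steps_10_billion_years)   # ~3.7e-3  -> no longer negligible
```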

yeah, but... if we have 10 billion years, i think advancements in precision during that time would more than make up for any errors and i would give us as little as 1000 years to do something about it.

(snip)

(snip)

I know that roughly 10 loops of taking a square root exceeds the accuracy of my cheap calculator. (snip)

(snip)

You have no way to confirm whether a machine you are using is accurate or defective. Intel proved that with the Pentium FDIV bug in 1994.

It is not a problem for our needs at present, but if you start to base science and engineering on tools that have no known accuracy, only guesses, you will run into problems.

(snip)

(snip)
The only argument is that the errors are more complex than most people can understand.

a) even if you had a calculator that went to 1,000,000,000,000,000,000,000 (add zeros ad nauseam), there would always be the possibility of more accuracy existing. Where is the line?

b) machines will always be some level of defective (no matter how minute) as they are physical things and not ideas. math is an idea. 1 + 1 is always 2 with 100% accuracy. but applying math to physical things, say apples for example, will produce variable results. 1 apple + 1 apple DOES equal 2 apples, but those apples can vary in size, shape, weight, etc. across an infinite number of instances.

c) though those "guesses" are extremely close to precise at this point in time. when they become not close enough, we (as a species) will dive further into accuracy and precision to solve encountered problems.

d) yet, errors, or rather, imperfections will always exist and we will continue to do our best to refine them further to a point where they don't matter until they do again. rinse, repeat.

i would love to know what the OP requires so much precision for. and also, what would be your ideal benchmark for measuring such accuracy? if it was software, wouldn't that software be limited by the computer's own accuracy limits? or would it be something else?

i would just like to say i'm enthralled with this thread and love the genuine discussion occurring. miles beyond typical "when's the new nmp coming" and "which graphics card to get" threads.
 
nuclear weapons simulation, weather prediction, atmospheric research?



i kinda think the problem with simulations isn't the computer's precision limits.. more like we're feeding it bad data and/or a lot of inaccurate assumptions.

like- i don't think the hurricane center people are thinking "we fed it the best code but the reason we evacuated galveston instead of corpus christi is IEEE 754's fault"
 