
Precision vs Accuracy

Let us make a few definitions:
precision
- the number of digits available to represent the mantissa.
accuracy
- the maximum error we introduce because we round the digits off to the available precision. This is half of the value of the least significant digit present.
For example, if we store the national debt as
     4.137e12

we can be off by as much as
     0.0005e12

   = 5.000e8

or $5 hundred million.

But the accuracy depends on the value of the exponent. For example, the number of people in this course is

     8.300e1

which is off by at most
     0.0005e1

   = 5.000e-3

or only 0.005 people.

In both of these cases we have 4 digits of precision, but vastly different accuracy in the representation.
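
The same effect shows up in standard double-precision floats: the precision is fixed at 53 significand bits (roughly 16 decimal digits), but the absolute accuracy scales with the exponent. A minimal sketch in Python (math.ulp requires Python 3.9 or later):

     import math

     # Both values carry the same 53 bits of precision, but the
     # spacing between adjacent floats, and hence the worst-case
     # rounding error of ulp/2, grows with the exponent.
     debt   = 4.137e12   # national debt, in dollars
     people = 8.300e1    # people in this course

     print(math.ulp(debt)   / 2)   # about 2.44e-4  dollars
     print(math.ulp(people) / 2)   # about 7.11e-15 people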

So we see that the limited number of digits affects the precision and accuracy with which we can store numbers in the computer. In addition, because of this limitation, performing arithmetic operations can affect the accuracy of the result.

For example, consider each of the basic operations (a sketch follows the list):

Division
Multiplication
Addition
Subtraction
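
Each of these operations can lose accuracy in its own way. A minimal sketch in Python, using ordinary double-precision floats, with one failure mode per operation:

     # Division: 1/3 has no finite binary expansion, so the
     # quotient must be rounded to the available precision.
     print(1.0 / 3.0)               # 0.3333333333333333

     # Multiplication: the exact product of two p-digit numbers
     # can need up to 2p digits, so low-order digits are rounded
     # away (0.1 itself is also rounded on input).
     print(0.1 * 0.1 == 0.01)       # False

     # Addition: when exponents differ greatly, the smaller
     # operand's digits are shifted out and lost entirely.
     print(1.0e16 + 1.0 == 1.0e16)  # True: the 1.0 is absorbed

     # Subtraction: subtracting nearly equal numbers cancels the
     # leading digits, exposing the rounding error in the inputs.
     print(1.0000001 - 1.0)         # about 1.000000000584e-07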

What if we do multiple arithmetic operations?
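
The rounding errors from individual operations can accumulate. A small Python illustration:

     # 0.1 is not exactly representable in binary, so each
     # addition contributes a tiny rounding error, and ten of
     # them accumulate into a visible discrepancy.
     total = 0.0
     for _ in range(10):
         total += 0.1
     print(total)         # 0.9999999999999999
     print(total == 1.0)  # False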

So we see that we cannot represent ALL numbers exactly with a finite number of bits in a floating point representation. We also see that, because of this limited precision, arithmetic operations on floating point numbers can introduce errors into our results.

However, all is not lost. There are techniques to iteratively compute more complicated functions, such as sin, cos, etc., and maintain a desired accuracy.

Consider an example.
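
For instance, sin(x) can be computed from its Taylor series, adding terms until the next term falls below the desired accuracy; because the series alternates and its terms eventually shrink, the first omitted term bounds the truncation error. A minimal sketch in Python (the function name and tolerance here are illustrative choices):

     import math

     def sin_taylor(x, tol=1e-10):
         # sin(x) = x - x^3/3! + x^5/5! - ...
         # Add terms until the next one drops below tol.
         term, total, n = x, 0.0, 1
         while abs(term) > tol:
             total += term
             # Each term is the previous one times -x^2/((n+1)(n+2)).
             term *= -x * x / ((n + 1) * (n + 2))
             n += 2
         return total

     print(sin_taylor(1.0))  # about 0.8414709848
     print(math.sin(1.0))    # 0.8414709848078965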

