I was just messing with IronPython and noticed that there is a Python fixed-point library. Fixed-point maths is at odds with the more generally used IEEE floating-point standard. In fact, floating-point maths is so much more common that most modern CISC processors include an FPU for these operations.
The idea behind floating point is to abstract away the integral and fractional parts of a real number. However, it cannot abstract away the reality of processor design, where registers have a fixed, finite number of bits, typically 32 or 64. Seeing how poor an approximation float, double and even decimal make of real-number operations doesn't take much effort.
For example, summing 0.001 a million times makes the problem obvious.

Update: the code snippet originally posted here had a typo which printed the wrong value for decimal (pointed out by Stu in the comments). decimal has 128 bits of precision, making it far more suitable for financial applications.
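The original snippet hasn't survived; the float/double/decimal trio suggests it was .NET code, but the same experiment can be sketched in Python, with the built-in float standing in for double and decimal.Decimal standing in for the 128-bit decimal type:

```python
from decimal import Decimal

# Python's float is a 64-bit IEEE double.
total_double = 0.0
for _ in range(1_000_000):
    total_double += 0.001

# Decimal stands in here for the 128-bit decimal type.
total_decimal = Decimal("0")
step = Decimal("0.001")
for _ in range(1_000_000):
    total_decimal += step

print(total_double)    # close to, but not exactly, 1000
print(total_decimal)   # exactly 1000.000
```

The double result carries a small accumulated rounding error; the Decimal result is exact because 0.001 is represented exactly in base ten.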
Ouch, not very precise. In fact float falls over after just 1000 iterations of 0.001 because of its smaller 32-bit precision.
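The 32-bit failure is easy to reproduce in plain Python by forcing every intermediate value through single precision with struct (a sketch; the original presumably used the runtime's native float type):

```python
import struct

def f32(x):
    # Round a Python double to the nearest 32-bit float and back.
    return struct.unpack('f', struct.pack('f', x))[0]

total = f32(0.0)
for _ in range(1000):
    total = f32(total + f32(0.001))

print(total)          # close to, but not exactly, 1.0
print(total == 1.0)   # False
```

With only 24 bits of significand, the representation error in 0.001 plus a thousand rounding steps is already visible in the sixth decimal place.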
This is a trumped-up example, but it makes the point, and these discrepancies grow even larger when divisions or multiplications are done on numbers that vary greatly in magnitude.
So, giving fixed point a try in Python:
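The original snippet used the fixedpoint library, and the code is lost; as a stand-in, here is a toy fixed-point class built on scaled integers, with six decimal places of precision to match the fp object described below (the class name and API are my own, not the library's):

```python
from decimal import Decimal

class FP:
    """Toy fixed-point number: values are stored as integers scaled to
    six decimal places, so addition is exact integer addition and
    anything below the precision is rounded away."""
    SCALE = 10 ** 6  # six decimal places

    def __init__(self, value="0"):
        # Quantise on the way in: digits beyond six places are dropped.
        self.raw = int(Decimal(str(value)) * self.SCALE)

    def __add__(self, other):
        result = FP()
        result.raw = self.raw + other.raw
        return result

    def __str__(self):
        return str(Decimal(self.raw) / self.SCALE)

total = FP("0")
total = total + FP("0.0000001")   # below the precision: rounded to zero
for _ in range(1_000_000):
    total = total + FP("0.001")
print(total)                       # exactly 1000
```

Because every operand is an exact multiple of 10⁻⁶, the million additions accumulate no error at all; the only loss is the deliberate one at construction time.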
Much better! My fp object has ruthlessly ignored the first addition because it fell outside its precision of 6, yet has exactly the correct value after the summation. So what's the cost of this accuracy? Speed: the above takes many seconds to execute.
The above example involves a million fixed-point additions, which may seem silly, but in a financial setting Monte Carlo simulation models (to name just one) rely on executing many millions of calculations as quickly as possible.
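The speed gap is easy to measure yourself; a rough sketch using the standard library's timer, with decimal.Decimal as the exact-arithmetic stand-in (absolute timings will vary by machine):

```python
import time
from decimal import Decimal

def timed(fn):
    # Wall-clock one call; crude, but enough to show the order of magnitude.
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def float_sum():
    total = 0.0
    for _ in range(1_000_000):
        total += 0.001
    return total

def decimal_sum():
    total = Decimal("0")
    step = Decimal("0.001")
    for _ in range(1_000_000):
        total += step
    return total

t_float = timed(float_sum)
t_decimal = timed(decimal_sum)
print(f"float:   {t_float:.3f}s")
print(f"decimal: {t_decimal:.3f}s")  # typically several times slower
```

Hardware floats go straight through the FPU; the exact types pay for software arithmetic and object allocation on every operation.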
When you need speed, go for floating point. When you need accuracy, go for fixed.
If you need both… you might be in trouble.