Adrian Cochrane on Nostr:
How do computers typically process floating point numbers?
IEEE's standard stores a sign bit, a "mantissa", & an "exponent" (biased, not 2's complement), together indicating the number ±mantissa*2^exponent. Floats are parsed from human-friendly "scientific notation" by dividing/multiplying by the appropriate power of 10, in heavily-optimized hard-to-follow code.
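A minimal C sketch of that layout for single precision (1 sign bit, 8 exponent bits biased by 127, 23 mantissa bits with an implicit leading 1 for normal values); the example value & variable names are illustrative, not from the post:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 6.5f;                 /* 1.625 * 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* reinterpret the raw bytes */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF; /* stored with a +127 bias */
    uint32_t mantissa = bits & 0x7FFFFF;     /* implicit leading 1 not stored */

    printf("sign=%u exponent=%d mantissa=0x%06X\n",
           sign, (int)exponent - 127, mantissa);
    /* prints: sign=0 exponent=2 mantissa=0x500000 */
    return 0;
}
```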
To multiply: Multiply mantissas, add exponents, & XOR signs. Then re-normalize.
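A toy soft-float multiply following exactly those steps, assuming normal values & truncating instead of rounding properly (zeros, infinities, NaNs, subnormals, & exponent overflow are all ignored for brevity):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static float fmul_soft(float a, float b) {
    uint32_t ua, ub;
    memcpy(&ua, &a, 4);
    memcpy(&ub, &b, 4);

    uint32_t sign = (ua ^ ub) & 0x80000000;             /* XOR signs */
    int32_t  exp  = (int32_t)((ua >> 23) & 0xFF)        /* add exponents, */
                  + (int32_t)((ub >> 23) & 0xFF) - 127; /* undoing one bias */
    uint64_t ma = (ua & 0x7FFFFF) | 0x800000;           /* restore implicit 1s */
    uint64_t mb = (ub & 0x7FFFFF) | 0x800000;

    uint64_t prod = ma * mb;        /* 48-bit product of two 24-bit mantissas */
    if (prod & (1ULL << 47)) {      /* re-normalize: product landed in [2,4) */
        prod >>= 1;
        exp += 1;
    }
    uint32_t mant = (uint32_t)(prod >> 23) & 0x7FFFFF;  /* drop implicit 1, truncate */

    uint32_t out = sign | ((uint32_t)exp << 23) | mant;
    float r;
    memcpy(&r, &out, 4);
    return r;
}

int main(void) {
    printf("%g\n", fmul_soft(1.5f, -2.5f));  /* prints -3.75 */
    return 0;
}
```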
Add/subtract's a little more complex...
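One reason why: the mantissas can only be added once the exponents match, so the smaller operand must first be shifted right to align. A rough sketch under the same assumptions (two positive normal floats, truncating rounding, no special cases):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static float fadd_soft(float a, float b) {
    uint32_t ua, ub;
    memcpy(&ua, &a, 4);
    memcpy(&ub, &b, 4);
    if (ua < ub) { uint32_t t = ua; ua = ub; ub = t; } /* ua = larger magnitude */

    int32_t  ea = (ua >> 23) & 0xFF, eb = (ub >> 23) & 0xFF;
    uint32_t ma = (ua & 0x7FFFFF) | 0x800000;          /* restore implicit 1s */
    uint32_t mb = (ub & 0x7FFFFF) | 0x800000;

    uint32_t d = (uint32_t)(ea - eb);                  /* align exponents: */
    mb = d < 32 ? mb >> d : 0;                         /* shift smaller mantissa */

    uint32_t sum = ma + mb;
    if (sum & 0x1000000) {                             /* re-normalize on carry */
        sum >>= 1;
        ea += 1;
    }
    uint32_t out = ((uint32_t)ea << 23) | (sum & 0x7FFFFF);
    float r;
    memcpy(&r, &out, 4);
    return r;
}

int main(void) {
    printf("%g\n", fadd_soft(1.5f, 2.25f));  /* prints 3.75 */
    return 0;
}
```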
1/?