In the non-computer world, a real number written in scientific notation is called normalized if the first digit D of the mantissa satisfies 0 < D < 10. This means 1.23*10^3, 2.23*10^3 and 9.23*10^3 are normalized but 0.23*10^3 is not. However, 0.23*10^3 can be re-written in normalized form as 2.3*10^2. This generalizes to binary and hexadecimal by saying that a number written in this form is normalized iff 0 < D < B, where B is the base (i.e. 2 or 16 for bin/hex). Note that binary is a special case: the only digit satisfying 0 < D < 2 is 1, so D must be exactly 1 for the number to be normalized.

What makes "floating" point numbers different from fixed point numbers is that in floating point the number is encoded by storing separate bit fields for an exponent and a mantissa. The number of bits allocated to the exponent and the mantissa respectively is still fixed, though. Further, the numbers are generally stored in normalized form. Because the first bit of the mantissa is then always 1 in binary, this bit is implicit and not stored at all. This is why in IEEE 754 floating point most people speak of a "fractional part" instead of a mantissa (or, more correctly, a significand). The proper significand can easily be constructed from the fractional part by prefixing it with "1.".

However, there is one big exception to this rule. For the lowest possible exponent the IEEE 754 specification says that the implicit "1." should be an implicit "0." instead. Whenever a floating point number has the lowest possible exponent it is said to be "denormalized" (also called "subnormal"). The IEEE 754 specification also says that the exponent is encoded with a "bias" (1023 for a double) so that the lowest possible exponent is always encoded as all zero bits. This means, for example, that a double is denormalized if and only if bits 52 up to and including bit 62 are all zero (and at least one fraction bit is set; otherwise the value is simply zero). Some processors fall back to software or microcode assistance for denormal numbers, so arithmetic on them can be noticeably slower (see the Wikipedia article "Denormal number").

For normalized floating point numbers it is basically the exponent that determines how many digits the number has when converted to a base-10 decimal expansion in a string. Denormal numbers usually become even longer, because their significand starts with 0.000000....0001 and that is then multiplied by the lowest possible exponent, 2^-1022 for an IEEE 754 double, so the smallest denormal is as small as 2^-52 * 2^-1022 = 2^-1074. For details, see: http://stackoverflow.com/questions/1701055/what-is-the-maximum-length-in-chars-needed-to-represent-any-double-value
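
To make that bit layout concrete, here is a minimal C sketch (the helper name inspect and the sample values are just illustrative choices, not from any particular source) that extracts the sign, exponent, and fraction fields of a double and applies the "bits 52 through 62 all zero" test described above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>

/* Print the raw IEEE 754 fields of a double.
   Bit 63: sign, bits 62..52: biased exponent, bits 51..0: fraction. */
static void inspect(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                /* reinterpret the 64 bits */

    uint64_t sign     = bits >> 63;
    uint64_t exp_bits = (bits >> 52) & 0x7FF;      /* 11-bit exponent field   */
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL; /* low 52 fraction bits    */

    /* Exponent field all zeros: denormal if any fraction bit is set,
       otherwise the value is plain zero. */
    const char *kind = (exp_bits == 0)
                     ? (fraction ? "denormal" : "zero")
                     : "normal (or inf/NaN when exponent is all ones)";

    printf("%-14g sign=%llu exponent=%4llu fraction=0x%013llx  %s\n",
           d, (unsigned long long)sign, (unsigned long long)exp_bits,
           (unsigned long long)fraction, kind);
}

int main(void)
{
    inspect(1.0);        /* normalized                          */
    inspect(2.23e3);     /* normalized                          */
    inspect(DBL_MIN);    /* smallest normal double, 2^-1022     */
    inspect(5e-324);     /* smallest positive denormal, 2^-1074 */
    inspect(0.0);        /* exponent and fraction all zero      */
    return 0;
}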
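
The implicit "1." versus "0." rule and the exponent bias can also be checked numerically. Assuming the usual interpretation (value = (1 + fraction/2^52) * 2^(exponent - 1023) for normals, and (0 + fraction/2^52) * 2^-1022 for denormals), this sketch rebuilds a finite double from its fields and compares it with the original; the function name rebuild and the sample values are again only illustrative:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Rebuild a finite double from its IEEE 754 fields using the implicit
   leading 1 (normal) or leading 0 (denormal) and the exponent bias 1023. */
static double rebuild(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);

    int sign          = (int)(bits >> 63);
    int exp_bits      = (int)((bits >> 52) & 0x7FF);
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;

    double significand, result;
    if (exp_bits == 0) {
        /* denormal (or zero): implicit "0." and fixed exponent -1022 */
        significand = 0.0 + ldexp((double)fraction, -52);
        result = ldexp(significand, -1022);
    } else {
        /* normal: implicit "1." and biased exponent */
        significand = 1.0 + ldexp((double)fraction, -52);
        result = ldexp(significand, exp_bits - 1023);
    }
    return sign ? -result : result;
}

int main(void)
{
    double samples[] = { 1.0, 2.23e3, -0.15625, 5e-324, 2.3e-310 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%-12g rebuilt as %-12g  match=%d\n",
               samples[i], rebuild(samples[i]),
               samples[i] == rebuild(samples[i]));
    return 0;
}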
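
Finally, as a rough illustration of why denormals produce such long decimal expansions, this sketch prints the smallest positive denormal, 2^-1074, in plain (non-scientific) decimal form and measures the resulting string; the buffer size is just a generous guess:

#include <stdio.h>

int main(void)
{
    /* 5e-324 rounds to the smallest positive denormal double, 2^-1074.
       Printed without an exponent it needs "0." plus 1074 fractional
       digits, most of which are leading zeros. */
    char buf[1200];
    double tiny = 5e-324;
    int len = snprintf(buf, sizeof buf, "%.1074f", tiny);

    printf("decimal expansion needs %d characters\n", len);
    printf("it begins with: %.40s...\n", buf);
    return 0;
}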