Hexadecimals come up in a lot of places in programming and computer science.
I see them as a shorthand notation for raw strings of bits.
I am talking about things of this kind:
The 0x prefix indicates that the number should be interpreted as hexadecimal,
but does not affect its value in any other way.
The actual value is encoded after the prefix using the digits
0-9 and the letters a-f.
I am not going into how that works,
but it is really no different from the binary representation of integers.
Try a quick Google search if you are unfamiliar with these things;
in the end it is just notation.
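As a quick illustration (the values here are my own, not from any particular example), hexadecimal integer literals are just another way of writing the same numbers:

```python
# Hexadecimal is positional notation with base 16, using digits 0-9
# and letters a-f (values 10-15). The 0x prefix only marks the base.
assert 0xff == 15 * 16 + 15 == 255
assert 0x10 == 16

# int() can parse hex strings explicitly...
assert int("ff", 16) == 255
# ...and hex() goes the other way.
assert hex(255) == "0xff"
```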
The first part denotes the type, which in this case is a single-precision (32-bit) floating-point number. The second part should be its value, but what number does this string represent?
Of course, Stack Overflow had the answer for me[3].
The part before the p encodes the significand (mantissa), and the rest is the exponent in decimal.
So this is a combination of hexadecimal and decimal notation.
The exponent can be negative and is taken with base 2,
effectively multiplying the significand by 2^exponent.
The full specification can be found as part of the WebAssembly specification itself[4].
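Python can parse this notation directly with float.fromhex, which makes it easy to check by hand (the example values below are my own):

```python
# '0x1.8p3': significand 0x1.8 (= 1.5 in decimal, since 0x8/16 = 0.5),
# exponent 3 in decimal, so the value is 1.5 * 2**3 = 12.0.
assert float.fromhex("0x1.8p3") == 1.5 * 2**3 == 12.0

# A negative exponent divides instead: 1 * 2**-2 = 0.25.
assert float.fromhex("0x1p-2") == 0.25

# Going the other way, every float has an exact hex representation.
assert (0.25).hex() == "0x1.0000000000000p-2"
```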
Now we can decode the example from above.
The significand is
0x1 and corresponds to the decimal number 1. The exponent is the part after the
p, so the value represented is 2 raised to that exponent in decimal.
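A minimal sketch of that decoding done by hand, using a hypothetical exponent of -3 since I am not reproducing the exact constant here, and checked against Python's own parser:

```python
def decode_hex_float(sig_hex: str, exp: int) -> float:
    """Decode a hex float split at the 'p' into significand and exponent.

    Sketch only: handles integer significands like '0x1'; fractional
    hex digits after the point would need extra scaling.
    """
    significand = int(sig_hex, 16)  # the significand is hexadecimal
    return significand * 2.0**exp   # the exponent is decimal, base 2

# Hypothetical example: 0x1p-3 -> 1 * 2**-3 = 0.125.
assert decode_hex_float("0x1", -3) == 0.125
assert float.fromhex("0x1p-3") == 0.125
```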